Ground-based digital cameras are widely used to monitor atmospheric conditions, and several methods have been developed to exploit their images for synoptic observation, cloud assessment, short-term forecasting and related tasks. These methods, however, overlook restrictions that arise when a linear camera is used to observe the logarithmic range of atmospheric luminance. The camera maps the scene onto a linear scale, which distorts the observed pattern distributions through pixel value saturation (PVS) and drifts from the original hues. To cope with these distortions, the literature commonly resorts to simplifying practices that cause loss of data, misinterpretation of valid pixels and restrictions on the use of computer vision algorithms. The present work begins by illustrating these problems through supervised learning, motivated by two goals shared by all observation systems: automating human synoptic observation, and providing a sound mathematical model of the observed patterns. A new modeling paradigm is then proposed that maps sky patterns so as to represent physical atmospheric phenomena not considered in the literature. We validate the proposed method and compare its results on 1630 images against two well-established methods. A hypothesis test showed the results to be compatible with the currently used binary approach, with additional advantages; the observed differences were due to PVS and other restrictions not handled by the existing methods. Finally, the present work concludes that the new paradigm yields a more meaningful interpretation of sky patterns, allows extended daylight observation periods and operates in a higher-dimensional feature space.
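To make the PVS problem concrete, the sketch below (not taken from the paper) shows how clipped pixels confound a binary red-to-blue-ratio cloud/sky segmentation, a common baseline in the sky-imaging literature; the threshold value, pixel values and function names are illustrative assumptions, not the methods compared in this work.

```python
"""Illustrative sketch only: how pixel value saturation (PVS) in a
linear-response camera can mislead a binary R/B-ratio segmentation.
All thresholds and sample values are assumptions for demonstration."""
import numpy as np

def pvs_mask(img, sat_level=255):
    """Flag pixels where any channel has clipped at the sensor maximum.

    img: HxWx3 uint8 array from a linear-response camera.
    A clipped pixel has lost its true luminance and hue, so any
    statistic computed from it (e.g. a color ratio) is unreliable.
    """
    return np.any(img >= sat_level, axis=-1)

def binary_rb_segmentation(img, threshold=0.95):
    """Binary baseline: label a pixel as cloud when its red-to-blue
    ratio exceeds a fixed threshold (clear sky scatters blue more
    strongly, so R/B is low for sky and high for cloud).
    """
    r = img[..., 0].astype(float)
    b = img[..., 2].astype(float) + 1e-6  # avoid division by zero
    return (r / b) > threshold  # True = cloud, False = sky

# Toy 2x2 scene: a saturated pixel near the sun reads (255, 255, 255),
# giving R/B ~ 1.0, so it is labeled "cloud" regardless of the true scene.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (60, 90, 200)    # clear sky: low R/B
img[0, 1] = (200, 205, 195)  # cloud: R/B slightly above 1
img[1, 0] = (255, 255, 255)  # PVS: true hue unrecoverable
img[1, 1] = (40, 60, 150)    # clear sky: low R/B

print(binary_rb_segmentation(img))  # saturated pixel mislabeled as cloud
print(pvs_mask(img))                # True only where the sensor clipped
```

Masking out the pixels that `pvs_mask` flags is precisely the kind of simplifying practice the abstract criticizes: those pixels are valid observations whose information the linear encoding destroyed, and discarding them shortens the usable observation period and biases the pattern statistics.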