Abstract
Cloud optical property retrievals from passive satellite imagers tend to
be most accurate during the daytime due to the availability of visible
and near-infrared solar reflectances. Infrared (IR) channels have
comparatively little spectral sensitivity to optically thick clouds and
are heavily influenced by cloud-top temperature, making accurate
retrievals of cloud optical depth, cloud effective radius, and cloud
water path more difficult at night. In this work, we examine whether the use of
spatial context—information about the local structure and organization
of cloud features—can help overcome these limitations of IR channels
and provide more accurate estimates of nighttime cloud optical
properties. We trained several neural networks to emulate the NOAA
Daytime Cloud Optical and Microphysical Properties (DCOMP) algorithm
for the Advanced Baseline Imager (ABI) using only IR channels. We then compared
the neural networks to the NOAA operational daytime and nighttime
products, and the Nighttime Lunar Cloud Optical and Microphysical
Properties (NLCOMP) algorithm, which uses the low-light visible band
on VIIRS, in collocated imagery. These comparisons show that the use of
spatial context can significantly improve estimates of nighttime cloud
optical properties. The primary model we trained, U-NetCOMP, matches
DCOMP reasonably well during the day and significantly reduces
artifacts associated with the day/night terminator. We also find that
U-NetCOMP estimates align more closely with NLCOMP at night compared to
the nighttime NOAA operational products for ABI.