Learning Representations of Satellite Images with Evaluations on
Synoptic Weather Events
Abstract
This study applied representation learning algorithms to satellite
images and evaluated the learned latent spaces through the
classification of various weather events. The algorithms investigated
include a classical linear transformation, principal component analysis
(PCA); a state-of-the-art deep learning method, the convolutional
autoencoder (CAE); and a residual network pre-trained on large image
datasets (PT). The experimental results indicated that the latent space
learned by the CAE consistently yielded higher threat scores across all
classification tasks. The classifications with PCA produced high hit
rates but also high false-alarm rates. In addition, the PT performed
exceptionally well at recognizing tropical cyclones but was inferior in
the other tasks.
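For reference, the evaluation metrics mentioned above can be derived
from a binary contingency table. The sketch below assumes the standard
forecast-verification definitions (hit rate as the probability of
detection, false-alarm rate as the false-alarm ratio, and threat score
as the critical success index); the exact definitions used in this
study may differ.

```python
import numpy as np

def verification_scores(pred, truth):
    """Forecast-verification scores from a 2x2 contingency table.

    `pred` and `truth` are boolean arrays marking predicted and
    observed occurrences of a weather event.
    """
    hits = np.sum(pred & truth)
    misses = np.sum(~pred & truth)
    false_alarms = np.sum(pred & ~truth)

    hit_rate = hits / (hits + misses)                         # probability of detection
    false_alarm_rate = false_alarms / (hits + false_alarms)   # false-alarm ratio
    threat_score = hits / (hits + misses + false_alarms)      # critical success index
    return hit_rate, false_alarm_rate, threat_score
```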
Further experiments suggested that, for the deep-learning algorithms
(CAE and PT), representations learned from higher-resolution datasets
are superior in all classification tasks. We also found that reducing
the latent space size had little impact on the hit rates of the
classification tasks; however, a latent dimension smaller than 128
caused a significantly higher false-alarm rate.
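To illustrate where the latent dimension enters a CAE, a minimal sketch
is given below. The layer configuration, 64x64 input size, and channel
counts are assumptions for illustration only and are not the
architecture used in this study.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder with a configurable latent dimension."""

    def __init__(self, latent_dim=128, in_channels=1):
        super().__init__()
        # Encoder: downsample a 64x64 image to an 8x8 feature map, then
        # project it to a `latent_dim`-dimensional vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder: mirror of the encoder, upsampling back to the input size.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)  # latent representation evaluated by the classification tasks
        return self.decoder(z), z
```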
Although the CAE can learn latent spaces effectively and efficiently,
the learned representations lack direct connections to physical
attributes. Therefore, developing a physics-informed version of the CAE
is a promising direction for future work.