
This Looks Like That There: Interpretable neural networks for image tasks when location matters
  • Elizabeth A. Barnes, Colorado State University (Corresponding Author: [email protected])
  • Randal J. Barnes, University of Minnesota
  • Zane K. Martin, Colorado State University
  • Jamin K. Rader, Colorado State University

Abstract

We develop and demonstrate a new interpretable deep learning model specifically designed for image analysis in earth system science applications. The neural network is designed to be inherently interpretable, rather than explained via post hoc methods. This is achieved by training the network to identify parts of training images that act as prototypes for correctly classifying unseen images. The new network architecture extends the interpretable prototype architecture of a previous study in computer science to incorporate absolute location. This is useful for earth system science, where images are typically the result of physics-based processes and the information is often geo-located. Although the network is constrained to learn only via similarities to a small number of learned prototypes, it can be trained to exhibit only a minimal reduction in accuracy compared to non-interpretable architectures. We apply the new model to two earth system science use cases: a synthetic data set that loosely represents atmospheric high- and low-pressure systems, and atmospheric reanalysis fields used to identify the state of tropical convective activity associated with the Madden-Julian oscillation. In both cases, we demonstrate that considering absolute location greatly improves testing accuracies. Furthermore, the network architecture identifies specific historical dates that capture multivariate, prototypical behaviour of tropical climate variability.
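
The abstract's core mechanism, classifying an image by its similarity to a small set of learned prototypes and weighting that similarity by where on the grid the match occurs, can be sketched roughly as below. This is an illustrative PyTorch sketch, not the authors' implementation: the class name LocationAwarePrototypeLayer, the log-ratio similarity score, the softmax-normalised location map, and all shapes and hyper-parameters are assumptions chosen for the example.

```python
# Illustrative sketch only (assumed names and formulas, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocationAwarePrototypeLayer(nn.Module):
    """Compare feature-map patches to learned prototypes and weight the
    resulting similarities by a learned, per-prototype map over absolute
    grid location, so that where a pattern occurs also matters."""

    def __init__(self, n_prototypes: int, channels: int, height: int, width: int):
        super().__init__()
        # Each prototype is a single C-dimensional vector in feature space.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, channels))
        # One scalar per prototype per grid cell encodes where on the grid
        # that prototype is allowed to contribute.
        self.location_logits = nn.Parameter(torch.zeros(n_prototypes, height, width))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, height, width) from a convolutional backbone
        b, c, h, w = features.shape
        patches = features.permute(0, 2, 3, 1).reshape(b, h * w, c)        # (B, HW, C)

        # Squared Euclidean distance from every patch to every prototype.
        diffs = patches.unsqueeze(2) - self.prototypes.view(1, 1, -1, c)   # (B, HW, P, C)
        dists = (diffs ** 2).sum(dim=-1)                                   # (B, HW, P)

        # Bounded similarity: large when the patch is close to the prototype.
        sims = torch.log((dists + 1.0) / (dists + 1e-4))                   # (B, HW, P)

        # Softmax-normalised location map: the same pattern counts differently
        # depending on where on the grid it appears.
        loc = F.softmax(self.location_logits.view(-1, h * w), dim=-1)      # (P, HW)

        # One evidence score per prototype: location-weighted sum over the grid.
        return (sims * loc.t().unsqueeze(0)).sum(dim=1)                    # (B, P)


# Toy usage: 4 samples, 64 feature channels on a 16 x 32 grid, 8 prototypes.
layer = LocationAwarePrototypeLayer(n_prototypes=8, channels=64, height=16, width=32)
scores = layer(torch.randn(4, 64, 16, 32))
print(scores.shape)  # torch.Size([4, 8])
```

In a full model of this kind, the per-prototype scores would feed a final dense layer that maps prototype evidence to class predictions, and the training-set patches closest to each prototype would supply the human-readable "this looks like that, there" explanations described in the abstract.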