Detection of forced change within combined climate fields using
explainable neural networks
Abstract
Assessing forced climate change requires the extraction of the forced
signal from the background of climate noise. Traditionally, tools for
extracting forced climate change signals have focused on one atmospheric
variable at a time; however, using multiple variables can reduce noise
and make the forced response easier to detect. Following
previous work, we train artificial neural networks to predict the year
of single- and multi-variable maps from forced climate model
simulations. To perform this task, the neural networks learn patterns
that allow them to discriminate between maps from different years—that
is, the neural networks learn the patterns of the forced signal amidst
the shroud of internal variability and climate model disagreement. When
presented with combined input fields (multiple seasons, variables, or
both), the neural networks detect the signal of forced change earlier
than with single fields alone by exploiting complex, nonlinear
relationships between multiple variables and seasons. We use
layer-wise relevance propagation, a neural network explainability tool,
to identify the multivariate patterns learned by the neural networks
that serve as reliable indicators of the forced response. These
“indicator patterns” vary in time and between climate models,
providing a template for investigating inter-model differences in the
time evolution of the forced response. This work demonstrates how neural
networks and their explainability tools can be harnessed to identify
patterns of the forced signal within combined fields.
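
The workflow summarized above, training a simple fully connected network to predict the year of a combined climate-field map and then applying layer-wise relevance propagation (LRP) to attribute that prediction back to individual grid points, can be sketched as follows. This is a minimal illustration rather than the authors' code: the array shapes, network size, training settings, and the use of PyTorch with Captum's LRP implementation are all assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions throughout; not the authors' code).
# Step 1: train a small fully connected network to regress the simulation year
# from flattened, combined (multi-variable / multi-season) climate maps.
# Step 2: apply layer-wise relevance propagation (here via Captum) to attribute
# the predicted year back to individual grid points, yielding "indicator" maps.
import numpy as np
import torch
import torch.nn as nn
from captum.attr import LRP

# Hypothetical combined input: each sample stacks two variables (or seasons)
# on a coarse lat-lon grid and is flattened into a single feature vector.
n_samples, n_vars, n_lat, n_lon = 1200, 2, 36, 72
rng = np.random.default_rng(0)
X = rng.standard_normal((n_samples, n_vars * n_lat * n_lon)).astype("float32")
years = rng.uniform(1920, 2100, size=n_samples).astype("float32")  # placeholder targets

x_t = torch.from_numpy(X)
y_t = torch.from_numpy(years).unsqueeze(1)

# Small multilayer perceptron that predicts the year of each combined map.
model = nn.Sequential(
    nn.Linear(X.shape[1], 20),
    nn.ReLU(),
    nn.Linear(20, 20),
    nn.ReLU(),
    nn.Linear(20, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):  # training loop kept deliberately short
    optimizer.zero_grad()
    loss = loss_fn(model(x_t), y_t)
    loss.backward()
    optimizer.step()

# Layer-wise relevance propagation: relevance of every input grid point for the
# network's year prediction on a single sample.
model.eval()
relevance = LRP(model).attribute(x_t[:1])

# Reshape the relevance vector back into per-variable lat-lon maps, i.e. the
# kind of multivariate "indicator patterns" discussed in the abstract.
relevance_maps = relevance.detach().numpy().reshape(n_vars, n_lat, n_lon)
```

With real forced-simulation output in place of the random placeholders, the resulting relevance maps could be compared across years and across climate models, in the spirit of the inter-model analysis described above.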