Explainable artificial intelligence in geoscience: a glimpse into the
future of landslide susceptibility modeling
Abstract
For decades, the distinction between statistical models and machine
learning ones has been clear: the former are optimized to produce
interpretable results, whereas the latter seek to maximize predictive
performance on the task at hand. This holds across scientific fields and
for any method in either of the two categories. When predicting natural
hazards, this difference has led researchers to face a difficult choice
about which aspect to prioritize. On the one hand, one would always seek
the highest performance, because better predictions translate into better
decisions for disaster risk reduction. On the other hand, scientists also
wish to understand the results, so that they can trust the tools they
develop. Today, recent developments in deep learning
have brought forward a new generation of interpretable artificial
intelligence, in which the predictive power typical of machine learning
tools is paired with the explanatory power typical of
statistical approaches. In this work, we demonstrate the capabilities
of this new generation of explainable artificial intelligence (ExAI).
To do so, we take the landslide susceptibility
context as our reference. Specifically, we build an ExAI model trained
on landslides triggered by the Gorkha earthquake (25 April
2015), providing an educational overview of the model design and its
querying capabilities. The results are striking: the predictive
performance is extremely high, while interpretability extends to the
probabilistic estimate assigned to each individual mapping unit. This is
also showcased in a web-GIS platform we built
(\textcolor{blue}{https://arcg.is/0unziD}).