
Spatio-temporal machine learning for continental scale terrestrial hydrology
  • Andrew Bennett (University of Arizona)
  • Hoang Tran (Pacific Northwest National Laboratory)
  • Luis De la Fuente (University of Arizona)
  • Amanda Triplett (University of Arizona)
  • Yueling Ma (Princeton University)
  • Peter Melchior (Princeton University)
  • Reed M. Maxwell (Princeton University)
  • Laura Elizabeth Condon (University of Arizona)

Corresponding Author: Andrew Bennett ([email protected])

Abstract

Integrated hydrologic models can simulate coupled surface and subsurface processes but are computationally expensive to run at high resolution over large domains. Here we develop a novel deep learning model to emulate continental-scale subsurface flows simulated by the integrated ParFlow-CLM model. We compare convolutional neural networks such as ResNet and UNet, run autoregressively, against our new architecture, the Forced SpatioTemporal RNN (FSTR). The FSTR model encodes initial conditions, static parameters, and meteorological forcings separately and fuses them in a recurrent loop to produce spatiotemporal predictions of groundwater conditions. We evaluate the model architectures on their ability to reproduce 4D pressure heads, water table depths, and surface soil moisture over the contiguous US at 1 km resolution and daily time steps over the course of a full water year. The FSTR model outperforms the baseline models, producing stable simulations that capture both seasonal and event-scale dynamics across a wide range of hydroclimatic regimes. The emulators provide over a 1000x speedup compared to the original physical model, which will enable capabilities such as uncertainty quantification and data assimilation for integrated hydrologic modeling that were not previously possible. Our results demonstrate the promise of specialized deep learning architectures like FSTR for emulating complex process-based models without sacrificing fidelity.
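
To make the architectural description above concrete, the following is a minimal, hypothetical PyTorch sketch of an FSTR-style emulator: separate convolutional encoders for the initial pressure state, static parameters, and daily meteorological forcings, fused in a recurrent loop that rolls out daily predictions. All module names, channel counts, and layer choices here (FSTRSketch, state_enc, and so on) are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of an FSTR-style emulator (not the authors' code).
# Separate encoders for the initial pressure state, static parameters, and
# daily meteorological forcings feed a convolutional recurrent loop that
# rolls out daily predictions of the subsurface state.
import torch
import torch.nn as nn


class FSTRSketch(nn.Module):
    def __init__(self, state_ch=5, static_ch=8, forcing_ch=7, hidden_ch=64):
        super().__init__()
        # Encoders for each input stream (channel counts are placeholders).
        self.state_enc = nn.Conv2d(state_ch, hidden_ch, 3, padding=1)
        self.static_enc = nn.Conv2d(static_ch, hidden_ch, 3, padding=1)
        self.forcing_enc = nn.Conv2d(forcing_ch, hidden_ch, 3, padding=1)
        # Recurrent update that fuses the hidden state with the day's forcing.
        self.update = nn.Sequential(
            nn.Conv2d(2 * hidden_ch, hidden_ch, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1),
        )
        # Decoder maps the hidden state back to the predicted state fields.
        self.decoder = nn.Conv2d(hidden_ch, state_ch, 3, padding=1)

    def forward(self, init_state, statics, forcings):
        # init_state: (B, state_ch, H, W), statics: (B, static_ch, H, W)
        # forcings:   (B, T, forcing_ch, H, W) for T daily steps
        h = self.state_enc(init_state) + self.static_enc(statics)
        preds = []
        for t in range(forcings.shape[1]):
            f = self.forcing_enc(forcings[:, t])
            h = h + self.update(torch.cat([h, f], dim=1))  # residual update
            preds.append(self.decoder(h))
        return torch.stack(preds, dim=1)  # (B, T, state_ch, H, W)


if __name__ == "__main__":
    model = FSTRSketch()
    out = model(torch.randn(1, 5, 32, 32),      # initial pressure head layers
                torch.randn(1, 8, 32, 32),      # static parameters
                torch.randn(1, 10, 7, 32, 32))  # 10 days of forcings
    print(out.shape)  # torch.Size([1, 10, 5, 32, 32])

In this sketch the initial-state and static-parameter encodings seed the hidden state once, while the forcing encoder is applied at every step of the rollout, mirroring the separation of inputs described in the abstract.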