Symmetry Constraints Enhance Long-term Stability and Accuracy in
Unsupervised Learning of Geophysical Fluid Flows
Abstract
Fluid dynamical systems are well described by discretized partial
differential equations, but computational costs limit the accuracy,
duration, and resolution of numerical integrations. Recent studies have
shown that deep neural networks trained on simulations or on PDE-derived
losses can improve cost-accuracy trade-offs, but purely data-centric approaches
discard physical and mathematical insights and require computationally
costly training data. Here we draw on advances in geometric deep
learning to design solver networks that respect PDE symmetries as hard
constraints. We construct equivariant convolutional layers for mixed
scalar-vector input fields to capture the symmetries inherent to
specific PDEs. We demonstrate our approach on a challenging 1D
semi-implicit shallow-water scheme with closed boundaries, applying
unsupervised learning with a physics-derived loss function; illustrative
sketches of the layer construction and of such a loss follow the
abstract. We report
strong improvements in accuracy and stability of equivariant solvers
compared to standard convolutional networks with the same architectures
and parameter counts. Solver equivariance also improves performance on
new initial conditions not encountered during training, and suppresses
error accumulation in global momentum and energy. Strikingly, these
benefits are not reflected in lower loss values during training, but
emerge only during ML-assisted rollouts over many time steps (a rollout
diagnostic is likewise sketched below). Our results suggest that
symmetry constraints could improve deep learning performance across a
wide range of fluid dynamical tasks, learning algorithms, and neural
architectures.
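
To make the layer construction concrete, the following is a minimal sketch, not the paper's actual implementation, of a reflection-equivariant (parity-equivariant) 1D convolution for mixed scalar-vector fields. It assumes the relevant symmetry is spatial reflection, under which scalar fields such as surface height transform as h(x) → h(−x) while vector fields such as velocity transform as u(x) → −u(−x); equivariance then forces the scalar→scalar and vector→vector kernels to be even, the cross kernels to be odd, and the bias on vector outputs to vanish. The class name ParityEquivariantConv1d and the channel layout are our own illustrative choices.

```python
# Minimal sketch (PyTorch), not the paper's implementation: a 1D convolution
# equivariant under spatial reflection for mixed scalar/vector channels.
# Channel layout (an assumption): [scalar channels..., vector channels...].
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParityEquivariantConv1d(nn.Module):  # hypothetical name
    def __init__(self, in_scalar, in_vector, out_scalar, out_vector, kernel_size):
        super().__init__()
        assert kernel_size % 2 == 1, "odd kernel keeps the symmetrization simple"
        self.ns_in, self.nv_in = in_scalar, in_vector
        self.ns_out, self.nv_out = out_scalar, out_vector
        self.pad = kernel_size // 2
        out_ch, in_ch = out_scalar + out_vector, in_scalar + in_vector
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, kernel_size))
        # Bias only on scalar outputs: an odd constant must vanish.
        self.bias_s = nn.Parameter(torch.zeros(out_scalar))

    def _symmetrized_weight(self):
        w = self.weight
        even = 0.5 * (w + w.flip(-1))  # even part of each kernel, k(y) = k(-y)
        odd = 0.5 * (w - w.flip(-1))   # odd part of each kernel,  k(y) = -k(-y)
        k = torch.zeros_like(w)
        k[:self.ns_out, :self.ns_in] = even[:self.ns_out, :self.ns_in]  # scalar -> scalar
        k[self.ns_out:, self.ns_in:] = even[self.ns_out:, self.ns_in:]  # vector -> vector
        k[:self.ns_out, self.ns_in:] = odd[:self.ns_out, self.ns_in:]   # vector -> scalar
        k[self.ns_out:, :self.ns_in] = odd[self.ns_out:, :self.ns_in]   # scalar -> vector
        return k

    def forward(self, x):
        # x: (batch, in_scalar + in_vector, grid).  Symmetric zero padding keeps
        # the grid length, so reflection about the grid centre maps it to itself.
        bias = torch.cat([self.bias_s, self.bias_s.new_zeros(self.nv_out)])
        return F.conv1d(x, self._symmetrized_weight(), bias, padding=self.pad)
```

A quick numerical check that the hard constraint holds:

```python
# Reflecting the input (with a sign flip on the vector channel) must
# reflect the output the same way.
layer = ParityEquivariantConv1d(1, 1, 1, 1, kernel_size=5)
x = torch.randn(2, 2, 64)                       # channel 0: scalar, channel 1: vector
sign = torch.tensor([1.0, -1.0]).view(1, 2, 1)  # parity signs per channel
assert torch.allclose(layer(sign * x.flip(-1)), sign * layer(x).flip(-1), atol=1e-5)
```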
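
The physics-derived loss can similarly be illustrated by the mean-squared residual of a θ-weighted semi-implicit step of the linearized 1D shallow-water equations, h_t + H u_x = 0 and u_t + g h_x = 0. This is a sketch under stated assumptions: the linearization, the centred differences, and all parameter names below are ours, and the paper's actual scheme and loss may differ.

```python
# Sketch of an unsupervised, physics-derived loss: the mean-squared residual
# of a theta-weighted semi-implicit step of the *linearized* 1D shallow-water
# equations h_t + H u_x = 0, u_t + g h_x = 0 (illustrative assumptions, not
# necessarily the paper's scheme).
import torch


def ddx(f, dx):
    """Centred differences in the interior, one-sided at the closed boundaries."""
    df = torch.zeros_like(f)
    df[..., 1:-1] = (f[..., 2:] - f[..., :-2]) / (2 * dx)
    df[..., 0] = (f[..., 1] - f[..., 0]) / dx
    df[..., -1] = (f[..., -1] - f[..., -2]) / dx
    return df


def swe_residual_loss(h0, u0, h1, u1, dt, dx, H=1.0, g=9.81, theta=0.5):
    """(h1, u1) is the network's predicted next state given (h0, u0)."""
    h_mid = theta * h1 + (1 - theta) * h0
    u_mid = theta * u1 + (1 - theta) * u0
    res_h = (h1 - h0) / dt + H * ddx(u_mid, dx)  # mass-equation residual
    res_u = (u1 - u0) / dt + g * ddx(h_mid, dx)  # momentum-equation residual
    return (res_h**2).mean() + (res_u**2).mean()
```

Minimizing such a residual over sampled states requires no precomputed reference simulations, which is the sense in which the training is unsupervised; with closed boundaries one would typically also penalize nonzero velocity at the walls.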
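
Finally, the rollout behaviour referred to above can be probed with a simple diagnostic loop that repeatedly applies a learned step and tracks drift in the discrete global momentum and energy. Here `model` is a placeholder for any learned solver, and the invariants assume the linearized system of the previous sketch.

```python
# Sketch of a rollout diagnostic: repeatedly apply a learned step `model`
# (a placeholder for any callable (h, u) -> (h_next, u_next)) and track drift
# in the discrete global invariants of the linearized system above.
import torch


def rollout_diagnostics(model, h, u, n_steps, dx, H=1.0, g=9.81):
    momentum, energy = [], []
    with torch.no_grad():
        for _ in range(n_steps):
            h, u = model(h, u)
            momentum.append((H * u).sum(-1).mean().item() * dx)
            # Linearized shallow-water energy: 0.5 * (H u^2 + g h^2) per cell.
            energy.append((0.5 * (H * u**2 + g * h**2)).sum(-1).mean().item() * dx)
    return momentum, energy
```

Comparing these series against a reference integration is one way to visualize the long-horizon error accumulation that, per the abstract, training loss alone does not reveal.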