Data Assimilation in Chaotic Systems Using Deep Reinforcement Learning
  • Mohamad Abed El Rahman Hammoud, King Abdullah University of Science and Technology
  • Naila Raboudi, King Abdullah University of Science and Technology
  • Edriss S Titi, University of Cambridge
  • Omar M Knio, King Abdullah University of Science and Technology
  • Ibrahim Hoteit, King Abdullah University of Science and Technology

Corresponding Author: [email protected]

Abstract

Data assimilation (DA) plays a pivotal role in diverse applications, ranging from climate prediction and weather forecasting to trajectory planning for autonomous vehicles. A prime example is the widely used ensemble Kalman filter (EnKF), which relies on linear updates to minimize the variance among the ensemble of forecast states. Recent advances have seen the emergence of deep learning approaches in this domain, primarily within a supervised learning framework. However, the adaptability of such models to untrained scenarios remains a challenge. In this study, we introduce a novel DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables. We demonstrate this approach on the chaotic Lorenz ’63 system, where the agent’s objective is to minimize the root-mean-square error between the observations and the corresponding forecast states. The agent thereby develops a correction strategy that enhances model forecasts based on the available observations of the system state. Our strategy employs a stochastic action policy, enabling a Monte Carlo-based DA framework in which the policy is randomly sampled to generate an ensemble of assimilated realizations. Results demonstrate that the developed RL algorithm performs favorably compared with the EnKF. Additionally, we illustrate the agent’s capability to assimilate non-Gaussian data, addressing a significant limitation of the EnKF.
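The Monte Carlo DA cycle sketched in the abstract (forecast an ensemble, then sample a stochastic policy once per member to obtain an ensemble of assimilated realizations) can be illustrated in a few lines of Python. Everything below is an assumption for illustration only: the forward-Euler integrator, the Gaussian `stochastic_policy` standing in for the paper's trained RL agent, and all noise scales are placeholders, not the authors' implementation.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz '63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def forecast(state, dt=0.01, steps=25):
    """Forward-Euler integration, a simple stand-in for the forecast model."""
    for _ in range(steps):
        state = state + dt * lorenz63(state)
    return state

def stochastic_policy(forecast_state, obs, scale=0.5, rng=None):
    """Hypothetical placeholder for the trained RL policy: sample a
    correction toward the observation with Gaussian exploration noise."""
    if rng is None:
        rng = np.random.default_rng()
    return forecast_state + (obs - forecast_state) + rng.normal(0.0, scale, size=3)

rng = np.random.default_rng(0)
truth = np.array([1.0, 1.0, 1.0])
ensemble = [truth + rng.normal(0.0, 1.0, 3) for _ in range(20)]

# One assimilation cycle: propagate the truth and the ensemble, observe the
# (noisy) full state, then sample the stochastic policy once per member to
# obtain an ensemble of assimilated realizations.
truth = forecast(truth)
obs = truth + rng.normal(0.0, 0.1, 3)          # noisy full-state observation
forecasts = [forecast(m) for m in ensemble]
analysis = [stochastic_policy(f, obs, rng=rng) for f in forecasts]

# RMSE of the ensemble means against the true state, the quantity the
# agent's reward is built around.
rmse_forecast = np.sqrt(np.mean((np.mean(forecasts, axis=0) - truth) ** 2))
rmse_analysis = np.sqrt(np.mean((np.mean(analysis, axis=0) - truth) ** 2))
```

In the paper the learned policy replaces the hand-written Gaussian correction above; sampling it repeatedly is what turns a single forecast into an ensemble of assimilated states, analogous to the EnKF's ensemble of analysis members.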
23 Dec 2023: Submitted to ESS Open Archive
27 Dec 2023: Published in ESS Open Archive