Reinforcement learning-based adaptive strategies for climate change adaptation: An application for flood risk management
Kairui Feng
Department of Civil and Environmental Engineering, Princeton University

Ning Lin (Corresponding Author)
Department of Civil and Environmental Engineering, Princeton University

Robert E. Kopp
Department of Earth and Planetary Sciences, Rutgers University; Rutgers Climate and Energy Institute, Rutgers University

Siyuan Xian
Department of Civil and Environmental Engineering, Princeton University

Michael Oppenheimer
School of Public and International Affairs, Princeton University; Department of Geosciences, Princeton University; High Meadows Environmental Institute, Princeton University

Abstract

Climate change poses unprecedented challenges, necessitating the development of effective adaptation strategies. Conventional computational frameworks for climate adaptation inadequately account for our capacity to learn, update, and improve decisions as exogenous information is collected. Here we investigate the potential of reinforcement learning (RL), a machine learning technique that is effective at learning from the environment and systematically optimizing dynamic decisions, to model and inform adaptive climate decision-making. To illustrate, we derive adaptive strategies for coastal flood protection for Manhattan, New York City, considering continuous observations of sea-level rise throughout the 21st century. We find that, when designing adaptive seawalls to protect Manhattan, the RL-derived strategy reduces the expected cost by 6% to 36% under the moderate emissions scenario SSP2-4.5 (9% to 77% under the high emissions scenario SSP5-8.5) compared with previous methods. When multiple adaptation policies (buyout, accommodate, and dike) are considered, the RL approach yields a further 5% (15%) cost reduction, showcasing RL's flexibility in addressing complex policy design problems in which multiple policies interact. RL also outperforms conventional methods in controlling tail risk (i.e., low-probability, high-impact events) and in avoiding losses induced by misinformation (e.g., biased sea-level projections), demonstrating the importance of systematic learning and updating in addressing the extremes and uncertainties of climate adaptation. The analysis also reveals that, given the large uncertainty in and potential misjudgment of climate projections, "preparing for the worst" is economically more beneficial when adaptive strategies, such as those supported by the RL approach, are applied.
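To convey the flavor of the sequential decision problem the abstract describes, the sketch below sets up a deliberately tiny toy model: a decision-maker chooses each decade whether to raise a seawall while sea level rises stochastically, and tabular Q-learning finds an adaptive policy whose action depends on the sea level observed so far. Every number (horizon, costs, rise probabilities, discretization) is a made-up assumption for illustration; this is not the paper's calibrated Manhattan model or its actual RL formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (all values are illustrative assumptions):
T = 8                 # decision epochs (decades)
MAX_SLR = 10          # discretized sea-level states (e.g., 0.1 m units)
MAX_WALL = 10         # discretized seawall heights
ACTIONS = [0, 1, 2]   # wall-height increments available each epoch

BUILD_COST = 1.0      # cost per unit of height built
DAMAGE = 5.0          # expected flood damage when sea level exceeds the wall

def step(slr, wall, a):
    """Apply an action, sample sea-level rise, return (slr', wall', cost)."""
    wall2 = min(wall + a, MAX_WALL - 1)
    rise = rng.choice([0, 1, 2], p=[0.3, 0.5, 0.2])  # stochastic SLR increment
    slr2 = min(slr + rise, MAX_SLR - 1)
    cost = BUILD_COST * a + (DAMAGE if slr2 > wall2 else 0.0)
    return slr2, wall2, cost

# Tabular Q-learning over (epoch, sea level, wall height) states,
# minimizing cumulative cost with epsilon-greedy exploration.
Q = np.zeros((T, MAX_SLR, MAX_WALL, len(ACTIONS)))
alpha, eps = 0.1, 0.2
for episode in range(20000):
    slr, wall = 0, 0
    for t in range(T):
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmin(Q[t, slr, wall]))
        slr2, wall2, cost = step(slr, wall, ACTIONS[a])
        future = Q[t + 1, slr2, wall2].min() if t + 1 < T else 0.0
        Q[t, slr, wall, a] += alpha * (cost + future - Q[t, slr, wall, a])
        slr, wall = slr2, wall2

# Greedy policy: the recommended increment is conditioned on the sea level
# actually observed, i.e., the strategy updates as information arrives
# rather than being fixed in advance.
policy = Q.argmin(axis=-1)
print(policy[0, 0, 0])  # recommended first-epoch increment
```

The key property this illustrates is the one the abstract emphasizes: because the learned policy is a function of the observed state, the same framework automatically "learns, updates, and improves" its decisions as sea-level observations accumulate, unlike a strategy committed to at the outset.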
Submitted to ESS Open Archive: 26 Feb 2024
Published in ESS Open Archive: 28 Feb 2024