Abstract
Unlike the traditional subgrid-scale parameterizations used in climate
models, current machine learning (ML) parameterizations are tuned only
offline, by minimizing a loss function on outputs from high-resolution
models. This approach often leads to numerical instabilities
and long-term biases. Here, we propose a method to design tunable ML
parameterizations and calibrate them online. The calibration of
the ML parameterization is achieved in two steps. First, some model
parameters are included among the ML model inputs, and the ML model is
fitted once across the whole range of parameter values, using an
offline metric. Second, once the ML parameterization has been
plugged into the climate model, the parameters included among the ML
inputs are optimized with respect to an online metric quantifying
errors on long-term statistics. We illustrate our method with a simple
dynamical system. Our approach significantly reduces the long-term biases
of the ML parameterization.
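The two-step calibration described above can be sketched on a toy scalar system. Everything below is an illustrative assumption, not the paper's actual setup: the "truth" is a damped ODE with an unresolved term, the "ML model" is a linear regression whose inputs include a tunable parameter theta, and the online metric is the error on a long-term second-moment statistic of a stochastically forced simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "truth" (assumption, not the paper's system): a damped scalar ODE
# whose unresolved subgrid term is u(x) = 0.8 * sin(x).
def truth_tendency(x):
    return -x + 0.8 * np.sin(x)

# --- Step 1 (offline): fit a tunable surrogate m(x, theta) once, with
# theta included among the inputs, over a whole range of theta values.
# Targets are a theta-indexed family of candidate closures.
def features(x, theta):
    return np.column_stack([x, np.sin(x), theta * x, theta * np.sin(x)])

x_tr = rng.uniform(-3.0, 3.0, 2000)
th_tr = rng.uniform(0.0, 2.0, 2000)
y_tr = th_tr * np.sin(x_tr)  # candidate subgrid closures, indexed by theta
w, *_ = np.linalg.lstsq(features(x_tr, th_tr), y_tr, rcond=None)

def ml_closure(x, theta):
    return float(features(np.atleast_1d(x), np.full(1, theta)) @ w)

# --- Step 2 (online): couple the surrogate to the resolved dynamics and
# tune theta against a long-term statistic (time-mean of x^2 under
# stochastic forcing), not against instantaneous tendency errors.
def long_term_stat(tendency, n=2000, dt=0.01, seed=1):
    r = np.random.default_rng(seed)
    x, acc = 0.0, 0.0
    for _ in range(n):
        x += dt * tendency(x) + np.sqrt(dt) * r.normal(0.0, 0.5)
        acc += x * x
    return acc / n

target = long_term_stat(truth_tendency)

# Simple grid search over theta for the online metric.
thetas = np.linspace(0.0, 2.0, 11)
errs = [abs(long_term_stat(lambda x, t=t: -x + ml_closure(x, t)) - target)
        for t in thetas]
theta_star = thetas[int(np.argmin(errs))]
print(theta_star)  # close to 0.8, recovering the truth's subgrid amplitude
```

The point of the sketch is the division of labor: the offline fit learns the whole theta-indexed family of closures at once, so the online step only has to search over the low-dimensional parameter theta rather than retrain the ML model inside the coupled simulation.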