Fleur Couvreux et al.

The development of parameterizations is a major task in the development of weather and climate models. Model improvement has been slow over the past decades, owing to the difficulty of encompassing key physical processes in parameterizations, but also of calibrating, or 'tuning', the many free parameters involved in their formulation. Machine learning techniques have recently been used to speed up the development process. While some studies propose replacing parameterizations with data-driven neural networks, we argue instead that keeping physical parameterizations is key to the reliability of climate projections. In this paper we propose to harness machine learning to improve physical parameterizations. In particular, we use Gaussian process-based methods from uncertainty quantification to calibrate the model free parameters at the process level. To achieve this, we focus on the comparison of single-column simulations and reference large-eddy simulations over multiple boundary-layer cases. Our method returns all values of the free parameters consistent with the references and any structural uncertainties, allowing a reduced domain of acceptable values to be considered when tuning the 3D global model. This tool makes it possible to disentangle deficiencies due to poor parameter calibration from intrinsic limits rooted in the parameterization formulations. This paper describes the tool and the philosophy of tuning in single-column mode. Part 2 shows how the results from our process-based tuning can help in the 3D global model tuning.
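
As a rough illustration of the Gaussian process-based calibration described above, the sketch below trains a GP emulator on a small design of single-column model (SCM) runs, compares its predictions with an LES reference metric through an implausibility measure, and retains the parameter values that are not ruled out. The scalar metric run_scm_metric, the parameter ranges, the reference value and the error budget are all hypothetical placeholders, not the authors' actual tool or metrics.

```python
# Minimal sketch of GP-emulator-based history matching for SCM calibration.
# All numbers and the SCM surrogate below are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_scm_metric(theta):
    # Placeholder for one SCM run reduced to a scalar process metric
    # (e.g. a boundary-layer-depth-like quantity); synthetic here.
    return 1000.0 * theta[0] + 200.0 * np.sin(3.0 * theta[1])

# 1) Design: sample the free parameters (normalized to [0, 1]) and run the SCM.
X_design = rng.uniform(0.0, 1.0, size=(60, 2))
y_design = np.array([run_scm_metric(x) for x in X_design])

# 2) Emulate the SCM metric with a Gaussian process.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_design, y_design)

# 3) Implausibility against the LES reference, combining emulator,
#    observation (LES) and structural/tolerance-to-error variances.
z_les, var_les, var_struct = 850.0, 40.0**2, 60.0**2   # illustrative values
X_cand = rng.uniform(0.0, 1.0, size=(20000, 2))
mean, std = gp.predict(X_cand, return_std=True)
implaus = np.abs(mean - z_les) / np.sqrt(std**2 + var_les + var_struct)

# 4) Keep the Not-Ruled-Out-Yet (NROY) space: parameter values not clearly
#    inconsistent with the reference (3-sigma rule).
nroy = X_cand[implaus < 3.0]
print(f"NROY fraction: {len(nroy) / len(X_cand):.2%}")
```

In practice the reduced NROY domain, rather than a single "best" parameter vector, is what would be carried over to the 3D global model tuning.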

Olivier Audouin et al.

The representation of stable boundary layers (SBLs) still challenges the turbulence parameterizations implemented in current weather and climate models. The present work assesses whether these model deficiencies reflect calibration choices or intrinsic limits in currently used turbulence parameterization formulations and implementations. This question is addressed for the ARPEGE-Climat 6.3 CNRM atmospheric model in a single-column model/large-eddy simulation (SCM/LES) comparison framework, using the history matching with iterative refocusing statistical approach. The GABLS4 case, which samples a nocturnal strong SBL observed at Dome C, Antarctic Plateau, is used. The standard calibration of the ARPEGE-Climat 6.3 turbulence parameterization leads to an SBL that is too deep and a low-level jet that is too high, and misses the nocturnal wind rotation. This behavior is found for both low and high vertical resolution model configurations. The statistical tool then proves that these model deficiencies reflect a poor parameterization calibration rather than intrinsic limits of the parameterization formulation itself. In particular, the role of two lower bounds that were heuristically introduced during the parameterization implementation, to increase mixing in the free troposphere and to avoid runaway cooling in snow- or ice-covered regions, is emphasized. The statistical tool identifies the space of the parameterization free parameters compatible with the LES reference, accounting for the various sources of uncertainty. This space is non-empty, thus proving that the ARPEGE-Climat 6.3 turbulence parameterization contains the required physics to capture the GABLS4 SBL. The SCM framework is also used to validate the statistical framework, and a few guidelines for its use in parameterization development and calibration are discussed.
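
The history matching with iterative refocusing approach mentioned above can be sketched, under similar assumptions, as a loop of "waves": each wave runs the SCM on a design drawn from the current Not-Ruled-Out-Yet (NROY) space, refits the emulator, and discards newly implausible parameter values. Again, run_scm_metric, the GABLS4-like target and the variance terms below are illustrative stand-ins, not the configuration used in the paper.

```python
# Hedged sketch of iterative refocusing ("waves") in history matching.
# The SCM surrogate, target and error budget are assumptions for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def run_scm_metric(theta):
    # Stand-in for an SCM run reduced to one GABLS4-like scalar metric
    # (e.g. low-level-jet height); synthetic for this sketch.
    return 300.0 + 400.0 * theta[0] * theta[1]

z_ref, var_obs, var_struct = 420.0, 15.0**2, 25.0**2   # illustrative values
candidates = rng.uniform(0.0, 1.0, size=(50000, 2))    # initial parameter space

for wave in range(3):
    if len(candidates) == 0:
        break  # empty NROY space: points to structural limits, not calibration

    # Run the SCM on a small design drawn from the current NROY space.
    idx = rng.choice(len(candidates), size=min(40, len(candidates)), replace=False)
    design = candidates[idx]
    y = np.array([run_scm_metric(x) for x in design])

    # Refit the emulator on this wave's runs and score all remaining candidates.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(design, y)
    mean, std = gp.predict(candidates, return_std=True)
    implaus = np.abs(mean - z_ref) / np.sqrt(std**2 + var_obs + var_struct)

    # Refocus: keep only candidates not ruled out at the 3-sigma level.
    candidates = candidates[implaus < 3.0]
    print(f"wave {wave + 1}: NROY size = {len(candidates)}")
```

A non-empty final NROY space, as found for the GABLS4 case above, indicates that the parameterization can be calibrated to match the LES reference; an empty one would instead point to limits of the formulation itself.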