Evaluation of sub-hourly MRMS quantitative precipitation estimates in
mountainous terrain using machine learning
Abstract
The Multi-Radar Multi-Sensor (MRMS) product integrates radar, atmospheric
model, and gage data at high spatiotemporal resolution over the
contiguous United States. MRMS is subject to various sources of
measurement error, especially in complex terrain. The goal of this study
is to provide a framework for understanding the uncertainty of MRMS in
mountainous areas with limited observations.
We evaluate 8-hour time series samples of MRMS 15-minute intensity by
comparing them to 204 gages located in the mountains of Colorado. This
analysis shows that the MRMS surface precipitation rate product tends to
overestimate rainfall, with a median normalized root mean squared error
(RMSE) of 42\% of the maximum MRMS 15-minute intensity.
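Read concretely, the per-sample metric can be written as (notation ours:
$R_i$ and $G_i$ are the MRMS and gage 15-minute intensities at step $i$
of the $n = 32$ steps in an 8-hour sample):
\[
\mathrm{nRMSE} = \frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(R_i - G_i\right)^{2}}}{\max_{i} R_i},
\]
with the 42\% figure being the median of this quantity across samples.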
For each time series sample, features describing the physical
characteristics that influence MRMS performance are calculated from the
topography, the surrounding storms, and the rainfall observed at the
gage location.
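For illustration only, since the abstract does not enumerate the feature
set, per-sample predictors of this kind might be assembled as follows;
every name and choice below is an assumption, not the study's actual
feature set:
\begin{verbatim}
# Illustrative per-sample features; names and choices are assumptions.
import numpy as np

def sample_features(gage_ts_mmhr, elevation_m, radar_distance_km):
    """Features for one 8-hour sample of 15-minute gage intensities."""
    return {
        "max_intensity_mmhr": float(np.max(gage_ts_mmhr)),     # storm peak
        "total_depth_mm": float(np.sum(gage_ts_mmhr) * 0.25),  # rate -> depth
        "elevation_m": elevation_m,                            # terrain
        "radar_distance_km": radar_distance_km,                # coverage proxy
    }
\end{verbatim}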
A gradient-boosting regressor is trained on these features and optimized
with a quantile loss, using the normalized RMSE as the target, to capture
nonlinear relationships between the features and a range of expected
error. The model is then used to predict this range of error throughout
the mountains of Colorado during the warm months of a 6-year period,
yielding a spatiotemporally varying error model of MRMS sub-hourly
precipitation rates.
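A minimal sketch of this step, assuming a scikit-learn implementation
with illustrative quantile levels and placeholder data (none of which are
confirmed by the text):
\begin{verbatim}
# Minimal sketch: quantile-loss gradient boosting to bound a range of error.
# scikit-learn is an assumption; quantile levels 0.1/0.9 are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 5))   # placeholder feature matrix (one row per sample)
y = rng.random(500)        # placeholder normalized-RMSE targets

# One regressor per quantile brackets the predicted error range.
lower = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, y)

lo, hi = lower.predict(X), upper.predict(X)   # per-sample error bounds
\end{verbatim}
Fitting two quantiles rather than a single conditional mean is what turns
the regression into a predicted range of error.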
Mapping this dataset by aggregating the normalized RMSE over time reveals
that areas farther from radar sites and in higher-elevation terrain show
consistently greater error. However, the model predicts larger
variability in performance in these regions than alternative error
assessments do.