Flood Frequency Sampling Error: Insights from Regional Analysis,
Stochastic Storm Transposition, and Physics-based Modeling
Abstract
Flood Frequency Analysis (FFA) typically relies on fitting a probability
distribution to annual maximum peak flows (AMPs) to estimate the
frequency of various flood magnitudes. It is generally assumed that
longer observational records enhance the reliability of FFA. In this
study, we challenge this assumption by examining the Kickapoo watershed
in the north-central United States, where at-site FFA is susceptible to
significant sampling errors despite a relatively long record of
observations (90 years). We demonstrate that three exceptionally large
events, which have only a 1.7% chance of occurring in a single watershed,
significantly affect extreme quantiles and their associated confidence
intervals. We argue that FFA using a weighted skewness coefficient,
as recommended by the Bulletin 17C guidelines, can yield more reliable
flood frequency estimates than at-site methods by combining both local
and regional characteristics. We also leverage a process-driven FFA
approach, which integrates Stochastic Storm Transposition (SST) with
Monte Carlo (MC) physics-based hydrologic modeling (SST-MC), to gain
additional insights into flood frequency. We employ the WRF-Hydro
hydrologic model and a process-based calibration approach with Fusion, a
new high-resolution forcing dataset over the continental United States.
By expanding the sample size and incorporating watershed-scale and
regional information, SST-MC can effectively reduce the sensitivity of
FFA to individual extreme events and provide more reliable frequency
estimates. The SST-MC method also adds physical interpretations by
quantifying internal variability in flood frequency. Our study
highlights the benefits of integrating regional analysis and advanced
physics-based hydrologic modeling techniques into traditional FFA.
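
To make the rarity of such an event cluster concrete, the sketch below computes the probability of seeing three or more large floods in a 90-year record under independent annual trials. The annual exceedance probability p_event is a hypothetical placeholder, not the value behind the paper's 1.7% figure, which depends on the specific events analyzed.

```python
# Hypothetical illustration: how unlikely is a cluster of extreme floods in
# one gauge record? p_event below is a placeholder, NOT the probability
# underlying the paper's 1.7% figure.
from scipy.stats import binom

n_years = 90      # length of the observed AMP record (from the abstract)
p_event = 0.01    # assumed annual exceedance probability of one "large" event

# P(3 or more such events in n_years), assuming independent annual trials
p_cluster = binom.sf(2, n_years, p_event)
print(f"P(>=3 events in {n_years} yr at p={p_event}): {p_cluster:.3f}")
```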
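The weighted skewness coefficient mentioned above follows the standard Bulletin 17B/17C-style inverse-MSE weighting of station and regional skew. The sketch below shows that weighting; the skew and mean-square-error values are placeholders, not results from the Kickapoo analysis.

```python
# Inverse-MSE skew weighting in the style of Bulletin 17B/17C: station and
# regional skew coefficients are combined in proportion to the inverse of
# their mean square errors. All numbers below are illustrative.
def weighted_skew(station_skew, regional_skew, mse_station, mse_regional):
    """Weight each skew estimate by the inverse of its mean square error."""
    return ((mse_regional * station_skew + mse_station * regional_skew)
            / (mse_station + mse_regional))

# Example: a noisy at-site skew is pulled toward a better-constrained
# regional value because the regional MSE is smaller.
print(weighted_skew(station_skew=0.8, regional_skew=-0.2,
                    mse_station=0.30, mse_regional=0.10))  # -> 0.05
```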
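Finally, a minimal Monte Carlo sketch of the sampling-error argument itself, not the paper's SST-MC workflow: synthetic annual-maximum records are drawn from a known Gumbel parent, each record is refit, and the spread of the estimated 100-year (1% AEP) flood is compared between a 90-year gauge record and a much larger sample of the size SST-MC effectively provides. The Gumbel parent and its parameters are assumptions chosen for illustration.

```python
# Monte Carlo sketch of FFA sampling error (illustrative only): fit synthetic
# AMP records from a known Gumbel parent and examine how the estimated 1% AEP
# quantile spreads as a function of record length.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(42)
true_q100 = gumbel_r.ppf(0.99, loc=500.0, scale=150.0)  # "true" 100-yr peak

def estimated_q100_spread(record_length, n_trials=2000):
    """Fit Gumbel to synthetic records; return estimated 1% AEP quantiles."""
    estimates = np.empty(n_trials)
    for i in range(n_trials):
        amps = gumbel_r.rvs(loc=500.0, scale=150.0, size=record_length,
                            random_state=rng)
        loc, scale = gumbel_r.fit(amps)
        estimates[i] = gumbel_r.ppf(0.99, loc=loc, scale=scale)
    return estimates

for n in (90, 1000):  # 90-yr gauge record vs. an SST-MC-sized sample
    q = estimated_q100_spread(n)
    print(f"n={n:4d}: true={true_q100:.0f}, "
          f"5-95% range of estimates: {np.percentile(q, 5):.0f}"
          f"-{np.percentile(q, 95):.0f}")
```

The wide 5th-95th percentile range at n = 90 relative to n = 1000 mirrors the abstract's point: expanding the effective sample size reduces the sensitivity of extreme quantile estimates to individual large events.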