Abstract
Modelling the spatial distribution of infrasound attenuation (or
transmission loss, TL) is key to understanding and interpreting
microbarometer observations. Such predictions enable the reliable
assessment of infrasound source characteristics such as ground
pressure levels associated with earthquakes, man-made or volcanic
explosion properties, and ocean-generated microbarom wavefields.
However, the computational cost of full-waveform modelling tools,
such as Parabolic Equation (PE) codes, often prevents the
exploration of a large parameter space, i.e., variations in wind models,
source frequency, and source location, when deriving reliable estimates
of source or atmospheric properties, in particular for real-time and
near-real-time applications. Many studies therefore rely on analytical
regression-based heuristic TL equations that neglect complex vertical
wind variations and range-dependent variations in atmospheric
properties, which introduces significant uncertainties in the predicted
TL. In this contribution, we propose a deep learning approach
trained on a large set of wavefields generated using PE
simulations and realistic atmospheric winds to predict infrasound
ground-level amplitudes up to 1000 km from a ground-based source.
Realistic range-dependent atmospheric winds are constructed by combining
the ERA5, NRLMSISE-00, and HWM-14 atmospheric models with small-scale
gravity-wave perturbations computed using the Gardner model. Given a set
of wind profiles as input, our new modelling framework provides a fast
(0.05 s runtime) and reliable (~5 dB average error relative to
PE simulations) estimate of the infrasound TL.
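To make the surrogate's input/output contract concrete, the following Python sketch illustrates how such a model could be called: a stack of range-dependent wind profiles and a source frequency go in, ground-level TL estimates on a range grid come out. The function name `predict_tl`, the array shapes, and the toy spherical-spreading fill-in are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def predict_tl(wind_profiles: np.ndarray, frequency_hz: float) -> np.ndarray:
    """Placeholder for a trained deep-learning TL surrogate.

    wind_profiles : (n_ranges, n_altitudes) effective wind speeds [m/s]
                    along the propagation path (hypothetical layout).
    frequency_hz  : source frequency [Hz].
    Returns TL [dB] at each of n_ranges ground points out to 1000 km.

    The real model would be a neural network trained on PE simulations;
    here we substitute a toy spherical-spreading baseline,
    TL = 20*log10(r / 1 km), purely to fix the shapes and units.
    """
    n_ranges = wind_profiles.shape[0]
    ranges_km = np.linspace(1.0, 1000.0, n_ranges)
    return 20.0 * np.log10(ranges_km)  # geometric spreading only

# Example: 100 range steps, 120 altitude levels of synthetic winds.
winds = np.random.default_rng(0).normal(0.0, 20.0, size=(100, 120))
tl = predict_tl(winds, frequency_hz=0.5)
print(tl.shape)  # (100,)
```

In a real deployment, the sub-second runtime quoted above is what would allow such a call to be evaluated over many wind models, frequencies, and source locations in near-real time.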