Antonio Scala et al.

Tsunamis are rare, destructive events whose generation, propagation, and coastal impact involve several complex physical phenomena. Most tsunami applications, such as probabilistic tsunami hazard assessment, make extensive use of large sets of numerical simulations and face a systematic trade-off between computational cost and modelling accuracy. For seismogenic tsunamis, the source is often modelled as an instantaneous sea-floor displacement due to the static fault slip distribution, while open-sea propagation is computed through a shallow water approximation. Here, through 1D coupled earthquake-tsunami simulations of large M>8 earthquakes in a Tohoku-like subduction zone, we tested under which conditions the instantaneous source (IS) and/or the shallow water (SW) approximations can simulate the whole tsunami evolution with sufficient accuracy. As a reference we used a time-dependent (TD), multi-layer, non-hydrostatic (NH) model whose source features, duration and size, are based on dynamic seismic rupture simulations with realistic stress drop and rigidity within a Tohoku-like environment. We showed that slow ruptures generating slip in the shallow part of subduction slabs (e.g. tsunami earthquakes), as well as very large events with an along-dip extent comparable to the trench-coast distance (e.g. megathrust events), require TD-NH modelling, in particular when the bathymetry close to the coast features sharp depth gradients. Conversely, deeper, higher-stress-drop events can be accurately modelled through an IS-SW approximation. Finally, we showed to what extent inundation depends on the geometrical features of the bathymetry: (i) steeper bathymetries generate larger inundations, and (ii) a resonance mechanism emerges, with run-up amplification associated with larger source sizes on flatter bathymetries.
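To make the IS-SW end-member concrete, the sketch below integrates the 1D linear shallow water equations (eta_t + (h u)_x = 0, u_t + g eta_x = 0) on a staggered grid, applying the instantaneous source as the initial free-surface elevation. All values (flat bathymetry, Gaussian uplift, grid size) are illustrative assumptions, not the configuration of the study; a TD-NH model would instead feed the time history of sea-floor motion into a dispersive, multi-layer solver.

```python
import numpy as np

# Minimal 1D linear shallow-water (SW) solver with an instantaneous
# source (IS): the static sea-floor uplift is copied onto the free
# surface at t = 0. All parameters are illustrative assumptions.
g = 9.81                       # gravity (m/s^2)
L, nx = 400e3, 4000            # domain length (m), number of cells
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
h = np.full(nx, 4000.0)        # flat 4 km bathymetry (placeholder)
dt = 0.5 * dx / np.sqrt(g * h.max())     # CFL-limited time step

# IS approximation: Gaussian static uplift as initial elevation eta(x, 0).
eta = 1.0 * np.exp(-((x - 200e3) / 20e3) ** 2)
u = np.zeros(nx + 1)           # velocity on cell faces (staggered grid)
hf = np.r_[h[0], 0.5 * (h[:-1] + h[1:]), h[-1]]   # depth at faces

for _ in range(3000):
    # momentum:   u_t = -g * eta_x   (reflective walls: u = 0 at ends)
    u[1:-1] -= dt * g * np.diff(eta) / dx
    # continuity: eta_t = -(h u)_x
    eta -= dt * np.diff(hf * u) / dx
    # A time-dependent (TD) source would instead add the sea-floor
    # uplift rate to the continuity equation at each step.

print(f"max elevation after {3000 * dt:.0f} s: {eta.max():.2f} m")
```

The forward-backward update (momentum with the old elevation, then continuity with the new velocity) is stable under the CFL condition used above; the initial pulse splits into two half-amplitude long waves travelling at sqrt(g h).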
Understanding the mechanical processes occurring on faults requires detailed information on microseismicity, which can be enhanced today by advanced earthquake detection techniques. The problem is challenging when the seismicity rate is low and most earthquakes occur at depth. In this study, we compare three detection techniques, the autocorrelation-based FAST, the machine-learning-based EQTransformer, and the template-matching EQCorrScan, to assess their ability to improve catalogs of seismic sequences in the normal fault system of the Southern Apennines (Italy), using data from the Irpinia Near Fault Observatory (INFO). We found that integrating the machine learning and template matching detectors, with the former providing templates for the cross-correlation, largely outperforms autocorrelation and machine learning alone, enriching the automatic and manual catalogs by factors of 21 and 7, respectively. Since the output catalogs can be polluted by many false positives, we applied a refined event selection based on the cumulative distribution of detection similarity levels, allowing us to clean the detection lists and analyze final subsets dominated by real events. The magnitude of completeness decreases by more than one unit with respect to the reference value for the network. We report b-values for the sequences that are smaller than the average, likely corresponding to larger differential stresses than for the background seismicity of the area. For all the analyzed sequences, we found that the main events are preceded by foreshocks, indicating a possible preparation process for mainshocks at sub-kilometric scales.
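As a rough illustration of the template matching stage (which EQCorrScan implements in a far more complete and optimized form), the sketch below slides a fully normalized cross-correlation of a template waveform along a continuous trace and declares detections where the correlation exceeds a median-absolute-deviation (MAD) based threshold, a criterion commonly used in template matching. The synthetic trace and the threshold factor are assumptions for the example, not the study's actual data or settings.

```python
import numpy as np

def normalized_xcorr(template, data):
    """Sliding, fully normalized cross-correlation (values in [-1, 1])
    of a short template against a longer continuous trace."""
    n, m = len(template), len(data) - len(template) + 1
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    cc = np.zeros(m)
    for i in range(m):                 # plain loop for clarity, not speed
        w = data[i:i + n] - data[i:i + n].mean()
        den = tn * np.sqrt((w * w).sum())
        cc[i] = (t * w).sum() / den if den > 0 else 0.0
    return cc

def detect(cc, mad_factor=8.0):
    """Indices where |cc| exceeds a MAD-based detection threshold."""
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.flatnonzero(np.abs(cc) >= mad_factor * mad)

# Synthetic demo: a noisy trace containing two copies of the template.
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
data = 0.1 * rng.standard_normal(5000)
for onset in (1200, 3500):
    data[onset:onset + 100] += template
print(detect(normalized_xcorr(template, data)))   # clusters near 1200, 3500
```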
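Likewise, the drop in magnitude of completeness and the b-value comparison rest on standard catalog statistics. A minimal sketch, assuming the maximum-curvature estimate of Mc and the Aki (1965) maximum-likelihood b-value with the usual binning correction (the synthetic Gutenberg-Richter catalog is a placeholder):

```python
import numpy as np

def mc_max_curvature(mags, dm=0.1):
    """Magnitude of completeness via the maximum-curvature method:
    the modal bin of the frequency-magnitude distribution."""
    edges = np.arange(mags.min(), mags.max() + dm, dm)
    counts, bin_edges = np.histogram(mags, edges)
    return bin_edges[np.argmax(counts)]

def b_value_aki(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value, corrected for magnitude
    binning: b = log10(e) / (mean(M) - (Mc - dm/2))."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic catalog with true b = 1, complete above Mc = 1.0,
# binned to 0.1 magnitude units.
rng = np.random.default_rng(1)
mags = np.round(rng.exponential(scale=1 / np.log(10), size=5000) + 0.95, 1)
mc = mc_max_curvature(mags)
print(f"Mc = {mc:.1f}, b = {b_value_aki(mags, mc):.2f}")   # expect b near 1
```

In this framing, lowering Mc by one unit (as the enriched catalogs allow) adds roughly a tenfold increase in usable events for a b = 1 population, which is what makes the sequence-level b-value comparison statistically meaningful.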