Matteo Picozzi

and 4 more

We consider approximately 32,000 microearthquakes that occurred between 2005 and 2016 in central Italy to investigate the crustal strength before and after the three largest earthquakes of the 2016 seismic sequence (i.e., the Mw 6.2, 24 August 2016 Amatrice, the Mw 6.1, 26 October 2016 Visso, and the Mw 6.5, 30 October 2016 Norcia earthquakes). We monitor the spatio-temporal deviations of the scaling between the seismic moment, M0, and the radiated energy, ES, with respect to a model calibrated on background seismicity. These deviations, defined here as the Energy Index (EI), allow us to identify, in the years following the Mw 6.1, 2009 L’Aquila earthquake, a progressive evolution of the dynamic properties of the microearthquakes and the existence of high-EI patches close to the Amatrice earthquake hypocenter. We show the existence of a crustal volume with high EI even before the Mw 6.5 Norcia earthquake. Our results support the hypothesis that the Norcia earthquake nucleated at the boundary of a large patch that was highly stressed by the two previous mainshocks of the sequence. Furthermore, we highlight the interaction among the mainshocks, both in terms of EI and of the mean loading shear stress associated with microearthquakes occurring within the crustal volumes that contain the mainshock hypocenters. Our study shows that the dynamic characteristics of microearthquakes can serve as beacons of stress change in the crust and can thus be exploited to monitor the seismic hazard of a region and to help intercept the preparation phase of large earthquakes.
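
As a rough illustration of this kind of deviation metric, the sketch below assumes a linear background scaling between log10(M0) and log10(ES) calibrated on background seismicity; the function names and the linear model are illustrative assumptions, not the authors' exact formulation of the Energy Index.

```python
# Hypothetical sketch of an energy-index-style deviation metric, assuming a
# linear background scaling log10(ES) = a + b * log10(M0) fitted to
# background seismicity; not the authors' exact formulation.
import numpy as np

def calibrate_background(log_m0, log_es):
    """Fit the background scaling between seismic moment and radiated energy."""
    b, a = np.polyfit(log_m0, log_es, 1)  # slope, intercept
    return a, b

def energy_index(log_m0, log_es, a, b):
    """Deviation of observed radiated energy from the background prediction."""
    return log_es - (a + b * log_m0)
```

Positive values would flag events radiating more energy than expected for their seismic moment, negative values the opposite; tracking such deviations in space and time is the spirit of the analysis described above.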

Jannes Münchmeyer

and 3 more

Recent research has shown that machine learning, in particular deep learning, can be applied with great success to a multitude of seismological tasks, e.g., phase picking and earthquake localization. One reason is that neural networks can be used as feature extractors, generating generally applicable representations of complex data. We employ a convolutional network to condense earthquake waveforms from a varying set of stations into a high-dimensional vector, which we call the event embedding. For each event the embedding is calculated from instrument-corrected waveforms beginning at the first P pick and is updated continuously as data arrive. We use event embeddings for real-time magnitude estimation, earthquake localization, and ground motion prediction, which are central tasks for early warning and for guiding rapid disaster response. We evaluate our model on the IPOC catalog for Northern Chile, containing ∼100,000 events with low-uncertainty hypocenters and magnitude estimates. We split the catalog sequentially into a training and a test set, with the 2014 Iquique event (Mw 8.1) and its fore- and aftershocks contained in the test set. In preliminary results, the system achieves a test RMSE of 0.28 magnitude units (m.u.) and 35 km hypocentral distance 1 s after the first P arrival at the nearest station, which improves to 0.17 m.u. and 22 km after 5 s, and to 0.11 m.u. and 15 km after 25 s. As applications in the hazard domain require proper uncertainty estimates, we propose a probabilistic model using Gaussian mixture density networks. By analyzing the predictions in terms of their calibration, we show that the model exhibits overconfidence, i.e., overly optimistic confidence intervals. We show that deep ensembles substantially improve calibration. To assess the limitations of our model and to elucidate the pitfalls of machine learning for early warning in general, we conduct an error analysis and discuss mitigation strategies. Despite the size of our catalog, we observe issues with two kinds of data sparsity. First, we observe increased residuals for the source parameters of the largest events, because training data for such events are scarce. Second, similar inaccuracies occur in areas without events of a certain size in the training catalog. We investigate the impact of these limitations on the Iquique fore- and aftershocks.
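
To make the probabilistic component concrete, the sketch below shows a Gaussian mixture density network head that maps a fixed-size event embedding to a distribution over a scalar target such as magnitude; the layer sizes, component count, and names are illustrative assumptions and not the authors' architecture.

```python
# Minimal Gaussian mixture density network head (illustrative, PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianMixtureHead(nn.Module):
    def __init__(self, embedding_dim=128, n_components=5):
        super().__init__()
        self.weights = nn.Linear(embedding_dim, n_components)   # mixture weight logits
        self.means = nn.Linear(embedding_dim, n_components)     # component means
        self.log_stds = nn.Linear(embedding_dim, n_components)  # component log std devs

    def forward(self, embedding):
        log_pi = F.log_softmax(self.weights(embedding), dim=-1)
        mu = self.means(embedding)
        sigma = torch.exp(self.log_stds(embedding)).clamp(min=1e-3)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of a scalar target (e.g. magnitude) under the mixture."""
    log_prob = torch.distributions.Normal(mu, sigma).log_prob(target.unsqueeze(-1))
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```

Calibration could then be improved, as described above, by training several such models independently and averaging their predictive mixtures, i.e. a deep ensemble.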

Dino Bindi

and 6 more

Although the non-uniqueness of the solution is commonly mentioned in studies that perform spectral decompositions to separate source and propagation effects, its impact on the interpretation of the results is often overlooked. The purpose of this study is to raise awareness of this important subject among modelers and users of the resulting models, and to evaluate the impact of strategies commonly applied to constrain the solution. In the first part, we study the connection between the source-station geometry of an actual data set and the properties of the design matrix that defines the spectral decomposition. We exemplify the analysis by considering a geometry extracted from the data set prepared for the benchmark Community Stress Drop Validation Study (Baltay et al., 2021). In the second part, we analyze two different strategies used to constrain the solutions. The first strategy assumes a reference site condition in which the average site amplification for a set of stations is constrained to values fixed a priori. The second strategy corrects the decomposed source spectra for unresolved global propagation effects. Using numerical analysis, we evaluate the impact on source scaling relationships of constraining the corner frequency of magnitude 2 events to 30 Hz when the true scaling deviates from this assumption. We show that this assumption can not only shift the overall seismic moment versus corner frequency scaling but can also affect the source parameters of larger events and modify their spectral shape.
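
As an illustration of the first constraint strategy, the sketch below sets up a single-frequency spectral decomposition, log10 A_ij ≈ SRC_i + SITE_j (after removing an assumed propagation term), and resolves the trade-off between source and site terms by forcing the average site amplification to a reference value fixed a priori; the function, the zero reference level, and the weighting are illustrative assumptions, not the processing actually used in the study.

```python
# Illustrative non-parametric decomposition at one frequency with a
# reference-site constraint (average site term forced to zero).
import numpy as np

def decompose(log_amp, event_idx, station_idx, n_events, n_stations, weight=10.0):
    log_amp = np.asarray(log_amp, dtype=float)
    event_idx = np.asarray(event_idx)
    station_idx = np.asarray(station_idx)
    n_obs = log_amp.size

    G = np.zeros((n_obs + 1, n_events + n_stations))
    G[np.arange(n_obs), event_idx] = 1.0               # source terms
    G[np.arange(n_obs), n_events + station_idx] = 1.0  # site terms
    d = np.append(log_amp, 0.0)

    # Constraint row: mean site amplification fixed to the a priori value (0 here)
    G[n_obs, n_events:] = weight / n_stations

    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m[:n_events], m[n_events:]                  # source terms, site terms
```

Changing the a priori reference level (or the set of reference stations) shifts the decomposed source and site spectra against each other, which is precisely the non-uniqueness whose impact on source scaling is evaluated in the study.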