Bin He and 6 more authors

Increasing deployment of dense arrays has facilitated detailed structural imaging for tectonic investigation, hazard assessment and resource exploration. Strong velocity heterogeneity and topographic variation must be taken into account during passive-source imaging. However, it is quite challenging for ray-based methods, such as Kirchhoff migration or the widely used teleseismic receiver function, to handle these complexities. In this study, we propose a 3-D passive-source reverse time migration strategy based on the spectral element method. It is realized by decomposing the time-reversed full elastic wavefield into amplitude-preserved vector P and S wavefields by solving the corresponding weak-form solutions, followed by a dot-product imaging condition to image the subsurface structures. This enables us to use regional 3-D migration velocity models and to take topographic variations into account, helping us locate reflectors at more accurate positions than traditional 1-D model-based methods such as teleseismic receiver functions. Two synthetic tests demonstrate the advantages of the proposed method in handling topographic variations and complex velocity heterogeneities. Furthermore, application to the Laramie array data using both teleseismic P and S waves enables us to identify several south-dipping structures beneath the Laramie basin in southeast Wyoming, which we interpret as the Cheyenne Belt suture zone and which agree with, and improve upon, previous geological interpretations.
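The final imaging step described above combines the decomposed vector wavefields sample by sample. A minimal sketch of that dot-product imaging condition, assuming the decomposed vector P and S wavefields are already available as NumPy arrays (the array layout and function name here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def dot_product_image(p_wavefield, s_wavefield):
    """Dot-product imaging condition for decomposed vector wavefields.

    p_wavefield, s_wavefield: arrays of shape (nt, nx, ny, nz, 3) holding
    the vector P wavefield and the time-reversed vector S wavefield
    (hypothetical layout; the last axis is the vector component).
    Returns an image of shape (nx, ny, nz):
        I(x) = sum over t of  P(x, t) . S(x, t)
    """
    # Contract the time axis and the vector-component axis simultaneously.
    return np.einsum('t...c,t...c->...', p_wavefield, s_wavefield)
```

Summing the component-wise product over time is what gives the "dot-product" condition its name; the sign of the result can additionally be used to distinguish polarity of converted phases.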

Bin He and 2 more authors

For seismographic stations with short acquisition durations, the signal-to-noise ratios (SNRs) of ambient noise cross-correlation functions (CCFs) are typically low, preventing us from accurately measuring surface wave dispersion curves or waveform characteristics. In addition, with low-quality CCFs, it is difficult to monitor temporal variations of subsurface physical states or to extract relatively weak signals such as body waves. In this study, we propose to use local attributes to improve the SNRs of ambient noise CCFs, which allows us to enhance the quality of CCFs for stations with limited acquisition duration. Two local attributes, local cross-correlation and local similarity, are used in this study. The local cross-correlation allows us to extend the dimensionality of daily CCFs at a computational cost similar to that of global cross-correlation. Taking advantage of this extended dimensionality, the local similarity is then used to measure the non-stationary similarity between the extended daily CCFs and a reference stacked trace, which enables us to design better stacking weights that enhance coherent features and attenuate incoherent background noise. Ambient noise recorded by several broadband stations from the USArray in North Texas and Oklahoma, the Superior Province Rifting EarthScope Experiment in Minnesota and Wisconsin, and a high-frequency nodal array deployed in the San Bernardino basin is used to demonstrate the performance of the proposed approach for improving the SNR of CCFs.
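The core idea of similarity-based stacking can be sketched compactly. The snippet below is a simplified stand-in, not the authors' local-attribute formulation: it uses a sliding-window normalized correlation as a proxy for local similarity, and the plain mean stack as the reference trace (window length, weight exponent, and function names are all assumptions for illustration):

```python
import numpy as np

def local_similarity(trace, reference, win=50):
    """Sliding-window normalized correlation between a daily CCF and a
    reference trace -- a simplified proxy for the local-similarity
    attribute (the window length `win` is an assumed parameter)."""
    n = len(trace)
    sim = np.zeros(n)
    half = win // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half)
        a, b = trace[lo:hi], reference[lo:hi]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sim[i] = np.dot(a, b) / denom if denom > 0 else 0.0
    return sim

def similarity_weighted_stack(daily_ccfs, win=50, power=2):
    """Weight each sample of each daily CCF by its local similarity to the
    plain mean stack, then restack; `power` sharpens the weights so that
    incoherent segments are attenuated more strongly."""
    reference = daily_ccfs.mean(axis=0)
    weights = np.array([np.clip(local_similarity(t, reference, win), 0, 1) ** power
                        for t in daily_ccfs])
    wsum = weights.sum(axis=0)
    wsum[wsum == 0] = 1.0  # avoid division by zero where all weights vanish
    return (weights * daily_ccfs).sum(axis=0) / wsum
```

Segments of a daily CCF that resemble the reference receive weights near one, while bursty, incoherent noise receives weights near zero, so coherent arrivals emerge with fewer stacked days.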

Jaewook Lee and 1 more author

An accurate estimation of shale permeability is essential to understand heterogeneous organic-rich shale reservoir rocks and to predict the complexity of pore fluid transport in them. However, predicting matrix permeability with traditional models is still challenging because they require information that is typically available only from core measurements. First, Kozeny’s equation (Kozeny, 1927) uses porosity and the specific surface area of solid grains; however, it is difficult to characterize specific surface area or grain size from well logs. Second, Herron’s method (Herron, 1987) has been used to predict permeability from the mineral contents provided by well log data in conventional sandstone reservoirs; however, its predictive accuracy is low for shales because of their different pore network structures. In this study, we estimate shale matrix permeability from wireline logs by combining exploratory data analysis (EDA) with nonlinear regression. First, we conduct a bivariate correlation analysis between permeability and rock properties in core measurements. According to the correlation and Shapley value sensitivity tests, we find that variation in porosity has the most significant effect on permeability, and we observe a nonlinear relationship between porosity and permeability. Second, we derive a nonlinear polylogarithmic function estimating permeability from porosity, comparing it to a multivariate linear regression on porosity and clay volume fraction. As a result, a cubic logarithmic function of porosity significantly improves the fit to the measured permeability values, outperforming the traditional methods. Moreover, we generate permeability logs from the calibrated porosity logs, and these likewise indicate improved shale permeability prediction. Since the porosity distribution can be inverted from seismic data, this approach can provide more accurate permeability estimation and more reliable fluid flow modeling for shale and mudrock.
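The two modeling ideas in this abstract, a Kozeny-type porosity-to-permeability relation and a cubic polynomial fit in log-log space, can be sketched as follows. This is an illustrative sketch only: the Kozeny–Carman constant `c = 5` is a conventional assumption, and the fitting routine mimics the described cubic logarithmic function rather than reproducing the authors' calibrated coefficients:

```python
import numpy as np

def kozeny_carman(phi, s_gr, c=5.0):
    """One common Kozeny-Carman form: k = phi^3 / (c * S^2 * (1 - phi)^2),
    with phi the porosity (fraction), S the specific surface area per unit
    grain volume (1/m), and c a shape/tortuosity constant (c = 5 is a
    conventional choice, assumed here). Returns permeability in m^2."""
    return phi**3 / (c * s_gr**2 * (1.0 - phi)**2)

def fit_cubic_log_permeability(phi, k):
    """Fit log10(k) as a cubic polynomial in log10(phi), mimicking the
    cubic logarithmic porosity-permeability function described above.
    Returns polynomial coefficients, highest degree first."""
    return np.polyfit(np.log10(phi), np.log10(k), deg=3)

def predict_permeability(coeffs, phi):
    """Evaluate the fitted cubic log-log model at new porosity values."""
    return 10.0 ** np.polyval(coeffs, np.log10(phi))
```

In practice `phi` and `k` would come from the core-calibrated porosity logs and core permeability measurements; the fitted coefficients can then be applied to log-derived or seismically inverted porosity volumes.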
Large-scale injection of carbon dioxide (CO2) into the earth started in the 1980s for enhanced oil recovery (EOR). Geological sequestration (injection and storage) of industrial CO2 to reduce greenhouse gas emissions began in the 1990s, using saline aquifers and depleted hydrocarbon reservoirs. Today the two processes are being co-optimized as Carbon Capture, Use and Storage (CCUS). As a result, the time-lapse seismology community has gathered about 20-30 years of experience monitoring CO2 injection projects of various types and sizes, primarily using controlled active seismic sources and Large-N receiver arrays. To help meet IPCC projections relating 2 °C of global temperature change to CO2 emissions, society would need to scale up current CO2 injection rates by a factor of 250, from about 40 megatons to 10 gigatons per year. CCUS regulations covering monitoring and verification requirements are in various stages of development around the world, including the EU CCS Directive, US EPA rules, and international ISO standards. A typical commercial-scale CCUS project injects > 1 Mt CO2/yr for > 20 years. After CO2 injection ceases, the project operator must continue to monitor the post-injection CO2 plume behavior for 20-50 years to establish regulatory compliance in order to “hand over” the project to the regulator. After handover, the regulator is then responsible for monitoring post-injection plume stability for another 20-30 years (in the US there is no handover; the operator retains all project and monitoring responsibilities). These CCUS requirements imply that we will need to monitor CO2 projects, both during and after injection, for 50-100 years or more. The time-lapse seismology community simply has no experience with such long-term monitoring periods, in terms of data acquisition, processing, imaging, and minimizing environmental footprint and costs.
It would be neither practical nor affordable to conduct a full-scale 4D seismic survey every year for 100 years. These long-term monitoring requirements thus present both challenges and new research opportunities. I will present some experimental results I have obtained over the past decade to help develop long-term, near real-time, continuous monitoring of CO2 injection projects using ambient seismic noise (ASN).
Over the past 60+ years, an enormous amount of exploration geophysics survey data has been collected around the globe, the majority of which is high-quality 2D and 3D seismic data acquired by the petroleum energy industry. Much of this ‘legacy’ data still has significant commercial value today and in the future, for hydrocarbon exploration, gas storage (methane, helium, hydrogen…), groundwater, minerals, CO2 sequestration, geothermal, and other purposes. However, there is likely a subset of this exploration data that is of little further commercial value but may be of immense value to academic, government and industry researchers. This may include very long 2D seismic lines recorded in frontier exploration areas that turned out to be non-prospective, for example along convergent-margin subduction zones or major continental tectonic fault zones that lack major sedimentary basins. Shared access to this subset of legacy data would provide an extremely useful opportunity and resource for academic researchers and others, much as shared earthquake data via the IRIS network has revolutionized our understanding of earthquakes, faults and tectonics. Several challenges would need to be overcome before such legacy data could be shared widely among the broader geophysical community. Much of this legacy data is archived on old magnetic tape media that is now physically degrading, making it difficult to recover the data. The data would need to be collected, recovered, QC’ed, archived with associated metadata, and stored on modern digital data storage systems such as the IRIS DMC, newer cloud-based systems like OSDU, or other options. The data archival, maintenance and support for a shared data distribution system would require a sustainable business model and funding from all of the data stakeholders, including government agencies like NSF, academic universities, and industry users.
Data use agreements would need to be structured to ensure that data is used for non-commercial purposes as appropriate, that data users respect the various legal terms and conditions, and that data is not shared freely among individual recipients without the approval of the host data center.