Recently, many in the space weather community have taken up the cause to advocate for an orphan among our own. It’s an important fight – for ground-based sensor networks. Although ground-based sensors are used across all disciplines of space weather, in terms of long-term support, they have no single clear home in any United States agency or department. This has resulted in an ongoing struggle throughout the community to maintain important space weather sensors and networks.

The Promoting Research and Observations of Space Weather to Improve the Forecasting of Tomorrow (PROSWIFT) Act of 2020 (Public Law 116-181) attempts to clarify Federal roles and responsibilities, stating that “… ground-based observations provide crucial data necessary to understand, forecast, and prepare for space weather phenomena”, observations which it defines as “radars, lidars, magnetometers, neutron monitors, radio receivers, aurora and airglow imagers, spectrometers, interferometers, and solar observatories.”

The data from this list of sensors and arrays support research across the space weather domains, including magnetospheric, ionospheric, and atmospheric science. Networks are run by governmental, academic, and commercial providers, and are used to support a range of end-users, from aviation to the power sector. Given the wide range of applications, it’s not surprising that no single entity has primary custody.

In separate sections of PROSWIFT, sustainment of these instruments is assigned to “The Director of the National Science Foundation, the Director of the United States Geological Survey, the Secretary of the Air Force, and, as practicable in support of the Air Force, the Secretary of the Navy”, who are directed to “maintain and improve ground-based observations of the Sun, as necessary and advisable”, and also to the National Oceanic and Atmospheric Administration (NOAA), as the civil operational space weather agency responsible for maintaining “ground-based… assets to provide observations needed for space weather forecasting, prediction, and warnings”.

While PROSWIFT’s clarification of federal responsibilities is welcome, it highlights a problem: the “ownership” of the issue of long-term sustainability of such varied instruments.

We can start to unravel the ownership problem by understanding its history. One complication to an easy definition is that ground-based sensor networks support both space weather science and operations. The National Science Foundation (NSF) has a long history of supporting novel instrument development, small arrays of sensors placed for scientific research (fundamental research is the foundation of NSF’s mandate), and mid- and larger-scale facilities. But the needs of science do not necessarily intersect the needs of operations, and neither do their requirements in terms of engineering and support. Operational sensors, in many cases, are entirely different from scientific sensors.

Like scientific arrays, operational sensors must provide the “right” data – accurate and relevant – but the delivery of those data must also be timely, consistent, and reliable. In other words, the data must be usable for space weather predictions, forecasts, and alerts. The United States Geological Survey (USGS) is one example of a federal provider of operational ground-based data. The commercial sector, by mandate of PROSWIFT, is another.

Whether scientific or operational, ground-based networks need to be supported and maintained long-term to fulfill their missions.
It is more expensive to shut down and rebuild an array than to keep it operating, and strategic planning is required to prioritize and balance needs across the space weather enterprise.

Those taking up the initiative to support ground-based sensors span the space weather enterprise, reflecting the interdisciplinary and cross-sector need for these data. In addition to a myriad of white papers submitted to the Heliophysics Decadal Survey (e.g., Hartinger et al. and Bhatt et al.) and publications (see Engebretson and Zesta, 2017, and Bain et al., 2023), advisory groups such as the Space Weather Advisory Group (SWAG) and the National Academies Space Weather Roundtable, both put into place by the PROSWIFT Act itself, have taken up the cause. The SWAG, in a public meeting on March 20, 2023 (https://www.weather.gov/swag), called for a “paradigm shift”, agreeing upon a recommendation that there is a need to “Provide long-term support for operational ground-based and airborne sensors and networks”.

It’s clear that these data are crucial for space weather – both space weather research and operations. With the approach of solar maximum, and the associated rise in space weather hazard, what’s less clear is whether this problem will be solved in time. The community efforts have been effective in raising awareness about the dire situation facing many ground-based sensor networks. What is needed now is a mechanism to maintain these networks long-term, and advocacy for new Federal appropriations to support the organizations that take on the responsibility.
Eighty years after aerial photography revealed thousands of aligned oval depressions on the USA’s Atlantic Coastal Plain, the geomorphology of the “Carolina bays” remains enigmatic. Geologists and astronomers alike hold that invoking a cosmic impact for their genesis is indefensible. Rather, the bays are commonly attributed to gradualistic fluvial, marine and/or aeolian processes operating during the Pleistocene epoch. The major axis orientations of Carolina bays are noted to vary statistically by latitude, suggesting that, should there be any merit to a cosmic hypothesis, a highly accurate triangulation network and suborbital analysis would yield a locus and allow identification of a putative impact site. Digital elevation maps using LiDAR technology offer the precision necessary to measure their exquisitely carved circumferential rims and orientations reliably. To support a comprehensive geospatial survey of Carolina bay landforms (Survey), we generated about one million km² of false-color, HSV-shaded, bare-earth topographic maps as KML-JPEG tile sets for visualization on virtual globes. Considering the evidence contained in the Survey, we maintain that interdisciplinary research into a possible cosmic origin should be encouraged. Consensus opinion does hold a cosmic impact accountable for an enigmatic Pleistocene event – the Australasian tektite strewn field – despite the failure of a 60-year search to locate the causal astrobleme. Ironically, a cosmic link to the Carolina bays is considered soundly falsified by the identical lack of a causal impact structure. Our conjecture suggests both these events are coeval with a cosmic impact into the Great Lakes area during the Mid-Pleistocene Transition, at 786 ± 5 ka. All data and imagery produced for the Survey are available on the Internet to support independent research. A table of metrics for the 50,000 bays examined in the Survey is available as an online Google Fusion Table: https://goo.gl/XTHKC4 . Each bay is also geospatially referenceable through a map containing clickable placemarks that provide information windows displaying that bay’s measurements, as well as further links that allow visualization of the associated LiDAR imagery and the bay’s planform measurement overlay within the Google Earth virtual globe: https://goo.gl/EHR4Lf .
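For readers who want to reproduce the style of imagery described above, here is a minimal, illustrative sketch of HSV-blended shaded relief from a bare-earth DEM tile using matplotlib's LightSource on a synthetic surface. The Survey's actual tiling and rendering pipeline is not documented here, so the DEM, shading parameters, and output file name are placeholders.

```python
# Minimal sketch: false-colour, HSV-shaded relief from a bare-earth DEM tile,
# in the spirit of the Survey imagery described above. The DEM below is a
# synthetic stand-in for a LiDAR tile containing a shallow oval depression.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource

rng = np.random.default_rng(0)
y, x = np.mgrid[-1:1:400j, -1:1:400j]
dem = 30.0 - 2.0 * np.exp(-((x / 0.6) ** 2 + (y / 0.35) ** 2)) \
      + 0.2 * rng.normal(size=x.shape)                  # metres, illustrative only

ls = LightSource(azdeg=315, altdeg=45)                  # NW illumination
rgb = ls.shade(dem, cmap=plt.cm.terrain, blend_mode="hsv", vert_exag=50)
plt.imsave("bay_tile_hsv.jpg", rgb[:, :, :3])           # JPEG tile ready for a KML overlay
```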
The year 2020 marks the 10th anniversary of the Deepwater Horizon (DWH) disaster. From April through July 2010, an estimated total of 4.9 million barrels of oil and 250,000 metric tonnes of natural gas were discharged into the Gulf of Mexico. Not only were eleven lives lost, but the tragedy also left a lasting impact on the Gulf’s marine and coastal ecosystems and on the residents who depend on these habitats for their livelihood. After the oil spill, the Gulf of Mexico’s microbial communities played a critical role in the cleanup, contributing core hydrocarbon bioremediation services. Despite its importance, marine hydrocarbon microbiology is a young field. Prior to the spill relatively little was known about marine hydrocarbon degraders. Beginning in 2010, the development and application of genomics and bioinformatics tools enabled researchers – for the first time - to identify and examine individual microorganisms within their complex communities in unprecedented detail. Today, technical advances and new discoveries reveal a natural capacity of microbes in the Gulf of Mexico to catalyze bioremediation of petroleum hydrocarbons. This knowledge is critical to guide mitigation and restoration strategies that build on microbes’ natural bioremediation capabilities without further disturbing sensitive ecosystems. This report is based on the deliberations of experts who participated in the joint colloquium of the American Academy of Microbiology, ASM’s honorific leadership group, the American Geophysical Union (AGU), and Gulf of Mexico Research Initiative (GoMRI) in April 2019. The report highlights new research tools, methodology, data resources, collaborations, and models that will advance basic and applied research to provide data-driven solutions to environmental challenges. The report is available at www.ASM. org/microbe_oceansystem.
Key Points:
• Machine learning (ML) helps model the interaction between clouds and climate using large datasets.
• We review physics-guided/explainable ML applied to cloud-related processes in the climate system.
• We also provide a guide for scientists who would like to get started with ML.
Abstract: Machine learning (ML) algorithms are powerful tools to build models of clouds and climate that are more faithful to the rapidly increasing volumes of Earth system data than commonly used semiempirical models. Here, we review ML tools, including interpretable and physics-guided ML, and outline how they can be applied to cloud-related processes in the climate system, including radiation, microphysics, convection, and cloud detection, classification, emulation, and uncertainty quantification. We additionally provide a short guide to getting started with ML and survey the frontiers of ML for clouds and climate.
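As a companion to the "getting started" guide mentioned above, here is a minimal sketch of the emulation idea: fitting a small neural network to stand in for a cloud-process parameterization. The inputs, the target function, and the network size are illustrative assumptions, not the configuration used in the review.

```python
# Minimal sketch: emulate a (hypothetical) cloud-process tendency with a small
# neural network. Inputs and the "true" tendency are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
# standardized anomalies standing in for temperature, humidity and vertical velocity
X = rng.normal(size=(n, 3))
# smooth nonlinear placeholder for the parameterized tendency we want to emulate
y = np.maximum(X[:, 2], 0.0) * np.exp(-X[:, 0] ** 2 / 2.0) + 0.3 * X[:, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emulator.fit(X_train, y_train)
print("held-out R^2:", round(emulator.score(X_test, y_test), 3))
```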
Quantifying the response of human activities to different COVID-19 measures offers a potential way to evaluate and optimize those measures. Recent studies reported that seismometers observed reductions in seismic noise caused by decreased human activity during COVID-19 lockdowns. However, the sparse distribution of current seismic infrastructure in urban areas makes it difficult to characterize spatiotemporal seismic noise after COVID-19 lockdowns. Here we show key connections between progressive COVID-19 measures and spatiotemporal seismic noise changes recorded by a distributed acoustic sensing (DAS) array deployed in State College, PA. We first show spatiotemporal seismic noise reduction (up to 90%) corresponding to the reduced human activities in different city blocks during the stay-at-home period. We also show partial noise recovery corresponding to increased road traffic and machinery in Phase Yellow/Green. Interestingly, the non-recovery of seismic noise in the 0.01-10 Hz band suggests a low level of pedestrian movement in Phase Yellow/Green. Despite a linear correlation between mobility change and seismic noise change, we emphasize that DAS recordings using city-wide fiber optics could provide a way to quantify the impact of COVID-19 measures on human activities in city blocks.
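The noise-reduction percentages quoted above come down to comparing band-limited amplitudes before and after a change in human activity. The sketch below illustrates that calculation on synthetic data for a single DAS channel; the sampling rate, the 1-10 Hz band, and the two segments are assumptions chosen for illustration only.

```python
# Minimal sketch: band-limited noise change on one DAS channel, analogous to the
# spatiotemporal noise-reduction analysis described above. Synthetic data stand in
# for real DAS strain-rate recordings.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100.0                                   # sampling rate (Hz), an assumed value
n = int(600 * fs)                            # ten minutes of data per segment
rng = np.random.default_rng(1)
pre = rng.normal(0, 1.0, n)                  # synthetic "pre-lockdown" channel
post = rng.normal(0, 0.3, n)                 # synthetic "stay-at-home" channel (quieter)

sos = butter(4, [1.0, 10.0], btype="bandpass", fs=fs, output="sos")

def band_rms(x):
    """RMS amplitude in the 1-10 Hz band, a simple proxy for anthropogenic noise."""
    return np.sqrt(np.mean(sosfiltfilt(sos, x) ** 2))

reduction = 100.0 * (1.0 - band_rms(post) / band_rms(pre))
print(f"band-limited noise reduction: {reduction:.0f}%")
```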
The core tools of science (data, software, and computers) are undergoing a rapid and historic evolution, changing what questions scientists ask and how they find answers. Earth science data are being transformed into new formats optimized for cloud storage that enable rapid analysis of multi-petabyte datasets. Datasets are moving from archive centers to vast cloud data storage adjacent to massive server farms. Open-source, cloud-based data science platforms, accessed through a web-browser window, are enabling advanced, collaborative, interdisciplinary science to be performed wherever scientists can connect to the internet. Specialized software and hardware for artificial intelligence and machine learning (AI/ML) are being integrated into data science platforms, making them more accessible to the average scientist. Increasing amounts of data and computational power in the cloud are unlocking new approaches for data-driven discovery. For the first time, it is truly feasible for scientists to bring their analysis to data in the cloud without specialized cloud computing knowledge. This paradigm shift has the potential to lower the threshold for entry, expand the science community, and increase opportunities for collaboration while promoting scientific innovation, transparency, and reproducibility. Yet we have all witnessed promising new tools that seemed harmless and beneficial at the outset later become damaging or limiting. What do we need to consider as this new way of doing science evolves?
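To give a concrete flavor of "bringing the analysis to the data," the sketch below lazily opens a cloud-hosted, analysis-ready Zarr store with xarray and computes a reduction without downloading whole files. The bucket path and variable name are hypothetical placeholders, not a real archive.

```python
# Minimal sketch: open a cloud-optimized (Zarr) dataset lazily and reduce it in place.
# Only the bytes needed for the computation are read from object storage.
import xarray as xr

ds = xr.open_zarr(
    "gs://example-bucket/era5-like-demo.zarr",   # hypothetical cloud store
    consolidated=True,
)
# Lazy, chunked computation on an assumed 2-m temperature variable "t2m"
monthly_mean = ds["t2m"].resample(time="1MS").mean()
print(monthly_mean)
```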
Modern seismic networks provide a huge amount of data in real time, making manual identification of the relevant events needed to monitor a volcano’s activity impossible. Thus, many volcano observatories are interested in tools that perform online, automatic analysis of seismic activity. The machine learning field provides a variety of Volcano-Seismic Recognition (VSR) systems designed to classify seismic events in real time. However, only a few approaches can also detect them in continuous data streams. Most of these VSR systems are based on a two-step supervised paradigm: 1. A training database (X-DB) for a given volcano ’X’ is prepared, with hundreds of events manually detected and classified according to their physical origin. 2. Statistical models are built by analysing this DB and are later used to automatically identify events in new data recorded at volcano X. This supervised procedure is the major drawback to achieving fast deployment of a VSR system for another volcano Y, as the preparation of its own Y-DB takes considerable time and requires qualified operators and previous recordings, which are difficult to obtain for volcanoes without recent activity or which have not been monitored. To overcome these limitations, the EU-funded project ’VULCAN.ears’ focused on real-time, Volcano-Independent VSR (VI.VSR) approaches. It proposes alternative solutions based on state-of-the-art technologies such as universal DBs and models, waveform standardisation, and parallel architectures. Recent results obtained by mixing DBs from the Popocatépetl, Colima, Deception, and Arenal active volcanoes will be presented. We apply VULCAN.ears technologies to evaluate VSR systems on joint DBs built with data from several volcanoes. We also use volcano-independent models to automatically classify events from another volcano, analysing how the recognition accuracy varies as the training DB becomes more complex. All tests are carried out with a user-friendly graphical application (geoStudio). These achievements produce new insights useful for redesigning the next generation of portable, robust VSR systems.
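The two-step supervised paradigm described above can be summarized in a few lines of code. The sketch below trains a classifier on a labelled feature table (step 1) and scores held-out events (step 2); the event classes and feature vectors are synthetic stand-ins for a real training database, and the random forest is just one example of a statistical model.

```python
# Minimal sketch of a 2-step supervised VSR pipeline on synthetic features
# (stand-ins for real descriptors such as spectral band energies or durations).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["VT", "LP", "tremor"]              # illustrative event types
X, y = [], []
for i, c in enumerate(classes):
    # each class gets a different mean feature vector, standing in for real physics
    X.append(rng.normal(loc=i, scale=1.0, size=(300, 8)))
    y += [c] * 300
X = np.vstack(X)

# step 1: build a statistical model from the labelled training database
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# step 2: classify new (here, held-out) events
print("hold-out accuracy:", model.score(X_te, y_te))
```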
Machine learning (ML) based models have demonstrated very strong predictive capabilities for hydrologic modeling but are often criticized for being black boxes. In this paper, we use a technique from the field of explainable AI (XAI), called layerwise relevance propagation (LRP), to “open the black box”. Specifically, we train a deep neural network on data from a set of hydroclimatically diverse FluxNet sites to predict turbulent heat fluxes and then use the LRP technique to analyze what it learned. We show that the neural network learns physically plausible relationships, including different ways of partitioning the turbulent heat fluxes according to the moisture- or energy-limiting characteristics of the sites. That is, the neural network learns different behaviors at arid and non-arid sites. We also develop and demonstrate a novel technique that uses the output of the LRP analysis to explore how the neural network learned to regionalize between sites. We find that the neural network primarily learned behaviors that differed between evergreen forested sites and all other vegetation classes. Our analysis shows that even simple neural networks can extract physically plausible relationships and that, by using XAI methods, we can learn new information from ML-based methods.
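For readers unfamiliar with LRP, the sketch below implements the epsilon rule for a small fully connected network trained on synthetic data. It illustrates the mechanics of relevance propagation only; the architecture, data, and hyperparameters are assumptions and not those of the FluxNet model described above.

```python
# Minimal sketch of layer-wise relevance propagation (epsilon rule) for a small
# one-hidden-layer ReLU network fit to synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                        # synthetic standardized forcings
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=2000)

net = MLPRegressor(hidden_layer_sizes=(16,), activation="relu",
                   max_iter=3000, random_state=0).fit(X, y)
W1, W2 = net.coefs_                                   # shapes (5, 16) and (16, 1)
b1, b2 = net.intercepts_

def lrp_epsilon(x, eps=1e-6):
    """Relevance of each input for one prediction, via the LRP epsilon rule."""
    h_pre = x @ W1 + b1                               # hidden pre-activations
    h = np.maximum(0.0, h_pre)                        # ReLU activations
    out = h @ W2[:, 0] + b2[0]                        # network prediction
    # output -> hidden layer
    r_h = (h * W2[:, 0]) * out / (out + eps * (1.0 if out >= 0 else -1.0))
    # hidden layer -> inputs
    denom = h_pre + eps * np.where(h_pre >= 0, 1.0, -1.0)
    return out, (x[:, None] * W1) @ (r_h / denom)

pred, relevance = lrp_epsilon(X[0])
print("prediction:", round(float(pred), 3), "input relevances:", np.round(relevance, 3))
```

On a network this simple, most of the relevance should land on the first and third inputs, mirroring how the synthetic target was constructed.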
High quality citizen science data can be instrumental in advancing science toward new discoveries and a deeper understanding of under-observed phenomena. However, the error structure of citizen scientist (CS) data must be well-defined. Within a citizen science program, the errors in submitted observations vary, and their occurrence may depend on CS-specific characteristics. This study develops a graphical Bayesian inference model of error types in CS data. The model assumes that: (1) each CS observation is subject to a specific error type, each with its own bias and noise; and (2) an observation’s error type depends on the error community of the CS, which in turn relates to characteristics of the CS submitting the observation. Given a set of CS observations and corresponding ground-truth values, the model can be calibrated for a specific application, yielding (i) the number of error types and error communities, (ii) bias and noise for each error type, (iii) the error distribution of each error community, and (iv) the error community to which each CS belongs. The model, applied to Nepal CS rainfall observations, identifies five error types and sorts CSs into four model-inferred communities. In the case study, 73% of CSs submitted data with errors in fewer than 5% of their observations. The remaining CSs submitted data with unit, meniscus, unknown, and outlier errors. A CS’s assigned community, coupled with model-inferred error probabilities, can identify observations that require verification. With such a system, the onus of validating CS data is partially transferred from human effort to machine-learned algorithms.
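The full graphical Bayesian model is beyond a short example, but the core intuition, grouping observation errors into types and characterizing each citizen scientist by the types they produce, can be sketched with a simple Gaussian mixture on synthetic residuals, as below. The number of error types, the residual distributions, and the CS counts are illustrative assumptions, not the calibrated Nepal results.

```python
# Simplified sketch (not the full graphical Bayesian model above): cluster the
# residuals between CS observations and ground truth into "error types", then
# profile each citizen scientist by the mix of error types in their submissions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_obs, n_cs = 1000, 20
cs_id = rng.integers(0, n_cs, n_obs)                   # which citizen scientist reported it
# synthetic residuals (observation minus ground truth): mostly near-zero noise,
# plus a minority with a systematic offset standing in for e.g. a unit error
offset_type = rng.random(n_obs) < 0.15
residual = np.where(offset_type, rng.normal(20, 5, n_obs), rng.normal(0, 0.5, n_obs))

gmm = GaussianMixture(n_components=2, random_state=0).fit(residual.reshape(-1, 1))
error_type = gmm.predict(residual.reshape(-1, 1))

# fraction of each CS's observations falling in each inferred error type
profile = np.zeros((n_cs, 2))
for cs, et in zip(cs_id, error_type):
    profile[cs, et] += 1
profile /= profile.sum(axis=1, keepdims=True)
print("per-CS error-type profile (first 5 CSs):\n", np.round(profile[:5], 2))
```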
Identifying the main environmental drivers of SARS-CoV-2 transmissibility in the population is crucial for understanding current and potential future outbreaks of COVID-19 and other infectious diseases. To address this problem, we concentrate on the basic reproduction number R0, which is not sensitive to testing coverage and represents transmissibility in the absence of social distancing and in a completely susceptible population. While many variables may potentially influence R0, high correlations between these variables may obscure the interpretation of results. Consequently, we combine Principal Component Analysis with feature-selection methods from several regression-based approaches to identify the main demographic and meteorological drivers behind R0. We robustly obtain that a country’s wealth/development (GDP per capita or Human Development Index) is by far the most important R0 predictor, probably because it is a good proxy for the overall contact frequency in a population. This main effect is modulated by built-up area per capita (crowdedness in indoor space), onset of infection (likely related to increased awareness of infection risks), net migration, unhealthy lifestyle/living conditions including pollution, seasonality, and possibly BCG vaccination prevalence. We also show that several variables that significantly correlate with transmissibility do not directly influence R0, or affect it differently than suggested by a naive analysis.
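A minimal sketch of the PCA-plus-feature-selection pipeline described above is given below, using synthetic country-level data and a cross-validated lasso as one example of a regression-based selector. The predictors, sample size, and variance threshold are assumptions for illustration, not the study's actual inputs.

```python
# Minimal sketch: decorrelate candidate predictors with PCA, regress R0 on the
# components, and map the coefficients back to the original variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_countries, n_features = 120, 10
X = rng.normal(size=(n_countries, n_features))       # e.g. GDP per capita, density, ...
r0 = 1.5 + 0.8 * X[:, 0] + 0.2 * X[:, 3] + 0.1 * rng.normal(size=n_countries)

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95).fit(X_std)              # keep 95% of the variance
scores = pca.transform(X_std)

lasso = LassoCV(cv=5, random_state=0).fit(scores, r0)
# map component coefficients back onto the original predictors
importance = np.abs(pca.components_.T @ lasso.coef_)
print("relative importance per original predictor:", np.round(importance, 2))
```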
This study introduces the results of fitting a Bayesian hierarchical spatiotemporal model to COVID-19 cases and deaths at the county level in the United States for the year 2020. Two models were created, one for cases and one for deaths, utilizing a scaled Besag-York-Mollié model with Type I spatial-temporal interaction. Each model accounts for 16 social vulnerability variables and 7 environmental measurements as fixed effects. The spatial structure of COVID-19 infections is heavily concentrated in the southern U.S. and the states of Indiana, Iowa, and New Mexico. The spatial structure of COVID-19 deaths covers less of the same area but also encompasses a cluster in the Northeast. The spatiotemporal trend of the pandemic in the U.S. illustrates a shift out of many of the major metropolitan areas into the U.S. Southeast and Southwest during the summer months and into the upper Midwest beginning in autumn. Analysis of the major social vulnerability predictors of COVID-19 infection and death found counties with higher percentages of residents lacking a high school diploma and of minority status to be significant. Age 65 and over was a significant factor in deaths but not in cases. Among the environmental variables, above-ground-level (AGL) temperature had the strongest effect on the relative risk of both cases and deaths. Hot and cold spots of COVID-19 cases and deaths derived from the convolutional spatial effect show that areas with a high probability of above-average relative risk have significantly higher Social Vulnerability Index (SVI) composite scores. Hot and cold spot analysis utilizing the spatiotemporal interaction term reveals a more complex relationship between social vulnerability, environmental measurements, and cases/deaths.
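The hot/cold-spot classification mentioned above rests on exceedance probabilities computed from posterior samples of relative risk. The sketch below shows only that final step on synthetic posterior draws; it does not fit the BYM spatiotemporal model itself, and the thresholds, county count, and draw count are illustrative assumptions.

```python
# Minimal sketch: flag "hot" and "cold" counties from posterior samples of relative
# risk (synthetic placeholders for output of a fitted BYM-type model).
import numpy as np

rng = np.random.default_rng(0)
n_counties, n_draws = 5, 4000
# synthetic posterior samples of relative risk for five counties
rr_samples = rng.lognormal(mean=np.array([[0.3], [0.0], [-0.2], [0.5], [0.05]]),
                           sigma=0.2, size=(n_counties, n_draws))

exceedance = (rr_samples > 1.0).mean(axis=1)           # Pr(RR > 1) per county
hot = exceedance > 0.95                                # high probability of elevated risk
cold = exceedance < 0.05                               # high probability of reduced risk
print("Pr(RR > 1):", np.round(exceedance, 2), "hot:", hot, "cold:", cold)
```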
The steepness of the beach face is a fundamental parameter for coastal morphodynamic research. Despite its importance, it remains extremely difficult to obtain reliable estimates of the beach-face slope over large spatial scales (thousands of kilometres of coastline). In this letter, a novel approach to estimate this slope from time series of satellite-derived shoreline positions is presented. This new technique uses a frequency-domain analysis to find the optimum slope that minimises high-frequency tidal fluctuations relative to lower-frequency erosion/accretion signals. A detailed assessment of this new approach at 8 locations spanning a range of tidal regimes, wave climates, and sediment grain sizes shows strong agreement (R = 0.9) with field measurements. The automated technique is then applied to thousands of beaches in eastern Australia and California, USA, revealing similar regional-scale distributions along these two contrasting coastlines and highlighting the potential for new global-scale insight into the spatial distribution, variability, and trends of the beach-face slope.
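The frequency-domain estimation can be illustrated compactly: for each candidate slope, tidally correct the shoreline time series and keep the slope that minimizes energy at the tidal frequencies. The sketch below does this on synthetic data; the revisit interval, tidal proxy, frequency band, and noise level are illustrative assumptions rather than the letter's actual settings.

```python
# Minimal sketch: pick the beach-face slope that minimises tidal-band energy in a
# tidally corrected, satellite-derived shoreline time series (synthetic data).
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
dt_days = 5.0                                          # composite revisit interval (days)
t = np.arange(0, 365 * 4, dt_days)                     # four years of observations
true_slope = 0.08                                      # beach-face slope (tan beta)
tide = 0.6 * np.sin(2 * np.pi * t / 14.77)             # spring-neap proxy tide (m)
trend = 0.02 * t                                       # slow accretion signal (m)
shoreline = trend - tide / true_slope + rng.normal(0, 1.0, t.size)

candidates = np.linspace(0.02, 0.2, 50)
tidal_energy = []
for slope in candidates:
    corrected = shoreline + tide / slope               # undo the tidal excursion for this slope
    f, pxx = periodogram(corrected - corrected.mean(), fs=1.0 / dt_days)
    band = (f > 1 / 20.0) & (f < 1 / 10.0)             # frequencies around the spring-neap cycle
    tidal_energy.append(pxx[band].sum())

best = candidates[int(np.argmin(tidal_energy))]
print(f"estimated slope: {best:.3f} (true value {true_slope})")
```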
Journals occasionally solicit manuscripts for special collections, in which all papers focus on a particular topic within the journal’s scope. For the Journal of Geophysical Research: Space Physics, there were 51 special collections from 2005 through 2018, comprising 1009 of the 8881 total papers in the journal over those years (11%). Taken together, the citations to special collection (SC) papers, as well as other metrics, are slightly higher than those of non-special-collection papers. Several paper characteristics were examined to assess whether they could explain the higher citation and download values for SC papers, but they cannot. In addition, indirect methods were used to assess self-citation as an explanation for the increased citations, but no evidence was found to support this hypothesis. It was found that some paper types, notably Commentaries and Technical Reports, have lower average citations but higher average downloads than Research Articles (the most common type of paper in this journal). This implies that such paper types have a different kind of impact than “regular” science-result-focused papers. In addition to having higher average citations and downloads, special collections focus community attention on a particular research topic, providing a deadline for manuscript submissions and a single webpage where many related papers are listed. It is concluded that special collections are worth the extra community effort of organizing, writing, and reviewing these papers.
Public health communication strategies, including entertainment-education, can effectively change human behavior, improving health outcomes related to climate change. Tools from social psychology, including social modeling and building self- and collective efficacy, can help us create a new model for current, culturally relevant stories that help communities adapt to climate change. As an example, we will share key lessons from Rhythm and Glue, an applied television prototype based on research from an NSF Advancing Informal STEM Learning submission. Best practices for climate communication include adaptations of entertainment-education techniques for culturally grounded representations of positive outliers in climate engagement. As science communication progresses in adapting social psychology and sociology practices for climate communication, we would like to share how this prototype applies those methods and suggest some new directions that further adapt the practices to account for limited resources and the challenges of media fragmentation. While this work focuses on climate, it has broad implications for future science communication practices.
Seismic bursts in Southern California are sequences of small earthquakes strongly clustered in space and time, and include seismic swarms and aftershock sequences. A readily observable property of these events, the radius of gyration (Rg), allows us to connect the bursts to the temporal occurrence of the largest M ≥ 7 earthquakes in California since 1984. In the Southern California earthquake catalog, we identify hundreds of these potentially coherent space-time structures in a region defined by a circle of radius 600 km around Los Angeles. We compute Rg for each cluster, then filter the clusters to identify those bursts with large numbers of events closely clustered in space, which we call “compact” bursts. Our basic assumption is that these compact bursts reflect the dynamics associated with large earthquakes. Once we have filtered the burst catalog, we apply an exponential moving average to construct an Rg time series for the Southern California region. We observe that the Rg of these bursts systematically decreases prior to large earthquakes, in a process that we might term “radial localization.” The Rg then rapidly increases during an aftershock sequence, and a new cycle of “radial localization” then begins. These time series display cycles of recharge and discharge reminiscent of the seismic stress accumulation and release of the elastic rebound process. The complex burst dynamics we observe are evidently a property of the region as a whole, rather than being associated with individual faults. This new method allows us to improve earthquake nowcasting in a seismically active region.
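Two of the quantities used above, the radius of gyration of a burst and the exponential moving average used to build the regional time series, are simple to compute. The sketch below evaluates both on synthetic epicentres and a synthetic burst sequence; the coordinates, cluster size, and smoothing window are illustrative assumptions.

```python
# Minimal sketch: radius of gyration (RMS distance of events from their centroid,
# flat-earth km approximation) for one synthetic burst, plus an exponential moving
# average applied to a placeholder burst-to-burst R_g series.
import numpy as np

rng = np.random.default_rng(0)
lat0, lon0 = 34.0, -118.0                              # cluster centre (illustrative)
lats = lat0 + rng.normal(0, 0.05, 40)
lons = lon0 + rng.normal(0, 0.05, 40)

# approximate local Cartesian coordinates in km
x = (lons - lons.mean()) * 111.32 * np.cos(np.radians(lat0))
y = (lats - lats.mean()) * 111.32
rg = np.sqrt(np.mean(x ** 2 + y ** 2))
print(f"radius of gyration: {rg:.1f} km")

def ema(series, n=36):
    """Exponential moving average with smoothing factor 2/(n+1)."""
    alpha = 2.0 / (n + 1.0)
    out = np.empty_like(series, dtype=float)
    out[0] = series[0]
    for i in range(1, len(series)):
        out[i] = alpha * series[i] + (1.0 - alpha) * out[i - 1]
    return out

rg_series = rng.uniform(2, 20, 200)                    # placeholder burst-to-burst R_g values
print("smoothed tail of the R_g time series:", np.round(ema(rg_series)[-3:], 1))
```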
Over the last decades, the receiver function technique has been widely used to image sharp discontinuities in the elastic properties of the solid Earth at regional scales. To date, very few studies have attempted to use receiver functions for global imaging. One such endeavour has been pursued through the project “Global Lithospheric Imaging using Earthquake Recordings” (GLImER). Building on the advances of GLImER, we have developed PyGLImER, a Python-based software suite capable of creating global images from both P-to-S and S-to-P converted waves via a comprehensive receiver function workflow. This workflow creates a database of receiver functions by downloading seismograms from selected earthquakes and analysing the data via a series of steps that include pre-processing, quality control, deconvolution, and stacking. The stacking can be performed for common conversion points or single stations. All steps leading to the creation of receiver functions are automated. To visualise the generated stacks, the user can choose the desired survey area in a graphical user interface and then explore the selected region either through 2D cross-sections or a 3D volume. By incorporating results from two independent seismic phases, we can combine the advantages of both phases for imaging different discontinuities. This results in increased robustness and resolution of the final image. For example, we can use constraints from S receiver function images, which are multiple-free but of relatively low resolution, to differentiate between real lithospheric/asthenospheric structures and multiple-induced artefacts in higher-resolution P receiver function images. Our preliminary results agree with those from recent regional and global studies, confirming the workflow’s robustness. They also indicate that the new workflow combining P and S receiver functions has the potential to resolve global lithospheric discontinuities, such as the lithosphere-asthenosphere boundary (LAB) or the mid-lithospheric discontinuity (MLD), more reliably than approaches using only one type of incident phase. PyGLImER will be distributed as open-source software, providing an easily accessible tool to rapidly generate high-resolution images of structures in the lithosphere and asthenosphere over large scales.
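As an illustration of the deconvolution step at the heart of such a workflow, the sketch below performs a generic frequency-domain water-level deconvolution with a Gaussian low-pass on synthetic vertical and radial traces. It is a textbook-style illustration, not PyGLImER's actual implementation; the wavelet, water level, and Gaussian width are assumed values.

```python
# Minimal sketch: water-level deconvolution of a radial component by the vertical
# component to produce a P receiver function from synthetic traces.
import numpy as np

fs = 20.0                                              # samples per second
n = 1200
t = np.arange(n) / fs
rng = np.random.default_rng(0)

source = np.exp(-((t - 10.0) ** 2) / 0.5)              # synthetic source wavelet on Z
vertical = source + 0.01 * rng.normal(size=n)
# radial = direct arrival plus a delayed, scaled Ps-like conversion
radial = 0.6 * source + 0.3 * np.roll(source, int(4.0 * fs)) + 0.01 * rng.normal(size=n)

Z = np.fft.rfft(vertical)
R = np.fft.rfft(radial)
water = 0.01 * np.max(np.abs(Z) ** 2)                  # water-level regularisation
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
gauss = np.exp(-(2 * np.pi * freqs) ** 2 / (4.0 * 2.5 ** 2))   # Gaussian filter, width a = 2.5

rf = np.fft.irfft(R * np.conj(Z) / np.maximum(np.abs(Z) ** 2, water) * gauss, n=n)
# the direct P pulse appears at 0 s; search after it for the Ps-like conversion
ps_delay = np.argmax(rf[int(1 * fs): n // 2]) / fs + 1.0
print(f"Ps-like conversion found ~{ps_delay:.1f} s after the direct P arrival")
```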