Recently, many in the space weather community have taken up the cause to advocate for an orphan among our own. It’s an important fight – for ground-based sensor networks. Although ground-based sensors are used across all disciplines of space weather, in terms of long-term support they have no single clear home in any United States agency or department. This has resulted in an ongoing struggle throughout the community to maintain important space weather sensors and networks.

The Promoting Research and Observations of Space Weather to Improve the Forecasting of Tomorrow (PROSWIFT) Act of 2020 (Public Law 116-181) attempts to clarify Federal roles and responsibilities, stating that “… ground-based observations provide crucial data necessary to understand, forecast, and prepare for space weather phenomena”, and defining such observations to include “radars, lidars, magnetometers, neutron monitors, radio receivers, aurora and airglow imagers, spectrometers, interferometers, and solar observatories.” The data from this list of sensors and arrays support research across the space weather domains, including magnetospheric, ionospheric, and atmospheric science. Networks are run by governmental, academic, and commercial providers and are used to support a range of end users, from aviation to the power sector. Given the wide range of applications, it’s not surprising that no single entity has primary custody.

In separate sections of PROSWIFT, sustainment of these instruments is assigned to “The Director of the National Science Foundation, the Director of the United States Geological Survey, the Secretary of the Air Force, and, as practicable in support of the Air Force, the Secretary of the Navy”, who are directed to “maintain and improve ground-based observations of the Sun, as necessary and advisable”, and also to the National Oceanic and Atmospheric Administration (NOAA), the civil operational space weather agency responsible for maintaining “ground-based… assets to provide observations needed for space weather forecasting, prediction, and warnings”. While PROSWIFT’s clarification of federal responsibilities is welcome, what it highlights is the problem of “ownership” of the long-term sustainability of such varied instruments.

We can start to unravel the ownership problem by understanding its history. One complication to an easy definition is that ground-based sensor networks support both space weather science and operations. The National Science Foundation (NSF) has a long history of supporting novel instrument development, small arrays of sensors placed for scientific research (fundamental research is the foundation of NSF’s mandate), and mid- and larger-scale facilities. But the needs of science do not necessarily intersect the needs of operations, and neither do their requirements in terms of engineering and support. Operational sensors, in many cases, are entirely different from scientific sensors.

Like scientific arrays, operational sensors must provide the “right” data – accurate and relevant – but the delivery of those data must also be timely, consistent, and reliable. In other words, the data must be usable for space weather predictions, forecasts, and alerts. The United States Geological Survey (USGS) is one example of a federal provider of operational ground-based data. The commercial sector, by mandate of PROSWIFT, is another.

Whether scientific or operational, ground-based networks need to be supported and maintained long-term to fulfill their missions.
It is more expensive to shut down and rebuild an array than to keep it operating, and strategic planning is required to prioritize and balance needs across the space weather enterprise.

Those taking up the initiative to support ground-based sensors span the space weather enterprise, reflecting the interdisciplinary and cross-sector need for these data. In addition to a myriad of white papers submitted to the Heliophysics Decadal Survey (e.g., Hartinger et al. and Bhatt et al.) and publications (see Engebretson and Zesta, 2017, and Bain et al., 2023), advisory groups such as the Space Weather Advisory Group (SWAG) and the National Academies Space Weather Roundtable, both put into place by the PROSWIFT Act itself, have taken up the cause. The SWAG, in a public meeting on March 20, 2023 (https://www.weather.gov/swag), called for a “paradigm shift”, agreeing upon a recommendation that there is a need to “Provide long-term support for operational ground-based and airborne sensors and networks”.

It’s clear that these data are crucial for space weather – both space weather research and operations. With the approach of solar maximum, and the associated rise in space weather hazard, what’s less clear is whether this problem will be solved in time. The community efforts have been effective in raising awareness about the dire situation facing many ground-based sensor networks. What is needed now is a mechanism to maintain these networks long-term, and advocacy for new Federal appropriations to support the organizations that take on the responsibility.
Efforts to validate, monitor, and verify ocean-based carbon dioxide removal (CDR) will require a rich understanding of the ocean carbon system. Ocean observations anchor this understanding, but we know that some ongoing observations are precariously funded, that data products like SOCAT rely on volunteer effort, that regions essential to our understanding of the ocean carbon system are under-observed, and that some observational data are under-used. This presentation is a progress report on our efforts to identify and document ocean carbon data flows using systematic literature reviews and examination of ocean data repositories. These data flows are essential for identifying what data the scientific community already relies on, what data and observation gaps exist, and what data might be under-used. We examined variables of interest based on the GOOS Essential Ocean Variables (EOVs), including Oxygen, Stable Carbon Isotopes, Ocean Surface Stress, and Ocean Surface Heat Flux, each together with its supporting variables. Commonly observed supporting variables include O2, alkalinity, pCO2, pH, temperature, and near-surface air temperature, humidity, pressure, and wind speed.
Eighty years after aerial photography revealed thousands of aligned oval depressions on the USA’s Atlantic Coastal Plain, the geomorphology of the “Carolina bays” remains enigmatic. Geologists and astronomers alike hold that invoking a cosmic impact for their genesis is indefensible. Rather, the bays are commonly attributed to gradualistic fluvial, marine, and/or aeolian processes operating during the Pleistocene epoch. The major-axis orientations of Carolina bays are noted for varying systematically by latitude, suggesting that, should there be any merit to a cosmic hypothesis, a highly accurate triangulation network and suborbital analysis would yield a locus and allow for identification of a putative impact site. Digital elevation maps using LiDAR technology offer the precision necessary to reliably measure their exquisitely carved circumferential rims and orientations. To support a comprehensive geospatial survey of Carolina bay landforms (the Survey), we generated about one million km² of false-color, HSV-shaded, bare-earth topographic maps as KML-JPEG tile sets for visualization on virtual globes. Considering the evidence contained in the Survey, we maintain that interdisciplinary research into a possible cosmic origin should be encouraged. Consensus opinion does hold a cosmic impact accountable for an enigmatic Pleistocene event – the Australasian tektite strewn field – despite the failure of a 60-year search to locate the causal astrobleme. Ironically, a cosmic link to the Carolina bays is considered soundly falsified by the identical lack of a causal impact structure. Our conjecture suggests both these events are coeval with a cosmic impact into the Great Lakes area during the Mid-Pleistocene Transition, at 786 ± 5 ka. All data and imagery produced for the Survey are available on the Internet to support independent research. A table of metrics for the 50,000 bays examined in the Survey is available as an online Google Fusion Table: https://goo.gl/XTHKC4 . Each bay is also geospatially referenceable through a map of clickable placemarks that open information windows displaying that bay’s measurements, with further links that allow visualization of the associated LiDAR imagery and the bay’s planform measurement overlay within the Google Earth virtual globe: https://goo.gl/EHR4Lf .
Gravity fluctuations produced by ambient seismic fields are predicted to limit the sensitivity of the next-generation gravitational-wave detector Einstein Telescope at frequencies below 20 Hz. The detector will be hosted in an underground infrastructure to reduce seismic disturbances and the associated gravity fluctuations. Additional mitigation might be required by monitoring the seismic field, using the data to estimate the associated gravity fluctuations, and subtracting the estimate from the detector data, a technique called coherent noise cancellation. In this paper, we present a calculation of correlations between the surface displacement of a seismic field and the associated gravity fluctuations using the spectral-element software SPECFEM3D Cartesian. The model takes into account the local topography at a candidate site for the Einstein Telescope in Sardinia. This paper is a first demonstration of SPECFEM3D's capability to provide estimates of gravitoelastic correlations, which are required for an optimized deployment of seismometers for gravity-noise cancellation.
For science to reliably support new discoveries, its results must be reproducible. This has proven to be a challenge in many fields, including those that rely on computational methods as a means for supporting new discoveries. Reproducibility in these studies is particularly difficult because it requires open, documented sharing of data and models, along with careful control of the underlying hardware and software dependencies, so that the computational procedures executed by the original researcher are portable: they can be run on different hardware or software and still produce consistent results. Despite recent advances in making scientific work more findable, accessible, interoperable, and reusable (FAIR), fundamental questions in the conduct of reproducible computational studies remain: Can published results be repeated in different computing environments? If yes, how similar are they to previous results? Can we further verify and build on the results by using additional data or changing computational methods? Can these changes be automatically and systematically tracked? This presentation describes our EarthCube project to advance computational reproducibility and make it easier and more efficient for geoscientists to preserve, share, repeat, and replicate scientific computations. Our approach is based on the Sciunit software developed by prior EarthCube projects, which encapsulates application dependencies (system binaries, code, data, environment, and application provenance) so that the resulting computational research object can be shared and re-executed on different platforms. We have deployed Sciunit within the HydroShare JupyterHub platform operated by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) for the hydrology research community and will present use cases that demonstrate how to preserve, share, repeat, and replicate scientific results from the field of hydrologic modeling. While illustrated in the context of hydrology, the methods and tools developed in this project have the potential to be extended to other geoscience domains. They also have the potential to inform the reproducibility evaluation process as currently undertaken by journals and publishers.
A new model validation and performance assessment tool is introduced: the sliding threshold of observation for numeric evaluation (STONE) curve. It is based on the relative operating characteristic (ROC) curve technique, but instead of sorting all observations into a categorical classification, the STONE tool uses the continuous nature of the observations. Rather than defining events in the observations and then sliding the threshold only in the classifier/model data set, the threshold is changed simultaneously for both the observational and model values, with the same threshold value for both. This is only possible if the observations are continuous and the model output is in the same units and scale as the observations; that is, the model is trying to exactly reproduce the data. The STONE curve shares several features with the ROC curve: it plots probability of detection against probability of false detection, it ranges from the (1,1) corner for low thresholds to the (0,0) corner for high thresholds, and values above the zero-intercept unity-slope line indicate better-than-random predictive ability. The main difference is that the STONE curve can be nonmonotonic, doubling back in both the x and y directions. These ripples reveal asymmetries in the data-model value pairs. This new technique is applied to modeling output of a common geomagnetic activity index as well as energetic electron fluxes in the Earth’s inner magnetosphere. It is not limited to space physics applications but can be used in any scientific or engineering field where numerical models are used to reproduce observations.
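As a concrete illustration of the procedure, the following minimal Python sketch (the helper name stone_curve and the synthetic data are ours, not from the study) slides one shared threshold through both the observed and modeled values and records the probability of detection and probability of false detection at each step.

```python
# Minimal sketch of the STONE curve: one threshold is applied to BOTH the
# observations and the model output (same units), and contingency-table
# rates are computed at each threshold value.
import numpy as np

def stone_curve(obs, model, thresholds):
    """Return (POFD, POD) at each shared observation/model threshold."""
    pofd, pod = [], []
    for t in thresholds:
        event_obs = obs >= t      # events defined in the observations
        event_mod = model >= t    # events in the model, with the SAME threshold
        hits = np.sum(event_obs & event_mod)
        misses = np.sum(event_obs & ~event_mod)
        false_alarms = np.sum(~event_obs & event_mod)
        corr_neg = np.sum(~event_obs & ~event_mod)
        pod.append(hits / (hits + misses))
        pofd.append(false_alarms / (false_alarms + corr_neg))
    return np.array(pofd), np.array(pod)

# Synthetic example: a "model" that imperfectly reproduces the observations.
rng = np.random.default_rng(0)
obs = rng.normal(size=1000)
model = obs + rng.normal(scale=0.5, size=1000)
# Interior quantiles keep both event and non-event counts nonzero.
thresholds = np.quantile(obs, np.linspace(0.01, 0.99, 49))
pofd, pod = stone_curve(obs, model, thresholds)
# Low thresholds approach the (1,1) corner; high thresholds approach (0,0).
```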
Water volume estimates of shallow desert lakes are the basis for water balance calculations, which are important both for water resource management and for paleohydrology/climatology. Water volumes are typically inferred from bathymetry mapping; however, because such lakes are shallow, ephemeral, and remote, bathymetric surveys of them are scarce. We propose a new remote-sensing-based method to derive the bathymetry of such lakes using the relation between water occurrence, over more than 30 years of optical satellite data, and accurate elevation measurements from the new Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2). We demonstrate our method at three locations, mapping bathymetries with ~0.3 m error. This method complements other remotely sensed bathymetry-mapping methods, as it can be applied to (a) complex lake systems with sub-basins, (b) remote lakes with no in-situ records, and (c) flooded lakes. The proposed method can easily be implemented in other shallow lakes, as it builds on publicly accessible global data sets.
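One plausible implementation of the occurrence-to-elevation step is sketched below in Python under assumed inputs (per-pixel water-occurrence percentages and co-located ICESat-2 ground elevations); all function and variable names are hypothetical, not from the paper.

```python
# Minimal sketch: fit a monotonic occurrence -> elevation relation at pixels
# with ICESat-2 heights, then apply it across the full occurrence raster.
import numpy as np

def fit_occurrence_to_elevation(occurrence_pct, icesat2_elev_m, n_bins=20):
    """Bin occurrence (%) and take the median ICESat-2 elevation per bin."""
    bins = np.linspace(0.0, 100.0, n_bins + 1)
    idx = np.digitize(occurrence_pct, bins) - 1
    centers, medians = [], []
    for b in range(n_bins):
        sel = idx == b
        if sel.sum() >= 5:                      # require a few samples per bin
            centers.append(0.5 * (bins[b] + bins[b + 1]))
            medians.append(np.median(icesat2_elev_m[sel]))
    # More frequently wet pixels sit lower, so enforce a monotonic decrease.
    medians = np.minimum.accumulate(np.array(medians))
    return np.array(centers), medians

def occurrence_to_bathymetry(occurrence_map, centers, elevations):
    """Interpolate the fitted relation over a full occurrence raster."""
    return np.interp(occurrence_map, centers, elevations)
```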
Reproducibility and replicability of data analyses are among the main requirements for the advancement of scientific fields that rely heavily on computational data analysis, such as atmospheric science. However, very few research activities in this field in Indonesia emphasize the principle of transparency of code and data in the dissemination of results. This makes it a major challenge for the Indonesian scientific community to verify the outputs of their peers' research. One common obstacle to the reproducibility of data-driven research is the portability of the computing environment used to reproduce the results. Therefore, in this article, we offer a solution: a Debian-based, Dockerized Jupyter Notebook environment preinstalled with several Python libraries that are often used in atmospheric science research. Through this containerized computing environment, we expect to overcome the portability and dependency constraints that atmospheric scientists often face, and also to encourage the growth of a research ecosystem in Indonesia built on open and replicable workflows.
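A minimal sketch of such a container definition is shown below; the base image, library selection, and launch options are illustrative assumptions, not the exact recipe used in this work.

```dockerfile
# Illustrative Debian-based Jupyter image with common atmospheric-science
# Python libraries (package list is an assumed, not definitive, selection).
FROM debian:bullseye-slim

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Libraries often used in atmospheric-science analysis.
RUN pip3 install --no-cache-dir \
        jupyter numpy pandas xarray matplotlib netcdf4

EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
```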
Soft X-ray and EUV radiation from the Sun is absorbed by and ionizes the atmosphere, creating both the ionosphere and thermosphere. Temporal changes in irradiance energy and spectral distribution can have drastic impacts on the ionosphere, with consequences for satellite drag and radio communication. Because of this, it is necessary to estimate and predict changes in solar EUV spectral irradiance. Ideally, this would be done by direct measurement, but the high cost of solar EUV spectrographs makes this prohibitively expensive. Instead, scientists must use data-driven models to predict the solar spectrum for a given irradiance measurement. In this study, we further develop the Synthetic Reference Spectral Irradiance Model (SynRef). The SynRef model, which uses broadband EUV irradiance data from EUVM at Mars, was created to mirror the SORCE XPS model, which uses data from the TIMED SEE instrument and the SORCE XPS instrument at Earth. Both models superpose theoretical Active Region (AR) and Quiet Sun (QS) spectra generated by CHIANTI to match daily measured irradiance data, and output a modeled solar EUV spectrum for that day. By adjusting the weighting of the AR and QS spectra, we update the SynRef model to better agree with the FISM model and with spectral data collected on sounding rocket flights. We also use the broadband EUVM measurements to estimate AR temperature, which will allow us to select from a library of AR reference spectra at different temperatures. We present this updated SynRef model to more accurately characterize the solar EUV and soft X-ray spectra.
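A minimal sketch of the superposition step is shown below, under our own assumptions about the fitting procedure (here, non-negative least squares over broadband channel responses; all array names are hypothetical and the actual SynRef fitting may differ).

```python
# Sketch: find AR and QS weights whose superposed spectrum reproduces the
# measured broadband irradiances, then output the weighted full spectrum.
import numpy as np
from scipy.optimize import nnls

def synthesize_spectrum(ar_spectrum, qs_spectrum, band_response, measured_bands):
    """
    ar_spectrum, qs_spectrum : full-resolution reference spectra (n_wavelengths,)
    band_response            : (n_bands, n_wavelengths) broadband channel responses
    measured_bands           : (n_bands,) measured broadband irradiances
    """
    # Band-integrated irradiance of each reference spectrum.
    A = np.column_stack([band_response @ ar_spectrum,
                         band_response @ qs_spectrum])
    weights, _ = nnls(A, measured_bands)     # non-negative AR/QS weighting
    w_ar, w_qs = weights
    return w_ar * ar_spectrum + w_qs * qs_spectrum
```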
Journals occasionally solicit manuscripts for special collections, in which all papers are focused on a particular topic within the journal’s scope. For the Journal of Geophysical Research: Space Physics, there were 51 special collections from 2005 through 2018, comprising 1009 of the 8881 total papers in the journal over those years (11%). Taken together, the citation counts of special collection (SC) papers, along with other metrics, are on average somewhat higher than those of non-special-collection papers. Several paper characteristics were examined to assess whether they could explain the higher citation and download values for SC papers, but they cannot. In addition, indirect methods were used to assess self-citation as an explanation for the increased citations, but no evidence was found to support this hypothesis. It was found that some paper types, notably Commentaries and Technical Reports, have lower average citations but higher average downloads than Research Articles (the most common type of paper in this journal). This implies that such paper types have a different kind of impact than “regular” science-result-focused papers. In addition to having higher average citations and downloads, special collections focus community attention on a particular research topic, providing a deadline for manuscript submissions and a single webpage at which many related papers are listed. It is concluded that special collections are worth the extra community effort of organizing, writing, and reviewing these papers.
Ultrasonic transmission is sensitive to variations in the mechanical properties of materials. Wave propagation through fractured media introduces changes in the frequency content, travel time, and transmission coefficient of the wave. A workflow based on physics-driven unsupervised learning is developed to process transmitted ultrasonic shear waveforms and non-invasively visualize the geomechanical alterations due to hydraulic fracturing of a tight sandstone. The novelty of the work lies in the assignment of physically consistent clusters to the shear-waveform measurements across the axial and frontal planes by incorporating the travel time of the peak of spectral energy and the transmission coefficient. The proposed workflow generates maps of geomechanical alterations across the frontal and axial planes of the sample. The outputs of the workflow are in good agreement with independent techniques, namely acoustic emission and X-ray computed tomography. The proposed workflow can be adapted for improved fracture characterization in the subsurface when processing sonic-logging, cross-wellbore seismic, or surface seismic waveform data.
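The feature-extraction and clustering steps might look like the following Python sketch; the spectrogram settings, the RMS-ratio definition of the transmission coefficient, and the use of k-means are our illustrative assumptions rather than the exact workflow of the study.

```python
# Sketch: cluster transmitted shear waveforms by (i) travel time of the peak
# of spectral energy and (ii) transmission coefficient.
import numpy as np
from scipy.signal import spectrogram
from sklearn.cluster import KMeans

def waveform_features(waveform, reference, fs):
    f, t, Sxx = spectrogram(waveform, fs=fs, nperseg=128)
    energy_vs_time = Sxx.sum(axis=0)             # spectral energy per time bin
    peak_time = t[np.argmax(energy_vs_time)]     # travel time of energy peak
    trans_coeff = (np.sqrt(np.mean(waveform**2)) /
                   np.sqrt(np.mean(reference**2)))  # RMS amplitude ratio
    return peak_time, trans_coeff

def cluster_waveforms(waveforms, reference, fs, n_clusters=3):
    feats = np.array([waveform_features(w, reference, fs) for w in waveforms])
    feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # standardize
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```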
Oceanography has entered an era of new observing platforms, such as biogeochemical Argo floats and gliders, some of which will provide three-dimensional maps of essential ecosystem variables on the North-West European (NWE) Shelf. In the foreseeable future, operational centres will use multi-platform assimilation to integrate those valuable data into ecosystem reanalyses and forecast systems. Here we address some important questions related to glider biogeochemical data assimilation and introduce multi-platform data assimilation in a (pre)operational model of the NWE Shelf-sea ecosystem. We test the impact of the different multi-platform system components (glider vs. satellite, physical vs. biogeochemical) on the simulated biogeochemical variables. To characterize the model performance, we focus on the period around the phytoplankton spring bloom, since the bloom is a major ecosystem driver on the NWE Shelf. We found that the timing and magnitude of the phytoplankton bloom are insensitive to the physical data assimilation, which is explained in the study. To correct the simulated phytoplankton bloom, one needs to assimilate chlorophyll observations from gliders or satellite Ocean Color (OC) into the model. We show that OC assimilation, although outperformed by the glider chlorophyll assimilation, has a mostly desirable impact on sub-surface chlorophyll. Since the OC assimilation updates chlorophyll only in the mixed layer, the impact on the sub-surface chlorophyll is the result of the model's dynamical response to the assimilation. We demonstrate that the multi-platform assimilation combines the advantages of its components and always performs comparably to its best-performing component.
High-quality citizen science data can be instrumental in advancing science toward new discoveries and a deeper understanding of under-observed phenomena. However, the error structure of citizen scientist (CS) data must be well defined. Within a citizen science program, the errors in submitted observations vary, and their occurrence may depend on CS-specific characteristics. This study develops a graphical Bayesian inference model of error types in CS data. The model assumes that (1) each CS observation is subject to a specific error type, each with its own bias and noise, and (2) an observation’s error type depends on the error community of the CS, which in turn relates to characteristics of the CS submitting the observation. Given a set of CS observations and corresponding ground-truth values, the model can be calibrated for a specific application, yielding (i) the number of error types and error communities, (ii) the bias and noise of each error type, (iii) the error distribution of each error community, and (iv) the error community to which each CS belongs. The model, applied to CS rainfall observations from Nepal, identifies five error types and sorts CSs into four model-inferred communities. In the case study, 73% of CSs submitted data with errors in fewer than 5% of their observations. The remaining CSs submitted data with unit, meniscus, unknown, and outlier errors. A CS’s assigned community, coupled with model-inferred error probabilities, can identify observations that require verification. With such a system, the onus of validating CS data is partially transferred from human effort to machine-learned algorithms.
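To make the assumed generative structure concrete, the following Python sketch simulates observations from the hierarchy described above; the error types, communities, and all parameter values are invented for illustration and are not the calibrated results of the study.

```python
# Generative sketch of the hierarchical error model: a citizen scientist (CS)
# belongs to an error community; the community sets the probability of each
# error type; each error type applies its own bias and noise to the truth.
import numpy as np

rng = np.random.default_rng(42)

# Illustrative error types: (multiplicative bias, additive noise sd).
error_types = {
    "accurate": (1.0, 0.5),
    "unit":     (25.4, 0.5),   # e.g., inches reported instead of mm
    "outlier":  (1.0, 50.0),
}
type_names = list(error_types)

# Illustrative communities: per-observation probability of each error type.
communities = {
    "careful":    [0.97, 0.02, 0.01],
    "unit-prone": [0.70, 0.28, 0.02],
}

def simulate_observation(truth, community):
    """Draw one CS observation given the true value and the CS's community."""
    etype = rng.choice(type_names, p=communities[community])
    bias, noise_sd = error_types[etype]
    return bias * truth + rng.normal(0.0, noise_sd), etype

obs, etype = simulate_observation(truth=12.0, community="unit-prone")
# Calibration inverts this: infer types, communities, and memberships from
# (observation, ground truth) pairs, e.g. by MCMC over the hierarchy.
```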
We describe a new way to apply a spatial filter to gridded data from models or observations, focusing on low-pass filters. The new method is analogous to smoothing via diffusion, and its implementation requires only a discrete Laplacian operator appropriate to the data. The new method can approximate arbitrary filter shapes, including Gaussian filters, and can be extended to spatially varying and anisotropic filters. The new diffusion-based smoother's properties are illustrated with examples from ocean model data and ocean observational products. An open-source Python package implementing this algorithm, called gcm-filters, is currently under development.
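The core idea can be sketched in a few lines of Python (this is a from-scratch illustration, not the gcm-filters API): repeatedly applying a discrete Laplacian damps small scales, acting as a low-pass filter; the step count and coefficient below are illustrative.

```python
# Sketch of smoothing via diffusion on a regular grid.
import numpy as np

def discrete_laplacian(field):
    """5-point Laplacian with simple periodic boundaries."""
    return (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
            np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4.0 * field)

def diffusion_smooth(field, n_steps=20, coeff=0.2):
    """Explicit diffusion steps; keep coeff <= 0.25 for numerical stability."""
    out = field.copy()
    for _ in range(n_steps):
        out = out + coeff * discrete_laplacian(out)
    return out

noisy = np.random.default_rng(1).normal(size=(128, 128))
smoothed = diffusion_smooth(noisy)   # small scales strongly damped
```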
Machine learning (ML) based models have demonstrated very strong predictive capabilities for hydrologic modeling but are often criticized for being black boxes. In this paper we use a technique from the field of explainable AI (XAI), called layer-wise relevance propagation (LRP), to “open the black box”. Specifically, we train a deep neural network on data from a set of hydroclimatically diverse FLUXNET sites to predict turbulent heat fluxes, and then use the LRP technique to analyze what it learned. We show that the neural network learns physically plausible relationships, including different ways of partitioning the turbulent heat fluxes according to the moisture- or energy-limiting characteristics of the sites. That is, the neural network learns different behaviors at arid and non-arid sites. We also develop and demonstrate a novel technique that uses the output of the LRP analysis to explore how the neural network learned to regionalize between sites. We find that the neural network primarily learned behaviors that differ between evergreen forested sites and all other vegetation classes. Our analysis shows that even simple neural networks can extract physically plausible relationships, and that by using XAI methods we can learn new information from ML-based methods.
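For readers unfamiliar with LRP, the following NumPy sketch shows the commonly used epsilon-rule relevance redistribution through a small fully connected network; the architecture and random weights are stand-ins, not the trained flux-prediction model from this study.

```python
# Sketch of layer-wise relevance propagation (LRP, epsilon rule) in NumPy.
import numpy as np

def lrp_dense(a, w, b, relevance, eps=1e-6):
    """Redistribute relevance from a layer's output back to its input."""
    z = a @ w + b                                   # forward pre-activations
    s = relevance / (z + eps * np.sign(z))          # stabilized ratio
    return a * (s @ w.T)                            # relevance at the input

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)     # input -> hidden
w2, b2 = rng.normal(size=(16, 1)), np.zeros(1)      # hidden -> output

x = rng.normal(size=(1, 8))                         # one input sample
h = np.maximum(x @ w1 + b1, 0.0)                    # ReLU hidden layer
y = h @ w2 + b2                                     # predicted flux (scalar)

r_hidden = lrp_dense(h, w2, b2, y)                  # relevance at hidden layer
r_input = lrp_dense(x, w1, b1, r_hidden)            # relevance per input feature
# r_input attributes the prediction across the 8 input features.
```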