Public health communication strategies, including entertainment-education, can effectively change human behavior and improve health outcomes related to climate change. Tools from social psychology, including social modeling and the building of self- and collective efficacy, can help us create a new model for current, culturally relevant stories that help communities adapt to climate change. As an example, we share key lessons from Rhythm and Glue, an applied television prototype based on research from an NSF Advancing Informal STEM Learning submission. Best practices for climate communication include adapting entertainment-education techniques for culturally grounded representations of positive outliers in climate engagement. As science communication progresses in adapting social psychology and sociology practices for climate communication, we share how this prototype applies those methods and suggest new directions that further adapt the practices to account for limited resources and media fragmentation. While this work focuses on climate, it has broad implications for future science communication practice.
As Earth System Science (ESS) becomes more data-intensive, collaborative, and interdisciplinary, it is important to understand how best to support and advance data reuse. We conducted an online survey of active ESS researchers from 126 U.S. universities and research centers, representing a wide variety of scientific fields. Of the 207 respondents, 51.7% had more than 20 years of research experience. Results indicated that the current primary purpose for reusing data is to conduct new analysis (87%), followed by comparing results (70.4%), with only 18.5% reusing data to reproduce published studies. As expected, data hosted by federally funded data centers were reused most frequently, with open government data and data provided directly by other researchers also widely used. Reuse of data from other types of repositories lags far behind, due in part to a range of service limitations. At the same time, data sharing by respondents is strong: 96.6% actively release their data, primarily as supplements to published papers, with moderate use of open access repositories. Of the 45.9% who had attempted to reproduce research, 73.7% failed at least once, often due to the limited detail provided in published papers. Still, 92.3% believe it is the researcher’s responsibility to ensure their work is reproducible. The majority favored traditional modes of documenting research (word processors, text editors, and code commenting) over electronic notebooks or workflow systems. Interestingly, 59.9% continue to use hand-written notebooks. Challenges to data reuse and reproducibility specific to ESS included the complex nature of earth systems, increasingly complicated models, a lack of data management resources, and limited emphasis on reproducibility in the field. Open-ended responses raised questions about whether “exact replication” is necessary or even possible for ESS.
Most researchers agreed that data and code should be considered important research products and that outlets are needed for publishing negative results. Taken together, the results suggest a strong data sharing culture in ESS with high levels of reuse and commitment to open science. The research community would benefit greatly from better documentation and sharing of methods and research processes, as well as targeted improvements in data services and tools.
Urban areas (i.e., cities, towns, and suburbs) are home to over 70% of the EU population, and this number is expected to exceed 80% by 2050 (Tapia et al., ECOL INDIC, 2017). The increase in frequency and intensity of extreme precipitation events caused by the changing climate (e.g., cloudbursts, rainstorms, heavy rainfall, hail, heavy snow), combined with the high population density and concentration of assets in urban areas, makes them particularly vulnerable to pluvial flooding; assessing their vulnerability under current and future climate scenarios is therefore of paramount importance. Detailed hydrologic-hydraulic numerical modelling is resource-intensive and therefore scarcely suitable for a consistent hazard assessment across large urban settlements. Given the steadily increasing availability of LiDAR (Light Detection And Ranging) high-resolution DEMs (Digital Elevation Models), several studies have highlighted the potential of fast-processing DEM-based methods, such as Hierarchical Filling and Spilling or Puddle-to-Puddle Dynamic Filling and Spilling (see e.g. Zhang et al., J HYDROL, 2014; Chu et al., WATER RESOUR RES, 2013), for consistent pluvial flood hazard characterization. As part of the activities of the EIT Climate-KIC Demonstrator project SAFERPLACES (https://saferplaces.co/), we developed a fast-processing algorithm, named Safer_RAIN, that maps pluvial flooding in large urban areas by implementing a filling-and-spilling procedure that accounts for spatially distributed rainfall input and infiltration processes (Green-Ampt method). We present the first applications of the algorithm to model recent urban inundations that occurred in Northern Italy. These preliminary applications, compared against ground evidence and detailed output from a two-dimensional hydrologic-hydraulic numerical model, highlight the limitations and potential of Safer_RAIN for identifying pluvial-hazard hotspots across large urban environments.
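The filling-and-spilling idea behind such DEM-based methods can be illustrated with a minimal priority-flood sketch in Python. This is illustrative only, not the Safer_RAIN implementation: it assumes every depression fills to its spill level on a toy DEM and ignores spatially distributed rainfall and infiltration.

```python
import heapq

def fill_depressions(dem):
    """Priority-flood depression filling: returns ponded water depth per
    cell, assuming each depression fills up to its spill (pour-point) level."""
    rows, cols = len(dem), len(dem[0])
    filled = [[None] * cols for _ in range(rows)]
    heap = []
    # Seed the queue with all boundary cells (water drains off the edges).
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                filled[r][c] = dem[r][c]
                heapq.heappush(heap, (dem[r][c], r, c))
    # Grow inward from the lowest known spill level.
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and filled[nr][nc] is None:
                # A cell below its spill level fills up to that level.
                filled[nr][nc] = max(dem[nr][nc], level)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return [[filled[r][c] - dem[r][c] for c in range(cols)]
            for r in range(rows)]

# Toy DEM: a depression (centre elevation 1) ringed by 4s, spilling
# through the boundary cell with elevation 3.
dem = [
    [5, 5, 5, 5, 5],
    [5, 2, 2, 4, 5],
    [5, 2, 1, 4, 5],
    [5, 4, 4, 4, 5],
    [5, 5, 5, 3, 5],
]
depth = fill_depressions(dem)  # depression fills to level 4
```

Because the priority queue always expands from the lowest spill level reached so far, each interior cell is assigned the elevation of the lowest path to the domain boundary, which is exactly the fill level of its depression.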
Bridge foundation scour is the most common cause of highway bridge failure. Assessing the local scouring mechanism around bridge piers provides information for decision-making regarding pile footing design and for predicting the safety of bridges under critical scoured conditions, and may thus help prevent unnecessary losses. Since bridge scour is the water-induced erosion of soil particles around bridge foundations, the loss of lateral load capacity at bridge foundations may leave bridges highly vulnerable to failure when the effects of scour and floods are combined. In this study, high-definition 3D models of the floodplain and the current amount of scour at the bridge piles were acquired through unmanned aerial vehicle (UAV) based measurements, which offer a practical, high-precision alternative to traditional measurement systems. The study evaluated the performance of bridges with reinforced concrete (RC) pile foundations under the combined effects of local scour and flood. An RC bridge constructed over the Boğaçayı River in Antalya, Turkey, was selected as the case study. The vulnerability of the bridge was assessed under flood loading considering the predicted scour amount. The maximum flood loads for different return periods (5, 20, 50, 100, and 500 years) and the corresponding maximum scour depths were determined with HEC-RAS software, and these outputs served as input parameters for evaluating the lateral behavior of the bridge. The soil-pile foundation-structure interaction was implemented in the finite element models of the pile groups. The multi-hazard performance of the bridge was evaluated under the maximum predicted scour depth and the corresponding flood load. As the scour depth increased, the fundamental periods, shear forces, and bending moments increased, while the pile lateral load capacities diminished.
It was therefore ascertained that scour substantially deteriorates the performance of the bridge in a multi-hazard environment.
Many disciplines within the geosciences require computational skills to access, analyze, and visualize data. These are skills students need to be competitive in the work environment. Applying computational thinking and basic coding in the classroom can diversify student learning, develop 21st-century skills, and demonstrate real-world applications through project-based learning. Teaching students to have a broad base of knowledge and a range of skills is paramount in developing career-ready students. Working in collaboration with members of the Earth Science Information Partners (ESIP) Education committee, education professionals from UNAVCO, NOAA, and the Cooperative Institute for Mesoscale Meteorological Studies, together with teachers from schools around the country, are exploring the use of basic coding and programmable robots as a springboard for learning computational thinking and skills within an Earth science context. By encouraging teachers to learn how to code, we help them encourage their students to be creators, rather than just consumers, of the technology around us and foster curiosity that whets their appetite to learn more. This presentation will elaborate upon the coding-in-the-classroom initiatives these partners are facilitating, including workshops, learning materials, and insights from workshop feedback.
Sea surface temperature (SST) observations made from ships are distributed irregularly in space and time and are affected by systematic biases and random errors. Such observations are often “binned”: split into samples contained within “bins”, grid boxes of a space-time grid (1° × 1° monthly bins are used here), whose statistics are then computed. Bin averages often serve as gridded representations of such data and thus require reliable uncertainty estimates, which for ship observations are particularly important because these observations dominate the early observational records. Here ship SST observations for 1992-2010 are compared with an independent high-resolution satellite-based SST data set. To remove systematic biases, seasonal means were subtracted from the difference between the bin-averaged data sets. In more than 66% (50%) of locations with binned temporal coverage exceeding 50% (66%), the magnitude of the remaining anomalies agreed within 20% (10%) with random error model estimates. Separate estimates for the sampling and measurement error components were obtained.
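The binning step described above can be sketched as follows. This is an illustrative Python example, not the authors' processing code: it floors coordinates to integer degrees and averages the observations falling in each 1° × 1° monthly bin, also returning the sample count per bin (needed for any sampling-error estimate).

```python
from collections import defaultdict
from statistics import mean

def bin_average(obs):
    """Group point observations into 1-degree x 1-degree monthly bins.
    obs: iterable of (lat, lon, month, sst) tuples.
    Returns {(lat_bin, lon_bin, month): (bin mean, sample count)}."""
    bins = defaultdict(list)
    for lat, lon, month, sst in obs:
        # Floor coordinates to the integer degree that labels the bin.
        key = (int(lat // 1), int(lon // 1), month)
        bins[key].append(sst)
    return {key: (mean(vals), len(vals)) for key, vals in bins.items()}

# Hypothetical ship reports: two in the same bin, one in a neighbour.
obs = [
    (10.2, 120.7, 1, 26.0),
    (10.8, 120.1, 1, 27.0),  # same 1-degree bin, same month
    (10.5, 121.3, 1, 25.0),  # neighbouring bin
]
grid = bin_average(obs)
```

Retaining the per-bin sample count is what makes it possible to separate the sampling error (few irregular ship reports per bin) from the measurement error of the individual observations.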
Remote sensing approaches based on VIS-NIR spectroscopy can provide near real-time information about soil fertility. However, the main challenge limiting the application of spectroscopy in soil fertility evaluation is finding suitable data pre-processing and calibration strategies. We compared various pre-processing techniques, using reflectance spectra obtained from AVIRIS-NG hyperspectral images, for quantifying organic carbon (OC), available phosphorus (P), and available potassium (K) in the surface soils of the Surendranagar area (western India) and Raichur (southern India). Surface (0-0.15 m) soil samples were collected from these two areas synchronously with the dates of the AVIRIS-NG campaign. The soil samples were air-dried, sieved to <2 mm, and analyzed for OC, P, and K using standard methods. The AVIRIS spectra (spectral range of 380-2500 nm at 5 nm intervals) corresponding to the soil sampling points were extracted. The pre-processing steps were applied in the following order: Continuum Removal (yes/no), Moving Window Abstraction (yes/no), no transformation or Euclidean Normalization or Standard Normal Variate (SNV), no transformation or Savitzky-Golay (SG) first-order smoothing, and no transformation or first or second derivative. We used partial least squares regression (PLSR) to calibrate the model from the pre-processed spectra. PLSR with Continuum Removal, SNV, SG first-order smoothing, and the first derivative was selected as the best algorithm for estimating soil properties in western India, with R2 values of 0.77 for OC, 0.79 for P, and 0.83 for K (RMSE <0.3 for all parameters). PLSR with Moving Window Abstraction, SG first-order smoothing, and the second derivative was selected as the best algorithm for estimating soil properties in southern India, with R2 values of 0.54 for OC, 0.49 for P, and 0.56 for K (RMSE <0.3 for all parameters).
These results suggest that the optimization of AVIRIS spectra using various pre-processing techniques and modeling approaches is required for rapid and non-destructive assessment and monitoring of soil health for precision agriculture.
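Two of the pre-processing transformations named above can be sketched in a few lines of Python. This is illustrative only, with a hypothetical six-band spectrum; the study's actual pipeline operated on full AVIRIS-NG spectra and included additional steps such as continuum removal and Savitzky-Golay smoothing.

```python
from statistics import mean, stdev

def snv(spectrum):
    """Standard Normal Variate: centre each spectrum on its own mean
    and scale by its own standard deviation (removes multiplicative
    scatter effects between spectra)."""
    m, s = mean(spectrum), stdev(spectrum)
    return [(x - m) / s for x in spectrum]

def first_derivative(spectrum, step=5.0):
    """First derivative approximated by forward finite differences
    (step = 5 nm, matching the AVIRIS-NG sampling interval)."""
    return [(b - a) / step for a, b in zip(spectrum, spectrum[1:])]

# Hypothetical reflectance values at six adjacent bands.
raw = [0.20, 0.22, 0.25, 0.30, 0.28, 0.26]
processed = first_derivative(snv(raw))  # one fewer value than the input
```

The derivative step removes additive baseline offsets, which is one reason derivative transforms often improve PLSR calibrations on reflectance spectra.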
A hierarchical Bayesian classifier is trained at pixel scale with spectral data from the CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) images. Its utility in detecting small exposures of uncommon phases is demonstrated with new geologic discoveries near the Mars-2020 rover landing site. Akaganeite is found in sediments on the Jezero crater floor and in fluvial deposits at NE Syrtis. Jarosite and silica are found on the Jezero crater floor while chlorite-smectite and Al phyllosilicates are found in the Jezero crater walls. These detections point to a multi-stage, multi-chemistry history of water in Jezero crater and the surrounding region and provide new information for guiding the Mars-2020 rover’s landed exploration. In particular, the akaganeite, silica, and jarosite in the floor deposits suggest either a later episode of salty, Fe-rich waters that post-date the Jezero crater delta or groundwater alteration of portions of the Jezero crater sedimentary sequence.
Previous work has shown the application of neural networks (NN) to groundwater (GW) level prediction to be promising, but it has relied on a variety of inputs, such as air temperature, pumping rates, precipitation, service population, and others. This work presents a long short-term memory neural network (LSTM-NN) for GW level forecasting that uses only previously observed GW levels as input, without resorting to any other data or information about a groundwater basin. We apply the LSTM-NN to short-term and long-term GW level forecasting in the Edwards aquifer in Texas, employing the Adam optimizer for training. The performance of the LSTM-NN was compared with that of a simple NN under 36 scenarios, with prediction horizons ranging from one day to three months and covering several conditions of data availability. The results demonstrate the superiority of the LSTM-NN over the simple NN in all scenarios and its success in accurate GW level prediction. The LSTM-NN predicts GW levels one lag, up to four lags, and up to 26 lags ahead with an accuracy (R2) of at least 99.89%, 99.00%, and 90.00%, respectively, over a testing period spanning more than 17 years of the most recent records. These results demonstrate the capacity of machine learning (ML) for groundwater prediction and affirm the importance of gathering high-quality, long-term GW level data for predicting key groundwater characteristics useful in sustainable groundwater management.
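Forecasting from past levels alone amounts to turning the univariate GW series into supervised (window, target) pairs before training. The sketch below is generic, not the authors' code, and the lookback and horizon values are hypothetical.

```python
def make_supervised(series, lookback, horizon):
    """Turn a univariate series into (input window, target) pairs:
    each target is the value `horizon` steps after the end of a
    `lookback`-long window of past observations."""
    pairs = []
    for i in range(len(series) - lookback - horizon + 1):
        window = series[i:i + lookback]          # model input
        target = series[i + lookback + horizon - 1]  # value to predict
        pairs.append((window, target))
    return pairs

# Hypothetical daily GW levels; predict one step (lag) ahead
# from the previous three observations.
levels = [10.0, 10.2, 10.1, 10.4, 10.6, 10.5, 10.8]
pairs = make_supervised(levels, lookback=3, horizon=1)
```

Increasing `horizon` (e.g., to 26 lags) reproduces the long-range scenarios: the same windows are kept, but each is paired with a target further in the future, which is why accuracy degrades as the horizon grows.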
Geoscientists often spend significant research time identifying, downloading, and refining geospatial data before they can use it for analysis. Exploring interdisciplinary data is even more challenging because it may be difficult to evaluate data quality outside of one’s expertise. QGreenland, a newly funded EarthCube project, is designed to remove these barriers for interdisciplinary Greenland-focused research and analysis via an open-data, open-platform Greenland GIS tool. QGreenland will combine interdisciplinary data (e.g., glaciology, human health, geopolitics, hydrology, biology) curated by an international Editorial Board into a unified, all-in-one GIS environment for offline and online use. The package is designed for the open source GIS platform QGIS. QGreenland will include multiple levels of data use: 1) a fully downloadable base package ready for offline use, 2) additional disciplinary and/or high-resolution data extension packages available for selective download, and 3) online-access-only data to accommodate especially large datasets or frequently updated time series. Software development has begun, and we look forward to discussing techniques for creating the best open access, reproducible methods for package creation and future sustainability. A beta version is now available for experimentation and feedback from interested users and the Editorial Board. The version 1 public release is slated for fall 2020, with two subsequent annual updates. As an interdisciplinary data package, QGreenland is designed to aid collaboration and discovery across fields. Along with discussing QGreenland development, we will provide an example use case demonstrating its potential utility for researchers, educators, planners, and communities.
Microbial eukaryotes (protists) are important contributors to marine biogeochemistry and play essential roles as both producers and consumers in marine ecosystems. Among protists, mixotrophs—those that use both heterotrophy and autotrophy to satisfy their energy requirements—are especially important to primary production in oligotrophic regions where nutrient availability is otherwise limiting. For instance, acantharians accomplish mixotrophy by hosting Phaeocystis spp. as endosymbionts. Despite their ecological importance, Acantharea-Phaeocystis symbioses are understudied due to host fragility and the hosts’ inability to survive in culture. We investigated the evolution and ecological functioning of these symbioses by sequencing single-cell transcriptomes from sixteen acantharians. Since hosts harbor multiple Phaeocystis species, we prepared transcriptomes for the two most common symbiont species available in culture—P. cordata and P. jahnii—and evaluated differential gene expression between symbiotic and free-living cells. Results indicate photosynthesis genes are upregulated in symbiosis for both symbiont species, suggesting symbionts photosynthesize at elevated rates within hosts. However, biosynthesis and metabolism of storage carbohydrates and lipids are downregulated in symbiosis, indicating that the extra energy captured through elevated photosynthesis is not retained. Symbiont gene expression suggests symbionts relinquish fixed carbon as small organonitrogen compounds, such as amides and amino acids, while receiving host-supplied nitrogen as urea and ammonium. Importantly, genes associated with protein kinase signaling pathways that promote cell proliferation are deactivated in symbionts. Manipulation of these pathways may prevent symbionts from overgrowing hosts and therefore represents a key component of maintaining the symbiosis. This study illuminates mechanisms of host control and nutrient transfer in an important microbial symbiosis in oligotrophic waters.
Research data are a vital component of the scientific record. Discovering and assessing data for possible reuse in future research is challenging. The Belmont Forum has recently awarded funds to three international teams as part of a four-year Collaborative Research Action (CRA) on Science-driven e-Infrastructure Innovation (SEI) for the Enhancement of Transnational, Interdisciplinary and Transdisciplinary Data Use, to improve data management practices and thereby increase data reuse. One of these awardees, PARSEC, comprises two interwoven strands: one focused on improving data practices for reuse and credit, and one focused on synthesis science. The data specialists work alongside synthesis science researchers as they determine the influence of natural protected areas on socioeconomic outcomes for local communities. They collaborate with the researchers to better understand their motivations and work practices, and to aid them in the data-related steps that need to be taken during the research lifecycle. This will ensure their data and code are FAIR-compliant and thus enhance the likelihood of their data being reused and their analyses being reproduced. The PARSEC team is working with the Research Data Alliance (RDA), Earth Science Information Partners (ESIP), DataCite, and ORCID to build awareness of the elements required for data creators to receive credit and automated attribution for their data contributions, and of the tools that will make it easier to observe usage. Credit for data is an important incentive for researchers to make their data reusable. When data are FAIR and cited, their related publications have higher visibility. We shall discuss various ways in which we are working across the science-data interface in our multi-country and multi-disciplinary working environment to improve data (and code) reuse through better management and crediting. Make your Data FAIR, Cite your Data, Get Credit, Increase Reuse and reap the rewards!
Alluvial wetlands are among the most important ecosystems in the world and occur in abundance across the vast Indo-Gangetic plains. The wetlands of this region are of variable sizes and characteristics but currently face similar problems of drying out and fragmentation. It is imperative to understand the evolutionary pathways and hydrological connectivity of these wetlands in order to plan and execute their management and restoration. These pathways have been studied for one wetland, the Kaabar Tal, situated in the Kosi-Gandak interfan region of the eastern Gangetic plains. Its geomorphic evolutionary pathways were established using satellite imagery, DEMs, toposheets, and high-resolution aerial imagery obtained with an unmanned aerial vehicle (UAV). Various geomorphic units, each characterized by an assemblage of geomorphic features, were mapped for the Kaabar Tal and its surroundings. Seasonal, annual, and decadal variability in the hydrological status of the wetland was estimated for the period 1976-2017 using historical Landsat datasets. Seasonal variability in the hydrological connectivity structure of the wetland with its catchment for the period 1989-2017 was estimated in a GIS framework. Structural connectivity was estimated using diffusion kernel interpolation, and dynamic connectivity using the Getis-Ord Gi* statistic and the Mann-Kendall trend test applied within a space-time-cube framework. The detailed geomorphic mapping revealed that the wetland primarily originated through fluvial processes. A historical reconstruction of its hydrological status revealed that in recent times the wetland has been getting fragmented, and that the connectivity potential of different areas of the catchment is a function of the prevalent land-use and land-cover (LULC) pattern and seasonality.
Therefore, the heterogeneity and complexity of the geomorphic units of the wetland and the historical LULC patterns of the catchment should be considered in designing any management and restoration plan.
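The Mann-Kendall trend test used for the dynamic connectivity analysis above can be sketched in a few lines of Python. This is a minimal version without tie correction, on a hypothetical series, illustrative rather than the authors' implementation.

```python
from math import sqrt

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction): returns the S
    statistic and the standard normal score Z. Z > 1.96 indicates a
    significant increasing trend at the 5% level (two-sided)."""
    n = len(series)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / sqrt(var_s)  # continuity correction
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Hypothetical seasonal connectivity index with an upward trend.
s, z = mann_kendall([1.0, 1.2, 1.1, 1.5, 1.7, 1.6, 2.0, 2.1])
```

Because the test is rank-based, it is robust to the non-normal, seasonal data typical of remotely sensed hydrological indices, which is why it pairs naturally with the Getis-Ord Gi* hot-spot statistic in space-time-cube analyses.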
Data that are FAIR demonstrate specific characteristics, including ease of discovery, ability to access, community-acceptable formats allowing interoperability, and information that supports the decision to reuse. The process used to determine data reuse is commonly called “fit for purpose” or “fit for use”. These criteria are defined using relevant factors established by the community for which the data were originally created, along with a “best effort” toward criteria needed by other research communities. The FAIR Data Principles support robust documentation of datasets that includes the information necessary for reuse. An important part of that documentation, or metadata, is clear documentation of the quality and uncertainty of the data being considered. When this information is incomplete, data have a higher tendency of being used incorrectly, leading to inaccurate research, rejected papers, or even retracted papers. Data creators making their data FAIR, including uncertainty information, directly improves the transparency and integrity of our science today and into the future.
Continuous Structural Parameterization (CSP) is a method for approximating different numerical model parameterizations of the same process as functions of the same gridscale variables. This allows systematic comparison of parameterizations with each other and observations or resolved simulations of the same process. Using the example of two convection schemes running in the Met Office Unified Model (UM), we show that a CSP is able to capture concisely the broad behavior of the two schemes, and differences between the parameterizations and resolved convection simulated by a high resolution simulation. When the original convection schemes are replaced with their CSP emulators within the UM, basic features of the original model climate and some features of climate change are reproduced, demonstrating that CSP can capture much of the important behavior of the schemes. Our results open the possibility that future work will estimate uncertainty in model projections of climate change from estimates of uncertainty in simulation of the relevant physical processes.
Most state-of-the-art deep learning systems have their roots in computer vision, which forces the remote sensing community to develop ad hoc procedures for applying deep learning methods to the analysis of remote sensing data. In this Jupyter notebook, we present Keras Spatial (https://pypi.org/project/keras-spatial), a new Python package for pre-processing and augmenting geospatial data for deep learning models. Keras Spatial is composed of loosely coupled components, which allow users to pre-process geospatial raster data on the fly before ingesting them into neural networks. The advantages of using Keras Spatial over more traditional ad hoc pipelines are (1) allowing scientists and developers to work in projected coordinates rather than pixels and (2) controlling the sample space, thereby avoiding issues such as bias and class imbalance during training. We will demonstrate Keras Spatial using the case study of processing digital elevation data for a segmentation model. We will also demonstrate advanced data pre-processing features of the package, such as accessing remote data sources directly, easy integration of multiple datasets via automatic reprojection and resampling, and decoupling training sample dimensions from the geographic extent to open the door to prediction across different scales.
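The idea of defining the sample space in projected map coordinates rather than pixels can be sketched generically. This is not the Keras Spatial API (its actual interface differs; see the package documentation), just a minimal illustration of tiling an extent into sample footprints whose size is independent of any raster's pixel grid.

```python
def sample_grid(minx, miny, maxx, maxy, width, height):
    """Tile a projected extent into non-overlapping sample footprints
    of `width` x `height` map units. Each footprint is a
    (minx, miny, maxx, maxy) tuple that any raster source can later be
    read and resampled into, regardless of its native resolution."""
    samples = []
    y = miny
    while y + height <= maxy:
        x = minx
        while x + width <= maxx:
            samples.append((x, y, x + width, y + height))
            x += width
        y += height
    return samples

# Hypothetical 300 m x 200 m extent tiled into 100 m footprints.
footprints = sample_grid(0.0, 0.0, 300.0, 200.0, 100.0, 100.0)
```

Because the footprint list is just geometry, it can be filtered or re-weighted before any pixels are read, which is how sample-space control (e.g., balancing classes) becomes possible upstream of training.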
In 2003, the New York State Department of Environmental Conservation began designating Potential Environmental Justice Areas (PEJA) for the purpose of providing additional public participation opportunities to disadvantaged populations during permitting deliberations. We developed NYenviroScreen to help stakeholders understand, review, and provide input on how future PEJA designation might be updated and improved, including for identifying disadvantaged communities under the newly enacted Climate Leadership and Community Protection Act (CLCPA). We present and compare three potential update methods and provide an interactive web application for investigating model components and composition. The three methods are: (i) three-factor clustering using the Jenks natural breaks algorithm, (ii) a cumulative impact model adapted from CalEPA’s CalEnviroScreen, and (iii) a hybrid approach that utilizes both methods and incorporates Native American land areas. NYenviroScreen brings together federal and state data sources related to population health, sociodemographics, environmental risk factors, and potential pollution exposures for 15,463 census block groups. We find that the hybrid approach provides the most robust coverage for both rural and urban areas of New York State. By developing new approaches to such designations and making them publicly accessible, we generate actionable science that contributes to the pursuit of environmental justice in New York.
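The Jenks natural-breaks classification used in method (i) can be illustrated with a brute-force sketch. This is illustrative only: production implementations use dynamic programming (Fisher-Jenks) to scale to thousands of block groups, and the values and class count here are hypothetical.

```python
from itertools import combinations
from statistics import mean

def natural_breaks(values, k):
    """Jenks-style natural breaks by exhaustive search: partition the
    sorted values into k contiguous classes minimising the total
    within-class sum of squared deviations. Fine for small inputs."""
    data = sorted(values)
    n = len(data)

    def ssd(chunk):
        # Within-class sum of squared deviations from the class mean.
        m = mean(chunk)
        return sum((x - m) ** 2 for x in chunk)

    best = None
    # Try every way to place k-1 cut points between sorted values.
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        classes = [data[a:b] for a, b in zip(bounds, bounds[1:])]
        cost = sum(ssd(c) for c in classes)
        if best is None or cost < best[0]:
            best = (cost, classes)
    return best[1]

# Hypothetical index scores with three obvious clusters.
classes = natural_breaks([1, 2, 3, 10, 11, 12, 30, 31], 3)
```

Minimising within-class variance is what makes Jenks breaks "natural": class boundaries fall in the gaps of the data rather than at arbitrary equal intervals, which matters when an index score distribution is skewed across block groups.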
We present a book entitled “Machine Learning, Statistics, and Data Mining for Heliophysics,” an online, open-source book available at helioml.org. The book includes a collection of interactive Jupyter notebooks, written in the programming language Python, that walk the reader through the process of applying machine learning, statistics, and data mining techniques to various kinds of solar and space physics data sets to reproduce published results. We consider this book to be a living document with frequent updates. Please contact us if you’d like to submit a chapter!