Moisture recycling via evapotranspiration (ET) is often invoked as a mechanism for the high deuterium excess signals observed in continental precipitation (dP). However, a global-scale analysis of isotope data from precipitation monitoring stations shows that metrics of ET contributions to precipitation (van der Ent et al., 2014) explain little dP variability on seasonal timescales. This occurs despite the fact that ET contributions increase by ~50% from wet to dry seasons in continental locations such as the Eurasian interior. To explain this apparent paradox, we hypothesize that the effects of ET on dP are dampened during dry seasons by contributions from isotopically evolved residual water storage, which act to lower the d-excess of ET fluxes (dET), in combination with changes in transpiration fraction (T/ET). To test this hypothesis, we develop a parsimonious two-season (wet, dry) model for dET incorporating residual water storage and ET partitioning effects. We find that in environments with limited water storage, such as shallow-rooted grasslands, dry-season dET is lower than wet-season dET despite lower relative humidity. As global average ratios of annual water storage to precipitation are relatively low (Guntner et al., 2007), these dynamics may be widespread over continents. In environments where water storage is not limiting, such as groundwater-dependent ecosystems, dry-season dET is still likely lower; however, this effect arises instead from higher seasonal T/ET when energy-driven plant water use is enhanced and surface evaporation is relatively limited by water availability. Together, these analyses also indicate multiple mechanisms by which dET may be lower than dP during the same season, challenging the view that moisture recycling feedbacks increase dP in continental interiors. This work demonstrates the potential complexity of seasonal dP dynamics and cautions against simple interpretations of dP as a process tracer for moisture recycling.
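The direction of the storage and partitioning effects can be illustrated with a toy flux-weighted mixing calculation. This is not the paper's model; all numbers are hypothetical and chosen only to show why evolved storage and higher T/ET both pull dry-season dET down (transpiration at steady state is non-fractionating, so it carries the d-excess of its source water):

```python
# Toy d-excess mass balance for ET. Illustrative numbers only, not the
# two-season model described in the abstract.

def d_excess_ET(f_T, d_source, d_evap):
    """Flux-weighted d-excess of ET, where f_T is the transpiration fraction T/ET."""
    return f_T * d_source + (1.0 - f_T) * d_evap

# Wet season: storage recently recharged by precipitation (d-excess ~ +10 permil)
d_wet = d_excess_ET(f_T=0.6, d_source=10.0, d_evap=15.0)

# Dry season: residual storage is isotopically evolved (evaporative enrichment
# has lowered its d-excess), and T/ET is higher, so more of the flux carries
# the low source-water signal.
d_dry = d_excess_ET(f_T=0.8, d_source=-5.0, d_evap=15.0)

print(d_wet, d_dry)  # dry-season dET falls below wet-season dET
```

With these hypothetical values, the dry-season flux has lower d-excess even though the evaporation end-member is unchanged, mirroring the dampening mechanism the abstract proposes.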
References: Guntner et al., 2007. Water Resour. Res., 43, W05416. van der Ent et al., 2014. Earth Syst. Dynam., 5, 471–489.
Mangrove forests are among the most productive ecosystems in the world. These tropical and subtropical coastal forests provide a wide array of ecosystem services, including the ability to sequester and store large amounts of ‘blue carbon’. Given rising concerns over anthropogenic carbon dioxide (CO2) emissions, mangrove forests have been increasingly recognized for their potential in climate change mitigation programs. However, their productivity differs considerably across environments, making it difficult to estimate carbon sequestration potentials at regional scales. Additionally, most research has focused on humid, tropical regions, with limited studies in arid and semi-arid regions. We studied a semi-arid mangrove forest in Magdalena Bay, Baja California Sur, Mexico, to quantify average net ecosystem exchange (NEE), determine the annual carbon (C) budget, and identify the environmental controls driving those fluxes. Measurements were taken during 2012-2013 using the eddy covariance technique, yielding a daily mean NEE of -2.25 +/- 0.4 g C m-2 d-1 and an annual carbon uptake of 894 g C m-2 y-1. Daily variations in NEE were primarily regulated by light, but air temperature and vapor pressure deficit were strong seasonal drivers. Our research demonstrates that despite the harsh and arid climate, the mangroves of Magdalena Bay were nearly as productive as mangroves found in tropical and subtropical climates. These results broaden understanding of the ecosystem services of one of the largest mangrove ecosystems in the Baja California peninsula, and highlight the potential role of arid mangrove ecosystems for C accounting, management, and mitigation plans for the region.
Concerns about water security often inform climate risk-related decisions made by environmentally focused investors (Porritt, 2001; Stern, 2006). Yet potential liabilities for damage caused by extreme flood and drought events linked to global warming present risks that are not always reflected in share prices (Krosinsky et al., 2012). Considering the highly destructive nature of such events, we ask whether companies, or specific sectors, could and should be held at least partially liable for their emission-releasing business activities. Recent articles (Rayer & Millar, 2018; Rayer et al., 2020) estimate that under a hypothetical climate liability regime, North Atlantic hurricane seasons might increasingly generate 1-2% losses on market capitalizations (or share prices) for the top seven carbon-emitting, publicly listed companies. In this paper, we extend the concept of the climate liability regime to estimate the impact of global flood- and drought-related damages on the share prices of nine fossil-fuel firms (including the seven examined by Rayer et al. (2020)). Following Rayer et al. (2020), we use incremental climate impacts and historical corporate emissions to estimate that climate change-related global flood and drought damages for the period 2012 to 2016 amount to approximately 2-3% of the top nine carbon-emitting companies’ market capitalizations. We also discuss moral responsibility and the apportionment of obligations between producers and users. Quantifying impacts from extreme weather events increases salience and serves as an example of how science can identify and address the important business questions, pertinent to both investors and companies, that arise from a changing climate.
References: Krosinsky, C., Robins, N., & Viederman, S. (2012). Evolutions in sustainable investing. John Wiley & Sons. Porritt, J. (2001). The world in context. HRH The Prince of Wales’ Business and the Environment Programme, Cambridge. Rayer, Q. G., & Millar, R. J. (2018). Investing in Extreme Weather Conditions. Citywire Wealth Manager®, (429), 36. Rayer, Q., Pfleiderer, P., & Haustein, K. (2020). Global Warming and Extreme Weather Investment Risks. Palgrave Macmillan. https://doi.org/10.1007/978-3-030-38858-4_3 Stern, N. (2006). Stern Review executive summary. London.
Construction with freeboard – vertical height of a structure above the minimum required – is commonly accepted as a sound investment for flood hazard mitigation. However, determining the optimal height of freeboard poses a major decision problem. This research introduces a life-cycle benefit-cost analysis (LCBCA) approach for optimizing freeboard height for a new, single-family residence, while incorporating uncertainty and, in the case of insured homes, considering the costs from losses, insurance, and freeboard (if any) to the homeowner and the National Flood Insurance Program (NFIP) separately. Using a hypothetical case-study home in Metairie, Louisiana, results show that adding 2 ft. of freeboard at the time of construction may be considered the optimal option because it yields the highest net benefit, whereas the highest net benefit-cost ratio occurs for 1 ft. of freeboard. Even if flood loss reduction is not considered when adding freeboard, the savings in annual insurance premiums alone are sufficient to recover the construction costs paid by the homeowner if at least one foot of freeboard is included at construction. Collectively, these results, based on conservative assumptions, suggest that at the time of construction even a small amount of freeboard provides substantial savings for the homeowner and (especially) for the financially strapped NFIP. For community planners, the results suggest that wise planning with reasonable expectations on the front end makes for a more sustainable community.
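The distinction between maximizing net benefit and maximizing the benefit-cost ratio can be made concrete with a small calculation. The dollar figures below are hypothetical and are not the study's values; they simply show how the two criteria can rank freeboard options differently:

```python
# Hypothetical present-value costs and benefits for freeboard options
# (illustrative only; not values from the Metairie case study).
options = {
    "0 ft": {"cost": 0.0, "benefit": 0.0},
    "1 ft": {"cost": 5_000.0, "benefit": 30_000.0},
    "2 ft": {"cost": 12_000.0, "benefit": 40_000.0},
}

for name, o in options.items():
    net_benefit = o["benefit"] - o["cost"]
    bcr = o["benefit"] / o["cost"] if o["cost"] > 0 else float("nan")
    print(f"{name}: net benefit = {net_benefit:,.0f}, BCR = {bcr:.2f}")

# With these numbers, 2 ft maximizes net benefit (40,000 - 12,000 = 28,000)
# while 1 ft maximizes the benefit-cost ratio (30,000 / 5,000 = 6.0).
```

A life-cycle analysis discounts losses, premiums, and construction costs to present value before this comparison; the ranking conflict shown here is why the abstract reports both metrics.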
The discipline of land change science has evolved rapidly over the past decades. Remote sensing plays a major role in one of its essential components: the observation, monitoring, and characterization of land change. In this paper, we propose a new framework for a multifaceted view of land change through the lens of remote sensing, identifying five facets of land change: change location, time, target, metric, and agent. We also evaluate the impacts of the spatial, spectral, temporal, angular, and data-integration domains of remotely sensed data on observing, monitoring, and characterizing the different facets of land change, and discuss some of the current land change products. We recommend clarifying the specific land change facet being studied in remote sensing of land change, reporting multiple or all facets of land change in remote sensing products, shifting the focus from land cover change to specific change metrics and agents, integrating social science data and multi-sensor datasets for a deeper and fuller understanding of land change, and recognizing the limitations and weaknesses of remote sensing in land change studies.
Climate models generally project an increase in the winter North Atlantic Oscillation (NAO) index under a future high-emissions scenario, alongside an increase in winter precipitation in northern Europe and a decrease in southern Europe. The extent to which future forced NAO trends are important for European winter precipitation trends and their uncertainty remains unclear. We show using the Multimodel Large Ensemble Archive that the NAO plays a small role in northern European mean winter precipitation projections for 2080-2099. Conversely, half of the model uncertainty in southern European mean winter precipitation projections is potentially reducible through improved understanding of the NAO projections. Extreme positive NAO winters increase in frequency in most models as a consequence of mean NAO changes. These extremes also have more severe future precipitation impacts, largely because of mean precipitation changes. This has implications for future resilience to extreme positive NAO winters, which frequently have severe societal impacts.
We present a Python package geared towards the intuitive analysis and visualization of paleoclimate timeseries, Pyleoclim. The code is open-source, object-oriented, and built upon the standard scientific Python stack, allowing users to take advantage of a large collection of existing and emerging techniques. We describe the code’s philosophy, structure and base functionalities, and apply it to three paleoclimate problems: (1) orbital-scale climate variability in a deep-sea core, illustrating spectral, wavelet and coherency analysis in the presence of age uncertainties; (2) correlating a high-resolution speleothem to a climate field, illustrating correlation analysis in the presence of various statistical pitfalls (including age uncertainties); (3) model-data confrontations in the frequency domain, illustrating the characterization of scaling behavior. We show how the package may be used for transparent and reproducible analysis of paleoclimate and paleoceanographic datasets, supporting FAIR software and an open science ethos. The package is supported by extensive documentation and a growing library of tutorials shared publicly as videos and cloud-executable Jupyter notebooks, to encourage adoption by new users.
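The kind of spectral analysis such a package automates can be sketched without the package itself. The snippet below (plain NumPy, not Pyleoclim's API) recovers a 41-kyr obliquity-like cycle from a synthetic, unevenly sampled record by least-squares sinusoid fitting at each trial period — the idea underlying Lomb-Scargle-style periodograms for age-uncertain, irregularly sampled paleoclimate series:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1000, 300))   # kyr; uneven sampling, like a sediment core
y = np.sin(2 * np.pi * t / 41.0) + 0.3 * rng.standard_normal(t.size)  # 41-kyr cycle + noise

# Least-squares spectral analysis: fit a cosine/sine pair at each trial period
periods = np.linspace(10, 200, 800)
power = np.empty(periods.size)
for k, p in enumerate(periods):
    X = np.column_stack([np.cos(2 * np.pi * t / p), np.sin(2 * np.pi * t / p)])
    coef, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
    power[k] = np.sum(coef**2)

best = periods[np.argmax(power)]
print(best)  # recovers a period near 41 kyr
```

A library such as Pyleoclim wraps steps like this together with significance testing and age-ensemble propagation, which is where hand-rolled analyses typically go wrong.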
In Mongolia, overuse and degradation of groundwater is a serious issue, mainly in the urban and economic hub, Ulaanbaatar, and the Southern Gobi mining hub. To explicitly quantify spatio-temporal variations in water availability, a process-based eco-hydrology model, NICE (National Integrated Catchment-based Eco-hydrology) (Nakayama and Watanabe, 2004), was applied to two contrasting river basins encompassing these hubs. We built high-resolution gridded data representing water use for livestock, urban populations, and mining by combining a global dataset, statistical data, GIS data, observation data, and field surveys. The model simulated the effects of climatic change and human-induced disturbances on water resources during 1980-2018 (Nakayama et al., 2021). Although livestock watering by herders had some impact on hydrologic change, groundwater levels in the Tuul River basin were shown to have been severely lowered by water use in Ulaanbaatar over the last few decades, whereas those in the Galba River basin have declined markedly as a result of Oyu Tolgoi mining since 2010. Analysis of the relative contributions of environmental factors also helped to separate the effects of climatic change and human activities on spatio-temporal change in groundwater levels. Further, we extended NICE by coupling it with an inverse method for sensitivity analysis and parameter estimation of anthropogenic water uses (NICE-INVERSE). This new model quantified the spatio-temporal variations of livestock water use in these river basins (Nakayama et al., in press). Livestock water use was generally small for each soum (district), and much of it could be returned to the ecosystems. The results also showed a temporal decreasing trend in unit water use for some typical livestock (cattle, sheep, and goats), suggesting a substantial increase in water stress due to local-regional eco-hydrological degradation caused by urbanization and mining.
Sensitivity analysis and inverse estimation of model parameters helped to improve the accuracy of hydrologic budgets in the basins. This methodology is powerful for evaluating spatio-temporal variations in water availability and supporting water management in regions with limited inventory data.
Accurate flood inundation modelling using a complex high-resolution hydrodynamic (high-fidelity) model can be very computationally demanding. To address this issue, efficient approximation methods (surrogate models) have been developed. Despite recent developments, there remain significant challenges in using surrogate methods for modelling the dynamical behaviour of flood inundation in an efficient manner. Most methods focus on estimating the maximum flood extent due to the high spatial-temporal dimensionality of the data. This study presents a hybrid surrogate model, consisting of a low-resolution hydrodynamic (low-fidelity) and a Sparse Gaussian Process (Sparse GP) model, to capture the dynamic evolution of the flood extent. The low-fidelity model is computationally efficient but has reduced accuracy compared to a high-fidelity model. To account for the reduced accuracy, a Sparse GP model is used to correct the low-fidelity modelling results. To address the challenges posed by the high dimensionality of the data from the low- and high-fidelity models, Empirical Orthogonal Functions (EOF) analysis is applied to reduce the spatial-temporal data into a few key features. This enables training of the Sparse GP model to predict high-fidelity flood data from low-fidelity flood data, so that the hybrid surrogate model can accurately simulate the dynamic flood extent without using a high-fidelity model. The hybrid surrogate model is validated on the flat and complex Chowilla floodplain in Australia. The hybrid model was found to improve the results significantly compared to just using the low-fidelity model and incurred only 39% of the computational cost of a high-fidelity model.
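The dimension-reduction step at the heart of such a hybrid surrogate can be illustrated in a few lines. The sketch below uses plain NumPy on synthetic snapshot matrices: EOFs are obtained from an SVD of the anomaly matrix, and a linear least-squares map between low- and high-fidelity EOF coefficients stands in for the Sparse GP used in the study (all sizes and data are synthetic; the real workflow would fit on training events and predict unseen ones):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_snap = 500, 60                       # flattened grid cells x time snapshots (toy)
truth = rng.standard_normal((n_cells, 3)) @ rng.standard_normal((3, n_snap))
high = truth                                     # "high-fidelity" flood fields (synthetic)
low = 0.8 * truth + 0.1 * rng.standard_normal((n_cells, n_snap))  # biased + noisy low-fidelity

def eof(data, n_modes):
    """EOF decomposition via SVD of the anomaly (mean-removed) matrix."""
    mean = data.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
    modes = U[:, :n_modes]                       # spatial EOF patterns
    coeffs = modes.T @ (data - mean)             # temporal coefficients
    return mean, modes, coeffs

_, _, c_lo = eof(low, 3)
mean_hi, modes_hi, c_hi = eof(high, 3)

# Coefficient-space correction: linear least squares as a stand-in for the Sparse GP
A, *_ = np.linalg.lstsq(c_lo.T, c_hi.T, rcond=None)
pred = mean_hi + modes_hi @ (A.T @ c_lo)

rmse_low = np.sqrt(np.mean((low - high) ** 2))
rmse_pred = np.sqrt(np.mean((pred - high) ** 2))
print(rmse_low, rmse_pred)   # corrected fields track the high-fidelity ones more closely
```

Working in the low-dimensional coefficient space is what makes training the GP tractable: the regression maps a handful of mode amplitudes rather than every grid cell at every time step.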
The chemical and biological composition of surface materials, together with the physical structure and arrangement of those materials, determines the intrinsic spectral reflectance of Earth’s land surface at the plot scale. As measured by a spaceborne or airborne sensor, the apparent reflectance depends on the intrinsic reflectance, the surface texture, the contribution and attenuation of the atmosphere, and the topography. Compensation or correction for the topographic effect requires information in digital elevation models (DEMs). Available DEMs with global coverage at ~30 m spatial resolution are derived from interferometric radar and stereo-photogrammetry. Locally or regionally, airborne lidar altimetry, airborne interferometric radar, or stereo-photogrammetry from airborne or fine-resolution satellite imagery produces DEMs with finer spatial resolutions. Characterization of DEM quality typically expresses the root-mean-square (RMS) error of the elevation, but the accuracy of remote sensing retrievals is acutely sensitive to uncertainties in the topographic properties that affect the illumination geometry. The essential variables are the cosine of the local illumination angle and the shadows cast by neighboring terrain. We show that calculations with globally available DEMs underrepresent shadows and consistently underestimate the values of the cosine of the illumination angle; the RMS error increases with solar zenith angle and in more rugged terrain. Analyzing imagery of Earth’s mountains from current and future missions requires addressing the uncertainty introduced by errors in DEMs on algorithms that estimate surface properties from retrievals of the apparent spectral reflectance. Intriguing potential improvements lie in novel methods to gain information about topography from the imagery itself.
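The cosine of the local illumination angle follows from standard slope/aspect trigonometry on the DEM. A minimal NumPy sketch is below; it assumes rows increase northward and columns eastward (real rasters often run north-to-south, so signs would need adjusting), and it captures only self-shadowing, not the cast shadows from neighboring terrain that the text highlights:

```python
import numpy as np

def illumination_cosine(dem, dx, sun_zenith, sun_azimuth):
    """Cosine of the local solar illumination angle for each DEM cell.

    dem: 2-D elevation array (m); dx: grid spacing (m); angles in radians,
    azimuths measured clockwise from north. Simplifying assumption: rows
    increase northward and columns eastward.
    """
    dz_dy, dz_dx = np.gradient(dem, dx)           # finite-difference surface gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))     # steepest-slope angle
    aspect = np.arctan2(-dz_dx, -dz_dy)           # downslope azimuth, clockwise from N
    return (np.cos(sun_zenith) * np.cos(slope)
            + np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))

# Sanity check: on flat terrain the local illumination angle equals the solar zenith
flat = illumination_cosine(np.zeros((4, 4)), 30.0, np.pi / 4, np.pi)
```

Elevation errors enter through the gradient terms, which is why retrieval accuracy degrades at large solar zenith angles and in rugged terrain even when the RMS elevation error itself looks small.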
Earth System Models’ (ESMs’) complex land components simulate a patchwork of increases and decreases in surface water availability when driven by projected future climate changes. Yet commonly used simple theories for surface water availability, such as the Aridity Index (P/E0) and the Palmer Drought Severity Index (PDSI), yield severe, globally dominant drying when driven by those same climate changes, leading to disagreement among published studies. In this work, we use a common modeling framework to show that ESM-simulated runoff-ratio and soil-moisture responses become much more consistent with the P/E0 and PDSI responses when several previously known factors that the simple theories do not account for are removed from the simulations. This reconciles the disagreement and makes the full ESM responses more understandable. For the ESM runoff ratio, the most important factor causing the more positive global response compared to P/E0 is the concentration of precipitation in time with greenhouse warming. For ESM soil moisture, the most important factor causing the more positive global response compared to PDSI is the effect of increasing carbon dioxide on plant physiology, which also drives most of the spatial variation in the runoff-ratio enhancement. The effect of increasing vapor-pressure deficit on plant physiology is a key secondary factor for both. Future work will assess the utility of both the ESMs and the simple indices for understanding observed, historical trends.
The predicted Antarctic contribution to global-mean sea-level rise is one of the most uncertain among all major sources. This is partly because of instability mechanisms of ice flow over deep basins. Errors in bedrock topography can substantially impact the projected resilience of glaciers against such instabilities. Here we analyze the Pine Island Glacier topography to derive a statistical model representation. Our model allows for inhomogeneous and spatially dependent uncertainties and avoids the unnecessary smoothing introduced by spatial averaging or interpolation. A set of topography realizations is generated that represents our best estimate of the topographic uncertainty for ice sheet model simulations. The bedrock uncertainty alone creates a 5% to 25% uncertainty in the predicted sea-level rise contribution at year 2100, depending on friction law and climate forcing. Pine Island Glacier simulations on this new set are consistent with simulations on the BedMachine reference topography but diverge from Bedmap2 simulations.
Increasing ice flux from glaciers retreating over deepening bed topography has been implicated in the recent acceleration of mass loss from the Greenland and Antarctic ice sheets. We show in observations that some glaciers have remained at peaks in bed topography without retreating despite enduring significant changes in climate. Observations also indicate that some glaciers which persist at bed peaks undergo sudden retreat years or decades after the onset of local ocean or atmospheric warming. Using model simulations, we show that glacier persistence may lead to two very different futures: one where glaciers persist at bed peaks indefinitely, and another where glaciers retreat from the bed peak suddenly without a concurrent climate forcing. However, it is difficult to distinguish which of these two futures will occur from current observations. We conclude that inferring glacier stability from observations of persistence obscures our true commitment to future sea-level rise under climate change.
The tandem rise in satellite-based observations and computing power has changed the way we (can) see rivers across the Earth’s surface. Global datasets of river and river-network characteristics at unprecedented resolutions are becoming common enough that the sheer amount of available information presents problems of its own. Fully exploiting this new knowledge requires linking these geospatial datasets to each other within the context of a river network. To cope with this wealth of information, we are developing Veins of the Earth (VotE), a flexible system designed to synthesize knowledge about rivers and their networks into an adaptable and readily usable form. VotE is not itself a dataset, but rather a database of relationships linking existing datasets that allows for rapid comparison and export of river networks at arbitrary resolutions. VotE’s underlying river network and drainage basins are extracted from MERIT-Hydro. We link within VotE a newly compiled dam dataset, streamflow gages from the GRDC, and published global river network datasets characterizing river widths, slopes, and intermittency. We highlight VotE’s utility with a demonstration of how vector-based river networks can be exported at any requested resolution, a global comparison of river widths from three independent datasets, and an example of computing watershed characteristics by coupling VotE to Google Earth Engine. Future efforts will focus on including real-time datasets such as SWOT river discharges and ReaLSAT reservoir areas.
Despite the proliferation of computer-based research on hydrology and water resources, such research is typically poorly reproducible. Published studies have low reproducibility due to incomplete availability of data and computer code, and a lack of documentation of workflow processes. This leads to a lack of transparency and efficiency because existing code can neither be quality controlled nor re-used. Given the commonalities between existing process-based hydrological models in terms of their required input data and preprocessing steps, open sharing of code can lead to large efficiency gains for the modeling community. Here we present a model configuration workflow that provides full reproducibility of the resulting model instantiations in a way that separates the model-agnostic preprocessing of specific datasets from the model-specific requirements that models impose on their input files. We use this workflow to create large-domain (global, continental) and local configurations of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model connected to the mizuRoute routing model. These examples show how a relatively complex model setup over a large domain can be organized in a reproducible and structured way that has the potential to accelerate advances in hydrologic modeling for the community as a whole. We provide a tentative blueprint of how community modeling initiatives can be built on top of workflows such as this. We term our workflow the “Community Workflows to Advance Reproducibility in Hydrologic Modeling” (CWARHM; pronounced “swarm”).
The isotopic composition of dissolved oxygen offers a family of potentially unique tracers of respiration and transport in the subsurface ocean. Uncertainties in transport parameters and isotopic fractionation factors, however, have limited the strength of the constraints offered by 18O/16O and 17O/16O ratios in dissolved oxygen. In particular, puzzlingly low 17O/16O ratios observed for some low-oxygen samples have been difficult to explain. To improve our understanding of oxygen cycling in the ocean’s interior, we investigated the systematics of oxygen isotopologues in the subsurface Pacific using new data and a 2-D isotopologue-enabled isopycnal reaction-transport model. We measured 18O/16O and 17O/16O ratios, as well as the “clumped” 18O18O isotopologue in the northeast Pacific, and compared the results to previously published data. We find that transport and respiration rates constrained by O2 concentrations in the oligotrophic Pacific yield good measurement-model agreement across all O2 isotopologues only when using a recently reported set of respiratory isotopologue fractionation factors that differ from those most often used for oxygen cycling in the ocean. These fractionation factors imply that an elevated proportion of 17O compared to 18O in dissolved oxygen (i.e., its triple-oxygen isotope composition) does not uniquely reflect gross primary productivity and mixing. For all oxygen isotopologues, transport, respiration, and photosynthesis comprise important parts of their respective budgets. Mechanisms of oxygen removal in the subsurface ocean are discussed.
Sea Surface Salinity (SSS) is an increasingly used Essential Ocean and Climate Variable. The SMOS, Aquarius, and SMAP satellite missions all provide SSS measurements, with very different instrumental features leading to specific measurement characteristics. The Climate Change Initiative Salinity project (CCI+SSS) aims to produce an SSS Climate Data Record (CDR) that addresses well-established user needs based on those satellite measurements. To generate a homogeneous CDR, instrumental differences are carefully adjusted based on in-depth analysis of the measurements themselves, together with limited use of independent reference data. An optimal interpolation in the time domain, without temporal relaxation to reference data or spatial smoothing, is applied, which preserves the variability of the original datasets. SSS CCI fields are well suited for monitoring weekly to interannual signals, at spatial scales ranging from 50 km to the basin scale. They display large year-to-year seasonal variations over the 2010-2019 decade, sometimes by more than +/-0.4 over large regions. The robust standard deviation of the monthly CCI SSS minus in situ Argo salinities is 0.15 globally, whereas it is at least 0.20 for individual satellite SSS fields. r2 is 0.97, similar to or better than that of the original datasets. The correlation with SSS from independent ship thermosalinographs further highlights the CCI dataset’s excellent performance, especially near land. During the SMOS-Aquarius period, when representativity uncertainties are largest, r2 is 0.84 with CCI, compared with 0.48 for the original Aquarius dataset. SSS CCI data are freely available and will be updated and extended as more satellite data become available.
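The "robust standard deviation" in such satellite-minus-Argo comparisons is commonly the median absolute deviation scaled to match a Gaussian standard deviation — an assumption here, since the abstract does not define the metric. A minimal sketch of why it is preferred over the plain standard deviation for match-up statistics:

```python
import numpy as np

def robust_std(x):
    """MAD-based spread estimate: 1.4826 * median(|x - median(x)|).

    The 1.4826 factor makes the estimate equal the standard deviation for
    Gaussian data, while remaining insensitive to outliers.
    """
    x = np.asarray(x)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

rng = np.random.default_rng(2)
diffs = rng.normal(0.0, 0.15, 10_000)    # synthetic satellite-minus-Argo differences
diffs[:100] = 5.0                         # a few gross outliers (e.g., bad match-ups)
print(np.std(diffs), robust_std(diffs))   # plain std inflates; the robust std does not
```

With 1% gross outliers the plain standard deviation is several times too large, while the MAD-based estimate still reflects the spread of the well-behaved differences — the quantity of interest when comparing satellite products.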
Alkalinization of natural waters by the dissolution of natural or artificial minerals is a promising solution to sequester atmospheric CO$_2$ and counteract acidification. Here we address the alkalinization carbon capture efficiency (ACCE) by deriving an analytical factor that quantifies the increase in dissolved inorganic carbon in the water due to variations in alkalinity. We show that ACCE strongly depends on the water pH, with a sharp transition from minimum to maximum in a narrow interval of pH values. We also compare ACCE in surface freshwater and seawater and discuss potential bounds for ACCE in the soil water. Finally, we present two applications of ACCE. The first is a local application to 156 lakes in an acid-sensitive region, highlighting the strong sensitivity of ACCE to lake pH. The second is a global application to the surface ocean, revealing a latitudinal pattern of ACCE driven by differences in temperature and salinity.
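The pH dependence described above can be reproduced with a minimal carbonate-system sketch. Assuming equilibrium with a fixed atmospheric pCO2 and carbonate-plus-water alkalinity only (approximate freshwater constants at 25 °C; an illustrative simplification, not the paper's derivation), dDIC/dAlk follows from differentiating DIC and alkalinity with respect to [H+]:

```python
import numpy as np

# Approximate freshwater equilibrium constants at 25 C (illustrative values)
K0, K1, K2, Kw = 3.4e-2, 4.45e-7, 4.69e-11, 1.0e-14
pCO2 = 4.1e-4                 # atm
co2 = K0 * pCO2               # dissolved CO2 fixed by Henry's law

def acce(pH):
    """dDIC/dAlk at fixed pCO2, with Alk = [HCO3-] + 2[CO3--] + [OH-] - [H+].

    DIC(H) = co2 * (1 + K1/H + K1*K2/H**2)
    Alk(H) = co2 * (K1/H + 2*K1*K2/H**2) + Kw/H - H
    The ratio of their derivatives with respect to H gives dDIC/dAlk.
    """
    H = 10.0 ** (-np.asarray(pH, dtype=float))
    dDIC_dH = -co2 * (K1 / H**2 + 2 * K1 * K2 / H**3)
    dAlk_dH = -co2 * (K1 / H**2 + 4 * K1 * K2 / H**3) - Kw / H**2 - 1.0
    return dDIC_dH / dAlk_dH

eta = acce([4.0, 5.0, 6.0, 7.0, 8.0])
print(eta)  # near zero in acidic water, approaching ~1 above the transition
```

In this toy model the sharp rise occurs where carbonate buffering overtakes the neutralization of free H+ (around pH 5-6 here): in acidic water, added alkalinity mostly consumes H+ and captures almost no carbon, while near-neutral and alkaline waters convert nearly one mole of CO2 per mole of added alkalinity.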