These Earth Energy Budgets (EEBs) came to prominence in 1997 when Kiehl and Trenberth produced the EEB commonly known as KT97. These budgets have regularly come under attack, primarily because they show the Earth emitting 300% more radiation than it receives from the Sun; this energy is generated out of nothing and violates the 1st Law of Thermodynamics. They also show the Sun shining on the dark side of the Earth, something that simply does not happen. All of the radiation data in these EEBs, with the exception of the Long Wave Down (LWD) and Long Wave Up (LWU) infrared (IR) radiation at the surface, have been divided by 4, which depicts the Sun shining equally on all four quadrants of the Earth and has the effect of showing the Earth emitting 300% more radiation than it receives from the Sun. This 300% of extra radiation is supposedly generated out of nothing by a greenhouse effect (GHE) in the atmosphere, and it seems apparent that this divide-by-4 system is being used as a means of justifying the GHE theory. IR radiation is 100 times less energetic than visible radiation, which means the 322 W/m² of IR LWD is the equivalent of 3.22 W/m² of visible, or Short Wave Down (SWD), radiation from the Sun. Since these EEBs appear to be used to calibrate climate models, it became necessary to review them, which in turn made it necessary to generate a new Earth Energy Budget to bring some realism back into them. This paper produces a new Earth Energy Budget based on measured data. The Earth receives 1,361 W/m² of SWD solar radiation at the top of atmosphere (TOA), and 1,361 W/m² of Short Wave Up (SWU) and LWU arrive back at the TOA. 589 W/m² of solar radiation is absorbed at the surface, and 589 W/m² of LWU, latent heat, and thermals is emitted by the surface. There is no mystery radiation being generated in the atmosphere, and the budget is in balance.
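For reference, the divide-by-4 discussed above spreads the solar constant intercepted over the Earth's cross-sectional disc (πr²) across its full spherical surface (4πr²); a minimal sketch of that arithmetic:

```python
import math

S0 = 1361.0  # solar constant at top of atmosphere, W/m^2

def mean_insolation(solar_constant: float) -> float:
    """Average TOA insolation when the intercepted disc (pi*r^2) is
    spread over the full sphere surface (4*pi*r^2)."""
    r = 1.0  # the radius cancels out of the ratio
    disc = math.pi * r**2
    sphere = 4.0 * math.pi * r**2
    return solar_constant * (disc / sphere)

print(mean_insolation(S0))  # 1361 / 4 = 340.25 W/m^2
```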
Supervolcanic eruptions can induce abrupt global cooling (at a rate of roughly 1°C/year, lasting for years to decades); the prehistoric Yellowstone eruption, for example, released by some estimates about 100 times more SO₂ than the 1991 Mt. Pinatubo eruption. An abrupt global cooling of several °C, even if lasting only a few years, would place immediate and drastic stress on biodiversity and food production, posing a global catastrophic risk to human society. Using a simple climate model, this paper discusses the possibility of counteracting supervolcanic cooling with the intentional release of greenhouse gases. Although well-known longer-lived compounds such as CO₂ and CH₄ are found to be unsuitable for this purpose, select fluorinated gases (F-gases), either individually or in combination, could be released at gigaton scale to offset most of the supervolcanic cooling. We identify candidate F-gases (viz. C₄F₆ and CH₃F) and derive the radiative and chemical properties of ‘ideal’ compounds matched to specific cooling events. Geophysical constraints on manufacturing and stockpiling due to mineral availability are considered alongside technical and economic implications based on present-day market assumptions. The consequences of F-gas release in perturbing atmospheric chemistry are discussed in the context of those of the supervolcanic eruption itself. The conceptual analysis here suggests the possibility of mitigating certain global catastrophic risks via intentional intervention.
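The offset logic of such a simple climate model can be sketched in a zero-dimensional form; the forcing magnitudes and the climate sensitivity parameter below are illustrative assumptions, not values from the study:

```python
def equilibrium_dT(forcing_wm2: float, sensitivity: float = 0.8) -> float:
    """Equilibrium temperature change (K) for a radiative forcing (W/m^2),
    using a simple climate sensitivity parameter lambda (K per W/m^2)."""
    return sensitivity * forcing_wm2

# Hypothetical numbers: a supervolcanic aerosol forcing of -10 W/m^2,
# partially offset by +8 W/m^2 of forcing from released F-gases.
volcanic = equilibrium_dT(-10.0)  # cooling from stratospheric aerosols
fgas = equilibrium_dT(+8.0)       # warming from greenhouse-gas release
residual = volcanic + fgas
print(residual)  # roughly -1.6 K of cooling remains uncompensated
```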
Accurately identifying liquid water layers in mixed-phase clouds is crucial for estimating cloud radiative effects. Lidar-based retrievals are limited in optically thick or multilayer clouds, leading to positive biases in simulated shortwave (SW) radiative fluxes. At the same time, general circulation models also tend to overestimate the downwelling SW radiation at the surface, especially in the Southern Ocean region. To reduce this SW radiation bias in models, we first need better observation-based retrievals for liquid detection, which can later be used for model validation. To address this, a machine-learning-based liquid-layer detection method called VOODOO was employed in a proof-of-concept study using a single-column radiative transfer model to compare the SW cloud radiative effects of liquid-containing clouds detected by Cloudnet and VOODOO against ground-based and satellite observations. The results show a reduction in the SW radiation bias, indicating that liquid-layer detection with machine-learning retrievals can improve radiative transfer simulations.
During NASA’s Apollo missions, inhalation of dust particles from lunar regolith was identified as a potential occupational hazard for astronauts. These fine particles adhered tightly to spacesuits and were brought accidentally into the living areas of the spacecraft. Apollo astronauts reported that exposure to the dust caused intense respiratory and ocular irritation. This problem is a potential challenge for the Artemis Program, which aims to return humans to the Moon for extended stays in this decade. Since lunar dust is “weathered” by space radiation, solar wind, and the incessant bombardment of micrometeorites, we investigated whether treatment of lunar regolith simulants to mimic space weathering enhanced their toxicity. Two such simulants were employed in this research, Lunar Mare Simulant-1 (LMS-1), and Lunar Highlands Simulant-1 (LHS-1), which were applied to human lung epithelial cells (A549). In addition to pulverization, previously shown to increase dust toxicity sharply, the simulants were exposed to hydrogen gas at high temperature as a proxy for solar wind exposure. This treatment further increased the toxicity of both simulants, as measured by the disruption of mitochondrial function, and damage to DNA both in mitochondria and in the nucleus. By testing the effects of supplementing the cells with an antioxidant (N-acetylcysteine), we showed that a substantial component of this toxicity arises from free radicals. It remains to be determined to what extent the radicals arise from the dust itself, as opposed to their active generation by inflammatory processes in the treated cells.
Global reanalyses like ERA5 accurately capture atmospheric processes at spatial scales of O(10) km or larger. By downscaling ERA5 with large-eddy simulation (LES), LES can provide details about processes at spatio-temporal scales down to meters and seconds. Here, we present an open-source Python package named the “Large-eddy simulation and Single-column model - Large-Scale Dynamics”, or (LS)2D in short, designed to simplify the downscaling of ERA5 with doubly periodic LES. A validation with observations, for several sensitivity experiments consisting of month-long LESs over Cabauw (the Netherlands), demonstrates both its usefulness and its limitations. The day-to-day variability in the weather is well captured by (LS)2D and LES, but the setup under-performs in conditions with broken or near-overcast clouds. As a novel application of this modeling system, we used (LS)2D to study surface solar irradiance variability, as this quantity directly links land-surface processes, turbulent transport, and clouds to radiation. At a horizontal resolution of 25 m, the setup satisfactorily reproduces the solar irradiance variability down to a timescale of seconds. This demonstrates that the coupled LES-ERA5 setup is a useful tool that can provide details on the physics of turbulence and clouds, but it can only improve on its host reanalysis when applied in meteorologically suitable conditions.
Tropical cyclogenesis can be influenced by convectively coupled equatorial waves; yet, existing datasets prevent a complete analysis of the multi-scale processes governing both tropical cyclones (TCs) and equatorial waves. This study introduces a convection-permitting aquaplanet simulation that can be used as a laboratory to study TCs, equatorial waves, and their interactions. The simulation was produced with the Model for Prediction Across Scales-Atmosphere (MPAS-A) using a variable resolution mesh with convection-permitting resolution (i.e., 3-km cell spacing) between 10°S–30°N. The underlying sea-surface temperature is given by a zonally symmetric profile with a peak at 10°N, which allows for the formation of TCs. A comparison between the simulation and satellite, reanalysis, and airborne dropsonde data is presented to determine the realism of the simulated phenomena. The simulation captures a realistic TC intensity distribution, including major hurricanes, but their lifetime maximum intensities may be limited by the stronger vertical wind shear in the simulation compared to the observed tropical Pacific region. The simulation also captures convectively coupled equatorial waves, including Kelvin waves and easterly waves. Despite the idealization of the aquaplanet setup, the simulated three-dimensional structure of both groups of waves is consistent with their observed structure as deduced from satellite and reanalysis data. Easterly waves, however, have peak rotation and meridional winds at a slightly higher altitude than in the reanalysis. Future studies may use this simulation to understand how convectively coupled equatorial waves influence the multi-scale processes leading to tropical cyclogenesis.
Although adequately detailed kerosene chemical-combustion Arrhenius reaction-rate suites were not readily available for combustion modeling until the 1990s (e.g., Marinov), it was already known from mass-spectrometer measurements during the early Apollo era that fuel-rich liquid oxygen + kerosene (RP-1) gas generators yield large quantities (e.g., several percent of total fuel flow) of complex hydrocarbons such as benzene, butadiene, toluene, anthracene, and fluoranthene, among others (Thompson), which are formed concomitantly with soot (Pugmire). By the 1960s, virtually every fuel-oxidizer combination for liquid-fueled rocket engines had been tested, and the impact of gas-phase combustion efficiency on the rocket-nozzle efficiency factor had been empirically well determined (Clark). Until relatively recently, space-launch and orbital-transfer engines were increasingly designed for high efficiency, to maximize orbital parameters while minimizing fuel and structural masses: preburners and high-energy atomization have been used to pre-gasify fuels and increase gas-phase combustion efficiency, decreasing the yield of complex/aromatic hydrocarbons (which limit rocket-nozzle efficiency and overall engine efficiency) in hydrocarbon-fueled engine exhausts, thereby maximizing launch and orbital-maneuver capability (Clark; Sutton; Sutton/Yang). The combustion community has long been aware that the choice of Arrhenius reaction-rate suite is critical to computer engine-model outputs: specific combustion suites are required to estimate the yield of high-molecular-weight, reactive, and toxic hydrocarbons in the rocket engine combustion chamber; nonetheless, such GIGO errors can be seen in recent documents.
Low-efficiency launch vehicles also need larger fuel loads to achieve the same launched mass, further increasing the yield of complex hydrocarbons and radicals deposited by low-efficiency rocket engines along launch trajectories and into the stratospheric ozone layer, the mesosphere, and above. With increasing launch rates from low-efficiency systems, these persistent (Ross/Sheaffer; Sheaffer), reactive chemical species must have a growing impact on critical, poorly understood upper-atmosphere chemistry systems.
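The modified Arrhenius form underlying the reaction-rate suites discussed above is k = A·T^b·exp(−Ea/(R·T)); a generic sketch with illustrative coefficients that are not taken from any published suite:

```python
import math

R = 8.314462618  # universal gas constant, J/(mol*K)

def arrhenius_rate(A: float, b: float, Ea: float, T: float) -> float:
    """Modified Arrhenius rate coefficient k = A * T**b * exp(-Ea / (R*T)).
    A: pre-exponential factor, b: temperature exponent,
    Ea: activation energy (J/mol), T: temperature (K)."""
    return A * T**b * math.exp(-Ea / (R * T))

# Illustrative only: the same hypothetical reaction at two chamber temperatures,
# showing the strong temperature dependence that makes suite choice critical.
k_2000 = arrhenius_rate(A=1.0e12, b=0.0, Ea=2.0e5, T=2000.0)
k_3000 = arrhenius_rate(A=1.0e12, b=0.0, Ea=2.0e5, T=3000.0)
print(k_3000 / k_2000)  # rate coefficient rises steeply with temperature
```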
This work presents a comparison of the meteorology and the surface energy and mass fluxes of the clean-ice and debris-covered ice surfaces of the Djankuat Glacier, a partly debris-covered valley glacier situated in the Caucasus. A 2D spatially distributed and physically based energy and mass balance model at high spatial and temporal resolution is used, driven by meteorological data from two automatic weather stations and ERA5-Land reanalysis data. Our model is the first that attempts to assess the spatial variability of meteorological variables, energy fluxes, mass fluxes, and the melt-altering effects of supraglacial debris over the entire surface of a (partly) debris-covered glacier during one complete measurement year. The results show that the meteorological variables and the surface energy and mass balance components are significantly modified by the supraglacial debris. As such, changing surface characteristics and different surface temperature/moisture and near-surface wind regimes persist over debris-covered ice, consequently altering the pattern of the energy and mass fluxes when compared to clean-ice areas. The eventual effect of the supraglacial debris on the energy and mass balance and the surface-atmosphere interaction is found to depend strongly on the debris thickness and area: for thin and patchy debris, sub-debris ice melt is enhanced compared to clean ice, whereas for thicker and continuous debris, the melt is increasingly suppressed. Our results highlight the importance of the effect of supraglacial debris on glacier-atmosphere interactions and the corresponding implications for changing melt patterns and the climate-change response of (partly) debris-covered glaciers.
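A surface energy-balance melt model of this kind ultimately sums the surface fluxes and converts the residual energy to melt via the latent heat of fusion. A minimal sketch of that conversion (the flux values are made up, and a full model such as the one above includes additional terms like sub-debris heat conduction):

```python
RHO_ICE = 900.0     # ice density, kg/m^3
L_FUSION = 3.34e5   # latent heat of fusion, J/kg

def melt_rate_m_per_day(swnet: float, lwnet: float, qh: float, ql: float) -> float:
    """Surface melt (m of ice per day) from net shortwave, net longwave,
    sensible, and latent heat fluxes (all W/m^2), for a melting surface."""
    q_melt = swnet + lwnet + qh + ql  # residual energy available for melt, W/m^2
    if q_melt <= 0.0:
        return 0.0  # no energy surplus, no melt
    seconds_per_day = 86400.0
    return q_melt * seconds_per_day / (RHO_ICE * L_FUSION)

# Hypothetical midsummer fluxes over clean ice (roughly 6 cm of melt per day):
print(melt_rate_m_per_day(swnet=250.0, lwnet=-60.0, qh=40.0, ql=-10.0))
```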
The challenge of reconstructing air temperature for environmental applications is to accurately estimate past exposures even where monitoring is sparse. We present XGBoost-IDW Synthesis for air temperature (XIS-Temperature), a high-resolution machine-learning model for daily minimum, mean, and maximum air temperature covering the contiguous US from 2003 through 2021. XIS uses remote sensing (land surface temperature and vegetation) along with a parsimonious set of additional predictors to make predictions at arbitrary points, allowing the estimation of address-level exposures. We built XIS with a computationally tractable workflow for extensibility to future years, and we used weighted evaluation to fairly assess performance in sparsely monitored regions. The weighted root mean square error (RMSE) of predictions in site-level cross-validation for 2021 was 1.89 K for the daily minimum temperature, 1.27 K for the mean, and 1.72 K for the maximum. We obtained higher RMSEs in earlier years with fewer ground monitors. Compared with three leading gridded temperature models in 2021 at thousands of private weather stations not used in model training, XIS had at most 49% of the mean square error for the minimum temperature and 87% for the maximum. In a national application, we find a stronger relationship between minimum temperature in a heatwave and social vulnerability with XIS than with the other models. Thus, XIS-Temperature has potential for reconstructing important environmental exposures, and its predictions have applications in environmental justice and human health.
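The weighted evaluation mentioned above can be illustrated as follows; the specific weighting scheme here (up-weighting a sparsely monitored site) is a generic sketch, not the paper's exact procedure:

```python
import math

def weighted_rmse(y_true, y_pred, weights):
    """Root mean square error with per-site weights, so that sparsely
    monitored regions are not drowned out by dense station networks."""
    num = sum(w * (t - p) ** 2 for t, p, w in zip(y_true, y_pred, weights))
    den = sum(weights)
    return math.sqrt(num / den)

# Toy example (temperatures in K): two dense-network sites with low weight
# and one remote site with high weight.
obs = [280.0, 281.0, 295.0]
pred = [279.0, 282.0, 292.0]
w = [0.5, 0.5, 2.0]
print(weighted_rmse(obs, pred, w))  # dominated by the remote site's 3 K error
```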
Urban morphology, as determined by urban canopy parameters (UCPs), plays an important role in simulating the interaction between the urban land surface and the atmosphere. The impact of urbanization on a typical summer rainfall event in Hangzhou, China, is investigated using the integrated WRF/urban modelling system. Three groups of numerical experiments are designed to assess, respectively, the uncertainty in parameterization schemes, the sensitivity to UCPs, and the individual and combined thermal and dynamical effects of urbanization. The results suggest that the microphysics scheme has the highest level of uncertainty in simulating precipitation, followed by the planetary boundary layer scheme, whereas the land surface and urban physics schemes have minimal impacts. The choice of physical parameterization scheme is much more sensitive for simulating precipitation than for simulating temperature, mixing ratio, and wind speed. Of the eight selected UCPs, changes in heat capacity, thermal conductivity, surface albedo, and roughness length have a greater impact on temperature, mixing ratio, and precipitation, while changes in building height, roof width, and road width affect wind speed more. The total urban impact leads to higher temperature, a lower mixing ratio, lower wind speed, and more precipitation in and around the urban area. Comparing the thermal and dynamical effects of urbanization separately, both contribute to increases in temperature and precipitation, with the thermal effect playing the major role; for mixing ratio and wind speed, however, the two effects are opposite in sign, and each dominates for one of the two variables.
A weather station in Nukuʻalofa (NUKU), Tonga, ~68 km away from the epicenter of the 2022 Tonga eruption, recorded exceptional pressure, temperature, and wind data representative of the eruption source hydrodynamics. These high-quality data are available for further source and propagation studies. In contrast to other barometers and infrasound sensors at greater ranges, the NUKU barometer recorded a decrease in pressure during the climactic stage of the eruption. A simple fluid-dynamic explanation of the depressurization is provided, with a commentary on near- vs. far-field pressure observations of very large eruptions.
This paper is a contribution to the exploration of the parametric Kalman filter (PKF), an approximation of the Kalman filter in which the error covariances are approximated by a covariance model. Here we focus on a covariance model parameterized by the variance and the anisotropy of the local correlations, whose parameter dynamics provide a proxy for the full error-covariance dynamics. For this covariance model, we aim to provide the boundary conditions to specify in the PKF prediction step for bounded domains, focusing on Dirichlet and Neumann conditions when these are prescribed for the physical dynamics. An ensemble validation is proposed for the transport equation and for heterogeneous diffusion equations over a bounded 1D domain. This validation requires specifying the auto-correlation time scale needed to populate boundary perturbations that lead to prescribed uncertainty characteristics. The numerical simulations show that the PKF reproduces the uncertainty diagnosed from an ensemble of forecasts appropriately perturbed on the boundaries, demonstrating the ability of the PKF to handle boundaries in the prediction of uncertainties. It follows that a Dirichlet condition on the physical dynamics implies a Dirichlet condition on the variance and on the anisotropy.
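As a toy illustration of the kind of boundary treatment discussed above (not the authors' PKF implementation), one can transport an error-variance field on a bounded 1D domain with an upwind scheme, holding a Dirichlet value at the inflow boundary:

```python
def advect_variance(var, u, dx, dt, inflow_variance):
    """One upwind step of the transport of an error-variance field on a
    bounded 1D domain (u > 0, flow from left to right). The Dirichlet
    condition prescribes the variance carried in at the left boundary."""
    assert u > 0.0 and u * dt / dx <= 1.0  # CFL stability condition
    c = u * dt / dx
    new = var[:]
    for i in range(1, len(var)):
        new[i] = var[i] - c * (var[i] - var[i - 1])
    new[0] = inflow_variance  # Dirichlet condition at the inflow boundary
    return new

# Toy run: zero initial variance, unit variance entering at the boundary.
v = [0.0] * 10
for _ in range(20):
    v = advect_variance(v, u=1.0, dx=1.0, dt=0.5, inflow_variance=1.0)
print(v)  # the boundary variance has propagated into the domain
```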
The Central Highlands of Vietnam is the biggest Robusta coffee (Coffea canephora Pierre ex A.Froehner) growing region in the world. This study aims to identify the most important climatic variables that determine the current distribution of coffee in the Central Highlands and to build a “coffee suitability” model to assess changes in this distribution under climate change scenarios. A suitability model based on neural networks was trained on coffee occurrence data derived from national statistics on coffee-growing areas. Bias-corrected regional climate models were used for two climate change scenarios (RCP8.5 and RCP2.6) to assess changes in suitability for three future time periods (i.e., 2038-2048, 2059-2069, 2060-2070) relative to the 2009-2019 baseline. Average expected losses in suitable area were 62% and 27% for RCP8.5 and RCP2.6, respectively. The loss in suitability under RCP8.5 is particularly pronounced after 2060. Increasing mean minimum temperatures during the harvest (October-November) and growing season (March-September) and decreasing precipitation during the late growing season (July-September) mainly determined the loss in suitable area. If the policy commitments made under the Paris Agreement are met, the loss in coffee suitability could potentially be compensated for by climate change adaptation measures such as the use of shade trees and adapted clones.
Extreme heat waves beset western North America during 2021, including a 46.7°C (116°F) observation in Portland, Oregon, an astonishing 5°C above the previous record. Using Portland as an example, we provide evidence for a latent risk of extreme heat waves in the Pacific Northwest (PNW) and along the west coast of the United States, where a maritime climate and its intrinsic variations yield a positive skewness in summertime daily maximum temperatures. A generalized Pareto extreme-value analysis yields a heavy-tailed distribution with a return period of 300-1000 years, indicating that, while rare, the event was possible, contrary to prior claims that the event was “virtually impossible”. We demonstrate that the extreme temperatures can be explained by the coincident extreme values of geopotential heights, and that the relationship between heights and extreme temperatures has not materially changed over the observational record. The dynamical nature of the event, along with recent developments in stochastic theory, justifies the use of skewed and heavy-tailed distributions, which may provide the basis for a more proactive approach to managing the risk of future events.
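The return-period calculation implied above can be sketched with a generalized Pareto tail; the threshold, scale, shape, and exceedance-rate values below are hypothetical placeholders, not the fitted values from the study:

```python
import math

def gpd_survival(x, threshold, scale, shape):
    """Survival function of a generalized Pareto distribution for
    exceedances above `threshold` (shape > 0 gives a heavy tail)."""
    z = (x - threshold) / scale
    if z < 0.0:
        return 1.0
    if shape == 0.0:
        return math.exp(-z)  # exponential-tail limit
    base = 1.0 + shape * z
    if base <= 0.0:
        return 0.0  # beyond the upper endpoint (possible when shape < 0)
    return base ** (-1.0 / shape)

def return_period_years(x, threshold, scale, shape, exceedances_per_year):
    """Average years between events exceeding x, given the mean number
    of threshold exceedances per year."""
    p = gpd_survival(x, threshold, scale, shape) * exceedances_per_year
    return float("inf") if p == 0.0 else 1.0 / p

# Hypothetical placeholders: threshold 38 degC, scale 2.5, heavy-tailed
# shape 0.2, about two threshold exceedances per summer.
print(return_period_years(46.7, threshold=38.0, scale=2.5, shape=0.2,
                          exceedances_per_year=2.0))
```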