A novel, multi-scale climate modeling approach is used to show the potential for increases in future tornado intensity due to anthropogenic climate change. Historical warm- and cool-season (WARM and COOL) tornado events are virtually placed in a globally warmed future via the “pseudo-global warming” method. As hypothesized on meteorological grounds, the tornadic storm and associated vortex of the COOL event experience consistent and robust increases in intensity and size across an ensemble of imposed climate-change experiments. The tornadic storm and associated vortex of the WARM event experience increases in intensity in some of the experiments, but the response is neither consistent nor robust, and is overall weaker than in the COOL event. An examination of environmental parameters provides further support for the disproportionately stronger response in the cool-season event. These results have implications for future tornadoes forming outside of climatologically favored seasons.
Forestation is a major component of future long-term emissions reduction and CO2 removal strategies, but the viability of carbon stored in vegetation under future climates is highly uncertain. We analyze the results from seven CMIP6 models for a combined scenario with high fossil fuel emissions (from SSP5-8.5) and moderate forest expansion (from SSP1-2.6). This scenario is designed to test the ability of forestation strategies to mitigate climate change under continued high fossil fuel emissions. We find that this CO2 removal strategy has limited impact on global climate under a high global warming scenario, despite generating a substantial cumulative carbon sink of 10–60 Pg C over the period 2015–2100. Using a single-model ensemble, we show that there are local increases in warm extremes in response to forestation, associated with decreases in the number of cool days. Furthermore, we find evidence of a shift in the global carbon balance, whereby increased carbon storage on land of ~25 Pg C by 2100 associated with forestation is accompanied by a concomitant decrease in carbon uptake by the ocean, due to reduced atmospheric CO2 concentrations.
These Earth Energy Budgets (EEBs) came to prominence in 1997, when Kiehl and Trenberth produced the EEB known commonly as KT97. They have regularly come under attack. Primarily, they show the Earth emitting 300% more radiation than it receives from the Sun; this energy is being generated out of nothing, which violates the 1st Law of Thermodynamics. They also show the Sun shining on the dark side of the Earth, something that simply does not happen. All the radiation data in these EEBs, with the exception of the Long Wave Down (LWD) and Long Wave Up (LWU) infrared (IR) radiation at the surface, have been divided by 4, which depicts the Sun shining equally on all 4 quadrants of the Earth. This has the effect of having the Earth emit 300% more radiation than it receives from the Sun, with the extra 300% supposedly generated out of nothing by a greenhouse effect (GHE) in the atmosphere. It seems apparent that this divide-by-4 system is being used as a means of justifying the GHE theory. IR radiation is 100 times less energetic than visible radiation, meaning the 322 W/m² of IR LWD is the equivalent of 3.22 W/m² of visible, or Short Wave Down (SWD), radiation from the Sun. Since it appears these EEBs are being used to calibrate climate models, it became necessary to review them, and that in turn led to the need to generate a new Earth Energy Budget to bring some realism back into them. This paper produces a new Earth Energy Budget based on measured data. The Earth receives 1,361 W/m² of SWD solar radiation at the top of atmosphere (TOA), and 1,361 W/m² of Short Wave Up (SWU) and LWU arrive back at the TOA. 589 W/m² of solar radiation is absorbed at the surface, and 589 W/m² of LWU, latent heat, and thermals is emitted by the surface. There is no mystery radiation being generated in the atmosphere, and the budget is in balance.
Supervolcanic eruptions can induce abrupt global cooling (at rates of roughly ~1 °C/year, lasting for years to decades); the prehistoric Yellowstone eruption, for example, released by some estimates about 100 times more SO2 than the 1991 Mt. Pinatubo eruption. An abrupt global cooling of several °C, even if lasting only a few years, would place immediate and drastic stress on biodiversity and food production, posing a global catastrophic risk to human society. Using a simple climate model, this paper discusses the possibility of counteracting supervolcanic cooling with the intentional release of greenhouse gases. Although well-known longer-lived compounds such as CO2 and CH₄ are found to be unsuitable for this purpose, select fluorinated gases (F-gases), either individually or in combination, could be released at gigaton scale to offset most of the supervolcanic cooling. We identify candidate F-gases (viz. C4F6 and CH3F) and derive radiative and chemical properties of ‘ideal’ compounds matching specific cooling events. Geophysical constraints on manufacturing and stockpiling due to mineral availability are considered alongside technical and economic implications based on present-day market assumptions. The consequences of F-gas release in perturbing atmospheric chemistry are discussed in the context of those due to the supervolcanic eruption itself. The conceptual analysis here suggests the possibility of mitigating certain global catastrophic risks via intentional intervention.
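The scale of release involved can be sketched with a zero-dimensional energy balance of the kind a simple climate model embodies. In the sketch below, the climate sensitivity parameter, radiative efficiency, and molar mass are illustrative assumptions (not values from the paper), chosen only to show that a forcing of a few W m⁻² from a strong F-gas absorber lands in the sub-gigaton to gigaton mass range.

```python
# Back-of-envelope sizing of a hypothetical F-gas release to offset
# supervolcanic cooling. All parameter values are illustrative assumptions.

MOLES_AIR = 1.77e20  # approximate total moles of air in the atmosphere

def fgas_mass_gt(cooling_k, lam=0.8, rad_eff=0.3, molar_mass=150.0):
    """Mass (Gt) of a hypothetical F-gas needed to offset `cooling_k` kelvin.

    lam        -- assumed climate sensitivity parameter, K per (W m^-2)
    rad_eff    -- assumed radiative efficiency, W m^-2 per ppb
    molar_mass -- assumed molar mass of the compound, g/mol
    """
    forcing_needed = cooling_k / lam           # W m^-2 required to cancel cooling
    ppb_needed = forcing_needed / rad_eff      # well-mixed abundance, ppb
    moles_gas = ppb_needed * 1e-9 * MOLES_AIR  # ppb is a mole fraction
    return moles_gas * molar_mass / 1e15       # grams -> gigatons

# Offsetting ~3 K of cooling under these assumptions
mass_gt = fgas_mass_gt(3.0)
```

Under these assumed parameters, ~3 K of cooling requires roughly a third of a gigaton of compound, consistent with the gigaton-scale framing above; real candidates would differ with their measured radiative efficiencies and lifetimes.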
Correspondence to: Dimitre Karamanev (email@example.com) Recently, an Editorial titled Global warming is due to an enhanced greenhouse effect, and anthropogenic heat emissions currently play a negligible role at the global scale (Kleidon et al., 2023) was published in the journal Earth System Dynamics. In it, the Chief Editors state: "From time to time, we receive submissions at Earth System Dynamics claiming that global warming, or at least a significant part of it, is caused by factors other than the direct and indirect effect of anthropogenic greenhouse gas emissions. A number of these submissions claim that the increase in observed temperatures is due to the emission of heat from human activities… Such submissions would not have passed peer review in Earth System Dynamics as they ignore basic textbook knowledge and would indeed typically be rejected prior to entering the open-discussion peer review phase." It should be emphasized that discoveries "ignoring basic textbook knowledge" are among the strongest drivers of science (Newton, 1687; Einstein, 1905; Galilei, 1590) and should not be ignored unless they are deemed incorrect. And the determination of their correctness is performed in a peer-review process. On a smaller scale, it was recently found that the assumption that the motions of freely rising and freely falling rigid bodies are governed by the same physical principles (Newton, 1687; Galilei, 1590) was incorrect (Karamanev and Nikolov, 1992). While this discovery was "ignoring basic textbook knowledge" at the time, the peer-review process confirmed that Galileo and later Newton were wrong in that regard (mainly because the phenomenon of turbulence was unknown at their respective times), and the new discovery is now part of the mainstream knowledge base (Green, 2008; Chhabra and Basavaraj, 2019).
Further, the Editorial states: "A quick look at the global surface energy balance illustrates this clear picture: human primary energy consumption amounted to 595 EJ in 2021 (BP, 2022), which translates into an average heat release of 18.9 TW. When averaged over land, this yields 18.9 TW / (29% × 510×10¹² m²) = 0.13 W m⁻² (as in Jin et al., 2019), while globally, this yields 0.04 W m⁻² when evenly distributed over the Earth's surface. This heat release is minute compared to the downwelling flux of longwave radiation of 346 W m⁻² (Stephens et al., 2012) and the observed radiative forcing change at the top of the atmosphere of 2.7 W m⁻² that can clearly be attributed to the increase in greenhouse gases (Forster et al., 2021). The greenhouse gas forcing
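The Editorial's area-averaging arithmetic can be reproduced directly; the figures below (18.9 TW of heat release, 29% land fraction, and a surface area of 510×10¹² m²) are taken from the quoted text itself.

```python
# Reproduce the Editorial's per-area heat-release figures.
POWER_W = 18.9e12        # anthropogenic heat release, 18.9 TW
EARTH_AREA_M2 = 510e12   # Earth's surface area, m^2
LAND_FRACTION = 0.29     # land fraction of the surface

over_land = POWER_W / (LAND_FRACTION * EARTH_AREA_M2)  # W m^-2, land average
over_globe = POWER_W / EARTH_AREA_M2                   # W m^-2, global average
```

Both values round to the quoted 0.13 W m⁻² and 0.04 W m⁻², respectively.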
Water vapor and cirrus clouds in the Tropical Tropopause Layer (TTL) are important for the climate and are largely controlled by temperature in the TTL. On interannual timescales, both stratospheric and tropospheric modes of variability affect temperatures in the TTL. In this study, we use satellite observations to investigate the explained variance in cold point temperature (CPT), 83 hPa water vapor (WV83), and TTL cirrus cloud fraction (TTLCCF) over the equatorial region (15°N-15°S) using a multiple linear regression (MLR) model whose predictors are stratospheric and tropospheric modes of variability. The MLR model can explain 68%, 60%, and 52% of the variance in CPT, WV83, and TTLCCF, respectively. The model suggests that these variables are dominated by stratospheric ‘top-down’ processes associated with the Quasi-Biennial Oscillation (QBO) and Brewer-Dobson Circulation (BDC), as opposed to tropospheric ‘bottom-up’ processes associated with the El Niño Southern Oscillation (ENSO) and the Madden-Julian Oscillation (MJO). Although cold point temperature is controlled by ‘top-down’ mechanisms, the cold point tropopause height is related to both ‘top-down’ stratospheric and ‘bottom-up’ tropospheric processes. Our MLR model explains more variance during boreal winter. We also investigate how these modes of variability correlate with zonal mean temperature, water vapor, and cloud fraction globally in the upper troposphere and lower stratosphere (UTLS) and find significant relationships between clouds and the modes of variability.
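The explained-variance calculation behind such an MLR can be illustrated with a minimal sketch. The predictor indices, coefficients, and noise level below are synthetic stand-ins, not the satellite data or the authors' actual QBO/BDC/ENSO/MJO indices; the point is only how the fraction of variance explained by a set of climate-mode predictors is obtained.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # e.g. 20 years of monthly anomalies (synthetic)

# Four standardized predictor indices (stand-ins for QBO, BDC, ENSO, MJO)
X = rng.standard_normal((n, 4))
# Synthetic target: the first two ("stratospheric") predictors dominate
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 2] + 0.6 * rng.standard_normal(n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()  # fraction of variance explained
```

Applying the same fit to CPT, WV83, or TTLCCF time series, and comparing coefficients across predictors, is what distinguishes 'top-down' from 'bottom-up' dominance.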
Quantifying sector-resolved methane fluxes in complex emissions environments is challenging yet necessary for inventory validations. We separate energy and agriculture sector methane using a dynamic linear model of methane, ethane, and ammonia mixing ratios measured at a Northern Colorado site from November 2021 to January 2022. Combining observations with spatially resolved inventories and inverse methods, energy and agriculture methane fluxes are constrained across a ~850 km² area. Optimized energy sector fluxes were 22% lower than the inventory despite a ~360% increase in regional energy production since the inventory was constructed, suggesting a regional decline in emissions factors. In contrast, optimized agriculture fluxes were 3× larger than the inventory; we demonstrate this discrepancy is consistent with the spatial distribution of agricultural sources. These results highlight the utility of sector-apportioned methane observations for multi-sector inventory optimization in complex environments, which may prove valuable for national and global quantification of sector-resolved methane fluxes.
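The idea of tracer-based sector separation can be illustrated with a minimal regression sketch (not the paper's dynamic linear model): methane enhancements are regressed on co-emitted ethane (an energy-sector tracer) and ammonia (an agricultural tracer). The emission ratios and noise level below are hypothetical values chosen for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic tracer enhancements above background (ppb); values illustrative
ethane = rng.gamma(2.0, 1.5, n)   # co-emitted with energy-sector methane
ammonia = rng.gamma(2.0, 2.0, n)  # co-emitted with agricultural methane

# Hypothetical emission ratios: ppb CH4 per ppb of tracer
r_energy, r_agri = 10.0, 6.0
methane = r_energy * ethane + r_agri * ammonia + 2.0 * rng.standard_normal(n)

# Recover the sector ratios by least squares, then apportion total CH4
A = np.column_stack([ethane, ammonia])
(est_energy, est_agri), *_ = np.linalg.lstsq(A, methane, rcond=None)
energy_share = est_energy * ethane.sum() / methane.sum()
```

In practice the apportioned sector signals, rather than total methane alone, are what get compared against spatially resolved inventories in the inversion.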
Eutrophication represents a major threat to freshwater systems, and climate change is expected to drive further increases in freshwater primary productivity. However, long-term in-situ data are available for very few lakes, which makes identifying trends and drivers of eutrophication challenging. Using remote sensing data, we conducted a retrospective analysis of long-term trends in trophic status across the Intermountain West, a region with understudied water quality trends and limited long-term datasets. We found that most lakes (55%) did not exhibit shifts in trophic status from 1984-2019. Our results also show that increases in eutrophication were rare (3% of lakes) during this period, and that lakes exhibiting negative trends in trophic status were more common (17% of lakes). Lakes that were not trending occupied a wide range of lake and landscape characteristics, whereas lakes that were becoming less eutrophic tended to be in more heavily developed catchments. Our results highlight that, despite well-established narratives that climate change can lead to more eutrophication of lakes, this is not broadly observed in our dataset, with more lakes becoming oligotrophic than becoming eutrophic.
Although adequately detailed kerosene chemical-combustion Arrhenius reaction-rate suites were not readily available for combustion modeling until ca. the 1990s (e.g., Marinov), it was already known from mass-spectrometer measurements during the early Apollo era that fuel-rich liquid oxygen + kerosene (RP-1) gas generators yield large quantities (e.g., several percent of total fuel flows) of complex hydrocarbons such as benzene, butadiene, toluene, anthracene, fluoranthene, etc. (Thompson), which are formed concomitantly with soot (Pugmire). By the 1960s, virtually every fuel-oxidizer combination for liquid-fueled rocket engines had been tested, and the impact of gas-phase combustion efficiency on the rocket-nozzle efficiency factor had been empirically well determined (Clark). Until relatively recently, space-launch and orbital-transfer engines were increasingly designed for high efficiency, to maximize orbital parameters while minimizing fuel and structural masses: preburners and high-energy atomization have been used to pre-gasify fuels to increase (gas-phase) combustion efficiency, decreasing the yield of complex/aromatic hydrocarbons (which limit rocket-nozzle efficiency and overall engine efficiency) in hydrocarbon-fueled engine exhausts, thereby maximizing system launch and orbital-maneuver capability (Clark; Sutton; Sutton/Yang). The combustion community has been aware that the choice of Arrhenius reaction-rate suite is critical to computer engine-model outputs: specific combustion suites are required to estimate the yield of high-molecular-weight, reactive, and toxic hydrocarbons in the rocket-engine combustion chamber. Nonetheless, such GIGO errors can be seen in recent documents.
Low-efficiency launch vehicles also need larger fuel loads to achieve the same launched mass, further increasing the yield of complex hydrocarbons and radicals deposited by low-efficiency rocket engines along launch trajectories and into the stratospheric ozone layer, the mesosphere, and above. With increasing launch rates from low-efficiency systems, these persistent (Ross/Sheaffer; Sheaffer), reactive chemical species must have a growing impact on critical, poorly understood upper-atmosphere chemistry systems.