Eighty years after aerial photography revealed thousands of aligned oval depressions on the USA's Atlantic Coastal Plain, the geomorphology of the "Carolina bays" remains enigmatic. Geologists and astronomers alike hold that invoking a cosmic impact for their genesis is indefensible. Rather, the bays are commonly attributed to gradualistic fluvial, marine and/or aeolian processes operating during the Pleistocene epoch. The major axis orientations of Carolina bays are noted for varying statistically by latitude, suggesting that, should there be any merit to a cosmic hypothesis, a highly accurate triangulation network and suborbital analysis would yield a locus and allow for identification of a putative impact site. Digital elevation maps using LiDAR technology offer the precision necessary to measure their exquisitely carved circumferential rims and orientations reliably. To support a comprehensive geospatial survey of Carolina bay landforms (Survey), we generated about one million km2 of false-color HSV-shaded bare-earth topographic maps as KML-JPEG tile sets for visualization on virtual globes. Considering the evidence contained in the Survey, we maintain that interdisciplinary research into a possible cosmic origin should be encouraged. Consensus opinion does hold a cosmic impact accountable for an enigmatic Pleistocene event - the Australasian tektite strewn field - despite the failure of a 60-year search to locate the causal astrobleme. Ironically, a cosmic link to the Carolina bays is considered soundly falsified by the identical lack of a causal impact structure. Our conjecture suggests both these events are coeval with a cosmic impact into the Great Lakes area during the Mid-Pleistocene Transition, at 786 ka ± 5 ka. All data and imagery produced for the Survey are available on the Internet to support independent research. A table of metrics for 50,000 bays examined for the Survey is available as an on-line Google Fusion Table: https://goo.gl/XTHKC4 . Each bay is also geospatially referenceable through a map containing clickable placemarks that provide information windows displaying that bay's measurements as well as further links which allow visualization of the associated LiDAR imagery and the bay's planform measurement overlay within the Google Earth virtual globe: https://goo.gl/EHR4Lf .
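To make the triangulation step concrete, the sketch below (our illustration, not the Survey's own code) extends a great-circle path from a bay centroid along its measured major-axis azimuth using the standard spherical destination formula; the coordinates and azimuth shown are placeholders. Intersections of many such extended paths would define the locus referred to above.

```python
import numpy as np

R_EARTH_KM = 6371.0

def destination(lat_deg, lon_deg, bearing_deg, dist_km):
    """Great-circle destination point from a start point, initial bearing,
    and distance, using the standard spherical-Earth formula."""
    lat1, lon1, brg = map(np.radians, (lat_deg, lon_deg, bearing_deg))
    d = dist_km / R_EARTH_KM  # angular distance
    lat2 = np.arcsin(np.sin(lat1) * np.cos(d) +
                     np.cos(lat1) * np.sin(d) * np.cos(brg))
    lon2 = lon1 + np.arctan2(np.sin(brg) * np.sin(d) * np.cos(lat1),
                             np.cos(d) - np.sin(lat1) * np.sin(lat2))
    return np.degrees(lat2), np.degrees(lon2)

# Hypothetical bay centroid and major-axis azimuth (illustrative values only):
lat, lon, azimuth = 34.5, -79.2, 317.0   # degrees
for dist in (250, 500, 750, 1000):       # km along the extended axis
    print(dist, destination(lat, lon, azimuth, dist))
```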
Examination of the radiation budget at the surface of the Earth shows that there are three factors affecting the surface temperature: the amount of solar radiation absorbed by the atmosphere, the amount absorbed by the surface, and the amount of infrared radiation emitted from the surface that leaks directly into space. If there were no leakage, the upwelling infrared radiation from the Earth's surface would equal the incoming solar radiation absorbed by the atmosphere plus twice the solar radiation absorbed by the surface. This results from the summation of a sequence of equal upward and downward re-emissions of infrared radiation absorbed by the atmosphere following the initial absorption of solar radiation. At current levels of solar absorption, this would result in total upwelling radiation of approximately 398.6 W/m2, or a maximum surface temperature of 16.4°C. Allowing for leakage of infrared radiation through the atmospheric window, the resulting emission from the Earth's surface is reduced to around 396 W/m2, corresponding to the current average global surface temperature of around 15.9°C. Absorption of solar and infrared radiation by greenhouse gases is determined by the absorption bands of the respective gases and their concentrations. Absorption of incoming solar radiation is largely by water vapor and ozone, and an increase in this absorption would reduce, not increase, the surface temperature. Moreover, it is probable that all emitted infrared radiation that can be absorbed by greenhouse gases (primarily water vapor, with small contributions from carbon dioxide and ozone) is already fully absorbed, and the leakage of around 5.5% corresponds to the part of the infrared spectrum that is not absorbed by greenhouse gases. The carbon dioxide absorption bands, which represent a very small percentage of the infrared spectrum, are most likely fully saturated. In these circumstances, increased concentrations of greenhouse gases, and of carbon dioxide in particular, will have no effect on the emitted radiation. The surface temperature is probably at the thermodynamic limit for the current luminosity of the sun. Satellite-based measurements since 1979 suggest that any global warming over the past 150 years may be due to an increase in total solar irradiance, which we are still a decade or two from being able to confirm.
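The flux-to-temperature conversions quoted above follow from inverting the Stefan-Boltzmann law, F = εσT⁴; a minimal check, assuming unit (blackbody) emissivity:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_to_temp_c(flux_w_m2, emissivity=1.0):
    """Invert the Stefan-Boltzmann law F = eps * sigma * T^4 for temperature."""
    return (flux_w_m2 / (emissivity * SIGMA)) ** 0.25 - 273.15

print(flux_to_temp_c(398.6))  # ~16.4 C, the no-leakage case quoted above
print(flux_to_temp_c(396.0))  # ~15.9 C, with ~5.5 % window leakage
```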
Using the HadISST monthly SST dataset from 1895 to 2014 and 600-year simulations of two CESM model experiments with and without doubling of CO2 concentration, ENSO characteristics are compared before and after global warming. The main results are as follows. Due to global warming, the maximum climatological SST warming occurs in the tropical western Pacific (La Niña-like background warming) in observations and in the tropical eastern Pacific (El Niño-like background warming) in the model, resulting in opposite zonal SST gradient anomalies in the tropical Pacific. The La Niña-like background warming induces intense surface divergence in the tropical central Pacific, which enhances the easterly trade winds in the tropical central-western Pacific and correspondingly shifts the strongest ocean-atmosphere coupling westward. On the contrary, the El Niño-like background warming causes westerly winds across the whole tropical Pacific and moves the strongest ocean-atmosphere coupling eastward. Under the La Niña-like background warming, ENSO tends to develop and mature in the tropical central Pacific, because the background easterly wind anomaly weakens the ENSO-induced westerly wind anomaly in the tropical western Pacific, leading to the so-called "Central Pacific ENSO" (CP ENSO). In contrast, the so-called "Eastern Pacific ENSO" (EP ENSO) is likely formed due to the increased westerly wind anomaly under the El Niño-like background warming. ENSO lifetime is significantly extended under both the El Niño-like and the La Niña-like background warming; in particular, it can be prolonged by up to 3 months under El Niño-like background warming. The prolonged El Niño lifetime mainly applies to extreme El Niño events and is caused by an earlier outbreak of westerly wind bursts, a shallower climatological thermocline depth, and a weaker "discharge" rate of the ENSO warm signal in response to global warming. Results from both observations and the model also show that the frequency of ENSO events greatly increases due to global warming, and many more extreme El Niño and La Niña events appear under the El Niño-like and the La Niña-like background warmings, respectively. This study reconciles the phenomena and mechanisms of the differing characteristics of ENSO changes in observations and models.
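As one concrete example of the diagnostics behind such comparisons, the sketch below computes a Niño-3.4 SST anomaly index from a gridded monthly SST file such as HadISST (a minimal sketch; the file name, variable name, coordinate names, and base period are assumptions about the dataset at hand):

```python
import xarray as xr

# Hypothetical local copy; HadISST is distributed as netCDF on a 1-degree grid.
ds = xr.open_dataset("HadISST_sst.nc")
sst = ds["sst"]

# Nino-3.4 region: 5S-5N, 170W-120W (latitude assumed descending, as in HadISST).
nino34 = sst.sel(latitude=slice(5, -5), longitude=slice(-170, -120)) \
            .mean(dim=["latitude", "longitude"])

# Anomalies relative to a monthly climatology over an assumed base period.
clim = nino34.sel(time=slice("1961", "1990")).groupby("time.month").mean()
index = nino34.groupby("time.month") - clim
print(index.to_series().tail())
```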
In this poster, we begin to explore how socio-geographical considerations can inform the development of data infrastructure, notably Persistent Identifiers (PIDs). PIDs have become largely accepted within the Research Data Alliance, W3C, and elsewhere as core elements of data infrastructure. Science comprises many divergent formal and informal viewpoints at many different levels, with a need for generalizable findings. PIDs act as "Boundary Objects" (Star & Griesemer, 1989): objects that are part of multiple social worlds and facilitate communication between them. They allow meaning to be understood in different contexts and are "plastic enough to adapt to local needs, … yet robust enough to maintain a common identity across sites. They are weakly structured in common use and become strongly structured in individual site use." Boundary objects work to reduce local uncertainty without damaging cooperation. It is a question of re-representations across intersecting worlds, not consensus. PIDs allow machines and humans to understand which digital object is in question (identity), what it is (type), and where it is (location). Each of these questions is surprisingly fraught and complex.
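On the machine side, a hedged illustration of those three questions for one common PID type, the DOI (the DOI below is a placeholder, and the behaviour shown is that of the public doi.org resolver, which supports HTTP content negotiation):

```python
import requests

doi = "10.1234/example"  # placeholder PID, not a real dataset

# Location: the resolver redirects to wherever the object currently lives.
resp = requests.get(f"https://doi.org/{doi}", allow_redirects=False)
print(resp.headers.get("Location"))

# Identity + type: content negotiation returns machine-readable metadata
# describing what kind of object the PID names.
meta = requests.get(f"https://doi.org/{doi}",
                    headers={"Accept": "application/vnd.citationstyles.csl+json"})
print(meta.json() if meta.ok else meta.status_code)
```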
Accurately constraining N emissions in space and time has been a challenge for atmospheric scientists. It has been suggested that 15N isotopes may be a way of tracking N emission sources across various spatial and temporal scales. However, the complexity of multiple N sources that can quickly change in intensity has made this a difficult problem. We have used the SMOKE emissions model to parse NOx emissions across the Midwestern United States for a one-year simulation. An isotope mass balance method was used to assign δ15N values to road, non-road, point, and area sources. The SMOKE emissions and isotope mass balance were then combined to predict the δ15N of NOx emissions (Figure 1). This δ15N of NOx emissions model was then incorporated into CMAQ to assess how transport and chemistry would impact the δ15N value of NOx through mixing and removal processes. The predicted δ15N values of NOx were compared with recent measurements of NOx and atmospheric nitrate.
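The flux-weighted mass balance underlying the source assignment can be written δ15N_total = Σ_i f_i δ15N_i, where f_i are the NOx flux fractions and δ15N_i the per-source signatures. A minimal sketch (the fractions and δ values are placeholders, not the study's numbers):

```python
# Flux-weighted isotope mass balance over the four SMOKE source categories.
sources = {
    #          flux fraction, d15N (permil) -- illustrative values only
    "road":     (0.45,  -5.0),
    "non-road": (0.20, -15.0),
    "point":    (0.25,  10.0),
    "area":     (0.10, -25.0),
}

assert abs(sum(f for f, _ in sources.values()) - 1.0) < 1e-9  # fractions sum to 1
total = sum(f * d for f, d in sources.values())
print(f"emission-weighted d15N of NOx: {total:.1f} permil")
```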
It is not uncommon that students in introductory survey courses are reluctant to participate in verbal inquiry. In a survey submitted to students of CLIMATE 102, Extreme Weather, over the past four semesters, about 45% of male students professed comfort in asking verbal questions in a large lecture hall, but fewer than 25% of female students and only 15% of students for whom English is not their first language did so. Hence, large lecture hall courses may be inadvertently dissuading the inclusion of many of the students we wish to encourage to participate in our discipline. To combat this, a system was used in CLIMATE 102 wherein students could pose questions digitally and anonymously. These questions could be seen by all and answered by all. The instructor and/or teaching assistant could also participate and answer or offer corrections to others' answers. The use of this system had three important outcomes: 1. The number of questions posed during class time rose dramatically from previous semesters when only verbal questions were entertained; with this system, the number of questions in CLIMATE 102 generally exceeded 500 per semester with ~200 students enrolled. 2. The number of per-capita questions from female students exceeded that from male students; thus the gender difference in inquiry was eliminated. 3. The number of per-capita questions from students whose first language was not English equaled that of native English-speaking students. While it is the goal of higher education to encourage students to participate verbally in class discussions, it is important to provide a "safe" environment in the first year(s), as many students are initially uncomfortable participating verbally in class. We hypothesize, but have not researched, that through this process students have the opportunity to see that their questions are as valid as others' in the class and will subsequently gain the confidence to participate verbally.
Coronal mass ejections (CMEs) are fast-moving magnetic field structures of enhanced plasma density that play an important role in space weather. The Solar Orbiter and Parker Solar Probe will usher in a new era of in situ measurements, probing CMEs within distances of 60 and 10 solar radii, respectively. At present, only remote-sensing techniques such as Faraday rotation can probe the plasma structure of CMEs at these distances. Faraday rotation is the change in polarization position angle of linearly polarized radiation as it propagates through a magnetized plasma (e.g. a CME) and is proportional to the path integral of the electron density and line-of-sight magnetic field. In conjunction with white-light coronagraph measurements, Faraday rotation observations have been used in recent years to determine the magnetic field strength of CMEs. We report recent results from simultaneous white-light and radio observations of a CME made in July 2015. We made radio observations using the Karl G. Jansky Very Large Array (VLA) at 1-2 GHz of a set of radio sources viewed through the solar corona at heliocentric distances ranging from 8 to 23 solar radii. These Faraday rotation observations provide a priori estimates for comparison with future in situ measurements made by the Solar Orbiter and Parker Solar Probe. Similar Faraday rotation observations made simultaneously with observations by the Solar Orbiter and Parker Solar Probe in the future could provide information about the global structure of the CMEs sampled by these probes and, therefore, aid in understanding the in situ measurements.
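For reference, the standard textbook form of this relation (quoted for context, not specific to these observations) is

```latex
\Delta\chi = \mathrm{RM}\,\lambda^{2},
\qquad
\mathrm{RM} = \frac{e^{3}}{2\pi m_{e}^{2}c^{4}} \int n_{e}\,\mathbf{B}\cdot d\mathbf{l},
```

where Δχ is the rotation of the position angle, λ the wavelength, RM the rotation measure, n_e the electron density, and the integral of B runs along the line of sight (Gaussian units).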
The structure(s), distribution and dynamics of CDOM have been investigated over the last several decades largely through optical spectroscopy (including both absorption and fluorescence), owing to the fairly inexpensive instrumentation and easy-to-gather data (thousands of papers published from 1990 to 2016). Yet the chemical structure(s) of the light-absorbing and light-emitting species or constituents within CDOM have only recently been proposed and tested through chemical manipulation of selected functional groups (such as carbonyl- and carboxylic/phenolic-containing molecules) naturally occurring within the organic matter pool. Similarly, fitting models (among them PARallel FACtor analysis, PARAFAC) have been developed to better understand the nature of a subset of DOM, the fluorescent fraction of CDOM (FDOM). Fluorescence spectroscopy coupled with chemical tests and PARAFAC analyses could potentially provide valuable insights into CDOM sources and the chemical nature of the FDOM pool. However, although applications (and publications) of the PARAFAC model to FDOM have grown exponentially since its first application/publication in 2003, a large fraction of such publications has misinterpreted the chemical meaning of the delivered PARAFAC 'components', leading to more confusion than clarification on the nature, distribution and dynamics of the FDOM pool. In this context, we employed chemical manipulation of selected functional groups to gain further insight into the chemical structure of the FDOM, and we tested to what extent the PARAFAC 'components' represent true fluorophores through a controlled chemical approach, with the ultimate goal of providing insight into the chemical nature of such 'components' (as well as the chemical nature of the FDOM), along with the advantages and limitations of the PARAFAC application.
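As a minimal sketch of the decomposition itself (using the open-source tensorly library on a synthetic excitation-emission (EEM) cube, not our analysis code; real workflows would use scatter-corrected EEMs and typically non-negativity constraints):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Synthetic EEM cube: (samples x excitation x emission).
rng = np.random.default_rng(0)
eems = tl.tensor(rng.random((40, 60, 120)))

# Trilinear PARAFAC model: each 'component' is an outer product of a sample
# score vector and excitation/emission loading vectors.
weights, factors = parafac(eems, rank=4, n_iter_max=500, tol=1e-8)
scores, ex_loadings, em_loadings = factors
print(scores.shape, ex_loadings.shape, em_loadings.shape)  # (40,4) (60,4) (120,4)
```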
Discrete fracture network (DFN) models provide a natural analysis framework for rock conditions where flow is predominantly through a series of connected discrete features. Mechanistic models to predict the structural patterns of networks are generally intractable due to inherent uncertainties (e.g. deformation history), and as such fracture characterisation typically involves empirical descriptions of fracture statistics for location, intensity, orientation, size, aperture, etc., derived from analyses of field data. These DFN models are used to make probabilistic predictions of likely flow or solute transport conditions for a range of applications in underground resource and construction projects. However, there are many instances when the volumes in which predictions are most valuable are close to data sources. For example, in the disposal of hazardous materials such as radioactive waste, accurate predictions of flow-rates and network connectivity around disposal areas are required for long-term safety evaluation. The problem at hand is thus: how can probabilistic predictions be conditioned on local-scale measurements? This presentation demonstrates conditioning of a DFN model based on the current structural and hydraulic characterisation of the Demonstration Area at the ONKALO underground research facility. The conditioned realisations honour (to a required level of similarity) the locations, orientations and trace lengths of fractures mapped on the surfaces of the nearby ONKALO tunnels and pilot drillholes. Other data used as constraints include measurements from hydraulic injection tests performed in pilot drillholes and inflows to the subsequently reamed experimental deposition holes. Numerical simulations using this suite of conditioned DFN models provide a series of prediction-outcome exercises detailing the reliability of the DFN model in making local-scale predictions of measured geometric and hydraulic properties of the fracture system, and provide an understanding of the reduction in uncertainty in model predictions for conditioned DFN models honouring different aspects of these data.
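Schematically, conditioning can be cast as accept/reject sampling of stochastic realisations against mapped fractures; the toy 2-D sketch below (a strong simplification of the workflow above, with placeholder trace data and tolerances) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mapped traces to honour: (x, y, strike_deg, length_m) -- placeholder data.
mapped = np.array([[12.0, 4.0, 35.0, 6.0],
                   [20.0, 9.0, 120.0, 3.5]])

def random_network(n=50, size=30.0):
    """Toy 2-D Poisson DFN: uniform centres, uniform strikes, power-law lengths."""
    xy = rng.uniform(0, size, (n, 2))
    strike = rng.uniform(0, 180, n)
    length = 2.0 * (1 - rng.random(n)) ** (-1 / 1.5)  # Pareto-like lengths
    return np.column_stack([xy, strike, length])

def honours(net, trace, dx=2.0, dstrike=20.0, dlen=1.0):
    """Does some fracture in the realisation match this mapped trace?"""
    ok = (np.hypot(net[:, 0] - trace[0], net[:, 1] - trace[1]) < dx) & \
         (np.abs((net[:, 2] - trace[2] + 90) % 180 - 90) < dstrike) & \
         (np.abs(net[:, 3] - trace[3]) < dlen)
    return ok.any()

conditioned = [net for net in (random_network() for _ in range(20000))
               if all(honours(net, t) for t in mapped)]
print(f"{len(conditioned)} conditioned realisations accepted")
```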
Backprojection (BP) of teleseismic P waves is a widely used method to study the evolution of earthquake radiation and is particularly effective for large earthquakes. We can harness key information on the spatiotemporal evolution of the rupture process from waveform similarity or coherency. Understanding the relation between earthquake physics and the spatiotemporal evolution seen in BP imaging, which is usually obtained from high-frequency seismic waveforms, is of great importance. Theoretical studies indicate that high-frequency bursts can be related to abrupt changes in rupture velocity (e.g. stopping of rupture or kinks on the fault). Moreover, BP images are thought to be equivalent to either slip or slip rate on the fault, provided that the Green's functions from the sources to the receivers are incoherent delta functions. Furthermore, recent studies propose that the frequency-dependent features of BP results can reflect the stress state and frictional and/or geometrical heterogeneity on the fault surface. It is promising that we can obtain more observational constraints and information about the dynamic earthquake source from backprojection results combined with other independent techniques. In this study, we attempt to establish the relation between BP results and the earthquake source process by testing both kinematic and dynamic source models. With these source models, we can synthesise the seismic waveforms and trace them back to the fault surface using the BP method. We can therefore directly compare the BP results with already-known earthquake sources and further explore the possible relation to source properties by varying our source models, for example the friction laws and fault geometries. To simplify the problem and exclude potential effects of complex Earth structure, our tests are carried out in a purely elastic whole space, allowing us to solve analytically for the far-field body waves. From these systematic tests and comparisons, we aim to build a comprehensive relation between BP images and various source properties. Moreover, our results can help in better understanding the physics of the earthquake source process from seismic observations.
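At its core, BP is a delay-and-stack operation; a minimal sketch of the linear-stack variant (our simplified illustration, not the production implementation; nth-root or coherency-weighted stacks are common variants):

```python
import numpy as np

def backproject(waveforms, travel_times, dt, stack_len):
    """Linear delay-and-stack BP: shift each station's record by the predicted
    travel time from a trial source point, stack, and return beam power.

    waveforms:    (n_sta, n_samp) array of teleseismic P records
    travel_times: (n_grid, n_sta) predicted times (s) from grid points to stations
    """
    n_grid, n_sta = travel_times.shape
    power = np.zeros(n_grid)
    for g in range(n_grid):
        shifts = np.round(travel_times[g] / dt).astype(int)
        stack = np.zeros(stack_len)
        for s in range(n_sta):
            seg = waveforms[s, shifts[s]:shifts[s] + stack_len]
            stack[:seg.size] += seg
        power[g] = np.sum(stack ** 2)
    return power

# Tiny demo: impulses arrive at the times predicted for grid point 0.
wf = np.zeros((2, 200)); wf[0, 50] = wf[1, 60] = 1.0
tt = np.array([[5.0, 6.0], [7.0, 9.0]])  # seconds, with dt = 0.1 s
print(backproject(wf, tt, dt=0.1, stack_len=100))  # grid point 0 wins
```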
Harvey formed around 17 August with maximum winds of 40 miles per hour. Irma began forming on 20 August, rapidly becoming a category five hurricane. Some days before these two events, during the week of 11 August, the coasts of Uruguay and Brazil experienced a recession of the ocean near Punta del Este and Rio Grande do Sul. The energy accumulated in the receding waters was not released by a tsunami, and the water slowly returned to the shore. The event repeated on 25 August, but at higher latitudes, in Paraná and São Paulo, with consequent high tides in Chile again. The absence of a recurrent tsunami on the Brazilian coast indicates that the energy accumulated by the receding ocean was released in the open ocean, contributing to the formation of further huge hurricanes such as Jose and Katia. All of these events point to an atmospheric pressure disturbance on the Atlantic East Coast. In South America, a sudden increase in atmospheric pressure caused the ocean waves to recede for many days. A similar disturbance occurred in the Caribbean region, resulting in several huge hurricanes.
Understanding the distribution of organic material, mineral inclusions, and porosity is critical for properly modeling the flow of fluids through rock formations in applications ranging from hydraulic fracturing and gas extraction to CO2 sequestration, geothermal power, and aquifer management. Typically, this information is obtained on the pore scale using destructive techniques such as focused ion beam scanning electron microscopy. Neutrons and X-rays provide non-destructive, complementary probes for obtaining three-dimensional distributions of porosity, minerals, and organic content, along with fluid interactions in fractures and pore networks, on the core scale. By capturing neutron and X-ray tomography simultaneously, it is possible to capture slow dynamic or stochastic processes with both imaging modes. To facilitate this, NIST offers a system for simultaneous neutron and X-ray tomography at the Center for Neutron Research. This instrument provides neutron and X-ray beams capable of penetrating pressure vessels to image the specimen inside at relevant geological conditions, at resolutions ranging from 15 micrometers to 100 micrometers. This talk will discuss current efforts at identifying mineral and organic content, fractures, and wettability in shales relevant to gas extraction.
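As a hedged illustration of why the two modalities are complementary: hydrogen-rich organic matter attenuates neutrons strongly but X-rays weakly, while dense minerals do the opposite. A toy per-voxel classification of co-registered, normalized volumes might look like this (illustrative thresholds, not NIST's analysis pipeline):

```python
import numpy as np

def classify(neutron, xray, n_thresh=0.6, x_thresh=0.6):
    """Label co-registered, normalized attenuation volumes voxel by voxel.

    0 = pore (low in both), 1 = organic (high neutron / low X-ray),
    2 = mineral (low neutron / high X-ray), 3 = hydrous phase (high in both).
    """
    n_hi = neutron > n_thresh
    x_hi = xray > x_thresh
    labels = np.zeros(neutron.shape, dtype=np.uint8)
    labels[n_hi & ~x_hi] = 1
    labels[~n_hi & x_hi] = 2
    labels[n_hi & x_hi] = 3
    return labels
```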
Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity of global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured while allowing for fast decompression and conditional emulation on modest computers.
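A stripped-down 1-D illustration of the idea (a Gaussian toy with block means as the stored summaries, far simpler than the nonstationary space-time model described above):

```python
import numpy as np

rng = np.random.default_rng(42)
n, block = 1024, 16
data = np.cumsum(rng.normal(size=n)) * 0.1  # stand-in for a smooth field

# Compress: keep one mean per block plus a single residual-sd scalar.
means = data.reshape(-1, block).mean(axis=1)
resid_sd = (data - np.repeat(means, block)).std()

# Decompress via conditional expectation: best estimate, but over-smooth.
recon_mean = np.repeat(means, block)

# Decompress via conditional simulation: add model noise so the field has
# realistic small-scale roughness (iid here; the real model is far richer).
recon_sim = recon_mean + rng.normal(scale=resid_sd, size=n)

print(f"compression ratio ~{n / (means.size + 1):.0f}x, "
      f"RMSE(mean)={np.sqrt(np.mean((data - recon_mean) ** 2)):.3f}")
```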
The evidence of bodies of elemental sulfur (S0) beneath acid crater lakes at the summit of active composite volcanoes was recognized several decades ago (Oppenheimer and Stevenson, 1989; Christenson and Wood, 1993), but S0 accumulation was already hypothesized a century ago at Kusatsu-Shirane (Japan), based on the observation of sulfur spherules floating on its crater lake (Ohashi, 1919). Since these pioneering works, other studies have focused on understanding key aspects of molten sulfur bodies, considered a feature unique to volcanic lakes. Instead, it is reasonable to assume that S0 bodies occur in several volcanic settings, because a) several reactions may lead to S0 deposition from S-bearing gases, and b) crater lakes, the surface expressions of hydrothermal systems, are transient features. The scrubbing of several magmatic gases, some of which are critical for volcano monitoring, has been attributed to ground/surface waters (Symonds et al., 2001). Nevertheless, gas scrubbing could reflect viscosity variations of impure S within hydrothermal systems. Industrial experiments indicated that impurities (organics, H2S, ammonia, HCl, HF, HBr, HI) hinder S polymerization at T ≥ 160°C, allowing viscosity to remain low for a long time depending on the maximum T achieved and heating rates (Bacon and Fanelli, 1943). However, prolonged heating destroys the viscosity-modifying substances (e.g. H2Sx formed by reactions with organics and H2S), and dramatic S viscosity increases occur after a certain number of heating and cooling cycles. Prolonged boiling of S with organics was observed to release H2S, following H2Sx disruption. Some gases (e.g. SO2) do not affect S viscosity. In volcanic environments, non-reactive species (e.g. SO2, CO2) could, therefore, escape under low-viscosity S regimes. Also, the absence of halogens in gas emissions could be caused by their participation in reactions within S layers that keep its viscosity low. More data are needed to validate the hypothesis stated above. References: Bacon RF, Fanelli R (1943) J Am Chem Soc 65, 639-648. Christenson BW, Wood CP (1993) Bull Volcanol 55, 547-565. Ohashi R (1919) J Akita Min Coll 1, 1-10. Oppenheimer C, Stevenson D (1989) Nature 342, 790-793. Symonds RB, Gerlach TM, Reed MH (2001) J Volcanol Geotherm Res 108, 303-341.
Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube experiment in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
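A minimal sketch of the sampling side of such an assessment, using SciPy's Latin Hypercube engine and Nash-Sutcliffe efficiency as a single illustrative metric (the parameter bounds are placeholders, and run_gr4j stands in for any GR4J implementation):

```python
import numpy as np
from scipy.stats import qmc

# Illustrative GR4J parameter bounds (X1..X4); real studies choose ranges to suit.
bounds_lo = np.array([1.0, -10.0, 1.0, 0.5])
bounds_hi = np.array([2000.0, 10.0, 300.0, 15.0])

sampler = qmc.LatinHypercube(d=4, seed=0)
params = qmc.scale(sampler.random(n=500_000), bounds_lo, bounds_hi)
print(params.shape)  # (500000, 4)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean flow."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# run_gr4j is a hypothetical stand-in returning simulated flow for a catchment:
# scores = np.array([nse(run_gr4j(p, forcing), obs_flow) for p in params])
# behavioural = params[scores > 0.7]   # retain 'acceptable' parameter sets
```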
The spatial and temporal characterisation of trapped charged particle trajectories in magnetospheres has been extensively studied using dipole magnetic field structures. Such studies have allowed the calculation of spatial quantities such as equatorial loss cone size as a function of radial distance and the location of the mirror points along particular field lines ('L shells') as a function of the particle's equatorial pitch angle, and temporal quantities such as the bounce period and drift period as functions of the radial distance and the particle's pitch angle at the equator. In this study, we present analogous calculations for the 'disc-like' field structure associated with the giant rotation-dominated magnetosphere of Jupiter, as described by the UCL/Achilleos-Guio-Arridge (UCL/AGA) magnetodisc model. We discuss the effect of the magnetodisc field on various particle parameters and make a comparison with the analogous motion in a dipole field.
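For the dipole baseline, the standard textbook results being referred to are, for example, the equatorial loss cone at L-shell L and the approximate bounce period:

```latex
\sin^{2}\alpha_{LC} = \left(4L^{6} - 3L^{5}\right)^{-1/2},
\qquad
\tau_{b} \approx \frac{4\,L\,R_{p}}{v}\left(1.30 - 0.56\,\sin\alpha_{eq}\right),
```

where R_p is the planetary radius, v the particle speed, and α_eq the equatorial pitch angle; the study computes the analogous quantities in the UCL/AGA magnetodisc field.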
The outer heliosphere is an interesting region characterized by the interaction between the solar wind and interstellar neutral atoms. Having accomplished its mission to Pluto in 2015 and currently on the way to the Kuiper Belt, the New Horizons spacecraft is following in the footsteps of the two Voyager spacecraft that previously explored this region, lying roughly beyond 30 AU from the Sun. We model the three-dimensional, time-dependent solar wind plasma flow to the outer heliosphere using our software, the Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS), which, in addition to the thermal solar wind plasma, takes into account charge exchange of solar wind protons with interstellar neutral atoms and treats the nonthermal ions (i.e., pickup ions) born in this process as a separate fluid. Additionally, MS-FLUKSS allows us to model turbulence generated by pickup ions. We use MS-FLUKSS to investigate the evolution of plasma and turbulent fluctuations along the trajectory of the New Horizons spacecraft, using plasma and turbulence parameters from OMNI data as time-dependent boundary conditions at 1 AU for the Reynolds-averaged MHD equations. We compare the model with in situ plasma observations by New Horizons, Voyager 2, and Ulysses. We also compare the model pickup proton parameters with those derived from the Ulysses-SWICS data.
Currently available soil volumetric water content (VWC) sensors have several drawbacks that pose challenges for large-scale implementation on farms. Such issues include cost, scalability, maintenance, wires running through fields, and single-spot resolution. The development of a passive soil moisture sensing system utilizing Radio Frequency Identification (RFID) would allay many of these issues. The type of passive RFID tags discussed in this paper currently cost between 8 and 15 cents retail per tag when purchased in bulk. A very cheap, scalable, low-maintenance, wireless, high-resolution system for sensing soil moisture would be possible if such tags were introduced into the agricultural world. This paper discusses the use cases and examines one implementation of the tags. In 2015, RFID tag manufacturer SmarTrac started selling RFID moisture-sensing tags for use in the automotive industry to detect leaks during quality assurance. We placed those tags in soil at a depth of 4 inches and compared the moisture levels sensed by the RFID tags with the relative permittivity (εr) of the soil as measured by an industry-standard probe. Using the equation derived by Topp et al., we converted εr to VWC. We tested this over a wide range of moisture conditions and found a statistically significant correlational relationship between the sensor values from the RFID tags and the probe's measurement of εr. We also identified a possible function for mapping values from the RFID tag to the probe, bounded by a reasonable margin of error.
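The conversion uses the widely cited Topp et al. (1980) third-order polynomial; a minimal sketch (the published calibration constants; the example εr value is arbitrary):

```python
def topp_vwc(eps_r):
    """Topp et al. (1980) empirical permittivity -> volumetric water content."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

print(f"VWC at eps_r = 20: {topp_vwc(20.0):.3f} m^3/m^3")  # ~0.35
```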