The South American longitudinal sector presents the unique combination of daytime equatorial electrojet currents (EEJ) and the South Atlantic Magnetic Anomaly (SAMA), where the global minimum intensity of the geomagnetic field is observed. Enhanced amplitudes are observed in the horizontal magnetic components recorded on the ground within the areas of influence of both the EEJ and the SAMA, and it is therefore expected that significant enhancements of GIC magnitude also occur in these regions. Here we use geomagnetic field variation data recorded by fluxgate magnetometers from the Brazilian space weather program (EMBRACE) to evaluate GIC effects during two strong geomagnetic storms, in March (Dst = −222 nT) and June (Dst = −204 nT) 2015. Among the available geomagnetic stations, we selected those with information about the underground electrical conductivity structure and whose conductivity structure can be approximated by 1-D models for calculation of the geoelectric field. GIC levels are estimated using a realistic local power grid model located in the central region of Brazil, artificially moved to the sites where the geomagnetic measurements are available. A maximum GIC amplitude of about 8 A was estimated at an equatorial station positioned over a highly resistive underground structure, associated with the arrival of an interplanetary pressure pulse just behind two other pulses during the June storm. The results are also interpreted in terms of the ionospheric currents over the measurement sites and the conductivity distribution beneath them. It is observed that both the EEJ and the SAMA increase GIC amplitudes, with the greatest effects associated with the EEJ. In relation to the underlying conductivity structure, the higher GIC effects are associated with low conductance at crustal depths, with upper mantle depths showing a minor effect.
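For readers unfamiliar with the usual processing chain, geoelectric field estimation over a 1-D conductivity structure commonly follows the plane-wave method sketched below. The uniform half-space impedance and the resistivity value here are illustrative assumptions, not the layered models used in the study:

```python
import numpy as np

def geoelectric_field(b_nt, dt, rho=1000.0):
    """Plane-wave estimate of the horizontal geoelectric field (mV/km)
    from a magnetic variation time series b_nt (nT) sampled every dt
    seconds, over a uniform half-space of resistivity rho (ohm-m).
    NOTE: rho=1000 is a hypothetical value for illustration only."""
    mu0 = 4e-7 * np.pi
    b = np.asarray(b_nt, dtype=float) * 1e-9       # nT -> T
    n = len(b)
    B = np.fft.rfft(b)
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
    Z = np.sqrt(1j * omega * mu0 * rho)            # half-space surface impedance
    e = np.fft.irfft(Z * B / mu0, n=n)             # E = Z * H, with H = B / mu0
    return e * 1e6                                 # V/m -> mV/km

# A sinusoidal 10 mHz disturbance of 100 nT amplitude, 1 s sampling:
t = np.arange(0, 3600.0, 1.0)
b = 100.0 * np.sin(2 * np.pi * 0.01 * t)
e = geoelectric_field(b, dt=1.0, rho=1000.0)
```

In a layered (1-D) Earth the half-space impedance would be replaced by a recursively computed surface impedance, but the frequency-domain multiplication step is the same.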
Author networks play a key role in doing science. Developing networks is critical for career advancement in a wide variety of ways, and differences in networks may be a core reason for the persistence of implicit gender bias. Combining the AGU Fall Meeting abstracts from 2014-2018 with self-identified AGU member data on birth year and gender provides a large database of more than 400,000 unique co-author interactions that we use to examine author networks by age, gender, and country. Age data are necessary to disambiguate the effect of the historic lack of women in the Earth and space sciences. The data show that women’s networks are closer to those expected from the age-gender distribution of the overall membership, whereas the networks of men include more men than expected, although women also interact with men of similar age more than expected from the membership. Women’s networks are also less international than those of their male colleagues in most age cohorts. These differences start in the youngest age cohort. These data indicate that addressing implicit bias requires efforts at encouraging and developing more balanced author networks, particularly among early-career scientists. Recent work suggests that this will also improve science outputs.
This study presents the morphology of GPS L-band scintillations at the equatorial anomaly station Bahir Dar (11°30′N, 37°30′E) using GPS-SCINDA data during the descending phase of high solar activity, from January to December 2014. To study low-latitude scintillation, we used millions of data points recorded every minute over one year by 32 GPS satellites. Intense scintillation occurred during the daytime with small frequency, while relatively moderate scintillation occurred very frequently during the nighttime. Over the period of observation, the variation of scintillation with local time and season was analyzed; the occurrence of scintillation is minimum in the summer months and maximum in the equinox months, with the highest values observed in March and September. Pre-midnight and post-midnight occurrences of scintillation were also studied, and pre-midnight scintillation was found to be maximum in equinox and minimum in the winter months. Generally, it is found that most scintillations are weak (S4 < 0.1) and intense scintillations with S4 > 0.3 are rare.
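The S4 index underlying the weak/intense categories above is the normalized standard deviation of received signal intensity over a short interval. A minimal sketch follows; the thresholds match the abstract, while the function names and the constant-intensity example are ours:

```python
import numpy as np

def s4_index(intensity):
    """Amplitude scintillation index: sqrt of the normalized variance
    of signal intensity samples over one interval (e.g., 60 s)."""
    i = np.asarray(intensity, dtype=float)
    mean_i = np.mean(i)
    return np.sqrt((np.mean(i ** 2) - mean_i ** 2) / mean_i ** 2)

def classify(s4):
    """Categories per the thresholds quoted in the abstract:
    weak (S4 < 0.1) and intense (S4 > 0.3); moderate in between."""
    if s4 > 0.3:
        return "intense"
    if s4 < 0.1:
        return "weak"
    return "moderate"
```

A perfectly steady signal has zero variance and hence S4 = 0, falling into the "weak" category.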
An engaged community of scientific programmers is an invaluable asset to any open data provider. The National Ecological Observatory Network (NEON) is a long-term observatory focused on collecting and providing open, continental-scale data that characterize and quantify complex and rapidly changing ecological patterns and processes. The observatory provides over 180 different data products that cover a wide range of variables of interest to researchers across the earth and life sciences. NEON creates and provides code and tools to enhance researchers’ ability to work with these data. In addition, NEON provides several platforms to help connect researchers sharing open code related to NEON data products with those who are also interested in using them. Code and tools created by NEON scientists are distributed through the NEONScience GitHub organization (https://github.com/NEONScience). Current tools include the neonUtilities R package that provides basic tools for accessing and working with most NEON data products, as well as the geoNEON package that facilitates access to NEON spatial data. Other code packages contain the algorithms used to produce specific data products, including the eddy4R package, used to create the bundled eddy-covariance data product. Finally, some code packages are designed to build upon published NEON data to create value-added, derived products. Members of NEON’s user community have contributed to some of the packages described above, and others are creating their own open code resources for using NEON data. Use of NEON code packages and development of open code are highly variable within the NEON user community, and NEON has explored several approaches to engage users in this aspect of the observatory, including online tutorials, webinars, workshops, and hackathons. Developing and expanding an engaged community of open code users around NEON data is a continuing and evolving effort for the NEON project.
Results from a study of automatic aurora classification using machine learning techniques are presented. The aurora is the manifestation of physical phenomena in the ionosphere-magnetosphere environment. Automatic classification of auroral images from the Arctic and Antarctic is therefore an attractive tool for developing auroral statistics and for supporting scientists in studying auroral images in an objective, organized and repeatable manner. Although previous studies have presented tools for detecting aurora, there has been a lack of tools for classifying aurora into subclasses with high precision (>90%). This work considers seven auroral subclasses: breakup, colored, arcs-bands, discrete, patchy, edge and clear-faint. Five different deep neural network architectures were tested along with the well-known classification algorithms k-nearest neighbors (KNN) and support vector machines (SVM). A set of clean nighttime color auroral images, without ambiguous auroral forms, moonlight, twilight, clouds, etc., was used for training and testing. The deep neural networks generally outperformed the KNN and SVM methods, and the ResNet-50 architecture achieved the highest performance, with an average classification precision of 92%. Although the results indicate that high-precision aurora classification is an attainable objective using deep neural networks, it is stressed that a common consensus on auroral morphology and the criteria for each class needs to be established before classification of ambiguous images can be readily achieved.
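Per-class precision of the kind averaged above is computed column-wise from a confusion matrix (true positives over all predictions assigned to a class). A small NumPy sketch with a hypothetical 7x7 matrix; the numbers are illustrative, not the study's results:

```python
import numpy as np

# The seven subclasses considered in the abstract:
classes = ["breakup", "colored", "arcs-bands", "discrete",
           "patchy", "edge", "clear-faint"]

def per_class_precision(cm):
    """Precision per class from a confusion matrix with rows = true
    class and columns = predicted class: diagonal / column sums."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=0)

# Hypothetical matrix: 90 correct per class, misclassifications spread
# evenly, so every column sums to 100 and each precision is 0.90.
cm = np.eye(7) * 90 + np.full((7, 7), 10 / 6) * (1 - np.eye(7))
precision = per_class_precision(cm)
```

Averaging `precision` over the seven classes gives the single summary figure reported in studies like this one.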
South Asia is among the most populous regions on Earth and houses fast-developing economies. The unique geographical settings and socio-economic-demographic structure of the region make it highly vulnerable to the risks posed by climate change, as documented by several comprehensive scientific research reports. Human-induced climate change signatures have already been noted in the form of increasing extremes (e.g. cyclones, droughts, floods, heat waves, thunderstorms, etc.), rising sea level, and changing monsoon patterns over the region. Though considerable progress has been made towards understanding the science of climate change, regional climate change consequences are still not well understood, limited by sparse observational networks and inadequate knowledge of region-specific physical processes, which often lead to large spread and uncertainties in model projections. Based on the available literature, the chapter highlights the past, present, and future projections of climate over South Asia. Recent advances in observations and dedicated regional and earth system modeling activities over the region are also discussed alongside other emerging methodologies and tools which can lead to overall improvement in understanding of physical processes. We discuss the studies that have been carried out in the past and also the prospective gap areas that can be pursued in future through the use of a combined framework of modern observations-modeling-analysis techniques.
We present our new multiscale pairwise-force smoothed particle hydrodynamics (PF-SPH) model for the characterization of flow in fractured porous media. The fully coupled multiscale PF-SPH model is able to simulate flow dynamics in a porous and permeable matrix and in adjacent fractures. Porous medium flow is governed by the volume-effective Richards equation, while the flow in fractures is governed by the Navier–Stokes equation. Flow from a fracture to the porous matrix is modeled by an efficient particle removal algorithm and a virtual water redistribution formulation to enforce mass and momentum conservation. The model is validated by (1) comparison to a COMSOL finite element model (FEM) for Richards-based flow dynamics in a partially saturated medium and (2) laboratory experiments to cover more complex cases of free-surface flow dynamics and imbibition into the porous matrix. For the laboratory experiments, Seeberger sandstone is used because of its well-known homogeneous pore space properties. The saturated hydraulic conductivity of the permeable matrix is estimated from a pore size and grain size distribution analysis. The developed PF-SPH model shows good correlation with the COMSOL model and all types of laboratory experiments. We employ the proposed model to study preferential flow dynamics for different infiltration rates. Here, flow in the fracture is associated with the term “preferential flow”, providing rapid water transmission, while flow within the adjacent porous matrix enables only slow and diffuse water transmission. Depending on the infiltration rate and water inlet location, two cases can be distinguished: (1) immediate preferential/fracture flow or (2) delayed preferential flow. In the latter case, water accumulates at the surface first (ponding), then the fracture rapidly transmits water to the bottom system outlet. For the immediate fracture flow response, ponding only occurs once the fracture is fully saturated with water. 
In all cases, preferential flow is much more rapid than diffuse flow, even under saturated porous medium conditions. Furthermore, infiltration dynamics in rough fractures adjacent to an impermeable or permeable matrix for different infiltration rates are studied as well. The simulation results show a significant lag in arrival times for small infiltration rates when a permeable porous matrix is employed, rather than an impermeable one. For higher infiltration rates, water rapidly flows through the fracture to the system outlet without any significant delay in arrival times, even in the presence of the permeable matrix. The analysis of the amount of water stored in the permeable fracture walls and in the fracture void space shows that for small infiltration rates, most of the injected water is retarded within the porous matrix. Flow velocity is higher for large infiltration rates, such that most of the water flows rapidly to the bottom of the fracture with very little influence of the matrix imbibition process.
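For reference, the Richards equation governing unsaturated flow in the porous matrix can be written in its common mixed form (with θ the volumetric water content, h the pressure head, K(h) the hydraulic conductivity, and ẑ the vertical unit vector); the abstract's "volume-effective" variant adapts this to the SPH particle discretization:

```latex
\frac{\partial \theta}{\partial t} = \nabla \cdot \left[\, K(h)\,\bigl(\nabla h + \hat{\mathbf{z}}\bigr) \,\right]
```

The gravity term ẑ is what drives the slow, diffuse downward imbibition that the fracture's Navier–Stokes flow bypasses.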
C-LIFE is a landed camera suite, suitable for Ocean World surfaces, consisting of a color Context Reconnaissance Stereo Imager and an LED flashlight that can also identify biogenic material through fluorescence. The C-LIFE design leverages ongoing work (with industry partners Space Dynamics Lab, SRI International and Ball Aerospace) from our ICEE-2 and COLDTech awards in low-temperature detector qualification, radiation modeling and mechanical design. It takes advantage of our development work on the Descent Imager for Europa Hazard Avoidance and Radiation Durability (DIEHARD) and camera development on other missions. The C-LIFE camera head is mounted on the Europa Lander’s high gain antenna, which provides tilt and pan capability. Minimal electronics in the camera head control and read out the detector and LEDs. Electronics in the lander vault perform most camera functions and image processing. C-LIFE contains no moving parts. In lieu of a focus mechanism, C-LIFE utilizes high F-number optics and a field of view elongated in the vertical direction, with progressive focus from top (infinity, e.g., imaging the horizon) to bottom (close, e.g., imaging the sample delivery port). Strip filters (matching Europa Clipper’s Europa Imaging System) on the detectors and partly-overlapping images provide color coverage. Heaters warm C-LIFE to operating temperatures, after which self-heating takes over; otherwise the camera is unheated in order to conserve energy. We combine two independent eyes into one mechanical housing (Figure 1) with a dual periscope design, which reduces the mechanical envelope, shielding mass, heating energy and total cabling distance. We use LEDs in three bands to illuminate and to excite fluorescence. Fluorescence excited by these three bands can identify the presence of key metabolic biomarkers and discriminate the quantities of live cells, dead cells and spores in a terrestrial setting. Figure 1. 
A dual-periscope design, with eyes 20 cm apart, folds optical trains (incoming rays in gray) to mutually shield focal planes. Fold mirrors locate LEDs inside the camera for shielding.
Satellite imagery used to rapidly develop maps of historical flood hazard and currently inundated areas over large spatial coverage is indispensable in supporting situational awareness for improved debris estimation, transportation impacts and damage assessments. However, how best to utilize these maps as actionable information during flood disasters and for flood disaster response assistance is less clear. Furthermore, the integration of any satellite data from an “untrusted” (non-mandated) source into the operations chain and response protocols of mandated agencies such as FEMA, PDC (PDC is already pulling some DFO-DSS layers) or the UN WFP would be a non-trivial procedure. These agencies need to prioritize support and resource requirements for community lifelines (Safety & Security; Food, Water & Shelter; Health & Medical; Energy (Power & Fuel); Communications; Transportation; and Hazardous Materials). The majority of these lifelines can be impacted by floods. The Global Flood Observatory’s (DFO, University of Colorado Boulder) web map server and its associated mobile app (DFO-Floods) is a resource for global extents of floods, now delivered as map products via web services. This flood decision support system (DSS) serves flood maps along with other trusted geospatial data to the global disaster response community. However, acceptance of the DFO product line as a trusted information source requires additional tests to assess its performance in combination with the respective response processes of agencies around the world. This would allow moving the product from a high Application Readiness Level into an Operational Readiness Level (ORL) for agency trusted data implementation. This paper reviews successful examples of the DFO flood layers, illustrates the newly released mobile app and discusses the need for trusted flood map products and services to support the global disaster response community.
Urban Green Spaces (UGSs) are proving to be a vital part of a city's urban area. These green spaces not only provide psychological comfort to humans but also substantially moderate the heat impact on a city. In a developing country like India, where urban growth is fast and haphazard, little consideration is given to green spaces in cities. Such a study has been conducted over the Lucknow metro-city of Uttar Pradesh state, India. In this study, we examine the relation between Land Use/Land Cover (LULC) and Land Surface Temperature (LST). Using Landsat-8 OLI data (Bands 3, 4 & 5) and the Maximum Likelihood Classification algorithm, six land use classes are obtained: “Built-up”, “Vegetation”, “Shrub”, “Water”, “Fallow Land” and “Other”. In addition, four land cover indices, namely the Normalized Difference Built-up Index (NDBI), Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI) and Normalized Difference Barren Index (NDBaI), are generated for the same areas. LST is obtained from TIRS data (Bands 10 & 11) using the Split Window Algorithm based on the Radiative Transfer Equation. Ancillary data are used to digitize five assembly constituencies (ACs) of the Lucknow parliamentary constituency (PC) of Lucknow district, namely Lucknow Cantonment, Lucknow North, Lucknow East, Lucknow West and Lucknow Central. The river and canal passing through these areas are also counted as Urban Green Spaces, as per Urban Atlas code 14100. A 250 m radius buffer is generated around built-up pixels to analyze the impact of UGSs on LST. The Cantonment and Lucknow East areas have the highest amount of UGSs in terms of “Vegetation”, “Shrub” and “Water” pixels, owing to the presence of forest area in both ACs. LST is found to be positively related to all indices except NDVI, which shows a strong negative correlation (R2 = 0.47); the highest R2 of 0.53 is with NDBI.
Among all five ACs, the best correlation between the four land cover indices and LST is found in Lucknow East AC, with R2 > 0.64 for NDVI, NDWI and NDBI. Lucknow East AC has the lowest LST, but there is very little difference between the LST of built-up pixels with minimal UGSs present in the 250 m radius buffer and built-up pixels with no UGSs around. Lucknow Central AC shows a 1 °C difference in LST between such built-up pixels.
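The spectral indices used above are all normalized band differences. A minimal sketch using the conventional Landsat 8 band assignments follows; the band-to-variable mapping and reflectance inputs are assumptions about standard usage, not taken from the study:

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (a - b) / (a + b)

# Conventional Landsat 8 OLI bands: B3 = green, B4 = red,
# B5 = NIR, B6 = SWIR-1 (surface reflectance arrays assumed).
def ndvi(b4, b5):
    return normalized_difference(b5, b4)   # vegetation: NIR vs red

def ndwi(b3, b5):
    return normalized_difference(b3, b5)   # water: green vs NIR

def ndbi(b5, b6):
    return normalized_difference(b6, b5)   # built-up: SWIR vs NIR
```

Vegetated pixels reflect strongly in the NIR, so NDVI is high where NDBI is low, which is consistent with the opposite-signed correlations with LST reported in the abstract.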
High resolution and accurate rainfall information is essential to modeling and predicting hydrological processes. Crowdsourced personal weather stations (PWSs) have become increasingly popular in recent years and can provide dense spatial and temporal resolution in rainfall estimates. However, their usefulness is limited due to a lack of trust in crowdsourced data compared to traditional data sources. Using crowdsourced PWS data without an evaluation of its trustworthiness can result in inaccurate rainfall estimates, as PWSs may be poorly maintained or incorrectly sited. In this study, we advance the Reputation System for Crowdsourced Rainfall Networks (RSCRN) to bridge this trust gap by assigning dynamic trust scores to the PWSs. Using rainfall data collected from 18 PWSs in two dense clusters in Houston, Texas, USA as a case study, the results show that using RSCRN-derived trust scores can increase the accuracy of 15-min PWS rainfall estimates when compared to rainfall observations recorded at the city’s high-fidelity rainfall stations. Overall, RSCRN rainfall estimates improved for 77% (48 out of 62) of the analyzed storm events, with a median RMSE improvement of 27.3%. Compared to an existing PWS quality control method, results showed that while 13 (21%) storm events had the same performance, RSCRN improved rainfall estimates for 78% of the remaining storm events (38 out of 49), with a median RMSE improvement of 13.4%. Using RSCRN-derived trust scores can make the rapidly growing network of PWSs a more useful resource for urban flood management, greatly improving knowledge of rainfall patterns in areas with dense PWSs.
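A reputation system of this kind ultimately reduces to weighting observations by trust and comparing error statistics against a reference gauge. The sketch below is a deliberately simplified stand-in for RSCRN's scoring; the function names and the plain trust-weighted mean are our assumptions, not the published method:

```python
import numpy as np

def trust_weighted_rainfall(obs, trust):
    """Aggregate a set of simultaneous PWS rainfall observations into
    one estimate, weighting each station by its trust score."""
    obs = np.asarray(obs, dtype=float)
    trust = np.asarray(trust, dtype=float)
    return np.sum(trust * obs) / np.sum(trust)

def rmse(est, ref):
    """Root mean square error of estimates against reference gauges."""
    est, ref = np.asarray(est, dtype=float), np.asarray(ref, dtype=float)
    return np.sqrt(np.mean((est - ref) ** 2))

def pct_improvement(rmse_naive, rmse_trusted):
    """Percent RMSE improvement of the trust-weighted estimate over a
    naive (unweighted) one, as in the 27.3% figure quoted above."""
    return 100.0 * (rmse_naive - rmse_trusted) / rmse_naive
```

A station with trust score 0 is effectively excluded, which is how a poorly sited or badly maintained PWS stops corrupting the network estimate.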
The Foundation Spatial Data Framework (FSDF) is a framework of ten national authoritative geographic data themes that supports evidence-based socio-economic decision making across multiple levels of Australian and New Zealand government agencies, industry, research and the community. The AAA data management principles (Authoritative, Accurate and Accessible), articulated for the FSDF, are easily translatable to the FAIR Principles and applied to ensure:
• the ability to Find data through rich and consistently implemented metadata;
• Access to metadata and data by humans and machines, while practicing federated data management within trusted data repositories;
• Interoperability of metadata and data through adoption of common standards and application of best practices; and
• Reusability of data by capturing licencing constraints and information about its quality and provenance.
The Location Information Knowledge Platform (LINK) was developed in 2016 as a digital catalogue of FSDF content. This governed, online, dynamic analysis and discovery tool was designed to enhance the discovery of FSDF datasets, support work planning and indicate the legal frameworks, agency priorities and use cases associated with FSDF data. More than 73 Australian government agencies and commercial organisations use this platform. Current work includes:
• building common high-level and individual lower-level information models (ontologies) for the FSDF and each dataset;
• development of a new architecture for persistent identifiers and identifier incorporation in the datasets;
• the ISO 19115-1-based Australian and New Zealand Metadata Profile and best-practice user guides; and
• testing new workflows for metadata and data governance and integration utilising a set of common cloud-based infrastructure.
On realisation, the FSDF will become a necessary component of spatial socio-economic decision making across Australian and New Zealand government agencies and the private sector. 
FSDF will encourage cross-sector partnerships and enable seamless access to authoritative spatial data across organisational and jurisdictional boundaries, thus contributing to economic growth, improved public safety, meeting legal and policy obligations and sustaining business needs.
VISAGE (Visualization for Integrated Satellite, Airborne, and Ground-based data Exploration) aims to provide visualization and analytic capabilities for diverse datasets in an interactive user interface. Proof-of-concept use cases are centered around the Global Precipitation Measurement (GPM) mission’s Ground Validation (GV) program, which provides a wealth of intensive, coincident observations of atmospheric phenomena from a wide variety of ground-based, airborne and satellite instruments. These data have diverse temporal and spatial scales, variables, and data formats and organization. Key technical challenges include:
• 3D data rendering and visualization of multiple diverse datasets on a web-based platform;
• 3D data interrogation via a map user interface;
• temporal alignment of data with diverse time scales and resolutions; and
• computations on data fields across instruments and platforms.
To address these issues, the VISAGE project is working with cloud-native serverless technologies to render data as 3D Tiles point clouds for display in the Cesium geospatial 3D global mapping platform.
Newer satellite platforms, such as NISAR, are poised to produce huge amounts of data that require large computational resources. Currently, researchers typically download datasets for analysis on local computer resources. This paradigm is no longer practical given the volumes of data from new sensing platforms. While cloud computing services offer a potential solution for accessing and managing large computational resources, there remains a significant barrier to entry. Leveraging cloud services requires users to: navigate new terminology without appropriate documentation; optimize settings for services to reduce costs; and maintain software dependencies, upgrades, and allocated hardware resources. A more accessible approach for migrating earth scientists to the cloud is needed. To address this problem, we are developing the open source Python library PODPAC (Pipeline for Observational Data Processing Analysis and Collaboration), with the goal of helping to address NASA’s rapidly growing observational data volume and variety needs. PODPAC enables earth scientists to seamlessly transition between processing on a local workstation (their current paradigm) and distributed remote processing on the cloud. It does this by leveraging a text-based JSON format automatically generated for any plug-and-play algorithm developed using PODPAC (e.g., in a Jupyter Notebook). This text format describes data provenance, and is used in RESTful web requests to preconfigured PODPAC cloud deployments, allowing scalable, massively distributed processing. We demonstrate the seamless transition to the cloud by developing a simplified soil moisture downscaling algorithm in Python using PODPAC. Data for this algorithm uses NASA Soil Moisture Active Passive (SMAP) sensor data retrieved from the National Snow and Ice Data Center using OpenDAP, and fine-scale topographic data retrieved via Open Geospatial Consortium (OGC) Web Coverage Service (WCS) calls. 
We then use a serverless AWS Lambda function to run the same algorithm using the automatically-generated text format. Our generic preconfigured environment can handle a wide variety of processing pipelines, and scale up to 1024 parallel processes. This approach enables incremental adoption of cloud services by researchers, significantly lowering the barrier to entry.
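The core idea of a text-serialized pipeline that runs identically on a workstation or in the cloud can be illustrated with a toy JSON node graph. This schema and these operations are hypothetical stand-ins for illustration only, not the actual PODPAC pipeline format:

```python
import json

# A minimal stand-in for a text-serialized processing pipeline: each
# node names an operation and its inputs. (Node names, operations, and
# schema are hypothetical, not the PODPAC specification.)
pipeline = json.dumps({
    "nodes": {
        "smap":  {"op": "constant", "value": 0.25},
        "scale": {"op": "multiply", "inputs": ["smap"], "factor": 2.0},
    },
    "output": "scale",
})

def run(pipeline_json):
    """Evaluate the pipeline locally. Because the whole computation is
    described by a plain string, the identical string could instead be
    sent in a web request to a preconfigured serverless endpoint."""
    spec = json.loads(pipeline_json)
    results = {}
    for name, node in spec["nodes"].items():  # insertion order = topological here
        if node["op"] == "constant":
            results[name] = node["value"]
        elif node["op"] == "multiply":
            results[name] = results[node["inputs"][0]] * node["factor"]
    return results[spec["output"]]
```

Because the string fully describes the computation (and hence its provenance), local and remote execution are guaranteed to perform the same operations; only the execution venue changes.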
A growing number of initiatives in disparate scientific domains have arisen in the last decade with the common goal of clustering data and service resources across Europe and making them integrated, open, sharable and available through infrastructures built according to FAIR principles. In this context, the European Plate Observing System (EPOS), now in its Implementation Phase and soon attaining the status of an ERIC, is a long-term plan to facilitate the integrated use of data, data products, software and services (DDSS) from distributed research infrastructures in Europe in the solid Earth domain. Its innovation potential consists in the opportunity of integrating distributed heterogeneous resources from European, national and institutional resource providers, and making them available in one single environment. The EPOS technical architecture has three main layers: a) a National Layer of data providers, which provides access to DDSS; b) a community-specific, Europe-wide Thematic Core Services (TCS) layer, which collects and integrates DDSS from specific sub-domains and makes them available at the European level; and c) the EPOS Integrated Core Services (ICS) system, where the integration of DDSS occurs. The architecture relies on three main concepts: (1) metadata, which is fundamental for describing the assets and resources managed by the ICS; a twofold approach was used, with the CERIF model storing all information within the system at the metadata management level, and an extension of DCAT-AP (EPOS-DCAT-AP) created at the metadata transfer level to facilitate TCS metadata collection; (2) an architectural approach based on microservices that ensures scalability, flexibility and system interoperability; and (3) a harmonization process focused on technical aspects such as data formats and protocols to access DDSS, which also required intra-domain work on semantic interoperability, including the adoption of common standards and vocabularies. 
We will discuss these topics and show a demonstration of the ICS prototype.
FAIR principles: https://www.force11.org/group/fairgroup/fairprinciples
EPOS: https://www.epos-ip.org/
DCAT-AP: https://joinup.ec.europa.eu/solution/dcat-application-profile-data-portals-europe
EPOS-DCAT-AP: https://github.com/epos-eu/EPOS-DCAT-AP/
The Space Physics Data Facility (SPDF) has developed and/or leveraged standardized self-describing data formats, metadata for datasets and parameters, time conventions, and dataset and file-naming conventions that enable effective data analysis and browsing using generic, easy-to-use software and web services. Software and services include SPDF’s CDAWeb, and external tools such as Autoplot and the SPEDAS IDL library. Standards and conventions include: dataset and file-naming conventions, the CDF scientific data format (including its new Python library), the ISTP/IACG/SPDF guidelines for global and variable attributes, time variable types, and the SKTeditor metadata creation tool. The SPASE standards for describing datasets for easy searching are crucial to the Heliophysics Data Portal.
A Digital Elevation Model (DEM) is a 3D representation of the terrain surface in discrete form and a standard tool for hydrological and research applications related to terrain characterization, landscape, and water resources management. It helps in identifying physical features of an area, watershed delineation, and stream network generation. However, several issues related to DEM accuracy are of utmost concern to researchers. The present study compares four DEMs, viz. Cartosat-1, SRTM, ALOS and ASTER, each with the same spatial resolution of 30 m, under two categories: elevation data and topographic attributes. The vertical accuracy of the DEMs is examined using ground control points as reference elevations generated from a topographic map. Analysing different sources of error in the DEMs, the RMSE- and MAE-based validation of elevation suggests that Cartosat-1 shows the highest relative vertical accuracy (RMSE = 45.2, MAE = 7.7) and ASTER the least (RMSE = 60.5, MAE = 34.6). The grid size, spatial variation and vertical accuracy of a DEM are among the prime attributes of data sources determining variation in basin morphometry. The study area shows gradually undulating topography with a 5th-order drainage network. It can be inferred from the study that the mean elevation values of ALOS, SRTM and Cartosat-1 are relatively lower than those of ASTER, and differences in stream parameters are also observed.
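The RMSE and MAE statistics quoted above are straightforward to compute from DEM elevations sampled at the ground control points. A minimal sketch with illustrative values, not the study's data:

```python
import numpy as np

def vertical_accuracy(dem_z, gcp_z):
    """RMSE and MAE of DEM elevations against ground control point
    elevations (same units as the inputs)."""
    err = np.asarray(dem_z, dtype=float) - np.asarray(gcp_z, dtype=float)
    return np.sqrt(np.mean(err ** 2)), np.mean(np.abs(err))

# Hypothetical example: two check points, one perfect, one 2 units high.
rmse_val, mae_val = vertical_accuracy([10.0, 12.0], [10.0, 10.0])
```

RMSE penalizes large outliers more strongly than MAE, which is why the two statistics can rank DEMs differently, and why the study reports both.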
The southwestern U.S. has been experiencing prolonged drought conditions contributing to declines in snowpack and surface water supply. On the Navajo Nation (NN), the largest U.S. federally recognized sovereign tribal nation in land area, snowpack is an essential reservoir for surface water storage and aquifer recharge, but has not been extensively monitored. Within the NN, only two high elevation regions, Chuska Mountains and Defiance Plateau, have in-situ snow monitoring at eight sites (consisting of two SNOTEL stations and eight snow courses). With climate change contributing to long-term temperature increases, patterns of snowfall and snowmelt are changing and NN leaders are recognizing the need for more detailed and reliable monitoring systems. This study explored how NASA Earth Observations can be used to provide more frequent, high-resolution tracking of snow cover extent on the NN, and how those data can offer actionable insights for local water managers. The Normalized Difference Snow Index (NDSI), as derived from NASA MODIS products, was used to create daily cloud/gap-free images of Snow Covered Area (SCA) during the winter months (November – April) from 2002 to 2018. Aggregated weekly and monthly means were then constructed to study spatial and temporal anomalies in SCA. These data were compared with the available ground-based measurements of snow water equivalent (SWE). Results indicate that SCA anomalies can serve as a proxy for monitoring snowpack variability across all high-elevation areas of the NN. SCA anomalies also suggest that the NN snowpack is exhibiting increased variability during peak winter months, along with declines in the spring months, consistent with broader regional climate trends. This study aids in the establishment of remotely sensed snow monitoring on the NN. 
Further analyses would be improved through the use of additional snow products, such as the MODIS Snow Covered-Area and Grain size retrieval algorithm (MODSCAG) to estimate fractional snow cover and snow grain size, and NASA’s Airborne Snow Observatory (ASO) to estimate snow albedo and SWE.
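The NDSI used above is a normalized difference of visible-green and shortwave-infrared reflectance, bands in which snow is respectively bright and dark. A minimal sketch; the MODIS band assignments follow common convention, and the 0.4 snow threshold is a widely cited default rather than the exact criterion of the study:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and shortwave-
    infrared reflectance (conventionally MODIS bands 4 and 6)."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir)

def snow_covered(green, swir, threshold=0.4):
    """Binary snow map. The 0.4 threshold is a commonly used global
    default; operational products refine it regionally."""
    return ndsi(green, swir) > threshold
```

Snow reflects strongly in the green band but absorbs in the SWIR, so snowy pixels drive NDSI toward +1, while most other bright surfaces (e.g., bare soil) do not.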