Jenny Knuth

and 5 more

One obstacle to space weather research is the practical challenge of accessing relevant data. Space weather data are housed in disparate repositories, each with its own unique focus, be it solar, magnetospheric, atmospheric, or earth-based. Much of the effort spent acquiring data could instead be spent on space weather research and education. To address this problem, the Space Weather Technology, Research, and Education Center (SWx TREC) at the University of Colorado Boulder, in collaboration with the Laboratory for Atmospheric and Space Physics (LASP), has developed the Space Weather Data Portal (https://lasp.colorado.edu/space-weather-portal), a tool built by and for the space weather community. Through the Data Portal, previously dispersed space weather data are gathered in one unified place, accessible to scientists, students, and curious individuals. The focus is on users and their ability to discover, display, compare, overplot, and download relevant data. A user can filter for past events, then easily display and download data related to an event from the moment it occurs on the Sun, as it travels through space and the atmosphere, to the impacts it has on the Earth. Analysis of space weather events via the Data Portal has proved useful for forecaster training and online learning. The community-created Event Library is a shortcut to curated data collections that provide narratives for context and serve as launch pads for further space weather exploration. This presentation will highlight contributions to the Data Portal from the community: datasets, event markers, timelines, and narrated data collections. Your contributions are encouraged, as new resources and improvements are deployed every few weeks. Through this iterative, collaborative process, the Data Portal aims to increase awareness of space weather and its impacts and to decrease the time between research and real-world applications.
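
For readers who prefer scripted access, the workflow the abstract describes (filter a time range around an event, then download the matching data) might look roughly like the sketch below. The endpoint URL, dataset name, and query parameters are illustrative placeholders, not the Data Portal's documented API.

```python
# Hypothetical sketch of scripted, event-based data retrieval.
# The endpoint, dataset id, and parameter names are placeholders,
# NOT the Space Weather Data Portal's documented API.
import requests

BASE_URL = "https://example.com/space-weather/api/data"  # placeholder endpoint

params = {
    "dataset": "goes_xray_flux",        # hypothetical dataset id
    "start": "2017-09-06T00:00:00Z",    # example event window
    "end": "2017-09-07T00:00:00Z",
    "format": "csv",
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()

# Save the returned data for local analysis or overplotting.
with open("goes_xray_flux.csv", "wb") as f:
    f.write(response.content)
print("saved", len(response.content), "bytes")
```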

Wendy Carande

and 4 more

Space weather events can impact satellite communications, astronaut health, and the electric power grid. It is thus of utmost importance that we develop efficient, reliable tools to determine when space weather events, such as solar flares, will occur and how strong they will be. The SWx TREC Deep Learning Laboratory has developed several state-of-the-art machine learning projects to improve solar flare prediction through the use of deep learning models, generative adversarial network data augmentation, and explainable artificial intelligence techniques. In particular, we compared two generative adversarial networks (GANs) for super-resolving the Solar and Heliospheric Observatory’s Michelson Doppler Imager (SOHO/MDI) magnetogram data to match the quality of the Solar Dynamics Observatory’s Helioseismic and Magnetic Imager (SDO/HMI) magnetogram data. We find that both GANs preserve key features of the original SOHO/MDI magnetograms while achieving resolution that matches the SDO/HMI data. In the future, we will use the combined, augmented dataset in a Long Short-Term Memory (LSTM) model for solar flare prediction to see whether training on the expanded dataset yields better predictive power than training on the SDO/HMI dataset alone. In addition to data augmentation, we have applied Local Interpretable Model-Agnostic Explanations (LIME) to our existing solar flare prediction model to provide more insight into specific predictions. This is an important step in building trust in our model and understanding which features drive its predictions. In this presentation, we will discuss these recent projects as well as future work that the SWx TREC Deep Learning Laboratory will tackle to advance the field of machine learning in space weather, including improved hardware, better visualization capabilities, cutting-edge models, software tools, and community resources.
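
As a concrete illustration of the LIME step, the sketch below applies LIME's tabular explainer to a stand-in classifier trained on synthetic active-region features. The feature names, model, and data are hypothetical placeholders, not the Deep Learning Laboratory's actual flare-prediction pipeline.

```python
# Minimal LIME sketch on a stand-in flare classifier.
# Features, labels, and model are synthetic placeholders,
# NOT the SWx TREC flare-prediction model.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical active-region features; names are illustrative only.
feature_names = ["total_unsigned_flux", "mean_shear_angle", "r_value"]
X_train = rng.normal(size=(500, 3))
y_train = rng.integers(0, 2, size=500)  # synthetic labels: 1 = flare

# Stand-in classifier playing the role of the prediction model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["no_flare", "flare"],
    mode="classification",
)

# Explain one prediction: which features pushed it toward "flare"?
explanation = explainer.explain_instance(
    X_train[0], model.predict_proba, num_features=3
)
print(explanation.as_list())  # per-feature contributions to this prediction
```

The payoff is the per-feature weight list: it shows which inputs drove this specific prediction, which is what makes the model's behavior inspectable and builds trust in individual forecasts.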

Julie Barnum

and 4 more

The Magnetospheric Multiscale (MMS) Science Data Center (SDC) at the Laboratory for Atmospheric and Space Physics (LASP), at the University of Colorado, has managed MMS science and ancillary data processing and distribution since MMS launched in March 2015. The MMS SDC employs automation in nearly every part of its operations. Automation is used to start up processing “runners” that listen on queues for new processing jobs. Jobs are triggered in a few different ways: on a fixed schedule in cron, by certain mission or operational events, or by the appearance of new data files. A separate set of SDC code then automatically creates processing jobs and tracks their progress. The MMS SDC runs processing jobs for each instrument (47 job types in total), spanning data levels L1A to L3, “survey” and “burst” modes, plotting, and CDF file creation. The SDC runs anywhere from a few hundred to over 2,000 jobs per day (on average, about 1,000). Several fail-safes have been added to the code over time to ensure failures are caught and handled; even so, situations do arise where something in the SDC does not work as expected and must be dealt with. Added to these complexities is the fact that the MMS mission is incredibly time-sensitive and requires the SDC to be available and ready to handle issues 24/7/365, which can be challenging given the limited staffing on MMS a few years into the mission. The importance of automation in MMS SDC processing is clear: not only does automated processing relieve some of the load on the software engineers working on the SDC, it also ensures continued smooth operation of the MMS SDC, which in turn allows scientists to continue their research efforts unhindered. As time has progressed, various improvements and additional automation have been implemented. This poster will focus on automation improvements that keep the system running smoothly with almost no human involvement.
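
To make the “runner” pattern concrete, the sketch below shows one way a queue-listening processing runner with a fail-safe might be structured. The queue, job fields, and processing function are hypothetical; they illustrate the pattern described in the abstract, not the MMS SDC's actual implementation.

```python
# Conceptual sketch of a queue-driven processing "runner".
# Queue contents, job fields, and process_job are hypothetical,
# NOT the MMS SDC implementation.
import json
import logging
import queue

log = logging.getLogger("runner")


def process_job(job: dict) -> None:
    """Placeholder for instrument-specific processing (e.g., L1A -> L3)."""
    log.info("processing level=%s instrument=%s", job["level"], job["instrument"])


def run(job_queue: "queue.Queue[str]") -> None:
    """Listen on a queue and process jobs as they appear."""
    while True:
        try:
            message = job_queue.get(timeout=5)  # block briefly, then re-check
        except queue.Empty:
            continue  # nothing yet; keep listening
        try:
            process_job(json.loads(message))
        except Exception:
            # Fail-safe: log the failure and keep the runner alive so one
            # bad job cannot stall the whole pipeline.
            log.exception("job failed: %s", message)
        finally:
            job_queue.task_done()


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    q: "queue.Queue[str]" = queue.Queue()
    q.put(json.dumps({"instrument": "fpi", "level": "l2", "mode": "burst"}))
    run(q)  # runs until interrupted, like a long-lived runner process
```

The key design point mirrored here is that a failed job is logged and skipped rather than allowed to crash the runner, so the system keeps draining its queue with no human involvement.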