Steven Lu and 5 more

The Planetary Data System (PDS) maintains archives of data collected by NASA missions that explore our solar system. The PDS Cartography and Imaging Sciences Node (Imaging Node) provides access to millions of images of planets, moons, and other bodies. Given the large and continually growing volume of data, there is a need for tools that enable users to quickly search for images of interest. Each image archived at the PDS Imaging Node is described by a rich set of searchable metadata properties, such as the time it was collected and the instrument used. However, users often wish to search on the content of an image to find those images most relevant to their scientific investigation or individual curiosity. To enable content-based search of the large image archives, we used machine learning techniques to create convolutional neural network (CNN) classification models. The initial CNN classification results for rover missions (i.e., Mars Science Laboratory and Mars Exploration Rover) and orbiter missions (i.e., Mars Reconnaissance Orbiter, Cassini, and Galileo) were deployed at the PDS Image Atlas (https://pds-imaging.jpl.nasa.gov/search) in 2017. With the content-based search capability, users of the PDS Image Atlas can search using a list of pre-defined classes and quickly find relevant images. For example, users can search “Impact ejecta” and find the images containing impact ejecta in the archive of the Mars Reconnaissance Orbiter mission.

All of the CNN classification models were trained using the transfer learning approach, in which we adapted a CNN model pretrained on Earth images to classify planetary images. Over the past several years, we employed the following three techniques to improve the efficiency of collecting labeled data sets, the accuracy of the models, and the interpretability of the classification results:

· First, we used the marginal-probability based active learning (MP-AL) algorithm to improve the efficiency of collecting labeled data sets (a brief sketch follows this list).
· Second, we used the classifier chain and ensemble approaches to improve the accuracy of the classification results.
· Third, we incorporated the prototypical part network (ProtoPNet) architecture to improve the interpretability of the classification results.
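The abstract does not include implementation details, so the following is a minimal, illustrative PyTorch sketch of two of the ideas named above: adapting a pretrained CNN via transfer learning, and one plausible reading of marginal-probability based active learning in which the unlabeled images whose per-class marginal probabilities sit closest to the 0.5 decision boundary are queried for labeling first. The model choice (ResNet-18), class count, and exact scoring rule are assumptions, not the authors' actual system.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 25  # hypothetical number of Image Atlas classes

    # Transfer learning: start from a network pretrained on Earth imagery
    # (ImageNet) and replace the classification head for planetary classes.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    # Optionally freeze the pretrained feature extractor and train only the head.
    for name, param in model.named_parameters():
        if not name.startswith("fc"):
            param.requires_grad = False

    def mp_al_scores(logits: torch.Tensor) -> torch.Tensor:
        # Assumed MP-AL variant: treat each class score as an independent
        # (marginal) probability; an image is most informative when some
        # class's marginal probability is near the 0.5 decision boundary.
        marginals = torch.sigmoid(logits)
        return -(marginals - 0.5).abs().min(dim=1).values

    # Rank an unlabeled pool and pick the next batch to label.
    unlabeled_logits = torch.randn(1000, NUM_CLASSES)  # placeholder scores
    query_idx = mp_al_scores(unlabeled_logits).topk(k=64).indices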

Emily Dunkel and 4 more

NASA’s Planetary Data System (PDS)* contains data collected by missions that explore our solar system. This includes the Lunar Reconnaissance Orbiter (LRO), which has collected as much data as all other planetary missions combined. Currently, the PDS offers no way to search lunar images based on content. Working with the PDS Cartography and Imaging Sciences Node (IMG), we develop LROCNet, a deep learning (DL) classifier for imagery from LRO’s Narrow Angle Cameras (NACs). NAC data are 5 km swaths at nominal orbit, so we perform a saliency detection step to find surface features of interest. A detector developed for Mars HiRISE (Wagstaff et al., 2021) worked well for our purposes after we updated it for LROC image resolution. We use this detector to create a set of image chipouts (small cutouts) from the larger image, sampling the lunar globe. The chipouts are used to train LROCNet. We select classes of interest based on what is visible at NAC resolution, consulting with scientists and performing a literature review. Initially, we had 7 classes: fresh crater, old crater, overlapping craters, irregular mare patches, rockfalls and landslides, of scientific interest, and none. Using the Zooniverse platform, we set up a labeling tool and labeled 5,000 images. We found that fresh crater made up 11% of the data and old crater 18%, with the vast majority labeled none. Due to limited examples of the other classes, we reduced our initial class set to fresh crater (with impact ejecta), old crater, and none. We divided the images into train/validation/test sets, making sure no image swath spans multiple sets, and fine-tuned pre-trained DL models. VGG-11, a standard DL model, gives the best performance on the validation set, with an overall accuracy of 82% on the test set. We had 83% label agreement in our human label study; labeling was difficult because there is no clear class boundary. Our DL model accuracy is thus similar to that of human labelers: 64% of fresh craters, 80% of old craters, and 86% of the none class are classified correctly. Predictions from this model will be integrated with IMG’s Atlas, allowing users to interactively search for classes of interest.

*https://pds-imaging.jpl.nasa.gov

Copyright © 2022, California Institute of Technology. U.S. Government sponsorship acknowledged.
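As a rough illustration of the training setup described above, here is a hedged PyTorch/scikit-learn sketch of fine-tuning VGG-11 for the three final classes and of splitting chipouts so that no parent image swath spans multiple sets. The variable names (chipouts, labels, swath_ids), split ratio, and placeholder data are assumptions for illustration only, not the authors' settings.

    import numpy as np
    import torch.nn as nn
    from torchvision import models
    from sklearn.model_selection import GroupShuffleSplit

    # Fine-tune a pretrained VGG-11: swap the 1000-way ImageNet head
    # for the 3 classes (fresh crater, old crater, none).
    model = models.vgg11(weights=models.VGG11_Weights.IMAGENET1K_V1)
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)

    # Placeholder data: 100 chipout feature stubs drawn from 10 swaths.
    chipouts = np.zeros((100, 1))
    labels = np.random.randint(0, 3, size=100)
    swath_ids = np.repeat(np.arange(10), 10)  # parent swath of each chipout

    # Group-aware split: chipouts from the same NAC swath never land in
    # different sets, which avoids spatial leakage between train and test.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, test_idx = next(splitter.split(chipouts, labels, groups=swath_ids))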

Favour Nerrise and 6 more

In-situ novelty-based target selection of scientifically interesting (“novel”) surface features can expedite follow-up observations and new discoveries for the Mars Science Laboratory (MSL) rover and other planetary exploration missions. This study aims to identify which methods perform best for detecting novel surface features in MSL Navcam images for follow-up observations with the ChemCam instrument, as a complement to the existing Autonomous Exploration for Gathering Increased Science (AEGIS) onboard targeting system. We created a dataset of 6630 candidate targets within Navcam grayscale images acquired between sols 1343 and 2578 using the Rockster algorithm. These were the same target candidates considered by AEGIS, chosen to enable direct comparison to past AEGIS target selections. We employed five novelty detection methods: Discovery via Eigenbasis Modeling of Uninteresting Data (DEMUD), Isolation Forest, Principal Component Analysis (PCA), the Reed-Xiaoli (RX) detector, and Local RX. To evaluate the algorithm selections, a member of the MSL science operations team independently identified candidate targets that represented example scenarios of novel geology that an algorithm should identify, such as layered rocks, unusual light-toned textures, and small light-toned veins. We compared these methods to selections made by AEGIS and a random baseline. Initial experiments for three scenarios showed that Local RX most frequently prioritized novel targets, followed by DEMUD and AEGIS. Our next steps in this study include evaluating input feature representations other than raw pixel intensities (e.g., Histogram of Oriented Gradients features), performing additional experiments to evaluate novel target prioritization performance, and selecting target candidates in Mastcam color images.
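Two of the five detectors named above are standard enough to sketch directly. The following hedged scikit-learn/NumPy example scores candidate targets (represented, as in the initial experiments, by raw pixel intensities) with Isolation Forest and with a PCA reconstruction-error criterion; the chip size, component count, and random data are placeholders rather than the study's settings, and DEMUD, RX, and Local RX are omitted.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X_seen = rng.random((500, 64 * 64))  # flattened previously seen targets
    X_new = rng.random((50, 64 * 64))    # new candidates to rank for novelty

    # Isolation Forest: lower score_samples means more anomalous,
    # so negate to obtain a novelty score.
    iso = IsolationForest(random_state=0).fit(X_seen)
    iso_novelty = -iso.score_samples(X_new)

    # PCA: model "typical" targets with a low-dimensional eigenbasis and
    # score novelty by reconstruction error. (DEMUD builds on a related
    # idea, incrementally downweighting data it has already explained.)
    pca = PCA(n_components=20).fit(X_seen)
    recon = pca.inverse_transform(pca.transform(X_new))
    pca_novelty = np.linalg.norm(X_new - recon, axis=1)

    ranking = np.argsort(-pca_novelty)  # most novel candidates first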