
Deep Learning Saliency and Segmentation Methods for Robust Methane Plume Detection
  • Jake Lee, Jet Propulsion Laboratory, California Institute of Technology (Corresponding Author: [email protected])
  • Brian Bue, Jet Propulsion Laboratory, California Institute of Technology
  • Andrew Thorpe, Jet Propulsion Laboratory, California Institute of Technology
  • Daniel Cusworth, Carbon Mapper, Inc.
  • Alana Ayasse, Carbon Mapper, Inc.
  • Riley Duren, University of Arizona; Carbon Mapper, Inc.

Abstract

Identification of global methane (CH4) sources is critical for the quantification and mitigation of this potent greenhouse gas. In preparation for Carbon Mapper, a future spaceborne imaging spectrometer mission, our work so far has focused on developing a robust methane plume detection method with AVIRIS-NG Columnwise Matched Filter (CMF) data. While we have previously demonstrated robust classification of plume source presence in ~800m square tiles, a deployed Science Data System (SDS) pipeline will require heat map or mask products that highlight the location of methane plume sources in entire flightlines and scenes.
We present two methods for implementing pixel-wise methane plume detection. First, we convert our existing GoogLeNet Convolutional Neural Network (CNN) classifier into a Fully Convolutional Network (FCN) segmentation model with a novel implementation of shift-and-stitch, in which the final fully connected layer is replaced by a one-by-one convolutional layer. This allows us to produce a saliency map of methane plumes in scenes of arbitrary size without re-training the existing classification model. While the heat maps produced by this method lack pixel-wise precision, they are sufficiently localized to direct attention for further review.
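To make the conversion concrete, the sketch below (ours, not the deployed SDS code) illustrates the core step under simplified assumptions: a generic CNN tile classifier with a single fully connected head has that head's weights copied into a one-by-one convolution, after which the network can be applied to a flightline of arbitrary size to yield a coarse score map. The backbone, layer sizes, and module names are hypothetical placeholders.

```python
# Minimal sketch (not the authors' implementation): converting a CNN tile
# classifier's final fully connected layer into a 1x1 convolution so the
# network can be applied fully convolutionally to inputs of arbitrary size.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    """Stand-in for a classifier trained on fixed-size CMF tiles."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(                 # hypothetical backbone
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # collapses spatial dims
        )
        self.fc = nn.Linear(128, num_classes)          # final fully connected head

    def forward(self, x):
        x = self.features(x)
        return self.fc(torch.flatten(x, 1))

def to_fully_convolutional(clf: TileClassifier) -> nn.Module:
    """Copy the FC weights into an equivalent 1x1 convolution."""
    fc = clf.fc
    conv1x1 = nn.Conv2d(fc.in_features, fc.out_features, kernel_size=1)
    with torch.no_grad():
        conv1x1.weight.copy_(fc.weight.view(fc.out_features, fc.in_features, 1, 1))
        conv1x1.bias.copy_(fc.bias)
    # Drop the global pooling so spatial structure is preserved, then
    # apply the 1x1 conv at every spatial location to get a score map.
    backbone = nn.Sequential(*list(clf.features.children())[:-1])
    return nn.Sequential(backbone, conv1x1)

# Usage: a scene of arbitrary H x W yields a coarse per-location score map.
fcn = to_fully_convolutional(TileClassifier())
scores = fcn(torch.randn(1, 1, 256, 1024))   # -> (1, num_classes, H', W')
```

In the shift-and-stitch scheme, coarse maps computed from shifted copies of the input are then interleaved to recover a denser saliency map than a single forward pass provides.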
Second, we propose a new hybrid segmentation model based on the popular U-Net architecture. Notably, we backpropagate both segmentation and classification losses during training, which significantly reduces the number of false positive plume detections caused by false enhancements and artifacts. Additionally, we successfully use algorithm-generated, weakly-labeled segmentation masks, mitigating the need for expensive human-generated segmentation annotations. We report and compare the scene-level detection performance of these two methods on previously curated datasets from the “COVID” (2020 California), “CACH4” (2018 California), and “Permian” (2019 Texas) AVIRIS-NG campaigns.
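As a rough illustration of the joint-loss idea only, the following sketch (our assumption of how such a hybrid could be wired, not the published model) pairs a per-pixel segmentation loss computed against an algorithm-generated weak mask with a scene-level classification loss, and backpropagates their weighted sum. All module names, sizes, and the cls_weight parameter are hypothetical.

```python
# Minimal sketch (assumptions, not the authors' code): a joint training step
# where a U-Net-style model backpropagates both a pixel-wise segmentation
# loss against weakly-labeled masks and a scene-level classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridSegClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder encoder/decoder standing in for a full U-Net.
        self.encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(32, 1, 1)            # per-pixel plume logits
        self.cls_head = nn.Linear(32, 1)              # scene-level plume logit

    def forward(self, x):
        feats = self.encoder(x)
        mask_logits = self.decoder(feats)
        cls_logits = self.cls_head(feats.mean(dim=(2, 3)))  # global average pool
        return mask_logits, cls_logits

def joint_loss(mask_logits, cls_logits, weak_mask, scene_label, cls_weight=1.0):
    """Weighted sum of pixel-wise and scene-level binary cross-entropy.
    weak_mask: algorithm-generated (weak) segmentation target, shape (B, 1, H, W)
    scene_label: plume present / absent, shape (B, 1)
    """
    seg = F.binary_cross_entropy_with_logits(mask_logits, weak_mask)
    cls = F.binary_cross_entropy_with_logits(cls_logits, scene_label)
    return seg + cls_weight * cls

# Usage with random tensors standing in for a training batch.
model = HybridSegClassifier()
x = torch.randn(4, 1, 256, 256)
weak_mask = (torch.rand(4, 1, 256, 256) > 0.95).float()
scene_label = torch.ones(4, 1)
mask_logits, cls_logits = model(x)
loss = joint_loss(mask_logits, cls_logits, weak_mask, scene_label)
loss.backward()
```

The scene-level term penalizes confident plume predictions in scenes labeled as plume-free, which is one plausible mechanism for the reduction in false positives from artifacts described above.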
Submitted to AGU 2023 Annual Meeting: 12 Dec 2023
Published in AGU 2023 Annual Meeting: 27 Dec 2023