Abstract
Scene understanding methodologies (e.g., classification, segmentation,
and anomaly detection) can be costly when operating on a per-pixel
basis. As remote sensing applications increasingly rely on
higher-resolution imagery, neighboring pixels often carry redundant
information. In recent years, scene understanding approaches have
leveraged superpixel algorithms to partition imagery into small,
homogeneous regions for a variety of tasks. Instead of operating on
millions of pixels, downstream pipelines can take thousands, or in
some cases hundreds, of superpixels as input. Most
superpixel algorithms rely solely on RGB color information to produce
superpixel maps. However, when multiple co-registered modalities are
available, as is the case with the National Ecological Observatory
Network (NEON) tree crown dataset, information from these sources can
be combined to produce a single shared superpixel map. In this work we
combine airborne hyperspectral imagery, LiDAR point clouds, and RGB
imagery to obtain superpixel maps and compare the oversegmentation
results to those obtained by fusing individual maps produced from each
modality. Superpixels are computed using the Simple Non-Iterative
Clustering (SNIC) algorithm. Oversegmentation maps are scored using
standard oversegmentation evaluation metrics. We present results on a
subset of the NEON imagery dataset.
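
To make the joint-map idea concrete, the sketch below stacks
co-registered modalities into a single feature image and computes one
shared superpixel map. It is a minimal illustration, not the paper's
pipeline: the use of a LiDAR-derived canopy height raster in place of
the raw point cloud and the normalization scheme are assumptions, and
SLIC from scikit-image (the iterative algorithm from which SNIC was
derived) stands in for SNIC, which scikit-image does not provide.

    import numpy as np
    from skimage.segmentation import slic

    def joint_superpixels(rgb, hsi, chm, n_segments=1000,
                          compactness=0.1):
        """Compute one shared superpixel map from co-registered data.

        rgb: (H, W, 3) color image
        hsi: (H, W, B) hyperspectral cube with B bands
        chm: (H, W) LiDAR-derived canopy height model (assumption:
             a rasterized stand-in for the point cloud)
        """
        def scale(x):
            # Rescale each modality to [0, 1] so no single source
            # dominates the clustering distance.
            x = x.astype(np.float64)
            return (x - x.min()) / (x.max() - x.min() + 1e-12)

        # Stack all modalities into one (H, W, 3 + B + 1) feature image.
        features = np.dstack([scale(rgb), scale(hsi),
                              scale(chm)[..., None]])

        # SLIC treats the stacked channels as one feature vector per
        # pixel; convert2lab=False keeps the raw feature space.
        return slic(features, n_segments=n_segments,
                    compactness=compactness, channel_axis=-1,
                    convert2lab=False)

For the fused-maps comparison, one plausible scheme (an assumption,
not necessarily the paper's) is to run the same function on each
modality separately and intersect the resulting labelings, since the
intersection of oversegmentations is itself an oversegmentation.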