Figure 2. A flowchart of the MBARS algorithm. The blue box shows
the required input data, all of which are available on the PDS. The
image is specified manually, while the image metadata are fetched
automatically by MBARS from local files. The green box shows the processing steps of
MBARS, described in detail in Section 3. The algorithm generally works
with a single image panel at a time and carries out analyses with
different shadow boundaries in sequence. The final product, the
GIS-ready list of boulder objects and attributes, compiles all image
panels into a single file for import into GIS software. The user
compares the results of the different boundary parameter settings to the
manually counted boulders within the test areas (purple box), choosing
the best-fit solution as the final MBARS output.
3.1. Image Preparation and Shadow Boundary Selection
Prior to MBARS processing, several steps are taken to prepare the image
for analysis. First, the HiRISE image is broken into panels, which we
carry out with the Split Raster tool in ArcMap. This provides crucial map
orientation information to MBARS (as .pgw world files, which are
required for later steps) and splits the HiRISE image into manageable
sizes for processing. The size of each panel can be controlled by the
user, though 500-1000 pixel square image panels
(~125-250 m) are used in this work. The individual panel
size is limited only by computer hardware, and MBARS can accept panels
of any size.
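For reference, the tiling performed by Split Raster can also be reproduced outside ArcMap. The sketch below, assuming the rasterio package and an illustrative output naming scheme, cuts a georeferenced HiRISE product into fixed-size panels; here the georeferencing is carried in each GeoTIFF header rather than in the .pgw world files that ArcMap writes.

import rasterio
from rasterio.windows import Window, transform as window_transform

def split_hirise(path, panel=1000):
    """Tile a HiRISE image into panel-by-panel pixel GeoTIFFs, keeping georeferencing."""
    with rasterio.open(path) as src:
        for row in range(0, src.height, panel):
            for col in range(0, src.width, panel):
                win = Window(col, row, min(panel, src.width - col),
                             min(panel, src.height - row))
                profile = src.profile.copy()
                profile.update(driver="GTiff", height=win.height, width=win.width,
                               transform=window_transform(win, src.transform))
                # illustrative naming; MBARS itself expects the panels produced by ArcMap
                with rasterio.open(f"panel_{row}_{col}.tif", "w", **profile) as dst:
                    dst.write(src.read(window=win))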
Previous shadow segmentation methods have relied on maximum entropy
thresholding (Golombek et al., 2008) or range filtering
(Nagle-McNaughton et al., 2020) to define shadow boundaries. The maximum
entropy thresholding approach used in the G-H method creates two classes
within the image, shadows and non-shadows, and modifies the brightness
boundary between those two classes to maximize the inter-class entropy.
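For context, a threshold of this kind can be computed from the image histogram alone. The following sketch is a generic, Kapur-style maximum entropy threshold for 8-bit DN values; it is not the Golombek et al. (2008) implementation, and the function name is ours.

import numpy as np

def max_entropy_threshold(image):
    """Return the DN that maximizes the summed entropy of the shadow and non-shadow classes."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t], p[t:]
        w0, w1 = p0.sum(), p1.sum()
        if w0 == 0 or w1 == 0:
            continue
        # entropy of each class, skipping empty histogram bins
        h0 = -np.sum(p0[p0 > 0] / w0 * np.log(p0[p0 > 0] / w0))
        h1 = -np.sum(p1[p1 > 0] / w1 * np.log(p1[p1 > 0] / w1))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t  # pixels with DN below this value would be classed as shadow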
MBARS takes a different approach, predicting a shadow boundary, i.e.,
the maximum brightness at which a pixel is still considered part of a shadow,
for each image by forward modeling (Fig. 3). To predict a shadow boundary,
three key components are used: the darkness of shadows, the brightness
of non-shadow pixels, and the PSF of the HiRISE instrument. The interiors
of shadows larger than ~5 pixels are not perfectly dark
but are generally dark enough to register as a Digital Number (DN,
the pixel brightness value in the image) of 1 in HiRISE images. The
brightness of non-shadow pixels varies among images due to changing
surface properties (e.g., albedo) and photometric conditions. Instead of
predicting these changes, the brightness distribution in the target
HiRISE image is statistically sampled and used in the shadow model. Note
that this calculation is done on the entire image, not on individual
image panels, making the shadow boundary calculation consistent across
image panels. Finally, the HiRISE PSF is well-quantified from in-flight
imaging of on-board targets and stars (McEwen et al., 2007), though
other factors (spacecraft jitter, atmospheric conditions, etc.) are more
difficult to constrain. Following previous HiRISE work (Kirk et al.,
2008), a Lorentzian function with λ (half-width at half-maximum) = 0.77
is used here for the HiRISE PSF. Other factors that may influence the
effective PSF are assumed to be accounted for within this PSF. To
predict how a dark shadow will be blurred with the surrounding, brighter
non-shadow pixels, we construct a model image containing a dark (DN = 1) shadow
set against a background (non-shadowed area, Fig. 3) randomly sampled from the
image pixel brightness distribution, and convolve this model image with the PSF. After
convolution, the shadow interior becomes brighter due to blurring with
the nearby background. The DN of pixels within the constructed shadow
are recorded, and this process is repeated 100 times for each HiRISE
image. For each of the 100 modeled shadows, a user-chosen percentile of
the shadow-interior DNs is computed, and the average of these 100 values
is taken as the shadow boundary. The choice of this boundary parameter is
the only point of user influence. User-selected boundary parameters
between 40 and 70 (i.e., the 40th to 70th percentiles) produce MBARS
results consistent with manual observations (Table S1).
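This forward model can be summarized in a few lines of numpy/scipy. In the sketch below, the model shadow size, padding, Lorentzian kernel construction, and function names are illustrative assumptions rather than the MBARS implementation, and image_dns is a one-dimensional sample of the image's non-shadow pixel brightnesses.

import numpy as np
from scipy.signal import fftconvolve

def lorentzian_psf(hwhm=0.77, radius=10):
    """Radially symmetric Lorentzian kernel with the given half-width at half-maximum (pixels)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    psf = 1.0 / (1.0 + (np.hypot(x, y) / hwhm) ** 2)
    return psf / psf.sum()

def shadow_boundary(image_dns, percentile=50, n_trials=100, shadow_size=9, pad=20):
    """Predict the maximum DN still counted as shadow for one HiRISE image."""
    psf = lorentzian_psf()
    rng = np.random.default_rng()
    boundaries = []
    for _ in range(n_trials):
        # background drawn from the measured brightness distribution of the whole image
        model = rng.choice(image_dns, size=(shadow_size + 2 * pad,) * 2).astype(float)
        # dark (DN = 1) model shadow in the center
        model[pad:pad + shadow_size, pad:pad + shadow_size] = 1.0
        blurred = fftconvolve(model, psf, mode="same")
        interior = blurred[pad:pad + shadow_size, pad:pad + shadow_size]
        # user-chosen percentile of the blurred shadow interior for this trial
        boundaries.append(np.percentile(interior, percentile))
    # average over the trials gives the predicted shadow boundary DN
    return float(np.mean(boundaries))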
MBARS segments each image panel based on the shadow boundary DN value,
below which the pixels are considered to be part of a shadowed area.
During this step, MBARS also retrieves relevant metadata (sub-solar
latitude/longitude, resolution, incidence angle, etc.) from the
RDRCUMINDEX file provided by the PDS. The image is also rotated
according to the sun direction calculated from the sub-solar and
sub-spacecraft coordinates provided in the HiRISE image metadata. This
first collection of functions results in one primary product: the
original image rotated and filtered such that pixel intensities above
the shadow boundary are set to a fixed value. This segmented image is
passed on to the next major function, Boulder Segmentation.
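As an illustration of this step, the sketch below thresholds a panel against the predicted shadow boundary and rotates it toward the sun direction, assuming scipy for the rotation; the fill value, function name, and sign convention of the rotation angle are illustrative choices rather than MBARS defaults.

from scipy.ndimage import rotate

def segment_panel(panel, shadow_boundary_dn, sun_azimuth_deg, fill_value=255):
    """Filter a panel against the shadow boundary and rotate it to a common sun direction."""
    segmented = panel.copy()
    # pixels brighter than the predicted shadow boundary are flattened to a fixed value;
    # pixels at or below the boundary are kept as shadow candidates for Boulder Segmentation
    segmented[segmented > shadow_boundary_dn] = fill_value
    # nearest-neighbor rotation (order=0) preserves DN values; corners exposed by the
    # rotation are filled with the same fixed value
    return rotate(segmented, angle=sun_azimuth_deg, reshape=True, order=0, cval=fill_value)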