Improving Bayesian Evidential Learning 1D imaging (BEL1D) accuracy
through iterative prior resampling
Abstract
Bayesian Evidential Learning 1D Imaging (BEL1D) has recently been
introduced as a new, computationally efficient tool for the
interpretation of 1D geophysical datasets in a Bayesian framework.
Applications have already been demonstrated for Surface Nuclear Magnetic
Resonance (SNMR) data and surface wave dispersion curves. The case of
SNMR is particularly relevant in hydrogeophysics, as it directly sounds
the water content of the subsurface. BEL1D relies on building
statistical relationships, in a reduced-dimension space, between model
parameters and data simulated from prior model samples that replicate
the field experiment. In BEL1D, this relationship is derived through
Canonical Correlation Analysis (CCA). When large prior distributions are
used, CCA may produce numerous poorly correlated dimensions, especially
the higher-order ones. These poorly correlated dimensions result in
little reduction of the uncertainty on some parameters, even when the
experiment is expected to be sensitive to them. This phenomenon stems
from the aggregation of multiple parameters, possibly mixing sensitive
and insensitive ones, within the same canonical dimension.
However, arbitrarily reducing the extent of the prior would lead to
biased estimates. To overcome this impediment, we introduce an
iterative procedure that uses the posterior model space of the previous
iteration as the prior model space of the current iteration. This
approach frequently reveals higher correlations between the data and the
model parameters, while still starting from large, unbiased priors. It
enables BEL1D to produce better estimates of the posterior probability
density functions of the model parameters. Nonetheless, iterating on
BEL1D presents several challenges related to the presence of insensitive
parameters, which always limit the capacity to reduce the uncertainty on
the whole set of model parameters at once. On
noise-free synthetic datasets, this method leads to near-exact
estimation of the sensitive parameters after a few (two to three)
iterations. On noisy datasets, the resulting distributions retain some
uncertainty, arising directly from the presence of noise, but to a
lesser extent than with the non-iterative approach. The procedure
remains more computationally efficient than Markov chain Monte Carlo
(McMC).
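
As an illustration only, the sketch below shows one way the iterative prior resampling loop described above could be organized. It is a minimal sketch under strong simplifying assumptions: the forward solver `forward`, the prior bounds, the observed data `d_obs` and the linear-Gaussian posterior model in canonical space are all hypothetical placeholders and do not reproduce the actual BEL1D implementation (which also involves dimension reduction of the data and noise propagation).

```python
# Hypothetical sketch of iterative prior resampling around a CCA-based
# Bayesian Evidential Learning step. All names and parameter ranges are
# illustrative assumptions, not the published BEL1D code.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

def forward(model):
    """Placeholder 1D forward solver (stand-in for SNMR or dispersion-curve modelling)."""
    t = np.linspace(0.0, 1.0, 50)
    thickness, water_content = model
    return water_content * np.exp(-t / thickness)

def sample_prior(bounds, n):
    """Uniform sampling inside the (iteratively updated) prior bounds."""
    lo, hi = bounds
    return rng.uniform(lo, hi, size=(n, len(lo)))

def bel_iteration(bounds, d_obs, n_samples=2000, n_post=2000):
    """One BEL step: sample the prior, learn a CCA relation, sample posterior parameters."""
    models = sample_prior(bounds, n_samples)
    data = np.array([forward(m) for m in models])

    m_mean, d_mean = models.mean(0), data.mean(0)
    cca = CCA(n_components=models.shape[1], scale=False)
    cca.fit(data - d_mean, models - m_mean)
    d_c, m_c = cca.transform(data - d_mean, models - m_mean)
    d_obs_c = cca.transform((d_obs - d_mean)[None, :])[0]

    # Simplified posterior: linear-Gaussian regression in each canonical dimension.
    m_post_c = np.empty((n_post, m_c.shape[1]))
    for i in range(m_c.shape[1]):
        slope, intercept = np.polyfit(d_c[:, i], m_c[:, i], 1)
        resid = m_c[:, i] - (slope * d_c[:, i] + intercept)
        mu = slope * d_obs_c[i] + intercept
        m_post_c[:, i] = rng.normal(mu, resid.std(), size=n_post)

    # Back-project canonical scores to parameter space (assumes a square rotation matrix).
    return m_post_c @ np.linalg.pinv(cca.y_rotations_) + m_mean

# Iterative prior resampling: the posterior of one iteration defines the
# prior bounds of the next, while the initial prior stays large and unbiased.
lo0, hi0 = np.array([0.05, 0.0]), np.array([1.0, 0.6])   # hypothetical parameter ranges
bounds = (lo0, hi0)
d_obs = forward(np.array([0.3, 0.35]))                    # noise-free synthetic data
for it in range(3):
    posterior = bel_iteration(bounds, d_obs)
    bounds = (np.maximum(posterior.min(0), lo0), np.minimum(posterior.max(0), hi0))
    print(f"iteration {it + 1}: parameter ranges {bounds[0]} to {bounds[1]}")
```

In this sketch, shrinking the prior to the range of the previous posterior samples before refitting the CCA is the mechanism by which the correlation between canonical dimensions can improve from one iteration to the next, mirroring the behaviour described in the abstract.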