Latent representation learning in physics-informed neural networks for
full waveform inversion
Abstract
Full waveform inversion (FWI), a state-of-the-art seismic inversion
algorithm, comprises an iterative data-fitting process to recover
high-resolution subsurface properties of the Earth (e.g., velocity). At the heart of
this process lies the numerical wave equation solver, which necessitates
discretization. To perform efficient discretization-free FWI for
large-scale problems, we introduce physics-informed neural networks
(PINNs) as surrogates for conventional numerical solvers. Unfortunately,
the original PINN implementation must be retrained for each new velocity
model used in the forward simulation. To make PINNs more suitable for
such scenarios, we introduce latent representation learning into the
PINN framework. Specifically, we augment the network input with encoded
velocity vectors, i.e., latent representations of the velocity models
obtained with an autoencoder. With this additional information, and
unlike the original implementation, the trained PINN model can instantly
produce wavefield solutions for different velocity models without
retraining. To further improve FWI efficiency, instead of computing the
FWI updates in the original velocity domain, we update the model in its
latent representation space. Specifically, only the latent representation
vectors are updated, while the weights of the autoencoder and the PINN
model are kept fixed during FWI. Through a series of numerical tests,
the proposed framework shows a significant increase in accuracy and
computational efficiency compared to conventional FWI. The improved performance of
our framework can be attributed to implicit regularization introduced by
the velocity encoding and physics-informed training procedures. The
proposed framework presents a significant step forward in utilizing
discretization-free wave equation solvers for more efficient and
accurate FWI.
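
To make the conditioning mechanism concrete, the following is a minimal
PyTorch sketch of a latent-conditioned PINN: an autoencoder's encoder
compresses a velocity model into a latent vector, which is concatenated
with the spatiotemporal input coordinates before the wavefield network.
The class names, layer sizes, and the (x, z, t) input convention are
illustrative assumptions rather than the exact architecture used in the
paper, and the physics-informed (wave-equation residual) loss is omitted
for brevity.

```python
# Hedged sketch: latent-conditioned PINN input. Names and sizes are
# assumptions for illustration, not the authors' exact architecture.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a flattened, gridded velocity model to a latent vector."""
    def __init__(self, n_grid: int, latent_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_grid, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        return self.net(v)

class LatentPINN(nn.Module):
    """PINN whose input is (x, z, t) coordinates concatenated with the
    latent code of a velocity model."""
    def __init__(self, latent_dim: int = 8, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),  # scalar wavefield value
        )

    def forward(self, coords: torch.Tensor, z_v: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) query points; z_v: (latent_dim,) shared code.
        z = z_v.expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, z], dim=-1))

# Usage (shapes are hypothetical): encode once, then evaluate anywhere.
enc = Encoder(n_grid=100 * 100)
pinn = LatentPINN()
z_v = enc(torch.rand(100 * 100))      # latent code of one velocity model
u = pinn(torch.rand(1024, 3), z_v)    # wavefield at 1024 query points
```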
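
The latent-space FWI update can likewise be sketched as a small
optimization loop in which only the latent code receives gradients,
while the network weights stay frozen, matching the description above.
The mean-squared data misfit at receiver locations, the optimizer, and
the iteration settings here are placeholder assumptions; the paper may
compute the updates differently.

```python
# Hedged sketch: FWI in latent space. Only z_v is optimized; the PINN
# (and, elsewhere, the autoencoder) weights are kept fixed.
def latent_fwi(pinn: LatentPINN, z0: torch.Tensor,
               rec_coords: torch.Tensor, d_obs: torch.Tensor,
               n_iter: int = 200, lr: float = 1e-2) -> torch.Tensor:
    pinn.requires_grad_(False)                    # freeze network weights
    z_v = z0.detach().clone().requires_grad_(True)  # latent code updates
    opt = torch.optim.Adam([z_v], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        d_pred = pinn(rec_coords, z_v)            # simulated data at receivers
        loss = torch.mean((d_pred - d_obs) ** 2)  # placeholder data misfit
        loss.backward()                           # gradient w.r.t. z_v only
        opt.step()
    return z_v.detach()
```

After convergence, the updated latent vector would be decoded back to a
velocity model with the autoencoder's (frozen) decoder.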