
Enhancing Interpretability in Generative Modeling: Disentangled Latent Spaces in Scientific Datasets
  • Arkaprabha Ganguli, Argonne National Laboratory (Corresponding Author: [email protected])
  • Nesar Ramachandra, Argonne National Laboratory
  • Julie Bessac, National Renewable Energy Laboratory
  • Emil Constantinescu, Argonne National Laboratory

Abstract

Disentangling latent representations is crucial for improving the interpretability and robustness of AI models, especially in complex scientific datasets where domain scientists often know some generative factors, but many others remain unknown. Our method, the Auxiliary information guided Variational AutoEncoder (Aux-VAE), focuses on disentangling the latent space with respect to the known factors while allowing the remaining latent dimensions to jointly capture the unknown factors. This approach not only maintains data reconstruction accuracy but also enhances the interpretability of latent spaces, providing valuable insights into the underlying mechanisms of data generation.
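
To illustrate the general idea described in the abstract, below is a minimal sketch, not the authors' implementation, of a variational autoencoder in which the first few latent dimensions are aligned with known generative factors through an auxiliary loss term, while the remaining dimensions are regularized only by the usual KL term and are left free to jointly capture unknown factors. All class names, layer sizes, and loss weights are hypothetical choices for illustration.

# Illustrative sketch only: a VAE whose first `n_known` latent dimensions are
# tied to known generative factors via an auxiliary regression loss; the
# remaining dimensions are free to model unknown factors. Hyperparameters
# and architecture are hypothetical, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxGuidedVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, n_known=4, hidden=256):
        super().__init__()
        self.n_known = n_known
        self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, x_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z from N(mu, sigma^2)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar, z

def loss_fn(model, x, known_factors, beta=1.0, gamma=10.0):
    x_hat, mu, logvar, z = model(x)
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Auxiliary term: align the first n_known latent dimensions with the
    # known factors; the remaining dimensions carry the unknown factors.
    aux = F.mse_loss(z[:, : model.n_known], known_factors, reduction="mean")
    return recon + beta * kl + gamma * aux

During training, known_factors would hold whatever factor values the domain scientist can supply for each sample; the remaining z_dim - n_known dimensions are constrained only by the KL term, which is how the known and unknown parts of the latent space are kept separate in this sketch.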
17 Oct 2024: Submitted to ESS Open Archive
18 Oct 2024: Published in ESS Open Archive