Disentangling latent representations is crucial for improving the interpretability and robustness of AI models, especially for complex scientific datasets where domain scientists know some of the generative factors while many others remain unknown. Our method, the Auxiliary information guided Variational AutoEncoder (Aux-VAE), disentangles the latent space with respect to the known factors while allowing the remaining latent dimensions to jointly capture the unknown factors. This approach maintains data reconstruction accuracy while enhancing the interpretability of the latent space, providing insight into the underlying mechanisms of data generation.
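To make the idea concrete, the sketch below shows one way auxiliary guidance of this kind can be realized: a standard VAE whose first few latent dimensions are encouraged, through an extra regression term, to track the known factors, while the remaining dimensions are left unconstrained. This is a minimal illustrative sketch, not the Aux-VAE objective itself; the class name, the split into `n_known`/`n_free` dimensions, and the loss weights `beta` and `gamma` are all assumptions made for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AuxGuidedVAE(nn.Module):
    """Illustrative VAE whose first `n_known` latent dimensions are tied to
    known generative factors via an auxiliary regression loss, while the
    remaining `n_free` dimensions are left to capture unknown factors."""

    def __init__(self, input_dim=784, hidden_dim=256, n_known=3, n_free=7):
        super().__init__()
        self.n_known = n_known
        latent_dim = n_known + n_free
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

    def loss(self, x, known_factors, beta=1.0, gamma=10.0):
        recon, mu, logvar = self(x)
        recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        # Auxiliary term (assumed form): align the first n_known latent means
        # with the factor values supplied by domain scientists.
        aux = F.mse_loss(mu[:, : self.n_known], known_factors)
        return recon_loss + beta * kl + gamma * aux
```

In this sketch the auxiliary term only constrains the supervised dimensions, so the unsupervised ones remain free to absorb whatever variation the known factors do not explain.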