
DIPVAE

🔍 Disentanglement

Disentangled Inferred Prior VAE

RNA

VAE that learns a factorial prior to encourage disentanglement

Publications

Variational Inference of Disentangled Latent Concepts from Unlabeled Observations

Kumar et al., 2018
Complexity
★★★
complex
Interpretability
★★★
high
Architecture
DIPVAE
Latent Dim
10

Factorial Prior Learning

DIPVAE encourages disentanglement by regularizing the covariance of the aggregate posterior to be diagonal (factorized), while retaining the full VAE reconstruction objective.

Main Idea

Encourage a factorial aggregate posterior by matching its covariance to the identity matrix while still reconstructing the input.

Key Components

Encoder

Maps to factorial latent space

Covariance Regularization

Regularizes Cov[q(z)] to be diagonal

Factorial Prior

Encourages independence across dimensions

Type I/II Variants

Different regularization strategies

Decoder

Reconstructs from factorial latents
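The Type I/II distinction above concerns which covariance is regularized: Type I uses only the covariance of the encoder means, while Type II regularizes the full covariance of the aggregate posterior, which adds the mean per-sample diagonal variance. A minimal NumPy sketch of the two variants (function name and API are illustrative, not from the paper's code):

```python
import numpy as np

def aggregate_cov(mu, logvar, variant="ii"):
    """Covariance of the aggregate posterior q(z) under a diagonal-Gaussian encoder.

    mu, logvar: (batch, latent_dim) arrays of per-sample posterior
    means and log-variances.
    Type I  ("i"):  regularize Cov[mu] only.
    Type II ("ii"): regularize Cov[q(z)] = Cov[mu] + E[diag(sigma^2)].
    """
    mu_c = mu - mu.mean(axis=0, keepdims=True)
    cov_mu = mu_c.T @ mu_c / (mu.shape[0] - 1)  # sample covariance of the means
    if variant == "i":
        return cov_mu
    # add the expected per-sample (diagonal) posterior covariance
    return cov_mu + np.diag(np.exp(logvar).mean(axis=0))
```

With unit per-sample variances (logvar = 0), the Type II covariance is exactly the Type I covariance plus the identity, which is why the two variants weight the diagonal penalty differently in practice.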

Mathematical Formulation

L = −ELBO + λ·||Cov[q(z)] − I||_F²; X̂ = Decoder(z)
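The covariance penalty in the formulation above can be sketched in a few lines of NumPy. This follows the common practice of splitting the Frobenius-norm term into separately weighted off-diagonal and diagonal parts; the function name and the default weights are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dip_penalty(mu, lambda_od=10.0, lambda_d=5.0):
    """DIPVAE covariance penalty on a batch of posterior means.

    mu: (batch, latent_dim) array of encoder means for q(z|x).
    Pushes off-diagonal covariance entries toward 0 and diagonal
    entries toward 1, i.e. Cov toward the identity matrix.
    """
    mu_c = mu - mu.mean(axis=0, keepdims=True)
    cov = mu_c.T @ mu_c / (mu.shape[0] - 1)     # batch covariance estimate
    off_diag = cov - np.diag(np.diag(cov))      # zero out the diagonal
    diag = np.diag(cov)
    return lambda_od * np.sum(off_diag ** 2) + lambda_d * np.sum((diag - 1.0) ** 2)
```

Latents whose batch covariance is already close to the identity incur almost no penalty, while correlated latent dimensions are penalized heavily, which is what drives the dimensions toward independence.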

Loss Functions

DIPVAE Loss
Reconstruction + KL + λ·Covariance Penalty

Data Flow

Data → Encoder → Factorial Latents → Decoder → Reconstruction
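The data flow above can be traced with a toy linear stand-in for the learned PyTorch networks. The random weight matrices below are placeholders for trained encoder/decoder parameters, and the latent dimension of 10 matches the card's default:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, latent_dim = 50, 10  # toy expression dimensionality; latent dim from the card

# placeholder linear "networks" standing in for the trained encoder/decoder
W_enc = 0.1 * rng.standard_normal((n_genes, latent_dim))
W_dec = 0.1 * rng.standard_normal((latent_dim, n_genes))

x = rng.standard_normal((8, n_genes))          # batch of single-cell profiles
mu = x @ W_enc                                  # Encoder: means of q(z|x)
z = mu + 0.1 * rng.standard_normal(mu.shape)    # reparameterized latent sample
x_hat = z @ W_dec                               # Decoder: reconstruction
```

In the full model, the covariance penalty is computed on `mu` (or on the sampled `z`, depending on the Type I/II variant) and added to the reconstruction and KL terms.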

Architecture Details

Architecture Type

VAE with Factorial Prior Learning (VAE Architecture)

Input/Output Types

single-cell → reconstruction

Key Layers

Encoder → Covariance Regularizer → Decoder

Frameworks

PyTorch

Tags

vae · disentanglement · factorial-prior · generative · rna