β-VAE
🔍 Disentanglement
Beta Variational Autoencoder
RNA
VAE with weighted KL divergence for learning disentangled factors
Publications
β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Complexity
★★☆ moderate
Interpretability
★★★ high
Architecture
β-VAE
Latent Dim
10
Weighted Disentanglement
β-VAE weights the KL term with a hyperparameter β; setting β > 1 pressures each latent dimension to encode an independent factor of variation while retaining full VAE reconstruction
Main Idea
Learn disentangled representations by increasing pressure on latent bottleneck while maintaining reconstruction
Key Components
Encoder
Maps data to latent factors
Weighted KL Term
β > 1 increases disentanglement pressure
Information Bottleneck
Forces compact, factorized representations
Decoder
Reconstructs data from disentangled factors
Mathematical Formulation
L = E_{q(z|x)}[log p(x|z)] − β · KL(q(z|x) ‖ p(z));  x̂ = Decoder(z)
Loss Functions
β-VAE Loss
Reconstruction + β*KL (β > 1 for disentanglement)
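The loss above can be sketched in PyTorch (the card's listed framework). This is a minimal illustration, not the model's actual implementation: it assumes a Gaussian posterior q(z|x) = N(μ, diag(exp(log_var))), a standard normal prior p(z) = N(0, I), mean-squared error as the reconstruction term, and an illustrative default of β = 4.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_hat, mu, log_var, beta=4.0):
    """Reconstruction + beta-weighted KL (beta > 1 for disentanglement).

    Assumes Gaussian q(z|x) and standard normal prior; the default
    beta=4.0 is illustrative, not prescribed by the card.
    """
    # E_q[log p(x|z)] approximated by per-sample reconstruction error
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    # Closed-form KL(q(z|x) || N(0, I)), averaged over the batch
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp()) / x.size(0)
    return recon + beta * kl
```

With β = 1 this reduces to the standard VAE objective; raising β trades reconstruction fidelity for a more factorized latent code.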
Data Flow
Data → Encoder → Disentangled Factors → Decoder → Reconstruction
Architecture Details
Architecture Type
Weighted Variational Autoencoder (VAE Architecture)
Input/Output Types
single-cell → reconstruction
Key Layers
Encoder → KL Weighting → Decoder
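The encoder → KL-weighting → decoder stack can be sketched as a small PyTorch module. The latent dimension of 10 comes from the card; the input width (e.g. a gene-count vector) and the hidden width of 256 are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class BetaVAE(nn.Module):
    """Minimal sketch of the card's layer stack; sizes other than
    latent_dim=10 are illustrative assumptions."""

    def __init__(self, input_dim=2000, hidden_dim=256, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)       # q(z|x) mean
        self.fc_log_var = nn.Linear(hidden_dim, latent_dim)  # q(z|x) log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.fc_mu(h), self.fc_log_var(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.decoder(z), mu, log_var
```

The KL weighting itself lives in the loss, not the architecture: the forward pass returns (x̂, μ, log σ²), and the β-weighted objective is computed from those outputs during training.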
Frameworks
PyTorch
Tags
vae, disentanglement, factors, generative, rna