β-VAE

🔍 Disentanglement

Beta Variational Autoencoder

RNA

VAE with weighted KL divergence for learning disentangled factors

Publications

β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

Higgins et al., 2017
Complexity
moderate
Interpretability
high
Architecture
β-VAE
Latent Dim
10

Weighted Disentanglement

β-VAE uses a hyperparameter β to weight the KL term in the VAE objective, encouraging each latent dimension to encode an independent factor of variation while preserving reconstruction quality

Main Idea

Learn disentangled representations by increasing pressure on the latent bottleneck while maintaining reconstruction quality

Key Components

Encoder

Maps data to latent factors

Weighted KL Term

β > 1 increases disentanglement pressure

Information Bottleneck

Forces compact, factorized representations

Decoder

Reconstructs data from disentangled factors

Mathematical Formulation

L = E_{q(z|x)}[log p(x|z)] − β · KL(q(z|x) ‖ p(z)); x̂ = Decoder(z)
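The objective above can be sketched as a loss function. This is a minimal illustration, assuming a Gaussian likelihood (so the reconstruction term reduces to MSE up to a constant) and a standard-normal prior, which gives the KL term its usual closed form; function and argument names are illustrative, not from the original paper's code.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_hat, mu, log_var, beta=4.0):
    """β-VAE objective: reconstruction + β-weighted KL divergence.

    Minimizing this loss corresponds to maximizing
    E_q[log p(x|z)] − β·KL(q(z|x) ‖ p(z)); β > 1 increases
    disentanglement pressure (β = 1 recovers the standard VAE).
    """
    # Reconstruction term: Gaussian likelihood -> MSE up to a constant
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # Closed-form KL between q(z|x) = N(mu, sigma^2) and p(z) = N(0, I)
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl
```

Note that when q(z|x) exactly matches the prior (mu = 0, log_var = 0), the KL term vanishes, so all remaining loss comes from reconstruction error.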

Loss Functions

β-VAE Loss
Reconstruction + β*KL (β > 1 for disentanglement)

Data Flow

Data → Encoder → Disentangled Factors → Decoder → Reconstruction
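The data flow above can be sketched as a minimal PyTorch module. The layer widths here are placeholder assumptions (only the latent dimension of 10 comes from the card); the reparameterization trick is standard VAE machinery, and β itself only enters through the loss, not the architecture.

```python
import torch
import torch.nn as nn

class BetaVAE(nn.Module):
    """Minimal β-VAE sketch: encoder -> latent factors -> decoder.

    latent_dim=10 follows the card; input/hidden sizes are illustrative.
    """

    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_log_var = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.fc_mu(h), self.fc_log_var(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.decoder(z), mu, log_var
```

The forward pass returns mu and log_var alongside the reconstruction so the KL term of the loss can be computed outside the module.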

Architecture Details

Architecture Type

Weighted Variational Autoencoder (VAE Architecture)

Input/Output Types

single-cell → reconstruction

Key Layers

Encoder → KL Weighting → Decoder

Frameworks

PyTorch

Tags

vae, disentanglement, factors, generative, rna