siVAE
✨ Generative Models · Interpretable Deep Generative Model
RNA
VAE with structured latent space for interpretable single-cell modeling
Publications
siVAE: interpretable deep generative models for single-cell transcriptomes
Complexity
★★☆ moderate
Interpretability
★★★ high
Architecture
Structured VAE
Latent Dim
10
Used in LAIOR Framework
Structured Latent Space
siVAE uses structured priors and regularization to learn interpretable latent representations while retaining full VAE reconstruction
Main Idea
Learn interpretable factors by structuring the latent space with domain knowledge and reconstructing expression
Key Components
Encoder
Maps expression to structured latent space
Structured Prior
Incorporates biological structure into latent space
Interpretable Factors
Each latent dimension corresponds to an interpretable axis of biological variation
Negative Binomial Decoder
Models the overdispersed count distribution of scRNA-seq data for reconstruction
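The encoder and NB decoder above can be sketched as a minimal PyTorch module pair. This is an illustrative sketch, not the published siVAE code: the class names, hidden size, and the softplus link for the NB mean are assumptions; the structured prior is omitted here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps expression vectors to the mean/log-variance of q(z|x) (hypothetical sketch)."""
    def __init__(self, n_genes, latent_dim=10, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_genes, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class NBDecoder(nn.Module):
    """Decodes latent factors to a positive NB mean; theta is a per-gene inverse-dispersion."""
    def __init__(self, n_genes, latent_dim=10, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_genes))
        self.log_theta = nn.Parameter(torch.zeros(n_genes))  # learned dispersion per gene

    def forward(self, z):
        mu = F.softplus(self.net(z))  # NB mean must be positive
        return mu, self.log_theta.exp()
```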
Mathematical Formulation
p(x | z) = NB(μ(z), θ), with a structured prior on z; x̂ = Decoder(z)
Loss Functions
ELBO
Reconstruction + KL divergence
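The two ELBO terms can be written out directly: the NB negative log-likelihood for the reconstruction term and the closed-form KL against a standard-normal prior. A minimal sketch, assuming the mean/inverse-dispersion NB parameterization from the formulation above (siVAE's actual structured prior would replace the standard-normal KL):

```python
import torch

def nb_nll(x, mu, theta, eps=1e-8):
    # Negative binomial negative log-likelihood, parameterized by
    # mean mu and inverse-dispersion theta; summed over genes.
    log_theta_mu = torch.log(theta + mu + eps)
    ll = (theta * (torch.log(theta + eps) - log_theta_mu)
          + x * (torch.log(mu + eps) - log_theta_mu)
          + torch.lgamma(x + theta)
          - torch.lgamma(theta)
          - torch.lgamma(x + 1.0))
    return -ll.sum(dim=-1)

def kl_standard_normal(mu, logvar):
    # KL(q(z|x) || N(0, I)) in closed form, summed over latent dims.
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1)
```

The negative ELBO for a batch is then `nb_nll(x, mu, theta) + kl_standard_normal(z_mu, z_logvar)`, averaged over cells.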
Data Flow
Expression → Encoder → Structured Latent → NB Decoder → Reconstructed Expression
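The data flow above, including the reparameterization step that the arrow diagram hides, can be traced with stand-in linear maps. All shapes and weights here are toy assumptions chosen only to show the tensor flow:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_cells, n_genes, latent_dim = 8, 200, 10

# toy count matrix standing in for single-cell expression
x = torch.randint(0, 50, (n_cells, n_genes)).float()

# encoder stand-in: linear maps to the parameters of q(z|x)
W_mu = torch.randn(n_genes, latent_dim) * 0.01
W_lv = torch.randn(n_genes, latent_dim) * 0.01
z_mu, z_logvar = x @ W_mu, x @ W_lv

# reparameterization: sample z while keeping gradients w.r.t. z_mu, z_logvar
z = z_mu + (0.5 * z_logvar).exp() * torch.randn_like(z_mu)

# NB-decoder stand-in: map z back to a positive NB mean per gene
W_dec = torch.randn(latent_dim, n_genes) * 0.01
nb_mean = F.softplus(z @ W_dec)
```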
Architecture Details
Architecture Type
VAE with Structured Prior
Input/Output Types
single-cell expression → reconstructed expression
Key Layers
Encoder · Structured Prior · NB Decoder
Frameworks
PyTorch
Tags
vae · interpretable · generative · rna