Evaluation Metrics
SCPortal documents 24 evaluation metrics across 4 categories, used for benchmarking 23 single-cell analysis models on 66 datasets.
Full LAIOR Dashboard
This portal documents the evaluation metrics used; for interactive charts, detailed comparisons, full filtering, and complete benchmark results, visit the LAIOR Dashboard.
Clustering
- NMI
- ARI
- ASW
- Calinski-Harabasz
- Davies-Bouldin
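All five clustering metrics above have implementations in scikit-learn. A minimal sketch on synthetic data, where toy blobs stand in for a model's cell embedding and KMeans stands in for the clustering step (neither is prescribed by the portal):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (
    normalized_mutual_info_score,
    adjusted_rand_score,
    silhouette_score,
    calinski_harabasz_score,
    davies_bouldin_score,
)

# Toy stand-in for a latent cell-embedding matrix with known cell-type labels.
X, labels_true = make_blobs(n_samples=300, centers=4, random_state=0)
labels_pred = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# NMI and ARI compare predicted clusters against ground-truth labels;
# the remaining three score cluster geometry without labels.
print("NMI:", normalized_mutual_info_score(labels_true, labels_pred))
print("ARI:", adjusted_rand_score(labels_true, labels_pred))
print("ASW:", silhouette_score(X, labels_pred))  # average silhouette width
print("Calinski-Harabasz:", calinski_harabasz_score(X, labels_pred))
print("Davies-Bouldin:", davies_bouldin_score(X, labels_pred))
```

Note the sign conventions: higher is better for all of these except Davies-Bouldin, where lower is better.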
Dim. Reduction
- Q_local
- Q_global
- Trustworthiness
- Continuity
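Trustworthiness is available directly in scikit-learn; continuity can be obtained from the same function by swapping the two spaces, a commonly used identity (Q_local and Q_global come from the co-ranking matrix and are not in scikit-learn, so they are omitted here). A sketch, with PCA as a placeholder reducer:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

# Toy high-dimensional data and a 2-D embedding (PCA is illustrative only).
X, _ = make_swiss_roll(n_samples=500, random_state=0)
X_emb = PCA(n_components=2, random_state=0).fit_transform(X)

k = 10
# Trustworthiness: penalizes embedding neighbors that are not true neighbors.
T = trustworthiness(X, X_emb, n_neighbors=k)
# Continuity: penalizes true neighbors lost in the embedding; computed here
# as trustworthiness with the roles of the two spaces swapped.
C = trustworthiness(X_emb, X, n_neighbors=k)
print(f"Trustworthiness (k={k}): {T:.3f}")
print(f"Continuity (k={k}): {C:.3f}")
```

Both scores lie in [0, 1] and depend on the neighborhood size k, so benchmarks should fix k when comparing models.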
Latent Space
- Manifold Dim.
- Intrinsic Dim.
- Spectral Decay
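The latent-space metrics above can all be probed from the eigenvalue spectrum of the latent coordinates. A crude sketch using PCA explained variance; the 95% cutoff is an illustrative choice, not the portal's definition of intrinsic dimension:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic latent matrix: 5 informative directions embedded in 50 dimensions,
# plus a small amount of isotropic noise.
Z = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 50))
Z += 0.01 * rng.normal(size=Z.shape)

pca = PCA().fit(Z)
evr = pca.explained_variance_ratio_

# Spectral decay: how quickly the variance spectrum falls off.
# A simple intrinsic-dimension proxy: components needed for 95% variance.
intrinsic_dim = int(np.searchsorted(np.cumsum(evr), 0.95)) + 1
print("Estimated intrinsic dimension:", intrinsic_dim)
```

Sharper spectral decay (variance concentrated in few components) indicates a lower effective dimensionality of the learned manifold.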
Runtime
- Training Time
- Memory Usage
- Inference Speed
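Runtime metrics like these are typically measured with wall-clock timers and a memory tracer. A minimal sketch using the Python standard library, where the hypothetical `fit_model` stands in for whatever model is being benchmarked:

```python
import time
import tracemalloc
import numpy as np

def fit_model(X):
    # Stand-in for model training; a real benchmark would call model.fit(X).
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    return u[:, :10]

X = np.random.default_rng(0).normal(size=(2000, 200))

# Training time and peak memory during training.
tracemalloc.start()
t0 = time.perf_counter()
emb = fit_model(X)
train_time = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Inference speed: time a stand-in inference step.
t0 = time.perf_counter()
scores = X.T @ emb
infer_time = time.perf_counter() - t0

print(f"Training time:  {train_time:.4f} s")
print(f"Peak memory:    {peak / 1e6:.1f} MB")
print(f"Inference time: {infer_time:.4f} s")
```

In practice, timing runs are repeated several times and the minimum or median reported, since single wall-clock measurements are noisy.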
Model Categories
Predictive Models
2 models
Models focused on prediction tasks
Generative Models
10 models
VAE-based generative approaches
scATAC-Specific
2 models
Focused on chromatin accessibility
Trajectory
1 model
Trajectory inference models
Geometric
4 models
Hyperbolic and geometric approaches
Disentanglement
4 models
Disentangled representation learning
Benchmark Repositories
LAIOR
Benchmark dataset repository with 24 evaluation metrics across 66 datasets (48 scRNA-seq, 18 scATAC-seq) and 23 models.
View LAIOR Dashboard
iAODE
Benchmark dataset repository with 617 datasets from 113 studies (434 scATAC-seq, 183 scRNA-seq).
View iAODE Browser
Related Destinations
From this metric documentation, continue to the public benchmark and dataset resources that provide the surrounding context.