
Evaluation Metrics

SCPortal documents 24 evaluation metrics across 4 categories, used for benchmarking 23 single-cell analysis models on 66 datasets.

23 Models · 66 Datasets · 24 Metrics · 6 Categories

Full LAIOR Dashboard

For interactive charts, detailed comparisons, and full filtering capabilities, visit the LAIOR Dashboard. Note: this portal documents the evaluation metrics used; full benchmark results are available on the LAIOR Dashboard.

Open Full Dashboard

Evaluation Metrics

Clustering

  • NMI
  • ARI
  • ASW
  • Calinski-Harabasz
  • Davies-Bouldin
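
As a sketch of how these five clustering metrics can be computed — using scikit-learn on hypothetical toy data; the portal does not specify its own implementation, so the data and KMeans step here are illustrative assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (
    adjusted_rand_score,
    calinski_harabasz_score,
    davies_bouldin_score,
    normalized_mutual_info_score,
    silhouette_score,
)

# Toy embedding standing in for a model's latent space (hypothetical data).
X, labels_true = make_blobs(n_samples=300, centers=4, random_state=0)
labels_pred = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Label-agreement metrics: compare predicted clusters to ground-truth labels.
nmi = normalized_mutual_info_score(labels_true, labels_pred)
ari = adjusted_rand_score(labels_true, labels_pred)

# Geometry metrics: score compactness/separation from the data alone.
asw = silhouette_score(X, labels_pred)        # ASW, in [-1, 1]; higher is better
ch = calinski_harabasz_score(X, labels_pred)  # higher is better
db = davies_bouldin_score(X, labels_pred)     # lower is better
```

NMI and ARI require ground-truth labels (e.g. annotated cell types), while ASW, Calinski-Harabasz, and Davies-Bouldin only need the embedding and predicted clusters.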

Dim. Reduction

  • Q_local
  • Q_global
  • Trustworthiness
  • Continuity
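
Trustworthiness and continuity can be sketched with scikit-learn; continuity is the symmetric counterpart of trustworthiness, obtained by swapping the roles of the original and embedded spaces. (Q_local and Q_global are derived from the co-ranking matrix and are not shown here. The data and PCA embedding below are illustrative assumptions.)

```python
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

# Hypothetical high-dimensional data reduced to 2-D.
X, _ = make_blobs(n_samples=200, centers=3, n_features=10, random_state=0)
X_emb = PCA(n_components=2).fit_transform(X)

# Trustworthiness: are neighbors in the embedding also neighbors in the
# original space? Scores in [0, 1]; higher is better.
trust = trustworthiness(X, X_emb, n_neighbors=10)

# Continuity: are neighbors in the original space preserved in the
# embedding? Computed by reversing the two arguments.
cont = trustworthiness(X_emb, X, n_neighbors=10)
```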

Latent Space

  • Manifold Dim.
  • Intrinsic Dim.
  • Spectral Decay
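
One common way to estimate these latent-space quantities is from the eigenvalue spectrum of the latent covariance matrix — a minimal NumPy sketch, assuming a participation-ratio style dimension estimate and a power-law fit for spectral decay (the portal does not state which estimators it uses):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical latent codes: 500 cells in 32 latent dims, with variance
# concentrated along the first few axes.
Z = rng.normal(size=(500, 32)) * np.logspace(0, -2, 32)

# Eigenvalues of the covariance matrix, sorted in decreasing order.
eigvals = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]
eigvals = np.clip(eigvals, 0, None)

# Participation ratio: a soft count of effective (intrinsic) dimensions.
pr = eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Spectral decay: fit a power law lambda_k ~ k^(-alpha) on a log-log scale;
# larger alpha means the spectrum decays faster.
k = np.arange(1, len(eigvals) + 1)
mask = eigvals > 1e-12
alpha = -np.polyfit(np.log(k[mask]), np.log(eigvals[mask]), 1)[0]
```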

Runtime

  • Training Time
  • Memory Usage
  • Inference Speed
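
These runtime metrics can be measured with the Python standard library — a sketch using `time.perf_counter` for wall-clock time and `tracemalloc` for peak memory, with an SVD standing in for a model's training step (hypothetical; a real benchmark would call the model's own fit/predict API):

```python
import time
import tracemalloc

import numpy as np

X = np.random.default_rng(0).normal(size=(500, 64))

# Training time and memory usage: time the fit step and record peak
# allocations observed while it runs.
tracemalloc.start()
t0 = time.perf_counter()
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # stand-in "training"
train_time = time.perf_counter() - t0
_, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Inference speed: throughput of the trained model, e.g. cells per second
# when projecting data into the learned space.
t0 = time.perf_counter()
Z = X @ Vt.T
infer_time = time.perf_counter() - t0
cells_per_sec = X.shape[0] / max(infer_time, 1e-9)
```

Note that `tracemalloc` only tracks allocations made through Python's allocator; GPU memory or allocations made by non-cooperating C extensions would need a different tool.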

Model Categories

Predictive Models

2 models

Models focused on prediction tasks

scGCC, CLEAR

Generative Models

10 models

VAE-based generative approaches

scVI, scGNN, SCALEX, CellBLAST, scDAC, ...

scATAC-Specific

2 models

Chromatin accessibility focused

PeakVI, PoissonVI

Trajectory

1 model

Trajectory inference models

scTour

Geometric

4 models

Hyperbolic and geometric approaches

GMVAE-PGM, Poincaré, Hyperbolic-Wrapped, Learnable-PGM

Disentanglement

4 models

Disentangled representation learning

β-VAE, InfoVAE, TCVAE, DIPVAE

Benchmark Repositories

LAIOR

Benchmark dataset repository with 24 evaluation metrics across 66 datasets (48 scRNA-seq, 18 scATAC-seq) and 23 models.

View LAIOR Dashboard

iAODE

Benchmark dataset repository with 617 datasets from 113 studies (434 scATAC-seq, 183 scRNA-seq).

View iAODE Browser

Related Destinations

From this metric documentation, continue to the public benchmark and dataset resources above — the LAIOR Dashboard and the iAODE Browser — which supply the results and datasets these metrics are applied to.