Methods and benchmark tracks
The comparison design is public in preview mode; result figures and full benchmark matrices remain gated until publication.
Datasets: 53 (27 cancer and 26 development cohorts)
Study tracks: 11 (7 comparative, 4 robustness and efficiency)
Metrics: 20 (clustering, DRE, and LSE families)
Primary latent dimension: 10, shared across learned methods
Question (what changes?): Each track isolates either a model component, a baseline family, or a robustness condition.
Control (what stays fixed?): Preprocessing, dataset pairing, latent dimension, and train-validation splits remain aligned when applicable.
Output (how is it judged?): The shared 20-metric suite evaluates clustering, projection fidelity, and latent-space structure.
Tracks: benchmark families
Component ablation
Isolates the contributions of information bottleneck, Lorentz geometry, and graph attention.
Deep-learning benchmark
Compares GAHIB with published single-cell deep representation methods under the same pipeline.
Classical dimensionality reduction
Contrasts learned graph-hyperbolic representations with standard linear and nonlinear decompositions.
Geometric VAE benchmark
Tests whether hyperbolic priors alone match the graph-attention bottleneck formulation.
Disentanglement regularization
Compares structural geometry against posterior regularizers used for disentangled VAEs.
Encoder architecture comparison
Fixes the GAHIB objective and varies only the encoder family.
Graph convolution operator sweep
Compares message-passing operators within the GAHIB graph encoder setting.
Robustness and efficiency studies
Measures sensitivity to latent dimension, random seed, hyperparameters, and computational cost.
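The robustness track varies seeds and latent dimensions while the rest of the protocol stays fixed. A minimal sweep harness might look like the following sketch, where `train_and_score` is a hypothetical stand-in for a full training run (the real benchmark would train a model and return a validation metric):

```python
import random
import statistics

def train_and_score(latent_dim, seed):
    """Hypothetical stand-in for one training run: returns a seeded
    dummy score instead of actually fitting a model."""
    rng = random.Random(seed)
    return 0.8 - 0.01 * abs(latent_dim - 10) + rng.uniform(-0.02, 0.02)

def robustness_sweep(latent_dims=(5, 10, 20), seeds=(0, 1, 2)):
    """Mean and standard deviation of the score per latent dimension,
    aggregated over random seeds."""
    report = {}
    for dim in latent_dims:
        scores = [train_and_score(dim, s) for s in seeds]
        report[dim] = (statistics.mean(scores), statistics.stdev(scores))
    return report
```

Reporting mean and spread per condition, rather than a single run, is what makes seed sensitivity visible in the first place.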
Shared protocol
Learned methods share the same preprocessing pipeline, train-validation split, 200-epoch training budget, early-stopping patience of 30 epochs, and a latent dimension of 10 whenever the method exposes a latent-dimensionality setting. Classical methods use their standard decomposition defaults on the same input matrix.
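The shared budget above can be sketched as a config plus a generic early-stopping loop. The constants come from the protocol; the loop itself is illustrative, not the benchmark's actual training code:

```python
# Shared protocol constants taken from the description above.
PROTOCOL = {
    "max_epochs": 200,   # fixed training budget
    "patience": 30,      # epochs without validation improvement before stopping
    "latent_dim": 10,    # shared latent dimensionality for learned methods
}

def early_stopping_epochs(val_losses, max_epochs=200, patience=30):
    """Simulate the budget on a sequence of validation losses.

    Returns (best_epoch, epochs_run): the epoch with the lowest loss
    and how many epochs actually executed before stopping.
    """
    best_loss = float("inf")
    best_epoch = 0
    epochs_run = 0
    for epoch, loss in enumerate(val_losses[:max_epochs]):
        epochs_run = epoch + 1
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs
    return best_epoch, epochs_run
```

Fixing both the epoch cap and the patience across all learned methods keeps compute budgets comparable, so efficiency differences reflect the methods rather than their stopping rules.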