Preview notice. This site includes method notes, datasets, metrics, and code; results and weights are not included.

Methods and benchmark tracks

GAHIB is compared with deep-learning baselines, classical dimensionality reduction, geometric VAE variants, disentanglement regularizers, encoder alternatives, graph convolution operators, and robustness studies.

The comparison design is public in preview mode; result figures and full benchmark matrices remain gated until publication.

Datasets: 53 (27 cancer and 26 development cohorts)

Study tracks: 11 (7 comparative, 4 robustness and efficiency)

Metrics: 20 (clustering, DRE, and LSE families)

Primary latent: 10D (shared dimensionality for learned methods)

Question (what changes?): each track isolates either a model component, a baseline family, or a robustness condition.

Control (what stays fixed?): preprocessing, dataset pairing, latent dimension, and train-validation splits remain aligned wherever applicable.

Output (how is it judged?): the shared 20-metric suite evaluates clustering, projection fidelity, and latent-space structure.
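To make the judging step concrete, here is a minimal sketch of how a clustering-family evaluation could be wired up. The three metrics shown (ARI, NMI, silhouette) are illustrative stand-ins, not the actual 20-metric suite, and the toy two-blob latent space is fabricated for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (
    adjusted_rand_score,
    normalized_mutual_info_score,
    silhouette_score,
)

def clustering_metrics(latent, labels, n_clusters):
    """Score a latent embedding against ground-truth labels via KMeans."""
    pred = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(latent)
    return {
        "ARI": adjusted_rand_score(labels, pred),
        "NMI": normalized_mutual_info_score(labels, pred),
        "silhouette": silhouette_score(latent, pred),
    }

rng = np.random.default_rng(0)
# Toy example: two well-separated Gaussian blobs in a 10D latent space.
latent = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(5, 1, (50, 10))])
labels = np.array([0] * 50 + [1] * 50)
scores = clustering_metrics(latent, labels, n_clusters=2)
```

In a real run, `latent` would be a method's embedding and `labels` the cohort's cell annotations; the same function is applied to every method so scores stay comparable.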

Tracks

Benchmark families

The inventory below documents method design only; it does not disclose unpublished results.

Component ablation

Isolates the contributions of information bottleneck, Lorentz geometry, and graph attention.

5 entries
Base VAE, VAE + IB, VAE + Hyp, VAE + IB + Hyp, GAHIB
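The five ablation entries amount to on/off switches over the three components. A sketch of that grid, with illustrative flag names (not the released configuration format):

```python
# Each entry toggles the information bottleneck (IB), Lorentz/hyperbolic
# geometry (Hyp), and graph attention; only the full model enables all three.
ABLATION_GRID = {
    "Base VAE":       dict(ib=False, hyperbolic=False, graph_attention=False),
    "VAE + IB":       dict(ib=True,  hyperbolic=False, graph_attention=False),
    "VAE + Hyp":      dict(ib=False, hyperbolic=True,  graph_attention=False),
    "VAE + IB + Hyp": dict(ib=True,  hyperbolic=True,  graph_attention=False),
    "GAHIB":          dict(ib=True,  hyperbolic=True,  graph_attention=True),
}
```

Laying the grid out this way makes each component's marginal contribution attributable to a single flag flip between adjacent entries.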

Deep-learning benchmark

Compares GAHIB with published single-cell deep representation methods under the same pipeline.

8 entries
GAHIB, scVI, CellBLAST, CLEAR, SCALEX, scDeepCluster, scDHMap, scGNN

Classical dimensionality reduction

Contrasts learned graph-hyperbolic representations with standard linear and nonlinear decompositions.

6 entries
GAHIB, PCA, ICA, NMF, Truncated SVD, Diffusion Maps
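A sketch of how the scikit-learn baselines could be run on one shared input matrix with the same 10-dimensional target as the learned methods. The random matrix is a stand-in for a preprocessed cells-by-genes matrix, and Diffusion Maps is omitted here because it is not in scikit-learn (it would come from a separate package such as pydiffmap).

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA, NMF, TruncatedSVD

rng = np.random.default_rng(0)
X = rng.random((200, 500))  # cells x genes; nonnegative, so NMF applies

# One shared input matrix, one shared target dimensionality of 10.
methods = {
    "PCA": PCA(n_components=10),
    "ICA": FastICA(n_components=10, random_state=0),
    "NMF": NMF(n_components=10, init="nndsvd", max_iter=500),
    "Truncated SVD": TruncatedSVD(n_components=10),
}
embeddings = {name: m.fit_transform(X) for name, m in methods.items()}
```

Every method then feeds the same 200 x 10 embedding into the shared metric suite, so differences in scores reflect the decomposition rather than the pipeline.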

Geometric VAE benchmark

Tests whether hyperbolic priors alone match the graph-attention bottleneck formulation.

6 entries
GAHIB, GM-VAE Euclidean, GM-VAE Poincaré, GM-VAE PGM, GM-VAE Learnable PGM, GM-VAE Hyperbolic-Wasserstein
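The hyperbolic variants place the latent space on a curved manifold rather than in Euclidean space. As one small example of the machinery involved, here is the standard exponential map at the origin of the Poincaré ball (curvature c = 1); this is a generic textbook formulation, not necessarily the parameterization used by the benchmarked GM-VAE implementations.

```python
import numpy as np

def expmap0(v, c=1.0):
    """Map a Euclidean tangent vector at the origin onto the Poincaré ball.

    The tanh factor guarantees the result lies strictly inside the unit
    ball, however large the input vector is.
    """
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    norm = np.maximum(norm, 1e-12)  # guard against division by zero
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

v = np.array([[3.0, 4.0]])  # tangent vector of Euclidean norm 5
z = expmap0(v)              # lands strictly inside the unit ball
```

Distances near the ball's boundary grow without bound, which is what lets hyperbolic latents embed tree-like (e.g. developmental) hierarchies with low distortion.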

Disentanglement regularization

Compares structural geometry against posterior regularizers used for disentangled VAEs.

6 entries
GAHIB, Base VAE, Beta-VAE, DIP-VAE, Beta-TC-VAE, InfoVAE
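These baselines differ mainly in how they regularize the posterior. As the simplest example of the family, Beta-VAE scales the KL term of the ELBO; a minimal sketch for a diagonal-Gaussian posterior against a standard-normal prior, with `recon_err` as a hypothetical precomputed reconstruction error:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(recon_err, mu, logvar, beta=4.0):
    """Beta-VAE objective: reconstruction error plus beta-weighted KL."""
    return recon_err + beta * gaussian_kl(mu, logvar)

# At the prior (mu = 0, logvar = 0) the KL term vanishes,
# so the loss reduces to the reconstruction error alone.
mu = np.zeros((1, 10))
logvar = np.zeros((1, 10))
loss = beta_vae_loss(recon_err=1.0, mu=mu, logvar=logvar)
```

DIP-VAE, Beta-TC-VAE, and InfoVAE replace or augment this KL penalty with covariance, total-correlation, and MMD terms respectively, while GAHIB relies on structural geometry instead of a posterior penalty.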

Encoder architecture comparison

Fixes the GAHIB objective and varies only the encoder family.

3 entries
MLP, Transformer, GAT

Graph convolution operator sweep

Compares message-passing operators within the GAHIB graph encoder setting.

6 entries
GCN, GraphSAGE, Chebyshev, TAG, GraphTransformer, GAT
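The swept operators differ only in how neighbor messages are aggregated. Here are NumPy stand-ins for GCN-style and GraphSAGE-style aggregation to illustrate the swappable piece; the benchmark itself would presumably use library implementations (e.g. PyTorch Geometric) rather than these sketches.

```python
import numpy as np

def gcn_aggregate(A, H):
    """Symmetrically normalized aggregation: D^-1/2 (A + I) D^-1/2 H."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # degrees including self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H

def sage_aggregate(A, H):
    """Mean of neighbor features, concatenated with the node's own features."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)  # avoid /0 on isolates
    neigh = (A @ H) / deg
    return np.concatenate([H, neigh], axis=1)

OPERATORS = {"GCN": gcn_aggregate, "GraphSAGE": sage_aggregate}

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
H = np.eye(3)  # one-hot node features
out = {name: op(A, H) for name, op in OPERATORS.items()}
```

Because only the aggregation function changes, any score difference across the sweep is attributable to the operator rather than the surrounding encoder or objective.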

Robustness and efficiency studies

Measures sensitivity to latent dimension, random seed, hyperparameters, and computational cost.

4 entries
Latent dimension ablation, Seed robustness, Hyperparameter sensitivity, Computational cost

Shared protocol

Learned methods share the same preprocessing pipeline, train-validation split, 200-epoch budget, early-stopping patience of 30, and a latent dimension of 10 whenever the method exposes a latent-dimensionality setting. Classical methods use their standard decomposition defaults on the same input matrix.
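The shared training budget can be sketched as a minimal early-stopping loop. The 200-epoch cap and patience of 30 come from the protocol above; `train_one_epoch` and `validate` are hypothetical stand-ins for a method's own steps.

```python
def fit(train_one_epoch, validate, max_epochs=200, patience=30):
    """Train up to max_epochs, stopping once validation loss has not
    improved for `patience` consecutive epochs."""
    best_loss, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = validate()
        if val_loss < best_loss:
            best_loss, best_epoch = val_loss, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs
    return best_loss, epoch

# Toy usage: validation loss improves for 20 epochs, then plateaus,
# so training halts 30 epochs after the last improvement.
losses = iter([1.0 / (e + 1) for e in range(20)] + [1.0] * 200)
result = fit(train_one_epoch=lambda: None, validate=lambda: next(losses))
```

Holding this loop constant across all learned methods keeps the compute budget comparable, so efficiency differences in the robustness track reflect the models rather than the schedule.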
