---
license: apache-2.0
language:
  - en
pretty_name: BCCT-Hub
tags:
  - representation-similarity
  - representation-convergence
  - cross-model-transport
  - benchmark
  - alignment
  - evaluation
  - neurips-2026
size_categories:
  - 1K<n<10K
task_categories:
  - feature-extraction
configs:
  - config_name: vision_atlas
    data_files:
      - split: pairs
        path: atlas.json
  - config_name: llm_atlas
    data_files:
      - split: pairs
        path: llm_atlas.json
  - config_name: audio_atlas
    data_files:
      - split: pairs
        path: audio_atlas.json
  - config_name: video_atlas
    data_files:
      - split: pairs
        path: video_atlas.json
  - config_name: meta_analysis
    data_files:
      - split: papers
        path: meta_analysis.csv
---

# BCCT-Hub

A benchmark and evaluation dataset for measuring representation convergence and cross-model transport across pretrained encoders. Released alongside the NeurIPS 2026 Evaluations & Datasets Track submission "BCCT-Hub: A Benchmark and Toolkit for Measuring Representation Convergence Across Model Families."

The dataset packages four pre-computed pairwise compatibility atlases, 41 pre-extracted feature tensors, statistical-analysis JSON outputs, an 88-paper meta-analysis CSV, and a Croissant 1.0 metadata file with the five Responsible-AI fields required by the NeurIPS 2026 E&D Track.

## Atlas summary

| Atlas | Pairs | Models | Source data |
|---|---|---|---|
| `atlas.json` (vision) | 190 | 20 vision encoders | CIFAR-100 test (5000 images) |
| `llm_atlas.json` | 36 | 9 base LLMs | WikiText-103 (2000 passages of 128 tokens) |
| `audio_atlas.json` (preliminary) | 15 | 6 audio encoders | LibriSpeech test-clean |
| `video_atlas.json` (exploratory) | 15 | 6 video encoders | STL-10 pseudo-clips |
| `meta_analysis.csv` | 88 papers (tagged) | — | Survey extraction |

Each pairwise record carries five BCCT quantities: R (effective-rank bitrate proxy), τ (bidirectional Procrustes-vs-MLP transport-linearity score), λ (alignment locality), TAI (transport asymmetry), and Δ (bottleneck mismatch), plus a regime label in {Collapsed, Local-Only, Convergent, Divergent}.
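For orientation, a single pairwise record can be pictured as a flat JSON object. This is an illustrative sketch only: the key names below (`model_a`, `tau`, `regime`, and so on) are assumptions, so consult the atlas files themselves for the authoritative schema.

```python
import json

# Hypothetical shape of one record in atlas.json; all key names are
# illustrative assumptions, not the confirmed schema.
record = {
    "model_a": "vit_base_patch16",
    "model_b": "resnet50",
    "R": 11.3,        # effective-rank bitrate proxy
    "tau": 0.82,      # Procrustes-vs-MLP transport linearity
    "lambda": 0.41,   # alignment locality
    "TAI": 0.07,      # transport asymmetry
    "delta": 1.9,     # bottleneck mismatch
    "regime": "Convergent",
}

# Loading a full atlas would then look like:
# with open("data/atlas.json") as f:
#     pairs = json.load(f)

assert record["regime"] in {"Collapsed", "Local-Only", "Convergent", "Divergent"}
print(json.dumps(record, indent=2))
```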

## Headline empirical results

- Family is the strongest predictor of transport success: a mixed-effects LMM with random intercepts for both models in each pair yields β_family = 0.20 (p < 10⁻¹⁶), β_Δ = −0.034 (LRT χ² = 55.1, p < 10⁻¹³), and β_objective = 0.042 (p = 0.023).
- The bitrate–transport association survives dependence-aware resampling: block-bootstrap ρ = −0.70, 95% CI [−0.82, −0.43].
- S_local predicts cross-encoder k-NN retrieval: raw ρ = 0.76 (p = 1.1 × 10⁻⁴; Benjamini–Hochberg-adjusted p = 1.7 × 10⁻³ across 15 retrieval correlations, 9/15 significant).
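The dependence-aware resampling idea can be sketched generically: because every model appears in many pairs, a naive bootstrap over pairs understates uncertainty, so one resamples *models* (clusters) and keeps only pairs whose two endpoints were drawn. The sketch below runs on synthetic data with a made-up negative R–τ association; it is not the paper's exact procedure, just a minimal cluster-bootstrap illustration.

```python
import itertools
import numpy as np

def spearman(x, y):
    """Spearman rho via Pearson correlation of ranks (no tie handling)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def cluster_bootstrap_ci(pairs, r_vals, tau_vals, n_models, n_boot=2000, seed=0):
    """CI for rho that respects pair dependence: resample model identities
    with replacement, keep pairs whose both endpoints were drawn."""
    rng = np.random.default_rng(seed)
    rhos = []
    for _ in range(n_boot):
        drawn = set(rng.integers(0, n_models, size=n_models))
        idx = [k for k, (i, j) in enumerate(pairs) if i in drawn and j in drawn]
        if len(idx) < 3:
            continue  # too few surviving pairs to rank
        rhos.append(spearman(r_vals[idx], tau_vals[idx]))
    lo, hi = np.percentile(rhos, [2.5, 97.5])
    return lo, hi

# Synthetic demo: 12 models, all 66 pairs, R negatively associated with tau.
rng = np.random.default_rng(42)
n_models = 12
pairs = list(itertools.combinations(range(n_models), 2))
R = rng.normal(size=len(pairs))
tau = -0.7 * R + 0.3 * rng.normal(size=len(pairs))
lo, hi = cluster_bootstrap_ci(pairs, R, tau, n_models)
print(f"rho = {spearman(R, tau):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```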

## Repository layout

```
data/
├── atlas.json                       (vision; 190 pairs)
├── llm_atlas.json                   (language; 36 pairs)
├── audio_atlas.json                 (preliminary; 15 pairs)
├── video_atlas.json                 (exploratory; 15 pairs)
├── croissant.json                   (Croissant 1.0 + RAI fields)
├── meta_analysis.csv                (88-paper survey)
├── bitrate_estimator_robustness.json
├── audio_bitrate_sweep.json
├── stl10_vs_cifar100_comparison.json
├── cifar100_{test,train}_labels.pt  (paired CIFAR-100 labels)
├── experiments/                     (mixed-effects, holdout, retrieval, stitching, ...)
├── features/                        (vision: 20 encoders × 5000 CIFAR-100 test)
├── features_train/                  (vision: 20 encoders × 5000 CIFAR-100 train, seed=42)
├── features_external/               (5 out-of-atlas encoders for the case study)
├── features_stl10/                  (vision robustness check)
└── audio_features/                  (6 audio encoders × LibriSpeech test-clean)
```

## Croissant 1.0 metadata

The dataset ships with a validated Croissant file (`data/croissant.json`, 17 KB). It declares SHA-256 hashes for each atlas, a `conformsTo: http://mlcommons.org/croissant/1.0` header, and the five Responsible-AI fields required by the NeurIPS 2026 Evaluations & Datasets Track (`rai:dataLimitations`, `rai:dataBiases`, `rai:dataPersonalSensitiveInformation`, `rai:dataUseCases`, `rai:dataSocialImpact`). The file passes the official validator:

```shell
pip install mlcroissant
mlcroissant validate --jsonld data/croissant.json
# → "Done." (zero errors)
```
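Since the Croissant file declares SHA-256 hashes, a downloaded atlas can be checked against them without any extra tooling. A minimal stdlib sketch (the demo hashes a throwaway file; against the real dataset you would compare `sha256_of("data/atlas.json")` with the hash recorded for that file in `data/croissant.json`):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Demo on a throwaway file so the sketch is self-contained.
with open("demo.bin", "wb") as f:
    f.write(b"bcct-hub")
print(sha256_of("demo.bin"))
```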

## Intended uses

  1. Compatibility screening between candidate model pairs prior to model-stitching or knowledge-transfer experiments.
  2. Reproducing and auditing the headline statistical findings of the paper.
  3. Extending the atlas with new encoders by re-running the extraction pipeline in the companion code repository.
  4. Regression testing for representation-similarity research that builds on CKA, mutual k-NN, Procrustes-based transport, or effective-rank proxies.
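Use 1 (compatibility screening) amounts to filtering atlas records by regime and a τ threshold before committing to a stitching experiment. A minimal sketch on synthetic records; the key names (`model_a`, `tau`, `regime`) and the τ cutoff are illustrative assumptions, not values endorsed by the paper:

```python
def screen_pairs(atlas, tau_min=0.7, regimes=("Convergent",)):
    """Return candidate (model_a, model_b, tau) pairs worth stitching,
    ranked by tau. Key names are illustrative assumptions."""
    keep = [r for r in atlas if r["regime"] in regimes and r["tau"] >= tau_min]
    keep.sort(key=lambda r: r["tau"], reverse=True)
    return [(r["model_a"], r["model_b"], r["tau"]) for r in keep]

# Synthetic stand-in for json.load(open("data/atlas.json")):
atlas = [
    {"model_a": "vit_b", "model_b": "vit_l", "tau": 0.91, "regime": "Convergent"},
    {"model_a": "vit_b", "model_b": "resnet50", "tau": 0.55, "regime": "Local-Only"},
    {"model_a": "convnext_t", "model_b": "resnet50", "tau": 0.78, "regime": "Convergent"},
]
print(screen_pairs(atlas))
# → [('vit_b', 'vit_l', 0.91), ('convnext_t', 'resnet50', 0.78)]
```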

## Out-of-scope uses (not validated, not recommended)

- Deployment-time decisions about whether two production models are interchangeable in a downstream application.
- Safety or fairness certification of any individual encoder.
- Inference about model training data, intellectual property, or copyright provenance from feature-space similarity.

BCCT scores are diagnostic summaries for comparative research, not deployment guarantees. See the `rai:dataSocialImpact` field in `data/croissant.json` for the full scope statement.

## Source-data licensing

The dataset redistributes only deterministic numerical activations and per-pair metric outputs, never source images, audio, or text. Upstream benchmarks remain under their original licenses:

- CIFAR-100: MIT-style permissive
- STL-10: permissive (Coates et al., 2011)
- WikiText-103: CC BY-SA 3.0
- LibriSpeech test-clean: CC BY 4.0

Our derivative numerical artifacts are released under Apache-2.0 (see LICENSE).

## Companion artifacts

## Citation

If you use BCCT-Hub in your research, please cite the accompanying paper:

```bibtex
@misc{bcct_hub_2026,
  title  = {{BCCT-Hub}: A Benchmark and Toolkit for Measuring Representation
            Convergence Across Model Families},
  author = {Anonymous Authors},
  year   = {2026},
  note   = {NeurIPS 2026 Evaluations \& Datasets Track (under review)}
}
```