---
pretty_name: ARIA Repo Benchmark
dataset_info:
  features:
  - name: dataset
    dtype: string
  - name: model_name
    dtype: string
  - name: model_links
    list: 'null'
  - name: paper_title
    dtype: string
  - name: paper_date
    dtype: timestamp[ns]
  - name: paper_url
    dtype: string
  - name: code_links
    list: string
  - name: metrics
    dtype: string
  - name: table_metrics
    list: string
  - name: prompts
    dtype: string
  - name: paper_text
    dtype: string
  - name: compute_hours
    dtype: float64
  - name: num_gpus
    dtype: int64
  - name: reasoning
    dtype: string
  - name: trainable_single_gpu_8h
    dtype: string
  - name: verified
    dtype: string
  - name: modality
    dtype: string
  - name: paper_title_drop
    dtype: string
  - name: paper_date_drop
    dtype: string
  - name: code_links_drop
    dtype: string
  - name: num_gpus_drop
    dtype: int64
  - name: dataset_link
    dtype: string
  - name: time_and_compute_verification
    dtype: string
  - name: link_to_colab_notebook
    dtype: string
  - name: run_possible
    dtype: string
  - name: notes
    dtype: string
  splits:
  - name: train
    num_bytes: 3507790
    num_examples: 58
  download_size: 1553348
  dataset_size: 3507790
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- en
size_categories:
- n<1K
tags:
- aria
- benchmark
- ml-research
- reproducibility
task_categories:
- text-generation
---
# ARIA Repo Benchmark
The ARIA Repo Benchmark is part of the ARIA benchmark suite, a collection of closed-book benchmarks probing the ML knowledge that frontier models have internalized during training. This dataset contains 58 curated research paper implementations with metadata for evaluating whether ML experiments described in papers can be reproduced.
## Dataset Summary
- **Size:** 58 entries
- **Coverage:** Computer Vision, NLP, Time Series, Graph, Bioinformatics
- **Purpose:** Evaluate AI agents on their ability to locate, understand, and reproduce ML research experiments
Each entry links a research paper to its code repository, dataset, metrics, and compute requirements, along with verification of whether the experiment is reproducible on constrained hardware.
## Dataset Structure

### Key Fields
| Field | Type | Description |
|---|---|---|
| `paper_title` | string | Title of the research paper |
| `paper_url` | string | arXiv URL |
| `paper_date` | timestamp | Publication date |
| `paper_text` | string | Full paper text |
| `dataset` | string | Dataset used in the paper |
| `dataset_link` | string | Link to the dataset |
| `model_name` | string | Model name |
| `code_links` | list[string] | GitHub repository links |
| `metrics` | string | Performance metrics reported |
| `table_metrics` | list[string] | Detailed metrics from tables |
| `prompts` | string | Evaluation prompts |
| `modality` | string | Data modality (CV, NLP, Time Series, Graph, etc.) |
### Compute & Reproducibility Fields
| Field | Type | Description |
|---|---|---|
| `compute_hours` | float64 | Estimated training compute hours |
| `num_gpus` | int64 | Number of GPUs required |
| `reasoning` | string | Reasoning behind the compute estimates |
| `trainable_single_gpu_8h` | string | Whether the experiment is trainable on a single GPU within 8 hours |
| `verified` | string | Verification status |
| `time_and_compute_verification` | string | Notes on verifying time and compute estimates |
| `link_to_colab_notebook` | string | Link to a Google Colab notebook |
| `run_possible` | string | Whether the code runs successfully |
| `notes` | string | Additional notes |
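The compute fields above make it straightforward to filter for entries reproducible on modest hardware. A minimal sketch, assuming `trainable_single_gpu_8h` holds plain yes/no-style strings — the exact label set is an assumption, so inspect the column values on the real dataset before relying on it:

```python
def reproducible_on_budget(entry: dict) -> bool:
    """Return True if an entry is flagged as trainable on one GPU in 8 hours.

    NOTE: the label set ("yes"/"true") is an assumption; check
    ds.unique("trainable_single_gpu_8h") on the real dataset to confirm.
    """
    flag = (entry.get("trainable_single_gpu_8h") or "").strip().lower()
    return flag in {"yes", "true"}

# With the real dataset (requires the `datasets` library and network access):
# from datasets import load_dataset
# ds = load_dataset("AlgorithmicResearchGroup/aria-repo-benchmark", split="train")
# small = ds.filter(reproducible_on_budget)
```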
## Usage
```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/aria-repo-benchmark", split="train")

for entry in ds:
    print(f"{entry['paper_title']} - {entry['modality']} - Reproducible: {entry['run_possible']}")
```
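To get a quick picture of the benchmark's coverage, the split can be tallied by modality client-side. `modality_breakdown` is a hypothetical helper name, not part of the dataset's API; it works on any iterable of entry dicts:

```python
from collections import Counter


def modality_breakdown(entries) -> Counter:
    """Count entries per modality (e.g. CV, NLP, Time Series, Graph)."""
    return Counter((e.get("modality") or "unknown") for e in entries)


# With the real dataset (requires network access):
# from datasets import load_dataset
# ds = load_dataset("AlgorithmicResearchGroup/aria-repo-benchmark", split="train")
# print(modality_breakdown(ds))
```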
## Related Resources
## Citation
```bibtex
@misc{aria_repo_benchmark,
  title={ARIA Repo Benchmark},
  author={Algorithmic Research Group},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/AlgorithmicResearchGroup/aria-repo-benchmark}
}
```