---
license: cc-by-nc-sa-4.0
language:
- en
annotations_creators:
- no-annotation
task_categories:
- text-generation
task_ids:
- language-modeling
size_categories:
- 10K<n<100K
configs:
- config_name: python
  data_files:
  - split: test
    path:
    - data/python.jsonl
- config_name: cc
  data_files:
  - split: test
    path:
    - data/cc.jsonl
- config_name: arxiv_math
  data_files:
  - split: test
    path:
    - data/arxiv_math.jsonl
---
These are the compression corpora used in the paper [Compression Represents Intelligence Linearly](https://arxiv.org/abs/2404.09937). We find that LLMs' intelligence, as reflected by benchmark scores, correlates almost linearly with their ability to compress external text corpora. We measure intelligence along three key abilities: knowledge and commonsense, coding, and mathematical reasoning, and provide one compression corpus for each, named `cc`, `python`, and `arxiv_math` respectively.
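In the paper, compression is measured as bits per character (BPC): the model's total next-token cross-entropy over a corpus, in bits, divided by the corpus's character count. Below is a minimal sketch of that metric for a single document, not the paper's exact evaluation pipeline; the `gpt2` checkpoint and the `content` field name are illustrative assumptions.

```python
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def bits_per_character(text: str) -> float:
    """Total next-token cross-entropy in bits, divided by character count."""
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the mean cross-entropy
        # (in nats) over the ids.shape[1] - 1 predicted tokens.
        mean_nats = model(input_ids=ids, labels=ids).loss.item()
    total_bits = mean_nats * (ids.shape[1] - 1) / math.log(2)
    return total_bits / len(text)

# Field name "content" is an assumption about the jsonl schema.
example = load_dataset("hkust-nlp/llm-compression", name="python")["test"][0]
print(bits_per_character(example["content"]))
```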
## Load the data

```python
from datasets import load_dataset

# Each config ("python", "cc", "arxiv_math") has a single "test" split.
dataset = load_dataset("hkust-nlp/llm-compression", name="python")
print(dataset["test"][0])
```
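The other corpora load the same way by swapping the config name; for example, to print a size summary of all three:

```python
from datasets import load_dataset

# Print the number of test examples in each corpus.
for name in ["cc", "python", "arxiv_math"]:
    corpus = load_dataset("hkust-nlp/llm-compression", name=name)
    print(name, corpus["test"].num_rows)
```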
More details on the compression evaluation are available on our GitHub page.
## Citation

```bibtex
@misc{huang2024compression,
      title={Compression Represents Intelligence Linearly},
      author={Yuzhen Huang and Jinghan Zhang and Zifei Shan and Junxian He},
      year={2024},
      eprint={2404.09937},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```