Timmli committed
Commit: 2a69efe
Parent: 4435a11

Update README.md

Files changed (1):
  1. README.md +42 -26
README.md CHANGED
@@ -1,26 +1,42 @@
- ---
- license: apache-2.0
- dataset_info:
-   features:
-   - name: question_id
-     dtype: string
-   - name: category
-     dtype: string
-   - name: cluster
-     dtype: string
-   - name: turns
-     list:
-     - name: content
-       dtype: string
-   splits:
-   - name: train
-     num_bytes: 251691
-     num_examples: 500
-   download_size: 154022
-   dataset_size: 251691
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- ---
+ ---
+ license: apache-2.0
+ dataset_info:
+   features:
+   - name: question_id
+     dtype: string
+   - name: category
+     dtype: string
+   - name: cluster
+     dtype: string
+   - name: turns
+     list:
+     - name: content
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 251691
+     num_examples: 500
+   download_size: 154022
+   dataset_size: 251691
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+ ---
+
+ ## Arena-Hard-Auto
+
+ **Arena-Hard-Auto-v0.1** ([See Paper](https://arxiv.org/abs/2406.11939)) is an automatic evaluation tool for instruction-tuned LLMs. It contains 500 challenging user queries sourced from Chatbot Arena. We prompt GPT-4-Turbo as a judge to compare each model's responses against those of a baseline model (default: GPT-4-0314). Notably, Arena-Hard-Auto has the highest *correlation* and *separability* with Chatbot Arena among popular open-ended LLM benchmarks ([See Paper](https://arxiv.org/abs/2406.11939)). If you are curious how well your model might perform on Chatbot Arena, we recommend trying Arena-Hard-Auto.
+
+ Please check out our GitHub repo for instructions on evaluating models with Arena-Hard-Auto and for more information about the benchmark.
+
+ If you find this dataset useful, feel free to cite us!
+ ```
+ @article{li2024crowdsourced,
+   title={From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline},
+   author={Li, Tianle and Chiang, Wei-Lin and Frick, Evan and Dunlap, Lisa and Wu, Tianhao and Zhu, Banghua and Gonzalez, Joseph E and Stoica, Ion},
+   journal={arXiv preprint arXiv:2406.11939},
+   year={2024}
+ }
+ ```
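
For a quick look at the data described by the `dataset_info` block above, here is a minimal loading sketch using the `datasets` library. The repository id is a placeholder (this page does not state it), and the field accesses follow the feature schema in the card's front matter.

```python
# Minimal sketch using the `datasets` library; the repo id below is a
# placeholder: substitute the actual Hugging Face id of this dataset.
from datasets import load_dataset

ds = load_dataset("<org>/arena-hard-auto-v0.1", split="train")  # hypothetical id

print(len(ds))  # 500 examples, per the card's dataset_info
row = ds[0]
print(row["question_id"], row["category"], row["cluster"])

# `turns` is a list of {"content": ...} dicts holding the user query text.
for turn in row["turns"]:
    print(turn["content"])
```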
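
The README above describes the judging procedure only in prose (GPT-4-Turbo compares each model's answer against a GPT-4-0314 baseline). The following is a hypothetical sketch of that pairwise-judge idea; the actual judge prompt, answer parsing, and scoring live in the Arena-Hard-Auto GitHub repo, so treat this as an illustration rather than the repo's implementation.

```python
# Hedged sketch of pairwise LLM-as-judge; not the official Arena-Hard-Auto code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(question: str, baseline_answer: str, candidate_answer: str) -> str:
    """Ask GPT-4-Turbo which answer is better: A (baseline) or B (candidate)."""
    prompt = (
        "You are an impartial judge. Compare the two assistant answers to the "
        "user question below and reply with exactly one of: A, B, or tie.\n\n"
        f"[Question]\n{question}\n\n"
        f"[Answer A]\n{baseline_answer}\n\n"
        f"[Answer B]\n{candidate_answer}"
    )
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```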