---
license: mit
task_categories:
- text2text-generation
language:
- en
size_categories:
- n<1K
configs:
- config_name: all
data_files:
- split: test
path: simulbench_all.jsonl
- config_name: hard
data_files:
- split: test
path: simulbench_hard.jsonl
- config_name: objective
data_files:
- split: test
path: simulbench_objective.jsonl
- config_name: subjective
data_files:
- split: test
path: simulbench_subjective.jsonl
- config_name: system
data_files:
- split: test
path: simulbench_system.jsonl
- config_name: tool
data_files:
- split: test
path: simulbench_tool.jsonl
- config_name: role
data_files:
- split: test
path: simulbench_role.jsonl
---
## Dataset Format
Each line of the JSONL files is a JSON object with the following fields:
```json
{
    "id": "...",
    "task_description": "...",
    "act": "..."
}
```
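If you prefer to work with the raw JSONL files rather than the `datasets` loader, each line can be parsed with Python's standard `json` module. The sketch below assumes `simulbench_all.jsonl` has been downloaded to the working directory:
```python
import json

# Minimal sketch: read one of the JSONL files directly.
# Assumes simulbench_all.jsonl is in the current working directory.
with open("simulbench_all.jsonl", encoding="utf-8") as f:
    tasks = [json.loads(line) for line in f if line.strip()]

print(len(tasks))
print(tasks[0]["id"], tasks[0]["act"])
```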
## Dataset Loading
The SimulBench subsets can be loaded with the Hugging Face `datasets` library:
```python
from datasets import load_dataset
all_tasks = load_dataset("SimulBench/SimulBench", "all", split="test")
```
Other available subsets are: `hard`, `subjective`, `objective`, `system`, `tool`, `role`.
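As a usage sketch (assuming the field names shown in the format above), a record from the `hard` subset can be inspected as follows:
```python
from datasets import load_dataset

# Minimal sketch: load the "hard" subset and inspect one record.
# Field names (id, task_description, act) follow the format above.
hard_tasks = load_dataset("SimulBench/SimulBench", "hard", split="test")
example = hard_tasks[0]
print(example["id"])
print(example["act"])
print(example["task_description"])
```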
## More Info
* [Paper](xxx)
* [Website](https://simulbench.github.io/)
* [Leaderboard & Data Explorer](https://huggingface.co/spaces/SimulBench/SimulBench)
## Acknowledgements
The simulation tasks are sourced from [Awesome ChatGPT Prompts](https://github.com/f/awesome-chatgpt-prompts) with modifications.
## Citation Information
```bibtex
@article{simulbench2024,
  title={SimulBench: Evaluating LLMs with Diverse Simulation Tasks},
  author={Jia, Qi and Yue, Xiang and Zheng, Tianyu and Huang, Jie and Lin, Bill Yuchen},
  year={2024},
  eprint={},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```