---
license: cc-by-4.0
---
## ScienceAgentBench

The advancements of large language models (LLMs) have piqued growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, which has sparked both excitement and skepticism about their true capabilities.
In this work, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims on end-to-end automation.
To this end, we present ScienceAgentBench, a new benchmark for evaluating language agents for data-driven scientific discovery:
- To ensure the scientific authenticity and real-world relevance of our benchmark, we extract 102 tasks from 44 peer-reviewed publications in four disciplines and engage nine subject matter experts to validate them.
- We unify the target output for every task to a self-contained Python program file and employ an array of evaluation metrics to examine the generated programs, execution results, and costs.
- Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its annotation quality and scientific plausibility.

## Benchmark Access

To prevent benchmark data contamination, we only provide the annotation sheet on Hugging Face, which includes all necessary *inputs* to run an agent.

To evaluate the agent outcomes, i.e., the generated code, please follow the instructions in our [GitHub repository](https://github.com/OSU-NLP-Group/ScienceAgentBench).

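For quick local inspection, the annotation sheet can also be loaded programmatically with the `datasets` library (or with pandas, since it is a single CSV file). The snippet below is a minimal sketch; the repository id `osunlp/ScienceAgentBench` and the `train` split name are assumptions, so substitute the values shown on this dataset page.

```python
# Minimal sketch: load the annotation sheet from the Hugging Face Hub.
# NOTE: the repository id and split name are assumptions -- check this dataset page.
from datasets import load_dataset

tasks = load_dataset("osunlp/ScienceAgentBench", split="train")

print(len(tasks))             # number of tasks (102 per the description above)
print(tasks[0]["task_inst"])  # goal description and output-format instruction
```
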
## Benchmark Structure

Each task in the annotation sheet has the following fields (a usage sketch follows the list):

- "instance_id" (str): unique ID for each task
- "domain" (str): scientific discipline of each task
- "subtask_categories" (str): sub-tasks involved in each task
- "github_name" (str): the original GitHub repository each task is adapted from
- "task_inst" (str): task goal description and output formatting instruction
- "domain_knowledge" (str): expert-annotated information about the task
- "dataset_folder_tree" (str): string representation of the dataset directory structure for each task
- "dataset_preview" (str): string representation of the first few examples/lines in the dataset files used in each task
- "src_file_or_path" (str): location of the source program in the original GitHub repository that each task is adapted from
- "gold_program_name" (str): name of the annotated program (reference solution) for each task
- "output_fname" (str): output location to save the generated program for each task
- "eval_script_name" (str): name of the evaluation script that checks the success criteria for each task

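As a concrete illustration of how these fields fit together, the hypothetical sketch below builds a prompt from the inputs an agent is allowed to see and writes the generated program to the location given by `output_fname`. The `generate_program` stub, the prompt template, and the dataset id/split are placeholders for your own agent setup, not part of the benchmark; evaluating the saved program still follows the GitHub repository instructions.

```python
# Hypothetical sketch of one agent step over a single benchmark task.
# `generate_program` is a stand-in for a real LLM/agent call.
from pathlib import Path
from datasets import load_dataset

def build_prompt(task: dict) -> str:
    """Concatenate the inputs an agent is allowed to see for one task."""
    return "\n\n".join([
        f"Task: {task['task_inst']}",
        f"Domain knowledge: {task['domain_knowledge']}",
        f"Dataset directory:\n{task['dataset_folder_tree']}",
        f"Dataset preview:\n{task['dataset_preview']}",
    ])

def generate_program(prompt: str) -> str:
    """Placeholder agent: replace with your own model or agent framework."""
    return "# agent-generated program goes here\n"

tasks = load_dataset("osunlp/ScienceAgentBench", split="train")  # assumed id/split
task = tasks[0]
out_path = Path(task["output_fname"])               # save location specified by the task
out_path.parent.mkdir(parents=True, exist_ok=True)
out_path.write_text(generate_program(build_prompt(task)))
```
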
## Licensing Information

Most tasks in ScienceAgentBench are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
We retain the original licenses for tasks adapted from [rasterio/rasterio](https://github.com/rasterio/rasterio?tab=License-1-ov-file) (Instance IDs: 32, 46, 53, 54, 84) and [hackingmaterials/matminer](https://github.com/hackingmaterials/matminer?tab=License-1-ov-file) (Instance ID: 3).

## Disclaimer

Our benchmark is constructed by adapting open-source code and data, and we respect the creators' ownership and intellectual property. In Appendix I of our paper, we have made our best effort to cite the original papers, list the repositories, and provide their licenses. Still, we acknowledge that two repositories ([rasterio/rasterio](https://github.com/rasterio/rasterio) and [hackingmaterials/matminer](https://github.com/hackingmaterials/matminer)) are copyrighted, and we believe their terms of use are compatible with our research purpose. We welcome requests from the original authors to modify or remove tasks related to those two repositories if needed.

## Citation

If you find our code and data useful, please consider citing our paper:

```bibtex
@misc{chen2024scienceagentbenchrigorousassessmentlanguage,
      title={ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery},
      author={Ziru Chen and Shijie Chen and Yuting Ning and Qianheng Zhang and Boshi Wang and Botao Yu and Yifei Li and Zeyi Liao and Chen Wei and Zitong Lu and Vishal Dey and Mingyi Xue and Frazier N. Baker and Benjamin Burns and Daniel Adu-Ampratwum and Xuhui Huang and Xia Ning and Song Gao and Yu Su and Huan Sun},
      year={2024},
      eprint={2410.05080},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.05080},
}
```