ronch99 committed on
Commit a5e4a4d · 0 Parent(s)

Duplicate from osunlp/ScienceAgentBench

Co-authored-by: Ziru Chen <ronch99@users.noreply.huggingface.co>

Files changed (3):
  1. .gitattributes +58 -0
  2. README.md +65 -0
  3. ScienceAgentBench.csv +0 -0
.gitattributes ADDED
@@ -0,0 +1,58 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ license: cc-by-4.0
+ configs:
+ - config_name: validation
+   data_files:
+   - split: validation
+     path: ScienceAgentBench.csv
+ language:
+ - en
+ ---
+
+ ## ScienceAgentBench
+
+ The advancements of large language models (LLMs) have piqued growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, which has sparked both excitement and skepticism about their true capabilities.
+ In this work, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims on end-to-end automation.
+ To this end, we present ScienceAgentBench, a new benchmark for evaluating language agents for data-driven scientific discovery:
+ - To ensure the scientific authenticity and real-world relevance of our benchmark, we extract 102 tasks from 44 peer-reviewed publications in four disciplines and engage nine subject matter experts to validate them.
+ - We unify the target output for every task to a self-contained Python program file and employ an array of evaluation metrics to examine the generated programs, execution results, and costs.
+ - Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its annotation quality and scientific plausibility.
+
+ ## Benchmark Access
+
+ To prevent benchmark data contamination, we only provide the annotation sheet on Hugging Face, which includes all necessary *inputs* to run an agent.
+
+ To evaluate the agent outcomes, i.e., generated code, please follow the instructions in our [GitHub repository](https://github.com/OSU-NLP-Group/ScienceAgentBench).
+
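As a minimal sketch of working with the annotation sheet, the snippet below parses a CSV with the same column layout into one dictionary per task. The inline sample row is hypothetical; the real file is `ScienceAgentBench.csv` with 102 task rows.

```python
import csv
import io

# Hypothetical miniature of the annotation sheet; the real sheet is
# ScienceAgentBench.csv, with one row per task and the columns listed
# under "Benchmark Structure".
SAMPLE_CSV = """instance_id,domain,task_inst,gold_program_name
1,Computational Chemistry,Train a property predictor and save the metrics.,task_1.py
"""

def load_tasks(text: str) -> list[dict]:
    """Parse the annotation sheet into one dict per task."""
    return list(csv.DictReader(io.StringIO(text)))

tasks = load_tasks(SAMPLE_CSV)
print(tasks[0]["instance_id"])  # prints "1"
```

The same `csv.DictReader` pattern applies to the downloaded `ScienceAgentBench.csv`; each field can then be passed to an agent as described below.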
+ ## Benchmark Structure
+
+ - "instance_id" (str): unique ID for each task
+ - "domain" (str): scientific discipline of each task
+ - "subtask_categories" (str): sub-tasks involved in each task
+ - "github_name" (str): the original GitHub repository each task is adapted from
+ - "task_inst" (str): task goal description and output formatting instructions
+ - "domain_knowledge" (str): expert-annotated information about the task
+ - "dataset_folder_tree" (str): string representation of the dataset directory structure for each task
+ - "dataset_preview" (str): string representation of the first few examples/lines in the dataset files used in each task
+ - "src_file_or_path" (str): location of the source program in the original GitHub repository that is adapted
+ - "gold_program_name" (str): name of the annotated program (reference solution) for each task
+ - "output_fname" (str): output location to save the generated program for each task
+ - "eval_script_name" (str): name of the evaluation script that checks the success criteria for each task
+
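The *input* fields above can be assembled into a single agent prompt. The sketch below shows one hypothetical way to do so; the official prompt templates and evaluation harness live in the GitHub repository, and the example row values are made up for illustration.

```python
def format_task_prompt(task: dict) -> str:
    """Assemble an agent input from the annotation-sheet fields.

    A sketch only: the field names match the benchmark structure above,
    but the layout of the prompt is an assumption, not the official template.
    """
    return (
        f"Task: {task['task_inst']}\n\n"
        f"Domain knowledge: {task['domain_knowledge']}\n\n"
        f"Dataset structure:\n{task['dataset_folder_tree']}\n\n"
        f"Dataset preview:\n{task['dataset_preview']}\n\n"
        f"Save your program as: {task['output_fname']}\n"
    )

# Hypothetical example row (values are illustrative, not from the benchmark).
example = {
    "task_inst": "Train a classifier and report accuracy.",
    "domain_knowledge": "Use stratified splits to preserve class balance.",
    "dataset_folder_tree": "data/\n|-- train.csv",
    "dataset_preview": "col_a,col_b\n1,2",
    "output_fname": "program.py",
}
prompt = format_task_prompt(example)
```

The remaining fields ("gold_program_name", "eval_script_name") are used on the evaluation side, not in the agent input.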
+ ## Licensing Information
+
+ Most tasks in ScienceAgentBench are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
+ We retain the original licenses for tasks adapted from [rasterio/rasterio](https://github.com/rasterio/rasterio?tab=License-1-ov-file) (Instance IDs: 32, 46, 53, 54, 84) and [hackingmaterials/matminer](https://github.com/hackingmaterials/matminer?tab=License-1-ov-file) (Instance ID: 3).
+
+ ## Disclaimer
+
+ Our benchmark is constructed by adapting open-source code and data, and we respect their creators' ownership and intellectual property. In Appendix I of our paper, we have made our best effort to cite the original papers, list the repositories, and provide their licenses. Still, we acknowledge that two repositories ([rasterio/rasterio](https://github.com/rasterio/rasterio) and [hackingmaterials/matminer](https://github.com/hackingmaterials/matminer)) are copyrighted, and we believe their terms of use are compatible with our research purpose. We welcome requests from the original authors to modify or remove relevant tasks related to these two repositories if needed.
+
+ ## Citation
+
+ If you find our code and data useful, please consider citing our paper:
+
+ ```
+ @misc{chen2024scienceagentbenchrigorousassessmentlanguage,
+       title={ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery},
+       author={Ziru Chen and Shijie Chen and Yuting Ning and Qianheng Zhang and Boshi Wang and Botao Yu and Yifei Li and Zeyi Liao and Chen Wei and Zitong Lu and Vishal Dey and Mingyi Xue and Frazier N. Baker and Benjamin Burns and Daniel Adu-Ampratwum and Xuhui Huang and Xia Ning and Song Gao and Yu Su and Huan Sun},
+       year={2024},
+       eprint={2410.05080},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2410.05080},
+ }
+ ```
ScienceAgentBench.csv ADDED
The diff for this file is too large to render. See raw diff