hails committed
Commit 432868e
1 Parent(s): deece39

Update README.md

Files changed (1)
  1. README.md +59 -10
README.md CHANGED
@@ -25,19 +25,68 @@ configs:
 
 Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo, following dmayhem93/agieval-* datasets on the HF hub.
 
-This dataset contains the contents of the LSAT-reading comprehension subtask of AGIEval, as accessed in https://github.com/ruixiangcui/AGIEval/commit/5c77d073fda993f1652eaae3cf5d04cc5fd21d40 .
+This dataset contains the contents of the LSAT reading comprehension subtask of AGIEval, as accessed in https://github.com/ruixiangcui/AGIEval/commit/5c77d073fda993f1652eaae3cf5d04cc5fd21d40 .
 
 
 Citation:
+```
+@misc{zhong2023agieval,
+      title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
+      author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
+      year={2023},
+      eprint={2304.06364},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
 
+Please make sure to cite all the individual datasets in your paper when you use them. We provide the relevant citation information below:
 
-@misc
+```
+@inproceedings{ling-etal-2017-program,
+    title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems",
+    author = "Ling, Wang  and
+      Yogatama, Dani  and
+      Dyer, Chris  and
+      Blunsom, Phil",
+    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+    month = jul,
+    year = "2017",
+    address = "Vancouver, Canada",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/P17-1015",
+    doi = "10.18653/v1/P17-1015",
+    pages = "158--167",
+    abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.",
+}
 
-{zhong2023agieval,
-title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
-author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
-year={2023},
-eprint={2304.06364},
-archivePrefix={arXiv},
-primaryClass={cs.CL}
-}
+@inproceedings{hendrycksmath2021,
+    title={Measuring Mathematical Problem Solving With the MATH Dataset},
+    author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
+    journal={NeurIPS},
+    year={2021}
+}
+
+@inproceedings{Liu2020LogiQAAC,
+    title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},
+    author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},
+    booktitle={International Joint Conference on Artificial Intelligence},
+    year={2020}
+}
+
+@inproceedings{zhong2019jec,
+    title={JEC-QA: A Legal-Domain Question Answering Dataset},
+    author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong},
+    booktitle={Proceedings of AAAI},
+    year={2020},
+}
+
+@article{Wang2021FromLT,
+    title={From LSAT: The Progress and Challenges of Complex Reasoning},
+    author={Siyuan Wang and Zhongkun Liu and Wanjun Zhong and Ming Zhou and Zhongyu Wei and Zhumin Chen and Nan Duan},
+    journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
+    year={2021},
+    volume={30},
+    pages={2201-2216}
+}
+```
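
As a usage note for the updated README, below is a minimal sketch of loading this subtask with the Hugging Face `datasets` library. The repo id `hails/agieval-lsat-rc` and the `test` split name are assumptions inferred from the commit author and the subtask described above; they are not stated on this page.

```python
# Minimal sketch, not part of the commit: load the LSAT reading comprehension subtask.
# The repo id and split name below are assumptions; substitute the actual dataset path.
from datasets import load_dataset

ds = load_dataset("hails/agieval-lsat-rc")  # assumed repo id on the HF hub
print(ds)             # show the available splits and their features
print(ds["test"][0])  # inspect one example (split name assumed)
```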