Update README.md
README.md
CHANGED
@@ -226,17 +226,110 @@ configs:
      path: truthfulqa/validation-*
---

## 2WikiHotpotQA

This dataset is a multi-hop question answering task, proposed in "Constructing A Multi-hop QA Dataset for Comprehensive Evaluation of Reasoning Steps" by Ho et al.

The folder contains the evaluation script and the path to the dataset; evaluation uses the validation split of around 12k samples.
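
For reference, the validation split can be loaded with the `datasets` library. This is only a minimal sketch: the repository ID below is a placeholder, and `2wikimultihopqa` is an assumed config name; check the `configs:` section of the front matter for the actual values.

```python
# Minimal loading sketch. Assumptions: "<this-repo-id>" is a placeholder for the
# dataset repository, and "2wikimultihopqa" is an assumed config name from the
# front matter; replace both with the actual values.
from datasets import load_dataset

dataset = load_dataset("<this-repo-id>", "2wikimultihopqa", split="validation")
print(len(dataset))   # expected to be around 12k examples
print(dataset[0])     # inspect one question/answer record
```

The same pattern should work for the other configs listed under `configs:` in the front matter.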

```
@inproceedings{xanh2020_2wikimultihop,
  title = "Constructing A Multi-hop {QA} Dataset for Comprehensive Evaluation of Reasoning Steps",
  author = "Ho, Xanh and
    Duong Nguyen, Anh-Khoa and
    Sugawara, Saku and
    Aizawa, Akiko",
  booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
  month = dec,
  year = "2020",
  address = "Barcelona, Spain (Online)",
  publisher = "International Committee on Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.coling-main.580",
  pages = "6609--6625",
}
```

## HotpotQA

HotpotQA is a Wikipedia-based dataset of question-answer pairs whose questions require finding and reasoning over multiple supporting documents to answer. We evaluate on 7,405 datapoints in the distractor setting. The dataset was proposed in the paper below; a sketch of the answer-level EM/F1 metric commonly used for this task follows the citation.

```
@inproceedings{yang2018hotpotqa,
  title={{HotpotQA}: A Dataset for Diverse, Explainable Multi-hop Question Answering},
  author={Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William W. and Salakhutdinov, Ruslan and Manning, Christopher D.},
  booktitle={Conference on Empirical Methods in Natural Language Processing ({EMNLP})},
  year={2018}
}
```
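
Answers for benchmarks like HotpotQA are usually scored with exact match (EM) and token-level F1 after SQuAD-style normalization. The sketch below illustrates that standard metric; it is not necessarily identical to the evaluation script shipped in this repo.

```python
# Standard SQuAD-style answer normalization and EM/F1 (illustrative sketch;
# the repo's own evaluation script may differ in details).
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```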

## MuSiQue

MuSiQue is a multi-hop question answering task that requires 2-4 hops per question, making it slightly harder than other multi-hop tasks. The dataset was proposed in the paper below.

```
@article{trivedi2021musique,
  title={{M}u{S}i{Q}ue: Multihop Questions via Single-hop Question Composition},
  author={Trivedi, Harsh and Balasubramanian, Niranjan and Khot, Tushar and Sabharwal, Ashish},
  journal={Transactions of the Association for Computational Linguistics},
  year={2022},
  publisher={MIT Press}
}
```

## NaturalQuestions

The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question.

```
@article{47761,
  title = {Natural Questions: a Benchmark for Question Answering Research},
  author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year = {2019},
  journal = {Transactions of the Association of Computational Linguistics}
}
```

## PopQA

PopQA is a large-scale open-domain question answering (QA) dataset. We use the long-tail subset, consisting of 1,399 rare-entity queries whose monthly Wikipedia page views are below 100.
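
The long-tail subset can in principle be reproduced from the full PopQA release by filtering on entity popularity. A minimal sketch, assuming the upstream Hugging Face release `akariasai/PopQA` and its `s_pop` column (monthly Wikipedia page views of the subject entity); verify both names against this repo's copy of the data.

```python
# Sketch: derive the long-tail subset from the full PopQA release.
# Assumptions (verify against this repo): the upstream dataset lives at
# "akariasai/PopQA" and stores the subject entity's monthly Wikipedia
# page views in the "s_pop" column.
from datasets import load_dataset

popqa = load_dataset("akariasai/PopQA", split="test")
long_tail = popqa.filter(lambda ex: ex["s_pop"] < 100)
print(len(long_tail))  # expected to be roughly 1,399 queries
```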

Make sure to cite the work:

```
@article{mallen2023llm_memorization,
  title={When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories},
  author={Mallen, Alex and Asai, Akari and Zhong, Victor and Das, Rajarshi and Hajishirzi, Hannaneh and Khashabi, Daniel},
  journal={arXiv preprint},
  year={2022}
}
```

## TriviaQA

TriviaQA is a reading comprehension dataset containing question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions.

```
@article{2017arXivtriviaqa,
  author = {{Joshi}, Mandar and {Choi}, Eunsol and {Weld}, Daniel and {Zettlemoyer}, Luke},
  title = "{TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension}",
  journal = {arXiv e-prints},
  year = 2017,
  eid = {arXiv:1705.03551},
  pages = {arXiv:1705.03551},
  archivePrefix = {arXiv},
  eprint = {1705.03551},
}
```

## TruthfulQA

TruthfulQA is a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance, and politics. Questions are crafted so that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts.

```
@misc{lin2021truthfulqa,
  title={TruthfulQA: Measuring How Models Mimic Human Falsehoods},
  author={Stephanie Lin and Jacob Hilton and Owain Evans},
  year={2021},
  eprint={2109.07958},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Citation
|