Modalities: Text
Formats: csv
Languages: English
Libraries: Datasets, pandas
License: cc-by-sa-4.0
Commit bfc28ea by vinzentp (1 parent: 17c9add)

update readme

Files changed (1):
  1. README.md (+14 −5)
README.md CHANGED
@@ -25,12 +25,21 @@ configs:
   - seemingly_relevant.csv
 license: cc-by-sa-4.0
 ---
+# RAGE - Retrieval Augmented Generation Evaluation
 
+## TL;DR
+RAGE is a tool for evaluating how well Large Language Models (LLMs) cite relevant sources in Retrieval Augmented Generation (RAG) tasks.
+
+## More Details
+For more information, please refer to our GitHub page:
+[https://github.com/othr-nlp/rage_toolkit](https://github.com/othr-nlp/rage_toolkit)
+
+## References
 This dataset is based on the BeIR version of the Natural Questions dataset.
 
-BeIR:
-- Paper: https://doi.org/10.48550/arXiv.2104.08663
+- **BeIR**:
+  - [Paper: https://doi.org/10.48550/arXiv.2104.08663](https://doi.org/10.48550/arXiv.2104.08663)
 
-Natural Questions:
-- Website: https://ai.google.com/research/NaturalQuestions
-- Paper: https://doi.org/10.1162/tacl_a_00276
+- **Natural Questions**:
+  - [Website: https://ai.google.com/research/NaturalQuestions](https://ai.google.com/research/NaturalQuestions)
+  - [Paper: https://doi.org/10.1162/tacl_a_00276](https://doi.org/10.1162/tacl_a_00276)
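The card lists `csv` as the data format, with `pandas` among the supported libraries and `seemingly_relevant.csv` as the configured file. A minimal sketch of the pandas loading path, using a small in-memory stand-in — the column names (`query`, `passage`, `is_relevant`) and contents here are hypothetical illustrations, not the dataset's actual schema:

```python
import io

import pandas as pd

# Hypothetical excerpt standing in for seemingly_relevant.csv; the real
# file's columns and contents may differ. This only demonstrates the
# pandas CSV loading path the dataset card advertises.
sample_csv = io.StringIO(
    "query,passage,is_relevant\n"
    "when was the eiffel tower built,The Eiffel Tower opened in 1889.,1\n"
)

df = pd.read_csv(sample_csv)
print(df.shape)  # one data row, three hypothetical columns
```

For the hosted file itself, the Hugging Face `datasets` library's `load_dataset` with the dataset's repository id would be the usual route; the snippet above avoids assuming a repo id or network access.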