---
language:
- en
configs:
- config_name: corpus
  data_files:
  - split: corpus
    path:
    - corpus.csv
- config_name: queries
  data_files:
  - split: queries
    path:
    - queries.csv
- config_name: mapping
  data_files:
  - split: relevant
    path:
    - relevant.csv
  - split: irrelevant
    path:
    - irrelevant.csv
  - split: seemingly_relevant
    path:
    - seemingly_relevant.csv
license: cc-by-sa-4.0
---

# RAGE - Retrieval Augmented Generation Evaluation

## TL;DR

RAGE is a tool for evaluating how well Large Language Models (LLMs) cite relevant sources in Retrieval Augmented Generation (RAG) tasks.

## More Details

For more information, please refer to our GitHub page: [https://github.com/othr-nlp/rage_toolkit](https://github.com/othr-nlp/rage_toolkit)

## References

This dataset is based on the BeIR version of the Natural Questions dataset.

- **BeIR**:
  - [Paper: https://doi.org/10.48550/arXiv.2104.08663](https://doi.org/10.48550/arXiv.2104.08663)
- **Natural Questions**:
  - [Website: https://ai.google.com/research/NaturalQuestions](https://ai.google.com/research/NaturalQuestions)
  - [Paper: https://doi.org/10.1162/tacl_a_00276](https://doi.org/10.1162/tacl_a_00276)
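
## Loading the Dataset

The configs declared in the YAML header (`corpus`, `queries`, and `mapping`) can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; the repo ID `othr-nlp/rage` is a placeholder assumption, so substitute the actual Hub ID of this dataset.

```python
# Minimal sketch of loading the RAGE dataset configs with the `datasets` library.
from datasets import load_dataset

REPO_ID = "othr-nlp/rage"  # placeholder assumption; replace with this card's actual Hub ID

# Each config corresponds to one CSV group declared in the YAML header above.
corpus = load_dataset(REPO_ID, "corpus", split="corpus")     # corpus.csv
queries = load_dataset(REPO_ID, "queries", split="queries")  # queries.csv

# The "mapping" config has three splits: relevant, irrelevant, seemingly_relevant.
mapping = load_dataset(REPO_ID, "mapping")

print(corpus)
print(queries)
print(mapping["relevant"][0])
```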