---
license: apache-2.0
task_categories:
- question-answering
size_categories:
- 10K<n<100K
---
# Dataset Card for NovelQA
NovelQA is a benchmark for evaluating the long-context capabilities of LLMs.

The Dataset Viewer is generated automatically and displays only the novel content. We are unable to remove or rebuild it, so please ignore it.
## Dataset Details

### Dataset Sources
- Repository: https://github.com/NovelQA/novelqa.github.io
- Leaderboard: https://novelqa.github.io
- Competition: https://www.codabench.org/competitions/2295/
- Paper: https://arxiv.org/abs/2403.12766
## Uses

### Direct download
You can download NovelQA.zip directly; it contains all files under Raw_Novels/, Data/, and Demonstration/.
### Through the API

You can also load this dataset with the Hugging Face `datasets` package as follows.
```python
from datasets import load_dataset

dataset = load_dataset("NovelQA/NovelQA", data_files={
    "book": "Raw_Novels/*.txt",
    "ques": "Data/*.json",
}, streaming=True)
books = dataset["book"]
ques = dataset["ques"]
```
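With `streaming=True`, each split is a lazy iterable rather than an in-memory table, so you consume it by iterating. A minimal sketch of peeking at the first few examples, assuming each example is a dict; the `take` helper and the stand-in generator below are illustrative, not part of the dataset API:

```python
from itertools import islice

def take(iterable, n):
    """Return the first n examples from a (possibly streaming) iterable."""
    return list(islice(iterable, n))

# With a real streaming split you would write: first = take(books, 2)
# Here a stand-in generator keeps the sketch self-contained.
fake_split = ({"text": f"Chapter {i}"} for i in range(10))
first = take(fake_split, 2)
print([ex["text"] for ex in first])  # ['Chapter 0', 'Chapter 1']
```

Because the split is streamed, `take` pulls only the requested examples over the network instead of downloading everything up front.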
## Dataset Structure

The dataset is structured as follows.
```
NovelQA.zip          // this zip file contains all of the novels, data, and demonstrations
└── NovelQA
    ├── book         // the book contents of the novels
    │   ├── booktitle1.txt
    │   ├── booktitle2.txt
    │   └── ...
    └── ques         // the corresponding QA pairs for each book
        ├── booktitle1.json
        ├── booktitle2.json
        └── ...
```
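Since each book and its QA file share a base name (booktitle1.txt ↔ booktitle1.json), pairing them is a matter of matching file stems. A minimal sketch, assuming the extracted layout shown above; the sample files created here are throwaway placeholders, not real dataset content:

```python
import json
import tempfile
from pathlib import Path

def pair_books_with_questions(root):
    """Map each book's .txt file to its matching QA .json file by stem."""
    root = Path(root)
    ques = {p.stem: p for p in (root / "ques").glob("*.json")}
    return {p.stem: (p, ques[p.stem])
            for p in (root / "book").glob("*.txt") if p.stem in ques}

# Build a throwaway layout mirroring the tree above.
root = Path(tempfile.mkdtemp()) / "NovelQA"
(root / "book").mkdir(parents=True)
(root / "ques").mkdir()
(root / "book" / "booktitle1.txt").write_text("Once upon a time...")
(root / "ques" / "booktitle1.json").write_text(json.dumps([{"Question": "Q?"}]))

pairs = pair_books_with_questions(root)
print(sorted(pairs))  # ['booktitle1']
```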
Each JSON file contains a list of dicts, each structured as follows.
```json
{
    "Question": "The input question",
    "Options": [
        "Option A",
        "Option B",
        "Option C",
        "Option D"
    ],
    "Complex": "A complexity level among mh, sh, and dtl",
    "Aspect": "An aspect from times, meaning, span, settg, relat, character, and plot"
}
```
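A common way to use such a record is to render it as a multiple-choice prompt for a model. A minimal sketch, assuming the field names shown above; the A–D letter labels and the sample entry are arbitrary illustrations, not part of the dataset:

```python
def to_prompt(entry):
    """Render a NovelQA record as a lettered multiple-choice question."""
    lines = [entry["Question"]]
    for letter, option in zip("ABCD", entry["Options"]):
        lines.append(f"{letter}. {option}")
    return "\n".join(lines)

# Hypothetical entry following the schema above.
entry = {
    "Question": "Who narrates the novel?",
    "Options": ["The captain", "The doctor", "The cook", "The boy"],
    "Complex": "sh",
    "Aspect": "character",
}
print(to_prompt(entry))
```

The `Complex` and `Aspect` fields are not rendered into the prompt here; they are metadata you would typically use to break down accuracy by question type.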
## Citation

**BibTeX:**
```bibtex
@misc{wang2024novelqa,
      title={NovelQA: A Benchmark for Long-Range Novel Question Answering},
      author={Cunxiang Wang and Ruoxi Ning and Boqi Pan and Tonghui Wu and Qipeng Guo and Cheng Deng and Guangsheng Bao and Qian Wang and Yue Zhang},
      year={2024},
      eprint={2403.12766},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
## Terms

By participating in and submitting to this benchmark, you consent to the following terms.

The input data are for internal evaluation use only. Please do not distribute the input data publicly online. The competition hosts are not responsible for any violation of novel copyright caused by participants distributing the input data publicly online.
## Contact
If you have problems downloading or using this dataset, please contact the first authors of the arXiv paper to get access to the dataset.