This repository is publicly accessible, but you must agree to share your contact information and accept the access conditions before you can access its files and content.


Dataset Card for NovelQA

NovelQA is the first benchmark with an average input length of over 200K tokens for testing the long-context ability of LLMs. It comprises the texts of 89 novels (28 of which are copyright-protected and not publicly published) and 2,305 question-answer pairs about the details of these novels.

NOTICE: The Dataset Viewer is generated automatically and displays only the novel content. We have not found a way to remove or rebuild it, so please just ignore it.

Dataset Details

Dataset Sources

Usage

We recommend using the git clone command to download this dataset. The steps are detailed as follows.

  1. Install Hugging Face Hub Client (if you have not done this).

    pip install huggingface_hub
    
  2. Login to your Hugging Face account (if you have not done this).

    huggingface-cli login
    
  3. Follow the prompt to input your Hugging Face token (if you have not done this).

    If you have not obtained a token, you can generate one by navigating to your Hugging Face account settings under 'Access Tokens' and creating a new token with at least read permission. Then copy the generated token and paste it into the CLI.

  4. Clone this repository through

    git clone https://huggingface.co/datasets/NovelQA/NovelQA
    

Or simply download the zip file NovelQA.zip.

Dataset Structure

The dataset is structured as follows.

.
β”œβ”€β”€ NovelQA.zip                 // the dataset in the latest format, indexed by bid/qid/eid for better reading efficiency.
β”œβ”€β”€ NovelQA-prev-format.zip     // the dataset in the previous format.
β”œβ”€β”€ Books                       // the book contents of the novels.
β”‚   └── PublicDomain
β”‚       β”œβ”€β”€ bookid1.txt
β”‚       └── bookid2.txt
β”œβ”€β”€ Data                        // the corresponding QA-pairs of each book.
β”‚   β”œβ”€β”€ CopyrightProtected
β”‚   β”‚   β”œβ”€β”€ bookid1.json
β”‚   β”‚   └── bookid2.json
β”‚   └── PublicDomain
β”‚       β”œβ”€β”€ bookid3.json
β”‚       └── bookid4.json
β”œβ”€β”€ Demonstration                // a demonstration book and QA pairs with gold answers.
β”œβ”€β”€ Scripts                      // dataloader scripts for reading/writing the most common data formats.
└── bookmeta.json                // the books' metadata.
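As a sketch of how this layout can be navigated, the helper below maps a book ID to its text and QA paths. This is an illustration only: the function name and the book IDs are hypothetical placeholders, and it assumes copyright-protected books ship QA files but no text (consistent with the tree above, where only Books/PublicDomain exists).

```python
from pathlib import Path

def novelqa_paths(root: str, book_id: str, copyrighted: bool):
    """Return (txt_path, json_path) for a book, following the repo layout.

    Copyright-protected books have QA files only; their text is not
    included in the repository, so txt_path is None for them.
    """
    base = Path(root)
    split = "CopyrightProtected" if copyrighted else "PublicDomain"
    txt = None if copyrighted else base / "Books" / "PublicDomain" / f"{book_id}.txt"
    qa = base / "Data" / split / f"{book_id}.json"
    return txt, qa

# Example with a hypothetical book id:
txt, qa = novelqa_paths("NovelQA", "bookid1", copyrighted=False)
print(txt.as_posix())  # NovelQA/Books/PublicDomain/bookid1.txt
print(qa.as_posix())   # NovelQA/Data/PublicDomain/bookid1.json
```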

In each JSON file, entries are keyed by question ID (QID) and structured as follows.

{
    "QID": {
        "Aspect": "the question classification by aspect, e.g., 'times'",
        "Complexity": "the question classification by complexity, e.g., 'mh'",
        "Question": "the input question",
        "Options": {
            "A": "Option A",
            "B": "Option B",
            "C": "Option C (not applicable in several yes/no questions)",
            "D": "Option D (not applicable in several yes/no questions)"
        }
    },
    ...
}
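To illustrate, here is a minimal Python sketch that parses and pretty-prints an entry in this format. The QID, question, and option texts are invented placeholders, not real dataset content.

```python
import json

# A hypothetical entry mimicking the schema above (not real dataset content).
sample = json.loads("""
{
    "Q0001": {
        "Aspect": "times",
        "Complexity": "mh",
        "Question": "How many times does the letter arrive?",
        "Options": {"A": "Once", "B": "Twice", "C": "Three times", "D": "Four times"}
    }
}
""")

for qid, entry in sample.items():
    opts = ", ".join(f"{k}) {v}" for k, v in entry["Options"].items())
    print(f"[{qid}] ({entry['Aspect']}/{entry['Complexity']}) {entry['Question']}")
    print(f"    {opts}")
```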

Metadata

Books

You can find each book's metadata in the file bookmeta.json.

  • BID: a unique book id
  • title: book title
  • txtfile: txt filename
  • jsonfile: json filename
  • source: book source: gutenberg or self-owned
  • link: download link: gutenberg or null
  • whichgtb: [us, au, ca] for gutenberg, gutenberg.au and gutenberg.ca if gutenberg, else null
  • copyright: have copyright or public domain
  • yearpub: retrieve from goodreads
  • author: author name
  • yearperish: author perish year
  • period: book classified into which period
  • tokenlen: book token count, tokenized by Xenova/gpt-3.5-turbo-16k

QAs

As for the QA classifications:

Complexity:

  • mh: stands for multi-hop. solving this question requires knowledge across all over the book.
  • sh: solving this problem requires knowledge from only one chapter. might be solved simply by recalling wo referring back.
  • dtl: stands for detail. the question involves character or place or plot that only appears once or twice, and has no impacts to other plots, and thus reading other part of the book will not remind the readers of this detail and the readers possibly does not remember this detail after finishing the book as well.

Aspects:

  • times: stands for "how many times does sth appear". asserted to be mh.
  • meaning: stands for interpretation of words and sentences.
  • settg: when/where for other single event.
  • span: search for the earliest and latest time, all the places involved. asserted to be mh.
  • role: ask about characters information. "who".
  • relat: relationship of characters.
  • plot: "something happens".

Citation

BibTeX:

@misc{novelqa,
      title={NovelQA: Benchmarking Question Answering on Documents Exceeding 200K Tokens}, 
      author={Cunxiang Wang and Ruoxi Ning and Boqi Pan and Tonghui Wu and Qipeng Guo and Cheng Deng and Guangsheng Bao and Xiangkun Hu and Zheng Zhang and Qian Wang and Yue Zhang},
      year={2024},
      eprint={2403.12766},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2403.12766}, 
}

Terms

By participating in this benchmark and submitting to it, you consent to the following terms.

The input data are for internal evaluation use only. Please do not distribute the input data publicly online. The competition hosts are not responsible for any violation of the novels' copyright caused by participants spreading the input data publicly online.

Contact

Please feel free to contact the first authors of the arXiv paper if you:

  • find problems downloading or using this dataset, or
  • need access to the original dataset (with answers and evidence).

Important Note on Getting Access to the Full Dataset: Please do not send the data you receive to others or publish it online. When we grant you access to the full dataset, an ID is embedded in the data so that, should a leak occur, we can identify who published the data online. So please remember to keep the data private.
