---
annotations_creators:
  - found
language_creators:
  - found
language:
  - zh
license:
  - unknown
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - question-answering
task_ids:
  - extractive-qa
paperswithcode_id: liveqa
pretty_name: LiveQA
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: passages
      sequence:
        - name: is_question
          dtype: bool
        - name: text
          dtype: string
        - name: candidate1
          dtype: string
        - name: candidate2
          dtype: string
        - name: answer
          dtype: string
  splits:
    - name: train
      num_bytes: 112187507
      num_examples: 1670
  download_size: 114704569
  dataset_size: 112187507
---

Dataset Card for LiveQA

Table of Contents

Dataset Description

Dataset Summary

The LiveQA dataset is a Chinese question-answering resource constructed from play-by-play live broadcasts. It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games collected from the Chinese Hupu website.
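
If the dataset is published on the Hugging Face Hub under the liveqa identifier (an assumption based on this card's location), it can be loaded with the datasets library. A minimal sketch:

from datasets import load_dataset

# Load the single available split (1,670 games); the "liveqa" identifier is assumed.
dataset = load_dataset("liveqa", split="train")
print(dataset)           # number of rows and column names
print(dataset.features)  # feature schema, matching dataset_info above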

Supported Tasks and Leaderboards

Question Answering.

[More Information Needed]

Languages

Chinese.

Dataset Structure

Data Instances

Each instance represents a timeline (i.e., a game) with an identifier. The passages field comprises an array of text and question segments. In the following truncated example, user comments about the game are followed by a question asking which team will be the first to reach 60 points, and then by a further comment.

{
    "id": 1,
    "passages": [
      {
        "is_question": False,
        "text": "'我希望两位球员都能做到!!",
        "candidate1": "",
        "candidate2": "",
        "answer": "",
      },
      {
        "is_question": False,
        "text": "新年给我们送上精彩比赛!",
        "candidate1": "",
        "candidate2": "",
        "answer": "",
      },
      {
        "is_question": True,
        "text": "先达到60分?",
        "candidate1": "火箭",
        "candidate2": "勇士",
        "answer": "勇士",
      },
      {
        "is_question": False,
        "text": "自己急停跳投!!!",
        "candidate1": "",
        "candidate2": "",
        "answer": "",
      }
    ]
}

Data Fields

  • id: identifier for the game
  • passages: sequence of text/question segments for that game (see the access sketch below)
  • text: a real-time text comment or a binary question related to the game context
  • candidate1 / candidate2: the two answer options for a question (empty strings for non-question segments)
  • answer: the correct answer to the question, as text (empty string for non-question segments)
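
Because passages is declared as a sequence feature, the datasets library typically returns it as a dictionary of parallel lists rather than a list of dictionaries. The following sketch (again assuming the liveqa identifier) shows how the questions of one game might be pulled out:

from datasets import load_dataset

dataset = load_dataset("liveqa", split="train")  # "liveqa" identifier is assumed

game = dataset[0]
passages = game["passages"]  # dict of parallel lists: is_question, text, candidate1, candidate2, answer

# Print every question in this game together with its candidates and gold answer.
for is_q, text, c1, c2, ans in zip(
    passages["is_question"],
    passages["text"],
    passages["candidate1"],
    passages["candidate2"],
    passages["answer"],
):
    if is_q:
        print(f"Q: {text} | candidates: {c1} / {c2} | answer: {ans}")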

Data Splits

There is no predefined split in this dataset.
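
Since no official split is provided, users who need held-out data can create their own, for example with train_test_split from the datasets library. A minimal sketch; the split ratio and seed are arbitrary choices, and the liveqa identifier is assumed:

from datasets import load_dataset

dataset = load_dataset("liveqa", split="train")  # "liveqa" identifier is assumed

# Ad-hoc 90/10 split over games; ratio and seed are arbitrary.
splits = dataset.train_test_split(test_size=0.1, seed=42)
print(len(splits["train"]), len(splits["test"]))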

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

This resource was developed by Liu et al., 2020.

@inproceedings{qianying-etal-2020-liveqa,
    title = "{L}ive{QA}: A Question Answering Dataset over Sports Live",
    author = "Qianying, Liu  and
      Sicong, Jiang  and
      Yizhong, Wang  and
      Sujian, Li",
    booktitle = "Proceedings of the 19th Chinese National Conference on Computational Linguistics",
    month = oct,
    year = "2020",
    address = "Haikou, China",
    publisher = "Chinese Information Processing Society of China",
    url = "https://www.aclweb.org/anthology/2020.ccl-1.98",
    pages = "1057--1067"
}

Contributions

Thanks to @j-chim for adding this dataset.