system (HF staff) committed
Commit 5d10ca4, 0 parent(s)

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +188 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. liveqa.py +128 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
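
These attribute rules route matching binary artifacts through Git LFS instead of storing them in regular git history. As a rough illustration of how the glob patterns select paths, here is a minimal sketch using Python's `fnmatch`, which only approximates git's wildmatch semantics (notably for `saved_model/**/*`); the pattern list is a subset of the file above:

```python
# Illustrative only: fnmatch approximates git's wildmatch, so edge cases
# (notably "**" and directory-anchored patterns) can differ.
from fnmatch import fnmatch

LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.zip", "*tfevents*"]  # subset of the rules above

def tracked_by_lfs(path: str) -> bool:
    """Return True if any LFS pattern matches the given repo path."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("dummy/1.0.0/dummy_data.zip"))  # True: "*.zip" matches
print(tracked_by_lfs("liveqa.py"))                   # False: plain text stays in git
```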
README.md ADDED
@@ -0,0 +1,188 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - zh
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - extractive-qa
+ ---
+
+ # Dataset Card for LiveQA
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/PKU-TANGENT/LiveQA)
+ - **Repository:** [GitHub](https://github.com/PKU-TANGENT/LiveQA)
+ - **Paper:** [Liu et al., 2020](https://www.aclweb.org/anthology/2020.ccl-1.98.pdf)
+ - **Leaderboard:** N/A
+ - **Point of Contact:** Qianying Liu
+
+ ### Dataset Summary
+ LiveQA is a Chinese question-answering dataset constructed from play-by-play live broadcasts. It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games, all collected from the Chinese Hupu website.
+
+ ### Supported Tasks and Leaderboards
+ Question answering.
+
+ [More Information Needed]
+
+ ### Languages
+ Chinese.
+
+ ## Dataset Structure
+
+ ### Data Instances
+ Each instance represents a timeline (i.e., a game) with an identifier. The `passages` field comprises an array of text or question segments. In the following truncated example, user comments about the game are followed by a question asking which team will be the first to reach 60 points.
+ ```python
+ {
+     'id': 1,
+     'passages': [
+         {
+             "is_question": False,
+             "text": "我希望两位球员都能做到!!",  # "I hope both players can do it!!"
+             "candidate1": "",
+             "candidate2": "",
+             "answer": "",
+         },
+         {
+             "is_question": False,
+             "text": "新年给我们送上精彩比赛!",  # "Give us great games for the New Year!"
+             "candidate1": "",
+             "candidate2": "",
+             "answer": "",
+         },
+         {
+             "is_question": True,
+             "text": "先达到60分?",  # "First to reach 60 points?"
+             "candidate1": "火箭",  # Rockets
+             "candidate2": "勇士",  # Warriors
+             "answer": "勇士",  # Warriors
+         },
+         {
+             "is_question": False,
+             "text": "自己急停跳投!!!",  # "A pull-up jumper all on his own!!!"
+             "candidate1": "",
+             "candidate2": "",
+             "answer": "",
+         }
+     ]
+ }
+ ```
+
+ ### Data Fields
+ - `id`: identifier for the game
+ - `passages`: collection of text/question segments, each with:
+   - `is_question`: whether the segment is a question rather than a comment
+   - `text`: a real-time text comment, or a binary question about the game
+   - `candidate1`, `candidate2`: the two answer options for a question (empty for comments)
+   - `answer`: the correct answer to the question, as text (empty for comments)
+
+ ### Data Splits
+ There is no predefined split; the entire dataset is provided as a single `train` split.
+
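+ A minimal loading sketch (note that `datasets` returns a `Sequence` of dicts as a dict of lists):
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("liveqa", split="train")  # the only available split
+
+ game = dataset[0]              # one timeline (game)
+ segments = game["passages"]    # dict of lists: "is_question", "text", ...
+ for text, is_question in zip(segments["text"], segments["is_question"]):
+     if is_question:
+         print(text)            # print only the question segments
+ ```
+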
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+ This resource was developed by [Liu et al., 2020](https://www.aclweb.org/anthology/2020.ccl-1.98.pdf).
+ ```
+ @inproceedings{qianying-etal-2020-liveqa,
+     title = "{L}ive{QA}: A Question Answering Dataset over Sports Live",
+     author = "Qianying, Liu and
+       Sicong, Jiang and
+       Yizhong, Wang and
+       Sujian, Li",
+     booktitle = "Proceedings of the 19th Chinese National Conference on Computational Linguistics",
+     month = oct,
+     year = "2020",
+     address = "Haikou, China",
+     publisher = "Chinese Information Processing Society of China",
+     url = "https://www.aclweb.org/anthology/2020.ccl-1.98",
+     pages = "1057--1067"
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This is LiveQA, a Chinese dataset constructed from play-by-play live broadcast.\nIt contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games,\nwhich are collected from the Chinese Hupu website.\n", "citation": "@inproceedings{qianying-etal-2020-liveqa,\n title = \"{L}ive{QA}: A Question Answering Dataset over Sports Live\",\n author = \"Qianying, Liu and\n Sicong, Jiang and\n Yizhong, Wang and\n Sujian, Li\",\n booktitle = \"Proceedings of the 19th Chinese National Conference on Computational Linguistics\",\n month = oct,\n year = \"2020\",\n address = \"Haikou, China\",\n publisher = \"Chinese Information Processing Society of China\",\n url = \"https://www.aclweb.org/anthology/2020.ccl-1.98\",\n pages = \"1057--1067\"\n}\n", "homepage": "https://github.com/PKU-TANGENT/LiveQA", "license": "", "features": {"id": {"dtype": "int64", "id": null, "_type": "Value"}, "passages": {"feature": {"is_question": {"dtype": "bool", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "candidate1": {"dtype": "string", "id": null, "_type": "Value"}, "candidate2": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "live_qa", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 112187507, "num_examples": 1670, "dataset_name": "live_qa"}}, "download_checksums": {"https://raw.githubusercontent.com/PKU-TANGENT/LiveQA/master/LiveQA-1.json": {"num_bytes": 22754058, "checksum": "ab4861f63bbfc9b84bd9fee9d0a682cd77d9ac48aa57d3cbc25bd9e1433e53fd"}, "https://raw.githubusercontent.com/PKU-TANGENT/LiveQA/master/LiveQA-2.json": {"num_bytes": 22879003, "checksum": "e1094303ff34bf0bc105f9d336e0d6824625955bab02e52cae5115c51af67eb5"}, "https://raw.githubusercontent.com/PKU-TANGENT/LiveQA/master/LiveQA-3.json": {"num_bytes": 22771837, "checksum": "fcb46b890c6622508fa35ad62e96e8f5e072d5c4962ebbea95f9a1a12f7a641b"}, "https://raw.githubusercontent.com/PKU-TANGENT/LiveQA/master/LiveQA-4.json": {"num_bytes": 22966788, "checksum": "1d8a1ba7ba503b6c6f94fb301022e7cfe7d719dc03f89887e447617bc0f6bac3"}, "https://raw.githubusercontent.com/PKU-TANGENT/LiveQA/master/LiveQA-5.json": {"num_bytes": 23332883, "checksum": "85d8f8fe802aa5e80bc5e58edd6a990912a7b128f8279684fda4e5d39cd07bb9"}}, "download_size": 114704569, "post_processing_size": null, "dataset_size": 112187507, "size_in_bytes": 226892076}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bc66ac3ede6d276010f4b7a5c88aa4d8f3ee70e5887ef6748737bb572626a3a
+ size 2391
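
The three lines above are a Git LFS pointer, not the zip archive itself; the real bytes live in LFS storage, keyed by the sha256 `oid`. A minimal sketch of reading such a pointer (the path is illustrative, and assumes the file on disk is still the pointer rather than the resolved archive):

```python
# Parse a Git LFS pointer file into its key/value fields.
def parse_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key:
                fields[key] = value
    return fields

pointer = parse_lfs_pointer("dummy/1.0.0/dummy_data.zip")
print(pointer["oid"])   # "sha256:1bc66ac..."
print(pointer["size"])  # "2391" (bytes of the real archive)
```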
liveqa.py ADDED
@@ -0,0 +1,128 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """LiveQA dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{qianying-etal-2020-liveqa,
+     title = "{L}ive{QA}: A Question Answering Dataset over Sports Live",
+     author = "Qianying, Liu and
+       Sicong, Jiang and
+       Yizhong, Wang and
+       Sujian, Li",
+     booktitle = "Proceedings of the 19th Chinese National Conference on Computational Linguistics",
+     month = oct,
+     year = "2020",
+     address = "Haikou, China",
+     publisher = "Chinese Information Processing Society of China",
+     url = "https://www.aclweb.org/anthology/2020.ccl-1.98",
+     pages = "1057--1067"
+ }
+ """
+
+ _DESCRIPTION = """\
+ This is LiveQA, a Chinese dataset constructed from play-by-play live broadcast.
+ It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games,
+ which are collected from the Chinese Hupu website.
+ """
+
+ _HOMEPAGE = "https://github.com/PKU-TANGENT/LiveQA"
+
+ _REPO = "https://raw.githubusercontent.com/PKU-TANGENT/LiveQA/master/"
+ _URLs = [f"{_REPO}LiveQA-{i}.json" for i in range(1, 6)]
+
+
+ class LiveQA(datasets.GeneratorBasedBuilder):
+     """LiveQA dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("int64"),
+                 "passages": datasets.Sequence(
+                     {
+                         "is_question": datasets.Value("bool"),
+                         "text": datasets.Value("string"),
+                         "candidate1": datasets.Value("string"),
+                         "candidate2": datasets.Value("string"),
+                         "answer": datasets.Value("string"),
+                     }
+                 ),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # There is no default split. The data is separated into 5 files due to
+         # size restrictions, but they must be concatenated to form a
+         # well-formed JSON document.
+         data_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepaths": data_dir, "split": "train"},
+             )
+         ]
+
+     def _generate_examples(self, filepaths, split):
+         """Yields examples."""
+         # Concatenate the five downloaded fragments into one JSON document.
+         data_raw = ""
+         for filepath in filepaths:
+             with open(filepath, "r", encoding="utf-8") as f:
+                 data_raw += f.read()
+
+         data = json.loads(data_raw)
+         games = data["passages"]
+
+         game_id = -1  # the "id" field is always 1 in the original dataset, regardless of game
+         for game in games:
+             game_id += 1
+             passages = []
+             for passage in game["passage"]:
+                 is_question = "question" in passage
+                 text = passage["question"] if is_question else passage["text"]
+                 candidate_1 = passage["candidate1"] if is_question else ""
+                 candidate_2 = passage["candidate2"] if is_question else ""
+                 answer = passage["answer"] if is_question else ""
+
+                 passages.append(
+                     {
+                         "is_question": is_question,
+                         "text": text,
+                         "candidate1": candidate_1,
+                         "candidate2": candidate_2,
+                         "answer": answer,
+                     }
+                 )
+
+             yield game_id, {"id": game_id, "passages": passages}
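
For orientation, the raw layout that `_generate_examples` expects can be reconstructed from its key accesses: the concatenated download is one JSON object whose top-level `passages` list holds the games, each game keeps its segments under `passage`, question segments carry `question`/`candidate1`/`candidate2`/`answer` keys, and comment segments carry `text`. A minimal sketch of an input the generator would accept (values invented for illustration):

```python
# Hypothetical input document, reconstructed from the generator's key accesses.
raw = {
    "passages": [                # top level: the list of games
        {
            "id": 1,             # always 1 upstream; the builder re-numbers games itself
            "passage": [         # per-game segments
                {"text": "新年给我们送上精彩比赛!"},  # comment segment
                {
                    "question": "先达到60分?",       # question segment
                    "candidate1": "火箭",
                    "candidate2": "勇士",
                    "answer": "勇士",
                },
            ],
        }
    ]
}
```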