system HF staff committed on
Commit
6da74d5
1 Parent(s): bd6256d

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +159 -0
README.md ADDED

---
---

# Dataset Card for "jeopardy"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/](https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 12.13 MB
- **Size of the generated dataset:** 34.46 MB
- **Total amount of disk used:** 46.59 MB

### [Dataset Summary](#dataset-summary)

Dataset containing 216,930 Jeopardy questions, answers, and other data.

The JSON file is an unordered list of questions, where each question has:

- `category`: the question category, e.g. "HISTORY"
- `value`: the dollar value of the question as a string, e.g. "200" (this is "None" for Final Jeopardy! and Tiebreaker questions)
- `question`: the text of the question (this sometimes contains hyperlinks and other messy text, such as when there is a picture or video question)
- `answer`: the text of the answer
- `round`: one of "Jeopardy!", "Double Jeopardy!", "Final Jeopardy!" or "Tiebreaker" (Tiebreaker questions do occur, but they are very rare, roughly once every 20 years)
- `show_number`: the show number as an integer, e.g. 4680
- `air_date`: the show air date as a string in YYYY-MM-DD format
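
For reference, here is a minimal sketch of loading the dataset with the 🤗 `datasets` library and inspecting one record (assuming the Hub identifier `jeopardy` and the single `train` split described below):

```python
from datasets import load_dataset

# Load the single "train" split of the default configuration.
dataset = load_dataset("jeopardy", split="train")

# Each record exposes the fields described above.
example = dataset[0]
print(example["category"], example["round"], example["value"])
print(example["question"])
print(example["answer"])
```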

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 12.13 MB
- **Size of the generated dataset:** 34.46 MB
- **Total amount of disk used:** 46.59 MB

An example of 'train' looks as follows.
```
{
    "air_date": "2004-12-31",
    "answer": "Hattie McDaniel (for her role in Gone with the Wind)",
    "category": "EPITAPHS & TRIBUTES",
    "question": "'1939 Oscar winner: \"...you are a credit to your craft, your race and to your family\"'",
    "round": "Jeopardy!",
    "show_number": 4680,
    "value": 2000
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default
- `category`: a `string` feature.
- `air_date`: a `string` feature.
- `question`: a `string` feature.
- `value`: an `int32` feature.
- `answer`: a `string` feature.
- `round`: a `string` feature.
- `show_number`: an `int32` feature.
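
As an illustrative sketch (assuming the standard `Dataset.filter` API of the `datasets` library), the typed fields can be used directly, for example to select only Final Jeopardy! questions:

```python
from datasets import load_dataset

dataset = load_dataset("jeopardy", split="train")

# `round` is a plain string feature, so a simple equality filter works.
final_rounds = dataset.filter(lambda example: example["round"] == "Final Jeopardy!")
print(len(final_rounds), "Final Jeopardy! questions")
```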

### [Data Splits Sample Size](#data-splits-sample-size)

| name    |  train |
|---------|-------:|
| default | 216930 |
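
Only a `train` split is provided. If a held-out evaluation set is needed, one option is the library's `train_test_split` method; the 10% size below is illustrative, not part of the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("jeopardy", split="train")

# Carve an illustrative 10% evaluation set out of the single provided split.
splits = dataset.train_test_split(test_size=0.1, seed=42)
print(len(splits["train"]), len(splits["test"]))
```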

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```

```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.