lucasmccabe committed
Commit: df2e254
Parent: 0629b4d

docs: dataset card

Files changed (1): README.md added (+97, -0)
---
task_categories:
- question-answering
language:
- en
pretty_name: LogiQA
size_categories:
- 1K<n<10K
paperswithcode_id: logiqa
dataset_info:
  features:
  - name: context
    dtype: string
  - name: query
    dtype: string
  - name: options
    sequence:
      dtype: string
  - name: correct_option
    dtype: string
  splits:
  - name: train
    num_examples: 7376
  - name: validation
    num_examples: 651
  - name: test
    num_examples: 651
---
# Dataset Card for LogiQA

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:** [LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning](https://arxiv.org/abs/2007.08124)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

LogiQA is constructed from publicly available logical comprehension problems from the National Civil Servants Examination of China, which are designed to test civil servant candidates' critical thinking and problem-solving skills. This dataset includes only the English versions; the Chinese versions are available via the homepage/original source.

## Dataset Structure

### Data Instances

An example from `train` looks as follows:
```
{'context': 'Continuous exposure to indoor fluorescent lights is beneficial to the health of hamsters with heart disease. One group of hamsters exposed to continuous exposure to fluorescent lights has an average lifespan that is 2.5% longer than another one of the same species but living in a black wall.',
 'query': 'Which of the following questions was the initial motivation for conducting the above experiment?',
 'options': ['Can hospital light therapy be proved to promote patient recovery?',
  'Which one lives longer, the hamster living under the light or the hamster living in the dark?',
  'What kind of illness does the hamster have?',
  'Do some hamsters need a period of darkness?'],
 'correct_option': 0}
```

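For reference, below is a minimal sketch of loading the dataset with the 🤗 `datasets` library. The Hub repository id `lucasmccabe/logiqa` is an assumption based on the committer's namespace; substitute the actual id if it differs.

```python
from datasets import load_dataset

# Assumed Hub id; replace with the actual repository id if it differs.
dataset = load_dataset("lucasmccabe/logiqa")

# Inspect the first training example (fields match the schema above).
example = dataset["train"][0]
print(example["context"])
print(example["query"])
print(example["options"])
print(example["correct_option"])
```
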
### Data Fields

- `context`: a `string` feature.
- `query`: a `string` feature.
- `options`: a sequence of `string` features.
- `correct_option`: a `string` feature giving the index of the correct entry in `options` (see the sketch below).

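Because `correct_option` indexes into `options`, recovering the answer text takes one line. A small sketch under the field names above, with a cast included since the schema stores the index as a string while the example instance shows an integer:

```python
def answer_text(example: dict) -> str:
    """Return the text of the correct option for a LogiQA example."""
    # `correct_option` may arrive as a string (per the schema) or an int
    # (as in the example instance above), so cast before indexing.
    return example["options"][int(example["correct_option"])]
```
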
### Data Splits

| train | validation | test |
|------:|-----------:|-----:|
|  7376 |        651 |  651 |

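The split sizes can be checked after loading; this continues from the hypothetical `dataset` object in the earlier snippet:

```python
# Print the number of examples per split.
for split_name, split in dataset.items():
    print(split_name, len(split))
# Expected: train 7376, validation 651, test 651
```
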
## Additional Information

### Dataset Curators

The original LogiQA was produced by Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@article{liu2020logiqa,
  title={LogiQA: A challenge dataset for machine reading comprehension with logical reasoning},
  author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
  journal={arXiv preprint arXiv:2007.08124},
  year={2020}
}
```

### Contributions

[@lucasmccabe](https://github.com/lucasmccabe) added this dataset.