parquet-converter committed on
Commit
8da6151
1 Parent(s): f06e930

Update parquet files

README.md DELETED
@@ -1,97 +0,0 @@
- ---
- task_categories:
- - question-answering
- language:
- - zh
- pretty_name: LogiQA-zh
- size_categories:
- - 1K<n<10K
- paperswithcode_id: logiqa
- dataset_info:
-   features:
-   - name: context
-     dtype: string
-   - name: query
-     dtype: string
-   - name: options
-     sequence:
-       dtype: string
-   - name: correct_option
-     dtype: int32
-   splits:
-   - name: train
-     num_examples: 7376
-   - name: validation
-     num_examples: 651
-   - name: test
-     num_examples: 651
- ---
- # Dataset Card for LogiQA
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- LogiQA is constructed from the logical comprehension problems in publicly available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem solving. This dataset includes the Chinese version only.
-
-
- ## Dataset Structure
-
- ### Data Instances
-
- An example from `train` looks as follows:
- ```
- {'context': '有些广东人不爱吃辣椒.因此,有些南方人不爱吃辣椒.',
-  'query': '以下哪项能保证上述论证的成立?',
-  'options': ['有些广东人爱吃辣椒',
-              '爱吃辣椒的有些是南方人',
-              '所有的广东人都是南方人',
-              '有些广东人不爱吃辣椒也不爱吃甜食'],
-  'correct_option': 2}
- ```
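For readers checking the label convention: `correct_option` is a zero-based index into `options`, so the record above resolves to its answer text like this (a minimal sketch using the example dict shown):

```python
# The example record from above; `correct_option` is a zero-based index
# into the `options` list.
example = {
    "context": "有些广东人不爱吃辣椒.因此,有些南方人不爱吃辣椒.",
    "query": "以下哪项能保证上述论证的成立?",
    "options": [
        "有些广东人爱吃辣椒",
        "爱吃辣椒的有些是南方人",
        "所有的广东人都是南方人",
        "有些广东人不爱吃辣椒也不爱吃甜食",
    ],
    "correct_option": 2,
}

# Resolve the index to the answer text.
answer_text = example["options"][example["correct_option"]]
print(answer_text)  # 所有的广东人都是南方人
```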
-
- ### Data Fields
-
- - `context`: a `string` feature.
- - `query`: a `string` feature.
- - `options`: a `list` feature containing `string` features.
- - `correct_option`: an `int32` feature giving the zero-based index of the correct option.
-
-
- ### Data Splits
-
- |train|validation|test|
- |----:|---------:|---:|
- | 7376|       651| 651|
-
-
- ## Additional Information
-
- ### Dataset Curators
-
- The original LogiQA was produced by Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- ```
- @article{liu2020logiqa,
- title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
- author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
- journal={arXiv preprint arXiv:2007.08124},
- year={2020}
- }
- ```
-
- ### Contributions
- [@jiacheng-ye](https://github.com/jiacheng-ye) added this Chinese dataset.
- [@lucasmccabe](https://github.com/lucasmccabe) added the English dataset.
default/logiqa-zh-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5f1698fc5869b5bc365b62d3f2f5e52c244496c6665a6d6c6f1c7fdc7ac624c
+ size 269504
default/logiqa-zh-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ddad213d10116f5cb5ad972a64f88903552672c2fae799b0f39896d149bde041
+ size 3379804
default/logiqa-zh-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05d6582ab206dde05f89b187cd325bcf93abbff6bc8c1c0351e7962dc146274e
+ size 269429
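The three `ADDED` files above are Git LFS pointer files, not the parquet data itself: each records the LFS spec version, a `sha256` oid of the real blob, and its size in bytes. A small sketch of parsing one such pointer (the text is copied from the test-split pointer above):

```python
# Parse a Git LFS pointer file into its key/value fields.
# Each line has the form "<key> <value>", split on the first space.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:f5f1698fc5869b5bc365b62d3f2f5e52c244496c6665a6d6c6f1c7fdc7ac624c
size 269504
"""

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = parse_lfs_pointer(pointer_text)
print(pointer["oid"])
print(int(pointer["size"]))  # 269504
```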
logiqa-zh.py DELETED
@@ -1,105 +0,0 @@
- """LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning"""
-
- import re
-
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
-
-
- _HOMEPAGE = "https://github.com/lgw863/LogiQA-dataset"
-
- _DESCRIPTION = """\
- LogiQA is constructed from the logical comprehension problems in \
- publicly available questions of the National Civil Servants Examination \
- of China, which are designed to test the civil servant candidates’ critical \
- thinking and problem solving. This dataset includes the Chinese version only."""
-
- _CITATION = """\
- @article{liu2020logiqa,
- title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
- author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
- journal={arXiv preprint arXiv:2007.08124},
- year={2020}
- }
- """
-
- _URLS = {
-     "zh_train": "https://raw.githubusercontent.com/lgw863/LogiQA-dataset/master/zh_train.txt",
-     "zh_test": "https://raw.githubusercontent.com/lgw863/LogiQA-dataset/master/zh_test.txt",
-     "zh_eval": "https://raw.githubusercontent.com/lgw863/LogiQA-dataset/master/zh_eval.txt",
- }
-
-
- def _process_answer(answer):
-     # Drop the three-character option prefix (e.g. "A. ") when present.
-     if not any(answer.startswith(x) for x in "ABCD"):
-         return answer
-     return answer[3:]
-
-
- def _process_sentences(text):
-     # Normalize whitespace and sentence-final punctuation in a raw line.
-     text = text.replace("\n", "")
-     sents = text.split(".")
-     text = ""
-     for sent in sents:
-         if len(sent) == 0:
-             continue
-         if len(text) == 0:
-             text += sent
-         elif sent[0].isnumeric():
-             text += "." + sent
-         else:
-             text += ". " + sent
-     text = text.replace("  ", " ")
-     text = text.replace("\\'", "'")
-     while text.endswith(" "):
-         text = text[:-1]
-     if re.match(r"^[A-Z][\w\s]+[?.!]$", text) is None:
-         text += "."
-     text = text.replace("?.", "?")
-     text = text.replace("!.", "!")
-     text = text.replace("..", ".")
-     return text
-
-
- class LogiQA(datasets.GeneratorBasedBuilder):
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "context": datasets.Value("string"),
-                 "query": datasets.Value("string"),
-                 "options": datasets.features.Sequence(datasets.Value("string")),
-                 "correct_option": datasets.Value("int32"),
-             }
-         )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         downloaded_files = dl_manager.download_and_extract(_URLS)
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["zh_train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["zh_eval"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["zh_test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         logger.info("generating examples from = %s", filepath)
-         with open(filepath, encoding="utf-8") as f:
-             logiqa = f.readlines()
-         logiqa = [_process_sentences(s) for s in logiqa]
-
-         # Each record spans 8 lines: a separator, the answer letter,
-         # the context, the query, then the four options.
-         for key in range(len(logiqa) // 8):
-             row = 8 * key
-             correct_answer = logiqa[row + 1].replace(".", "")
-             context = logiqa[row + 2]
-             query = logiqa[row + 3]
-             answers = logiqa[row + 4 : row + 8]
-
-             yield key, {
-                 "context": context,
-                 "query": query,
-                 "options": [_process_answer(answers[i]) for i in range(4)],
-                 "correct_option": "abcd".index(correct_answer),
-             }
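The deleted script reads the raw text files as 8-line records (separator, answer letter, context, query, four options) and maps the lowercase answer letter to an index. A self-contained sketch of that record layout, using invented lines rather than real dataset content:

```python
# Hypothetical raw lines illustrating the 8-line record layout the
# deleted script consumes. All text here is made up for illustration.
raw_lines = [
    "",                   # row + 0: blank separator
    "c",                  # row + 1: correct answer letter
    "context sentence.",  # row + 2: context
    "query sentence?",    # row + 3: query
    "A. option one.",     # row + 4 .. row + 7: the four options
    "B. option two.",
    "C. option three.",
    "D. option four.",
]

def parse_record(lines):
    # Mirrors _generate_examples: map the answer letter to an index
    # and strip the "A. "-style prefix from each option.
    correct = lines[1].replace(".", "")
    options = [
        opt[3:] if any(opt.startswith(x) for x in "ABCD") else opt
        for opt in lines[4:8]
    ]
    return {
        "context": lines[2],
        "query": lines[3],
        "options": options,
        "correct_option": "abcd".index(correct),
    }

record = parse_record(raw_lines)
print(record["correct_option"])   # 2
print(record["options"][2])       # option three.
```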