---
license: cc-by-nc-sa-4.0
task_categories:
- multiple-choice
language:
- en
- zh
pretty_name: LogiQA2.0
data_splits:
- train
- validation
- test
---

# Dataset Card for LogiQA2.0

## Dataset Description

- **Homepage:** https://github.com/csitfun/LogiQA2.0, https://github.com/csitfun/LogiEval
- **Repository:** https://github.com/csitfun/LogiQA2.0, https://github.com/csitfun/LogiEval
- **Paper:** https://ieeexplore.ieee.org/abstract/document/10174688

### Dataset Summary

LogiQA 2.0 is a dataset for logical reasoning in natural language understanding, covering machine reading comprehension (MRC) and natural language inference (NLI) tasks.

LogiEval is a companion benchmark suite for testing the logical reasoning abilities of instruction-prompted large language models.
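The splits declared above can be explored with the Hugging Face `datasets` library. The snippet below is only an illustrative sketch: the repository ID `csitfun/LogiQA2.0` and the printed field layout are assumptions, not confirmed names, so adjust them to the actual Hub repository and released JSON schema.

```python
from datasets import load_dataset

# Illustrative sketch: replace "csitfun/LogiQA2.0" with the actual Hub
# repository ID; the ID and any configuration name are assumptions.
dataset = load_dataset("csitfun/LogiQA2.0")

# List the available splits (train / validation / test) and their sizes.
for split_name, split in dataset.items():
    print(split_name, len(split))

# Show one raw training example; field names depend on the released schema.
print(dataset["train"][0])
```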

### Licensing Information

This dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.

### Citation Information

```bibtex
@article{10174688,
  author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding},
  year={2023},
  pages={1-16},
  doi={10.1109/TASLP.2023.3293046}
}

@misc{liu2023evaluating,
  title={Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4},
  author={Hanmeng Liu and Ruoxi Ning and Zhiyang Teng and Jian Liu and Qiji Zhou and Yue Zhang},
  year={2023},
  eprint={2304.03439},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```