shibing624 committed
Commit fd5a00a
1 Parent(s): 83219a7

Create README.md

Files changed (1):
  README.md +197 -0
---
annotations_creators:
- shibing624
language_creators:
- shibing624
languages:
- zh
licenses:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<20M
source_datasets:
- https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC
- http://icrc.hitsz.edu.cn/info/1037/1162.htm
- http://icrc.hitsz.edu.cn/Article/show/171.html
- https://arxiv.org/abs/1908.11828
- https://github.com/pluto-junzeng/CNSD
task_categories:
- text-classification
- text-scoring
task_ids:
- natural-language-inference
- semantic-similarity-scoring
pretty_name: NLI_zh (Chinese Natural Language Inference)
---
# Dataset Card for NLI_zh
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec) (located on the homepage)
- **Size of downloaded dataset files:** 16 MB
- **Total amount of disk used:** 42 MB
### Dataset Summary

A collection of common Chinese semantic matching datasets, covering five tasks: [ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC), [BQ](http://icrc.hitsz.edu.cn/info/1037/1162.htm), [LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html), [PAWSX](https://arxiv.org/abs/1908.11828), and [STS-B](https://github.com/pluto-junzeng/CNSD).
Each dataset can be downloaded from its original link, or all of them from [Baidu Netdisk (extraction code: qkt6)](https://pan.baidu.com/s/1d6jSiU1wHQAEMWJi7JJWCQ). There, the senteval_cn directory collects the evaluation datasets, and senteval_cn.zip is an archive of that directory; downloading either one is sufficient.

- ATEC: https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC
- BQ: http://icrc.hitsz.edu.cn/info/1037/1162.htm
- LCQMC: http://icrc.hitsz.edu.cn/Article/show/171.html
- PAWSX: https://arxiv.org/abs/1908.11828
- STS-B: https://github.com/pluto-junzeng/CNSD

### Supported Tasks and Leaderboards

Supported tasks: Chinese text matching, text similarity computation, and related tasks.

Results on Chinese matching tasks rarely appear in top-conference papers at the moment, so I list results from my own training:

**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)

### Languages

All datasets are in Simplified Chinese.

## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
  "sentence1": "刘诗诗杨幂谁漂亮",
  "sentence2": "刘诗诗和杨幂谁漂亮",
  "label": 1
}
{
  "sentence1": "汇理财怎么样",
  "sentence2": "怎么样去理财",
  "label": 0
}
```

### Data Fields
The data fields are the same among all splits.

- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values `similarity` (1) and `dissimilarity` (0).

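As a quick illustration of the schema above, here is a minimal sketch that checks an instance against the documented fields; the helper name `validate_instance` is hypothetical, and the sample rows are the two instances shown in this card:

```python
def validate_instance(example: dict) -> bool:
    """Check an example against the documented NLI_zh schema:
    two string sentences and a binary similarity label."""
    return (
        isinstance(example.get("sentence1"), str)
        and isinstance(example.get("sentence2"), str)
        and example.get("label") in (0, 1)  # 1 = similarity, 0 = dissimilarity
    )

# The two instances shown in the "Data Instances" section above:
examples = [
    {"sentence1": "刘诗诗杨幂谁漂亮", "sentence2": "刘诗诗和杨幂谁漂亮", "label": 1},
    {"sentence1": "汇理财怎么样", "sentence2": "怎么样去理财", "label": 0},
]
assert all(validate_instance(e) for e in examples)
```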
### Data Splits

#### ATEC

```shell
$ wc -l ATEC/*
   20000 ATEC/ATEC.test.data
   62477 ATEC/ATEC.train.data
   20000 ATEC/ATEC.valid.data
  102477 total
```

#### BQ

```shell
$ wc -l BQ/*
   10000 BQ/BQ.test.data
  100000 BQ/BQ.train.data
   10000 BQ/BQ.valid.data
  120000 total
```

#### LCQMC

```shell
$ wc -l LCQMC/*
   12500 LCQMC/LCQMC.test.data
  238766 LCQMC/LCQMC.train.data
    8802 LCQMC/LCQMC.valid.data
  260068 total
```

#### PAWSX

```shell
$ wc -l PAWSX/*
    2000 PAWSX/PAWSX.test.data
   49401 PAWSX/PAWSX.train.data
    2000 PAWSX/PAWSX.valid.data
   53401 total
```

#### STS-B

```shell
$ wc -l STS-B/*
    1361 STS-B/STS-B.test.data
    5231 STS-B/STS-B.train.data
    1458 STS-B/STS-B.valid.data
    8050 total
```

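For reference, the split counts above can be tabulated and sanity-checked in a few lines; the numbers are copied verbatim from the `wc -l` output, and the dict names are for illustration only:

```python
# Split sizes per task, taken from the `wc -l` output above.
SPLITS = {
    "ATEC":  {"train": 62477,  "valid": 20000, "test": 20000},
    "BQ":    {"train": 100000, "valid": 10000, "test": 10000},
    "LCQMC": {"train": 238766, "valid": 8802,  "test": 12500},
    "PAWSX": {"train": 49401,  "valid": 2000,  "test": 2000},
    "STS-B": {"train": 5231,   "valid": 1458,  "test": 1361},
}

# Per-task line totals as reported by `wc -l`.
TOTALS = {"ATEC": 102477, "BQ": 120000, "LCQMC": 260068,
          "PAWSX": 53401, "STS-B": 8050}

# Each task's splits should sum to its reported total.
for task, splits in SPLITS.items():
    assert sum(splits.values()) == TOTALS[task], task
```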
## Dataset Creation
### Curation Rationale
This is a collection of Chinese NLI (natural language inference) datasets, uploaded to Hugging Face datasets here so that it is convenient for everyone to use.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Copyright of each dataset belongs to its original authors; please respect the original licenses when using these datasets.

BQ: Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, Buzhou Tang. The BQ Corpus: A Large-scale Domain-specific Chinese Corpus For Sentence Semantic Equivalence Identification. EMNLP 2018.
### Annotations
#### Annotation process

#### Who are the annotators?
The original dataset authors.

### Personal and Sensitive Information

## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially those induced by representation learning methods, on the task of predicting truth conditions in a given context.

Systems that succeed at such a task may be more successful at modeling semantic representations.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators

- 苏剑林 (Su Jianlin) organized and standardized the file names
- I uploaded the dataset to Hugging Face datasets

### Licensing Information

For academic research use.

The BQ corpus is free to the public for academic research.

### Contributions

Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.