shibing624 committed
Commit a2a9cc2 (parent: 8952364)

Update README.md
---
annotations_creators:
- shibing624
language_creators:
- liuhuanyong
language:
- zh
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- https://github.com/liuhuanyong/ChineseTextualInference/
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
paperswithcode_id: snli
pretty_name: Stanford Natural Language Inference
---
# Dataset Card for SNLI_zh

## Dataset Description

- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Dataset:** [train data from ChineseTextualInference](https://github.com/liuhuanyong/ChineseTextualInference/)
- **Size of downloaded dataset files:** 54 MB
- **Total amount of disk used:** 54 MB

### Dataset Summary

A Chinese SNLI dataset, translated from the English [SNLI](https://huggingface.co/datasets/snli).

![img](https://github.com/liuhuanyong/ChineseTextualInference/blob/master/image/project_route.png)

### Supported Tasks and Leaderboards

Supported tasks: Chinese text matching, text similarity scoring, and related tasks.

Results on Chinese matching tasks rarely appear in top-conference papers, so the leaderboard below lists results from my own training runs:

**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)

### Languages

All texts in the dataset are in Simplified Chinese.

## Dataset Structure

### Data Instances

An example from the 'train' split looks as follows.

```
sentence1	sentence2	gold_label
是的,我想一个洞穴也会有这样的问题	我认为洞穴可能会有更严重的问题。	neutral
几周前我带他和一个朋友去看幼儿园警察	我还没看过幼儿园警察,但他看了。	contradiction
航空旅行的扩张开始了大众旅游的时代,希腊和爱琴海群岛成为北欧人逃离潮湿凉爽的夏天的令人兴奋的目的地。	航空旅行的扩大开始了许多旅游业的发展。	entailment
```
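Assuming the raw train file stores the three columns shown above as tab-separated text (the separator is an assumption for illustration), one line can be parsed into the documented fields like this:

```python
# Parse one line of the train file into the three documented fields.
# Assumes tab-separated columns: sentence1, sentence2, gold_label.
def parse_line(line: str) -> dict:
    sentence1, sentence2, gold_label = line.rstrip("\n").split("\t")
    return {"sentence1": sentence1, "sentence2": sentence2, "gold_label": gold_label}

row = parse_line("是的,我想一个洞穴也会有这样的问题\t我认为洞穴可能会有更严重的问题。\tneutral")
print(row["gold_label"])  # neutral
```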

### Data Fields

The data fields are the same among all splits.

- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values entailment (0), neutral (1), and contradiction (2). Note: in this dataset, 0 means similar and 2 means dissimilar.

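The integer encoding above can be captured as a small lookup table; a minimal sketch:

```python
# Label ids as documented above: 0 = entailment (similar), 2 = contradiction (dissimilar).
LABEL2ID = {"entailment": 0, "neutral": 1, "contradiction": 2}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

print(ID2LABEL[0])  # entailment
```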
### Data Splits

```shell
$ wc -l ChineseTextualInference-train.txt
420000 total
```
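The line count above can be reproduced in code; a minimal sketch (the file name follows the `wc` command above, the path is up to where you downloaded it):

```python
# Count examples by counting lines; matches `wc -l` when every line
# ends with a newline character.
def count_examples(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f)
```

For the train file, `count_examples("ChineseTextualInference-train.txt")` should match the 420,000 reported above.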

## Dataset Creation

### Curation Rationale

This is a Chinese SNLI (natural language inference) dataset, uploaded to Hugging Face Datasets for convenient use.

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

The copyright of the dataset belongs to the original authors; please respect the original dataset's copyright when using it.

```bibtex
@inproceedings{snli:emnlp2015,
  Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher and Manning, Christopher D.},
  Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  Publisher = {Association for Computational Linguistics},
  Title = {A large annotated corpus for learning natural language inference},
  Year = {2015}
}
```

### Annotations

#### Annotation process

#### Who are the annotators?

The original authors.

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context.

Systems that are successful at such a task may be more successful in modeling semantic representations.

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

- [liuhuanyong](https://github.com/liuhuanyong/ChineseTextualInference/): translated the data into Chinese
- [shibing624](https://github.com/shibing624): uploaded it to Hugging Face Datasets

### Licensing Information

For academic research use.

### Contributions

[shibing624](https://github.com/shibing624) added this dataset.