shibing624 committed on
Commit
cc2acf9
1 Parent(s): 36cd788

Update README.md

Files changed (1)
  1. README.md +8 -10
README.md CHANGED
@@ -11,17 +11,17 @@ multilinguality:
  size_categories:
  - 1M<n<10M
  source_datasets:
- - https://huggingface.co/datasets
+ - https://github.com/shibing624/text2vec
  task_categories:
  - text-classification
  task_ids:
  - natural-language-inference
  - semantic-similarity-scoring
  - text-scoring
- paperswithcode_id: snli
- pretty_name: Stanford Natural Language Inference
+ paperswithcode_id: nli
+ pretty_name: Chinese Natural Language Inference
  ---
- # Dataset Card for SNLI_zh
+ # Dataset Card for nli-zh-all

  ## Dataset Description
  - **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
@@ -35,9 +35,6 @@ pretty_name: Stanford Natural Language Inference
  整合了文本推理,相似,摘要,问答,指令微调等任务的820万高质量数据,并转化为匹配格式数据集。


-
-
-
  ### Supported Tasks and Leaderboards

  Supported Tasks: 支持中文文本匹配任务,文本相似度计算等相关任务。
@@ -70,7 +67,6 @@ The data fields are the same among all splits.

  after remove None and len(text) < 1 data:
  ```shell
-
  $ wc -l nli-zh-all/*
  48818 nli-zh-all/alpaca_gpt4-train.jsonl
  5000 nli-zh-all/amazon_reviews-train.jsonl
@@ -91,13 +87,14 @@ $ wc -l nli-zh-all/*
  93404 nli-zh-all/xlsum-train.jsonl
  1006218 nli-zh-all/zhihu_kol-train.jsonl
  8234680 total
-
  ```

  ### Data Length

  ![len](https://huggingface.co/datasets/shibing624/nli-zh-all/resolve/main/nli-zh-all-len.png)

+ count text length script: https://github.com/shibing624/text2vec/blob/master/examples/data/count_text_length.py
+
  ## Dataset Creation
  ### Curation Rationale
  受[m3e-base](https://huggingface.co/moka-ai/m3e-base#M3E%E6%95%B0%E6%8D%AE%E9%9B%86)启发,合并了中文高质量NLI(natural langauge inference)数据集,
@@ -132,7 +129,8 @@ $ wc -l nli-zh-all/*
  #### Who are the source language producers?
  数据集的版权归原作者所有,使用各数据集时请尊重原数据集的版权。

- - SNLI:
+ SNLI:
+
  @inproceedings{snli:emnlp2015,
  Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher, and Manning, Christopher D.},
  Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
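
The updated card metadata (text-classification, natural-language-inference, semantic-similarity-scoring, size 1M<n<10M) describes a matching-format dataset on the Hub. A minimal loading sketch with the Hugging Face `datasets` library; the single `train` split is an assumption based on the `*-train.jsonl` file names in the listing above, and the record schema is printed rather than assumed:

```python
# Minimal sketch: load shibing624/nli-zh-all with the `datasets` library.
# The "train" split name is assumed from the *-train.jsonl files listed above.
from datasets import load_dataset

dataset = load_dataset("shibing624/nli-zh-all", split="train")

print(dataset)      # row count and column names
print(dataset[0])   # one matching-format record
```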
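The `wc -l` listing in the diff is taken "after remove None and len(text) < 1 data". A rough sketch of that cleanup and per-file count, assuming only the `nli-zh-all/*.jsonl` layout shown above and making no assumptions about field names:

```python
# Rough sketch of the cleanup described above: skip JSONL records that contain
# a None value or an empty text field, then report per-file counts comparable
# to the `wc -l` listing. Field names are not assumed; all values are checked.
import glob
import json

for path in sorted(glob.glob("nli-zh-all/*.jsonl")):
    kept = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            values = list(record.values())
            if None in values:
                continue  # "remove None"
            if any(isinstance(v, str) and len(v) < 1 for v in values):
                continue  # "len(text) < 1"
            kept += 1
    print(f"{kept:>9} {path}")
```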
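The added "count text length script" line points at count_text_length.py for the length plot. A rough stand-in, not the referenced script itself, that summarizes character lengths of the string fields across the same JSONL files:

```python
# Stand-in sketch for a text-length count over the JSONL files (the referenced
# count_text_length.py may differ): collect character lengths of all string
# fields and print a few summary statistics behind the length plot.
import glob
import json

lengths = []
for path in glob.glob("nli-zh-all/*.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            lengths.extend(len(v) for v in record.values() if isinstance(v, str))

lengths.sort()
print("texts:", len(lengths))
print("mean:", sum(lengths) / len(lengths))
print("p50 / p95 / max:", lengths[len(lengths) // 2],
      lengths[int(len(lengths) * 0.95)], lengths[-1])
```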