[update] add pawsx_zh and sts_b
- README.md +7 -3
- data/pawsx_zh.jsonl +3 -0
- data/sts_b.jsonl +3 -0
- examples/preprocess/process_pawsx_zh.py +76 -0
- examples/preprocess/process_sts_b.py +76 -0
- sentence_pair.py +2 -0
README.md
CHANGED

@@ -14,16 +14,18 @@ size_categories:
 | Dataset | Language | Original data / project link | Number of samples | Description of the original data | Alternative download link |
 | :--- | :---: | :---: | :---: | :---: | :---: |
 | ChineseSTS | Chinese | [ChineseSTS](https://github.com/IAdmireu/ChineseSTS) | 24.7K | STS Chinese semantic textual similarity (shuffle the dataset before use) | [ChineseSTS](https://huggingface.co/datasets/tiansz/ChineseSTS) |
-| ccks2018_task3 | Chinese | [CCKS2018_3](https://www.biendata.xyz/competition/CCKS2018_3/data/) | 100K | CCKS 2018 WeBank intelligent customer-service question matching competition | |
+| ccks2018_task3 | Chinese | [BQ_corpus](http://icrc.hitsz.edu.cn/info/1037/1162.htm); [CCKS2018_3](https://www.biendata.xyz/competition/CCKS2018_3/data/) | TRAIN: 100K, VALID: 10K, TEST: 10K | CCKS 2018 WeBank intelligent customer-service question matching competition | [BQ_corpus](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/BQ_corpus) |
 | DIAC2019 | Chinese | [DIAC2019](https://www.biendata.xyz/competition/2019diac/data/) | 6K | Provided as question groups, each split into an equivalent part and a non-equivalent part; pairing equivalent questions with each other yields positive samples, while pairing equivalent with non-equivalent questions yields negative samples. A training set of 6,000 question groups is provided. | |
-| LCQMC | Chinese | [LCQMC](https://www.luge.ai/#/luge/dataDetail?id=14); [C18-1166.pdf](https://aclanthology.org/C18-1166.pdf) | TRAIN: 238766, VALID: 8802, TEST: 12500 | Chinese question-matching dataset in the Baidu Knows domain, built to address the lack of large-scale Chinese question-matching datasets; the data is extracted from user questions across different domains of Baidu Knows. | [lcqmc_data](https://github.com/xiaohai-AI/lcqmc_data) |
+| LCQMC | Chinese | [LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html); [LCQMC](https://www.luge.ai/#/luge/dataDetail?id=14); [C18-1166.pdf](https://aclanthology.org/C18-1166.pdf) | TRAIN: 238766, VALID: 8802, TEST: 12500 | Chinese question-matching dataset in the Baidu Knows domain, built to address the lack of large-scale Chinese question-matching datasets; the data is extracted from user questions across different domains of Baidu Knows. | [lcqmc_data](https://github.com/xiaohai-AI/lcqmc_data) |
-| AFQMC | Chinese | [AFQMC](https://tianchi.aliyun.com/dataset/106411) | TRAIN: 34334, VALID: 4316, TEST: 3861 | Ant Financial semantic similarity dataset for question-similarity computation: given two sentences a user wrote to customer service, decide algorithmically whether they express the same meaning. | |
+| AFQMC | Chinese | [AFQMC](https://tianchi.aliyun.com/dataset/106411) | TRAIN: 34334, VALID: 4316, TEST: 3861 | Ant Financial semantic similarity dataset for question-similarity computation: given two sentences a user wrote to customer service, decide algorithmically whether they express the same meaning. | [ATEC](https://huggingface.co/datasets/shibing624/nli_zh); [ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC) |
 | BUSTM | Chinese | [BUSTM](https://tianchi.aliyun.com/competition/entrance/531851/information); [BUSTM](https://github.com/xiaobu-coai/BUSTM) | 177173 samples in total: 54805 matched, 122368 unmatched | XiaoBu assistant short-text dialogue semantic matching competition dataset | [BUSTM](https://github.com/CLUEbenchmark/FewCLUE/tree/main/datasets/bustm) |
 | CHIP2019 | Chinese | [CHIP2019](https://www.biendata.xyz/competition/chip2019/) | 20K | Ping An Healthcare Technology disease question-answering transfer-learning competition dataset | |
 | COVID-19 | Chinese | [COVID-19](https://tianchi.aliyun.com/competition/entrance/231776/information) | | Tianchi COVID-19 similar sentence-pair judgement competition | [COVID-19](https://gitee.com/liangzongchang/COVID-19-sentence-pair/) |
 | Chinese-MNLI | Chinese | [Chinese-MNLI](https://github.com/pluto-junzeng/CNSD) | TRAIN: 390K, VALID: 12K, TEST: 13K | Generated from the original English dataset via translation plus partial manual correction (the source is an entailment/neutral/contradiction inference dataset, converted to sentence pairs). | |
 | Chinese-SNLI | Chinese | [Chinese-SNLI](https://github.com/pluto-junzeng/CNSD) | TRAIN: 550K, VALID: 10K, TEST: 10K | Generated from the original English dataset via translation plus partial manual correction (the source is an entailment/neutral/contradiction inference dataset, converted to sentence pairs). | |
 | OCNLI | Chinese | [OCNLI](https://github.com/CLUEbenchmark/OCNLI) | TRAIN: 50K, VALID: 3K, TEST: 3K | Native Chinese natural language inference dataset, the first large-scale Chinese NLI dataset built from original Chinese rather than from translation. | |
+| STS-B | Chinese | [STS-B](https://adapterhub.ml/explore/sts/sts-b/); [STS Benchmark](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) | TRAIN: 5749, VALID: 1500, TEST: 1379 | Semantic Textual Similarity Benchmark | [STS-B](https://pan.baidu.com/s/10yfKfTtcmLQ70-jzHIln1A?pwd=gf8y#list/path=%2F); [STS-B](https://huggingface.co/datasets/shibing624/nli_zh/viewer/STS-B) |
+| PAWSX-ZH | Chinese | [PAWSX](https://paperswithcode.com/paper/paws-x-a-cross-lingual-adversarial-dataset/review/) | TRAIN: 49.4K, VALID: 2K, TEST: 2K | Dataset translated into Chinese from PAWS-X | [PAWSX](https://pan.baidu.com/share/init?surl=ox0tJY3ZNbevHDeAqDBOPQ&pwd=mgjn); [PAWSX](https://huggingface.co/datasets/shibing624/nli_zh/viewer/PAWSX) |


 <details>

@@ -33,5 +35,7 @@ https://github.com/liucongg/NLPDataSet

 https://huggingface.co/datasets/tiansz/ChineseSTS
 https://zhuanlan.zhihu.com/p/454173790
+
+https://huggingface.co/datasets/shibing624/nli_zh
 </code></pre>
 </details>
data/pawsx_zh.jsonl
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9cf16ca90456852dec844396a64dd120db5acf1fa7d4dc600ea00cebaac8379
+size 16737660
data/sts_b.jsonl
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38f45fcff68071a92e45a4893d829a2f85d6b2fbb8f898729ba6c379d25aad22
+size 1789990
examples/preprocess/process_pawsx_zh.py
ADDED

@@ -0,0 +1,76 @@
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, '../../'))

from datasets import load_dataset
from tqdm import tqdm

from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()

    parser.add_argument("--dataset_path", default="shibing624/nli_zh", type=str)
    parser.add_argument("--dataset_name", default="PAWSX", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/pawsx_zh.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    # Download the PAWSX subset of shibing624/nli_zh into the local cache directory.
    dataset_dict = load_dataset(
        path=args.dataset_path,
        name=args.dataset_name,
        cache_dir=args.dataset_cache_dir,
    )
    print(dataset_dict)

    # Flatten all splits into a single jsonl file, one sample per line.
    with open(args.output_file, "w", encoding="utf-8") as f:
        for split, dataset in dataset_dict.items():
            for sample in dataset:
                sentence1 = sample["sentence1"]
                sentence2 = sample["sentence2"]
                label = sample["label"]

                # PAWSX labels are already binary; normalize them to the strings "0"/"1".
                label = str(int(label))

                if label not in ("0", "1", None):
                    print(label)
                    raise AssertionError

                row = {
                    "sentence1": sentence1,
                    "sentence2": sentence2,
                    "label": label,
                    "category": None,
                    "data_source": "PAWSX-ZH",
                    "split": split
                }

                row = json.dumps(row, ensure_ascii=False)
                f.write("{}\n".format(row))

    return


if __name__ == '__main__':
    main()
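Each line of data/pawsx_zh.jsonl written by this script is a standalone JSON object with the keys sentence1, sentence2, label, category, data_source and split. A minimal read-back sketch, assuming the script above has already been run with its default --output_file:

```python
# Minimal sketch: stream the converted jsonl back and count labels per split.
import json
from collections import Counter

counter = Counter()
with open("data/pawsx_zh.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        counter[(row["split"], row["label"])] += 1

print(counter)
```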
examples/preprocess/process_sts_b.py
ADDED

@@ -0,0 +1,76 @@
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, '../../'))

from datasets import load_dataset
from tqdm import tqdm

from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()

    parser.add_argument("--dataset_path", default="shibing624/nli_zh", type=str)
    parser.add_argument("--dataset_name", default="STS-B", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/sts_b.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    # Download the STS-B subset of shibing624/nli_zh into the local cache directory.
    dataset_dict = load_dataset(
        path=args.dataset_path,
        name=args.dataset_name,
        cache_dir=args.dataset_cache_dir,
    )
    print(dataset_dict)

    # Flatten all splits into a single jsonl file, one sample per line.
    with open(args.output_file, "w", encoding="utf-8") as f:
        for split, dataset in dataset_dict.items():
            for sample in dataset:
                sentence1 = sample["sentence1"]
                sentence2 = sample["sentence2"]
                label = sample["label"]

                # STS-B carries an integer similarity score; binarize it at 3.
                label = "1" if label >= 3 else "0"

                if label not in ("0", "1", None):
                    print(label)
                    raise AssertionError

                row = {
                    "sentence1": sentence1,
                    "sentence2": sentence2,
                    "label": label,
                    "category": None,
                    "data_source": "STS-B",
                    "split": split
                }

                row = json.dumps(row, ensure_ascii=False)
                f.write("{}\n".format(row))

    return


if __name__ == '__main__':
    main()
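The only substantive difference from the PAWSX script is the label handling: STS-B annotates each pair with an integer similarity score, and the line `label = "1" if label >= 3 else "0"` treats scores of 3 and above as a positive (similar) pair. A small illustration of that mapping, assuming the usual 0-5 STS score range:

```python
# Illustration only: how the thresholding above maps raw STS-B scores
# (assumed to range over 0..5) onto the binary labels written to sts_b.jsonl.
for score in range(6):
    print(score, "->", "1" if score >= 3 else "0")  # 0..2 -> "0", 3..5 -> "1"
```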
sentence_pair.py
CHANGED

@@ -20,6 +20,8 @@ _URLS = {
     "diac2019": "data/diac2019.jsonl",
     "lcqmc": "data/lcqmc.jsonl",
     "ocnli": "data/ocnli.jsonl",
+    "pawsx_zh": "data/pawsx_zh.jsonl",
+    "sts_b": "data/sts_b.jsonl",

 }
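With these two new `_URLS` entries, the added jsonl files become loadable as configurations of the `sentence_pair.py` loading script. A minimal sketch, assuming it is run from the repository root with a `datasets` version that still accepts local loading scripts; the Hub repository id is not shown in this diff, so the local script path is used instead:

```python
from datasets import load_dataset

# Load the two configurations added by this commit via the local loading script.
pawsx_zh = load_dataset("sentence_pair.py", name="pawsx_zh")
sts_b = load_dataset("sentence_pair.py", name="sts_b")

print(pawsx_zh)
print(sts_b)
```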