HoneyTian committed on
Commit b07144f
1 Parent(s): a362b8b
README.md CHANGED
@@ -12,6 +12,7 @@ Tips:
12
  The datasets below were collected from the web and organized as follows:
13
 
14
 
 
15
  Multilingual corpora
16
 
17
  | Dataset | Original data / project URL | Sample count | Description | Alternative download |
@@ -21,6 +22,7 @@ Tips:
21
  | stsb_multi_mt | [SemEval-2017 Task 1](https://arxiv.org/abs/1708.00055) | TRAIN: 104117, VALID: 25943, TEST: 22457 | **Shuffle before use.** Available languages: de, en, es, fr, it, nl, pl, pt, ru, zh | [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt) |
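The stsb_multi_mt row above warns to shuffle before use, because the per-language subsets arrive as contiguous blocks. A minimal sketch of a seeded shuffle, using hypothetical rows in place of the real `load_dataset` output:

```python
import random

# Hypothetical rows standing in for the concatenated per-language subsets.
rows = [{"text": f"sample-{lang}-{i}", "language": lang}
        for lang in ("de", "en", "zh")
        for i in range(3)]

rng = random.Random(42)  # fixed seed so the order is reproducible
rng.shuffle(rows)

languages = [r["language"] for r in rows]
```

The shuffle is in place and keeps every row, so per-language counts are unchanged; only the block ordering is broken up.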
22
 
23
 
 
24
  Language identification
25
 
26
  | Dataset | Original data / project URL | Sample count | Description | Alternative download |
@@ -31,6 +33,7 @@ Tips:
31
  | nbnn | [oai-nb-no-sbr-80](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-80/) | TRAIN: 1556212, VALID: 1957, TEST: 1944 | News texts from the Norwegian News Agency (NTB) translated from Bokmål into Nynorsk. | [NbAiLab/nbnn_language_detection](https://huggingface.co/datasets/NbAiLab/nbnn_language_detection) |
32
 
33
 
 
34
  Machine translation
35
 
36
  | Dataset | Original data / project URL | Sample count | Description | Alternative download |
@@ -50,13 +53,38 @@ Tips:
50
  | id_panl_bppt | | TRAIN: 47916 | A parallel text corpus for a multi-domain translation system, created by BPPT (the Indonesian Agency for the Assessment and Application of Technology) for the PAN Localization project, a regional initiative to develop local-language computing capacity in Asia. The dataset contains roughly 24K sentences in 4 topics (economy, international, science and technology, and sport). | [id_panl_bppt](https://huggingface.co/datasets/id_panl_bppt) |
51
  | igbo | [Igbo-English Machine Translation](https://arxiv.org/abs/2004.00648v1) | | This work describes the effort to build a standard machine-translation benchmark dataset for Igbo, one of the three major languages of Nigeria. | [igbo_english_machine_translation](https://huggingface.co/datasets/igbo_english_machine_translation) |
52
  | menyo20k_mt | [menyo20k_mt](https://arxiv.org/abs/2103.08647v3) | TRAIN: 19899, VALID: 6655, TEST: 13148 | MENYO-20k is a multi-domain parallel dataset with texts from news articles, TED talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and by professional translators. | [menyo20k_mt](https://huggingface.co/datasets/menyo20k_mt) |
53
- | para_pat | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 10242500 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
54
  | pib | [CVIT-PIB](https://arxiv.org/abs/2008.04860) | | A large-scale sentence-aligned corpus in 11 Indian languages; the CVIT-PIB corpus is the largest multilingual corpus available for Indian languages. | [pib](https://huggingface.co/datasets/pib) |
55
  | poleval2019_mt | | | PolEval is a SemEval-inspired evaluation campaign for natural-language-processing tools for Polish. | [poleval2019_mt](https://huggingface.co/datasets/poleval2019_mt) |
56
  | wmt19 | [statmt.org](https://www.statmt.org/wmt19/translation-task.html) | | The aim is to use publicly available data sources wherever possible. The main sources of training data are the Europarl, UN, News Commentary, and ParaCrawl corpora; a monolingual News Crawl corpus is also released, and further language-specific corpora are provided. | [wmt/wmt19](https://huggingface.co/datasets/wmt/wmt19) |
57
  | ro_sts_parallel | | | We present RO-STS-Parallel, a parallel Romanian-English dataset obtained by translating the STS English dataset into Romanian. | [ro_sts_parallel](https://huggingface.co/datasets/ro_sts_parallel) |
58
 
59
 
60
  Machine translation
61
 
62
  https://opus.nlpl.eu/
@@ -78,6 +106,7 @@ https://opus.nlpl.eu/
78
  | tanzil | [Tanzil](https://opus.nlpl.eu/Tanzil/corpus/version/Tanzil) | | | [tanzil](https://huggingface.co/datasets/tanzil) |
79
 
80
 
 
81
  Machine translation
82
 
83
  https://opus.nlpl.eu/
@@ -103,6 +132,9 @@ https://opus.nlpl.eu/
103
  | para_crawl_en_pl | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 6537110 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
104
  | para_crawl_en_pt | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 15186124 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
105
  | para_crawl_en_ro | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 3580912 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
 
 
 
106
 
107
 
108
 
 
12
  The datasets below were collected from the web and organized as follows:
13
 
14
 
15
+
16
  Multilingual corpora
17
 
18
  | Dataset | Original data / project URL | Sample count | Description | Alternative download |
 
22
  | stsb_multi_mt | [SemEval-2017 Task 1](https://arxiv.org/abs/1708.00055) | TRAIN: 104117, VALID: 25943, TEST: 22457 | **Shuffle before use.** Available languages: de, en, es, fr, it, nl, pl, pt, ru, zh | [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt) |
23
 
24
 
25
+
26
  Language identification
27
 
28
  | Dataset | Original data / project URL | Sample count | Description | Alternative download |
 
33
  | nbnn | [oai-nb-no-sbr-80](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-80/) | TRAIN: 1556212, VALID: 1957, TEST: 1944 | News texts from the Norwegian News Agency (NTB) translated from Bokmål into Nynorsk. | [NbAiLab/nbnn_language_detection](https://huggingface.co/datasets/NbAiLab/nbnn_language_detection) |
34
 
35
 
36
+
37
  Machine translation
38
 
39
  | Dataset | Original data / project URL | Sample count | Description | Alternative download |
 
53
  | id_panl_bppt | | TRAIN: 47916 | A parallel text corpus for a multi-domain translation system, created by BPPT (the Indonesian Agency for the Assessment and Application of Technology) for the PAN Localization project, a regional initiative to develop local-language computing capacity in Asia. The dataset contains roughly 24K sentences in 4 topics (economy, international, science and technology, and sport). | [id_panl_bppt](https://huggingface.co/datasets/id_panl_bppt) |
54
  | igbo | [Igbo-English Machine Translation](https://arxiv.org/abs/2004.00648v1) | | This work describes the effort to build a standard machine-translation benchmark dataset for Igbo, one of the three major languages of Nigeria. | [igbo_english_machine_translation](https://huggingface.co/datasets/igbo_english_machine_translation) |
55
  | menyo20k_mt | [menyo20k_mt](https://arxiv.org/abs/2103.08647v3) | TRAIN: 19899, VALID: 6655, TEST: 13148 | MENYO-20k is a multi-domain parallel dataset with texts from news articles, TED talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and by professional translators. | [menyo20k_mt](https://huggingface.co/datasets/menyo20k_mt) |

56
  | pib | [CVIT-PIB](https://arxiv.org/abs/2008.04860) | | A large-scale sentence-aligned corpus in 11 Indian languages; the CVIT-PIB corpus is the largest multilingual corpus available for Indian languages. | [pib](https://huggingface.co/datasets/pib) |
57
  | poleval2019_mt | | | PolEval is a SemEval-inspired evaluation campaign for natural-language-processing tools for Polish. | [poleval2019_mt](https://huggingface.co/datasets/poleval2019_mt) |
58
  | wmt19 | [statmt.org](https://www.statmt.org/wmt19/translation-task.html) | | The aim is to use publicly available data sources wherever possible. The main sources of training data are the Europarl, UN, News Commentary, and ParaCrawl corpora; a monolingual News Crawl corpus is also released, and further language-specific corpora are provided. | [wmt/wmt19](https://huggingface.co/datasets/wmt/wmt19) |
59
  | ro_sts_parallel | | | We present RO-STS-Parallel, a parallel Romanian-English dataset obtained by translating the STS English dataset into Romanian. | [ro_sts_parallel](https://huggingface.co/datasets/ro_sts_parallel) |
60
 
61
 
62
+
63
+ Machine translation
64
+
65
+ | Dataset | Original data / project URL | Sample count | Description | Alternative download |
66
+ | :--- | :---: | :---: | :---: | :---: |
67
+ | para_pat_cs_en | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 156028 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
68
+ | para_pat_de_en | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 3065565 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
69
+ | para_pat_de_fr | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 1243643 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
70
+ | para_pat_el_en | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 20234 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
71
+ | para_pat_en_es | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 1147278 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
72
+ | para_pat_en_hu | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 84824 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
73
+ | para_pat_en_ja | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 11971591 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
74
+ | para_pat_en_ko | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 4268110 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
75
+ | para_pat_en_pt | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 42623 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
76
+ | para_pat_en_ro | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 94326 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
77
+ | para_pat_en_ru | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 6795724 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
78
+ | para_pat_en_sk | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 44337 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
79
+ | para_pat_en_uk | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 177043 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
80
+ | para_pat_en_zh | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 9367823 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
81
+ | para_pat_es_fr | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 55795 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
82
+ | para_pat_fr_ja | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 599299 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
83
+ | para_pat_fr_ko | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 200044 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
84
+ | para_pat_fr_ru | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 19577 | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts | [para_pat](https://huggingface.co/datasets/para_pat) |
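This commit replaces the single para_pat configuration with per-language-pair subsets, so the TRAIN sizes above can be totalled as a quick sanity check. A small sketch using the three largest pairs copied from the table:

```python
# TRAIN sizes copied from the para_pat table above (three largest pairs).
train_counts = {
    "para_pat_en_ja": 11971591,
    "para_pat_en_zh": 9367823,
    "para_pat_en_ru": 6795724,
}

total = sum(train_counts.values())
print(total)  # 28135138
```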
85
+
86
+
87
+
88
  Machine translation
89
 
90
  https://opus.nlpl.eu/
 
106
  | tanzil | [Tanzil](https://opus.nlpl.eu/Tanzil/corpus/version/Tanzil) | | | [tanzil](https://huggingface.co/datasets/tanzil) |
107
 
108
 
109
+
110
  Machine translation
111
 
112
  https://opus.nlpl.eu/
 
132
  | para_crawl_en_pl | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 6537110 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
133
  | para_crawl_en_pt | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 15186124 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
134
  | para_crawl_en_ro | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 3580912 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
135
+ | para_crawl_en_sk | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 3047345 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
136
+ | para_crawl_en_sl | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 1282153 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
137
+ | para_crawl_en_sv | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | TRAIN: 6626302 | A web-scale parallel corpus of the official languages of Europe. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
138
 
139
 
140
 
data/{para_pat.jsonl → para_crawl_en_sk.jsonl} RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:125ac7672e3164064d3512f359a884150ae9e9e59f37622f0572157407fb689d
3
- size 13516036688
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a87fc26cb81e89f6a358ff19af120999c8d0a84b41340e4b53692d3bb7541d60
3
+ size 552364035
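Each data file in this commit is stored as a Git LFS pointer like the one above. A small helper, assuming the standard `key value` line format of the LFS pointer spec, to read one:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents for data/para_crawl_en_sk.jsonl, copied from the diff above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:a87fc26cb81e89f6a358ff19af120999c8d0a84b41340e4b53692d3bb7541d60\n"
    "size 552364035\n"
)

info = parse_lfs_pointer(pointer)
```

The `size` field is the byte count of the real file, which is why the rename above shrinks from ~13.5 GB (the old combined para_pat) to ~552 MB (just the en-sk pairs).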
data/para_crawl_en_sl.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0448d0a8b45908902c2f9a5eba3449bab096f700dda831aadaaa77f03729d08e
3
+ size 273962941
data/para_crawl_en_sv.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2517d9126dd4ea6a77af63f32b7868a8aa2bbcf46de1df359dbb40548f666fea
3
+ size 1335220345
data/para_pat_cs_en.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ef48b64bd626dfecf3664d2197081e9af59e928c367b3eb106c82c66226051c6
3
+ size 126803615
data/para_pat_de_en.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a8bf2cd12c8398eb5c8175c39775a3270a937c33248545c2e581c37e9df24ff
3
+ size 980669225
data/para_pat_de_fr.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e6b42338438571675c804a64e385d07601178ad023d57615ebbcff7c675451a
3
+ size 448168141
data/para_pat_el_en.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:262ae50b48f6969172360b0586691155794895884493a383e14e676c8411bbe6
3
+ size 24704396
data/para_pat_en_es.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b766e53f9556bc49f30ca50a2f46c4b281fb998b95bf304faaa103837c9fbdb
3
+ size 386858974
data/para_pat_en_hu.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f59210b7c6ca8c4ae40aeb215f50b88529552b2439b61359710f2bdb38695d7
3
+ size 86090965
data/para_pat_en_ja.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0d99a0d51b474a7ddbe56457bfd32737d8b7d8725b2480649060c8cc9b18ddc8
3
+ size 3923552393
data/para_pat_en_ko.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:15a2ac085c2fc7b02db098881001723f78aa449a285a7fd186f5d5a0eb89b2ae
3
+ size 1654173028
data/para_pat_en_pt.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:62e7ffd26069282570dfcec22abc29ec9c0f1ea0b406e2dbb7736df6028126cd
3
+ size 37598599
data/para_pat_en_ro.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:64cdef5e751e2eecee7647a69aa4ba9f65dd831dff562ca53f60e2bddea99d0a
3
+ size 83873351
data/para_pat_en_ru.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9d0badc949e26d8fc705af2e590bc612b1fd01763ce8e9bd82820e53b7d34aa5
3
+ size 2361506439
data/para_pat_en_sk.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f407a8a26a702889528798b70df77a794cda8652b168185b6c6b3eaadc0f60a6
3
+ size 33038222
data/para_pat_en_uk.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89cf6760cd50ad81a6c73fcade9874211033daf7b08cfdb6e02b9262220b55dd
3
+ size 148040587
data/para_pat_en_zh.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b93683a7e49ee2916b9d8292127dd2840e2e2de91727496189c7d2f2988b4e54
3
+ size 2897727275
data/para_pat_es_fr.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:99ebbbb2c82b33df8bf0ad7cf52bb59f1d5525cfa70d3596bfa6414fa048d8db
3
+ size 49823412
data/para_pat_fr_ja.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ab013f6c3f5d3638104afa47241663b45c74f9dd5801a572c87fd882b76edc7a
3
+ size 246281219
data/para_pat_fr_ko.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8102b99d887072fa16c8b1e55cd939c8d089ad0a4eed74c8b87effeee3e5301
3
+ size 197829003
data/para_pat_fr_ru.jsonl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:35955e5097c0a74e3f92a1cc15607d9f0b1c90b00da286fa7680c1bd44e4dff1
3
+ size 31750767
dataset_details.md CHANGED
@@ -1535,6 +1535,211 @@ fr: 656
1535
  ![open_subtitles_text_length.jpg](docs/picture/open_subtitles_text_length.jpg)
1536
 
1537
 
1538
  #### php
1539
  The statistics below are for the train split
1540
 
 
1535
  ![open_subtitles_text_length.jpg](docs/picture/open_subtitles_text_length.jpg)
1536
 
1537
 
1538
+ #### para_crawl_en_pl
1539
+ The statistics below are for the train split
1540
+
1541
+ ```text
1542
+ Language counts:
1543
+ en: 3268977
1544
+ pl: 3268133
1545
+ ```
1546
+
1547
+ Sample examples:
1548
+
1549
+ | Dataset | Language | Sample |
1550
+ | :---: | :---: | :---: |
1551
+ | para_crawl_en_pl | en | 4. errors deltaE white – distortion palette of gray , analogous to the color deltaE |
1552
+ | para_crawl_en_pl | en | 4. errors deltaE white – distortion palette of gray, analogous to the color deltaE |
1553
+ | para_crawl_en_pl | en | A wy , also met with “ by professional test ” image quality at different hdmi cables ? Czekam na odpowiedzi w komentarzach |
1554
+ | para_crawl_en_pl | pl | 4. błędy deltaE bieli – przekłamania palety szarości, analogiczne do deltaE kolorów |
1555
+ | para_crawl_en_pl | pl | A wy, też spotkaliście się z „fachowymi testami” jakości obrazu na różnych kablach hdmi? Czekam na odpowiedzi w komentarzach |
1556
+ | para_crawl_en_pl | pl | Anatomia prawej terroru w Niemczech i niekompetentny BND tak raczej : "Homeland Security ??? " |
1557
+
1558
+
1559
+ <details>
1560
+ <summary>Text length</summary>
1561
+ <pre><code>10-20: 13466
1562
+ 20-30: 265266
1563
+ 30-40: 495840
1564
+ 40-50: 507676
1565
+ 50-60: 532475
1566
+ 60-70: 509995
1567
+ 70-80: 455718
1568
+ 80-90: 389501
1569
+ 90-100: 358157
1570
+ 100-110: 321469
1571
+ 110-120: 281841
1572
+ 120-130: 269854
1573
+ 130-140: 263661
1574
+ 140-150: 228062
1575
+ 150-160: 196223
1576
+ 160-170: 158115
1577
+ 170-180: 138006
1578
+ 180-190: 119215
1579
+ 190-200: 107041
1580
+ 200-210: 925529
1581
+ </code></pre>
1582
+ </details>
1583
+
1584
+ Text-length histogram:
1585
+
1586
+ ![para_crawl_en_pl_text_length.jpg](docs/picture/para_crawl_en_pl_text_length.jpg)
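The text-length tables in dataset_details.md bin lengths into 10-character buckets. A sketch of how such a table could be produced, with the binning scheme inferred from the labels above:

```python
from collections import Counter

def bucket_label(length: int, width: int = 10) -> str:
    """Map a character length to a '10-20'-style bucket label."""
    lo = (length // width) * width
    return f"{lo}-{lo + width}"

texts = ["short", "x" * 25, "a somewhat longer sentence here"]
histogram = Counter(bucket_label(len(t)) for t in texts)
```

Note the final `200-210` row in each table is far larger than its neighbours, which suggests it actually accumulates everything of length 200 and above.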
1587
+
1588
+
1589
+ #### para_crawl_en_pt
1590
+ The statistics below are for the train split
1591
+
1592
+ ```text
1593
+ Language counts:
1594
+ en: 7604199
1595
+ pt: 7581925
1596
+ ```
1597
+
1598
+ Sample examples:
1599
+
1600
+ | Dataset | Language | Sample |
1601
+ | :---: | :---: | :---: |
1602
+ | para_crawl_en_pt | en | 23 April 1905: Official ceremony the laying of the foundation stone . |
1603
+ | para_crawl_en_pt | en | 23 April 1905: Official ceremony the laying of the foundation stone. |
1604
+ | para_crawl_en_pt | en | Look familiar this face? Mmm….maybe not, but should be familiar to all of you. |
1605
+ | para_crawl_en_pt | pt | 23 Abril 1905: Cerimônia oficial colocação da primeira pedra . |
1606
+ | para_crawl_en_pt | pt | 23 Abril 1905: Cerimônia oficial colocação da primeira pedra. |
1607
+ | para_crawl_en_pt | pt | Look familiar this face? mmm….talvez não, mas deve ser familiar a todos vocês. |
1608
+
1609
+
1610
+ <details>
1611
+ <summary>Text length</summary>
1612
+ <pre><code>10-20: 28282
1613
+ 20-30: 493824
1614
+ 30-40: 1007643
1615
+ 40-50: 1171581
1616
+ 50-60: 1330254
1617
+ 60-70: 1238647
1618
+ 70-80: 1050363
1619
+ 80-90: 916893
1620
+ 90-100: 817676
1621
+ 100-110: 736767
1622
+ 110-120: 651559
1623
+ 120-130: 582723
1624
+ 130-140: 525997
1625
+ 140-150: 476314
1626
+ 150-160: 441785
1627
+ 160-170: 389696
1628
+ 170-180: 352789
1629
+ 180-190: 315622
1630
+ 190-200: 288462
1631
+ 200-210: 2369247
1632
+ </code></pre>
1633
+ </details>
1634
+
1635
+ Text-length histogram:
1636
+
1637
+ ![para_crawl_en_pt_text_length.jpg](docs/picture/para_crawl_en_pt_text_length.jpg)
1638
+
1639
+
1640
+ #### para_crawl_en_sl
1641
+ The statistics below are for the train split
1642
+
1643
+ ```text
1644
+ Language counts:
1645
+ sl: 641207
1646
+ en: 640946
1647
+ ```
1648
+
1649
+ Sample examples:
1650
+
1651
+ | Dataset | Language | Sample |
1652
+ | :---: | :---: | :---: |
1653
+ | para_crawl_en_sl | en | 1. First, press the START (or sign in the left corner of the screen) |
1654
+ | para_crawl_en_sl | en | An anatomy of the right terror in Germany and the incompetent BND so rather : "Homeland Security ??? ” |
1655
+ | para_crawl_en_sl | en | An anatomy of the right terror in Germany and the incompetent BND so rather: "Homeland Security ???” |
1656
+ | para_crawl_en_sl | sl | 1. Najprej pritisnite tipko START (ali znak v levem kotu zaslona) |
1657
+ | para_crawl_en_sl | sl | Anatomija pravi teror v Nemčiji in nesposobni BND tako precej : "Homeland Security ??? " |
1658
+ | para_crawl_en_sl | sl | Anatomija pravi teror v Nemčiji in nesposobni BND tako precej: "Homeland Security ???" |
1659
+
1660
+
1661
+ <details>
1662
+ <summary>Text length</summary>
1663
+ <pre><code>10-20: 1135
1664
+ 20-30: 33844
1665
+ 30-40: 67979
1666
+ 40-50: 78931
1667
+ 50-60: 84083
1668
+ 60-70: 83616
1669
+ 70-80: 82419
1670
+ 80-90: 80251
1671
+ 90-100: 74910
1672
+ 100-110: 65987
1673
+ 110-120: 60489
1674
+ 120-130: 56953
1675
+ 130-140: 51168
1676
+ 140-150: 47274
1677
+ 150-160: 42314
1678
+ 160-170: 38112
1679
+ 170-180: 34451
1680
+ 180-190: 32783
1681
+ 190-200: 29340
1682
+ 200-210: 236114
1683
+ </code></pre>
1684
+ </details>
1685
+
1686
+ Text-length histogram:
1687
+
1688
+ ![para_crawl_en_sl_text_length.jpg](docs/picture/para_crawl_en_sl_text_length.jpg)
1689
+
1690
+
1691
+ #### para_pat_en_uk
1692
+ The statistics below are for the train split
1693
+
1694
+ ```text
1695
+ Language counts:
1696
+ uk: 88533
1697
+ en: 88510
1698
+ ```
1699
+
1700
+ Sample examples:
1701
+
1702
+ | Dataset | Language | Sample |
1703
+ | :---: | :---: | :---: |
1704
+ | para_pat_en_uk | en | A replaceable handle to kitchen appliances comprises a bakelite handle with a connecting mechanism, available therein, a plastic part, which includes the upper section and the lower one, a spring, an aluminium part, which includes the upper section and the lower one. |
1705
+ | para_pat_en_uk | en | A method for predicting the risk of osteoporosis in the patients with systemic lupus erythematosus comprises X-ray imaging, the analysis of MTHFR C667T and eNOS T786C gene polymorphisms. The combination of the polymorphisms suggests the risk of osteoporosis. |
1706
+ | para_pat_en_uk | en | A method for growing red cabbage using the EM-preparation includes treating the soil with the given preparation prior to sowing with a rate of 20 l/ha. The seeds are soaked with a rate of 1 l/t and foliar fertilizings are carried out during vegetation with a rate of 2 l/ha in three terms. |
1707
+ | para_pat_en_uk | uk | Знімна ручка до кухонного приладдя містить бакелітову ручку з наявним у ній з'єднувальним механізмом, пластикову частину, яка включає верхню секцію і нижню секцію, пружину, алюмінієву частину, яка включає верхню секцію і нижню секцію. |
1708
+ | para_pat_en_uk | uk | Спосіб прогнозування розвитку остеопорозу при системному червоному вовчаку включає проведення рентгенографії, визначення поліморфізму генів MTHFR С667Т та eNOS Т786С, і при їх поєднанні прогнозування розвитку остеопорозу. |
1709
+ | para_pat_en_uk | uk | Спосіб вирощування капусти червоноголової з застосуванням ЕМ-препарату включає обробку даним препаратом ґрунту до посіву з нормою 20 л/га. Намочують насіння з нормою 1 л/т та здійснюють позакореневі підживлення під час вегетації з нормою 2 л/га в три строки. |
1710
+
1711
+
1712
+ <details>
1713
+ <summary>Text length</summary>
1714
+ <pre><code>0-10: 10
1715
+ 10-20: 4
1716
+ 20-30: 2
1717
+ 30-40: 15
1718
+ 40-50: 37
1719
+ 50-60: 63
1720
+ 60-70: 133
1721
+ 70-80: 248
1722
+ 80-90: 400
1723
+ 90-100: 555
1724
+ 100-110: 754
1725
+ 110-120: 923
1726
+ 120-130: 1132
1727
+ 130-140: 1278
1728
+ 140-150: 1535
1729
+ 150-160: 1685
1730
+ 160-170: 1901
1731
+ 170-180: 1998
1732
+ 180-190: 2154
1733
+ 190-200: 2404
1734
+ 200-210: 159812
1735
+ </code></pre>
1736
+ </details>
1737
+
1738
+ Text-length histogram:
1739
+
1740
+ ![para_pat_en_uk_text_length.jpg](docs/picture/para_pat_en_uk_text_length.jpg)
1741
+
1742
+
1743
  #### php
1744
  The statistics below are for the train split
1745
 
docs/picture/para_crawl_en_pl_text_length.jpg ADDED

Git LFS Details

  • SHA256: 88762e7a90bd75d1382640035c2286ebcc4470f61857958a3995ac33a523dddb
  • Pointer size: 130 Bytes
  • Size of remote file: 17.2 kB
docs/picture/para_crawl_en_pt_text_length.jpg ADDED

Git LFS Details

  • SHA256: a2ebde575526382907279c10bb77545b94185d707c0938ccf6fe1c38a65348c4
  • Pointer size: 130 Bytes
  • Size of remote file: 16.4 kB
docs/picture/para_crawl_en_sl_text_length.jpg ADDED

Git LFS Details

  • SHA256: 49c4e044e23a35a3362e1e95bf2645d7fc346477629d2eaf9948af0db862c81c
  • Pointer size: 130 Bytes
  • Size of remote file: 16.6 kB
docs/picture/para_pat_en_uk_text_length.jpg ADDED

Git LFS Details

  • SHA256: 80b9e9426138fb0fc7bb069b14563984cbc857ab666febedbfc40a84f2900f28
  • Pointer size: 130 Bytes
  • Size of remote file: 17.4 kB
examples/load_data/plan_1.py CHANGED
@@ -1,17 +1,13 @@
1
  #!/usr/bin/python3
2
  # -*- coding: utf-8 -*-
3
- """
4
- After training, the model turned out to identify short sentences poorly.
5
- When a sentence is long enough, it classifies accurately with probabilities close to 1.0.
6
- For short sentences the results are hard to accept, e.g. "你好" is identified as de (German).
7
- """
8
  import argparse
9
  import json
10
  import os
 
11
  import sys
12
 
13
  pwd = os.path.abspath(os.path.dirname(__file__))
14
- sys.path.append(os.path.join(pwd, "../../"))
15
 
16
  from datasets import load_dataset, DownloadMode
17
 
@@ -114,14 +110,13 @@ def main():
114
  total = int(row[2])
115
  subsets = [e.strip() for e in row[3].split(";")]
116
 
117
- train_count = 0
118
- valid_count = 0
119
  for subset in subsets:
120
  if subset in subset_dataset_dict.keys():
121
  dataset_dict = subset_dataset_dict[subset]
122
  else:
123
  dataset_dict = load_dataset(
124
- "../../language_identification.py",
125
  name=subset,
126
  cache_dir=args.dataset_cache_dir,
127
  # download_mode=DownloadMode.FORCE_REDOWNLOAD
@@ -134,38 +129,31 @@ def main():
134
  language = sample["language"]
135
  data_source = sample["data_source"]
136
 
137
- if train_count > total:
138
  break
139
- if language == abbr:
140
- row_ = {
141
- "text": text,
142
- "label": language,
143
- "data_source": data_source,
144
- "split": "train",
145
- }
146
- row_ = json.dumps(row_, ensure_ascii=False)
147
  ftrain.write("{}\n".format(row_))
148
- train_count += 1
149
-
150
- if "validation" in dataset_dict:
151
- valid_dataset = dataset_dict["validation"]
152
- for sample in valid_dataset:
153
- text = sample["text"]
154
- language = sample["language"]
155
- data_source = sample["data_source"]
156
-
157
- if valid_count > total:
158
- break
159
- if language == abbr:
160
- row_ = {
161
- "text": text,
162
- "label": language,
163
- "data_source": data_source,
164
- "split": "valid",
165
- }
166
- row_ = json.dumps(row_, ensure_ascii=False)
167
- fvalid.write("{}\n".format(row_))
168
- valid_count += 1
169
 
170
  return
171
 
 
1
  #!/usr/bin/python3
2
  # -*- coding: utf-8 -*-
3
  import argparse
4
  import json
5
  import os
6
+ import random
7
  import sys
8
 
9
  pwd = os.path.abspath(os.path.dirname(__file__))
10
+ sys.path.append(os.path.join(pwd, "../../../"))
11
 
12
  from datasets import load_dataset, DownloadMode
13
 
 
110
  total = int(row[2])
111
  subsets = [e.strip() for e in row[3].split(";")]
112
 
113
+ count = 0
 
114
  for subset in subsets:
115
  if subset in subset_dataset_dict.keys():
116
  dataset_dict = subset_dataset_dict[subset]
117
  else:
118
  dataset_dict = load_dataset(
119
+ "qgyd2021/language_identification",
120
  name=subset,
121
  cache_dir=args.dataset_cache_dir,
122
  # download_mode=DownloadMode.FORCE_REDOWNLOAD
 
129
  language = sample["language"]
130
  data_source = sample["data_source"]
131
 
132
+ if count > total:
133
  break
134
+
135
+ if language != abbr:
136
+ continue
137
+
138
+ split = "train" if random.random() < 0.8 else "valid"
139
+
140
+ row_ = {
141
+ "text": text,
142
+ "label": language,
143
+ "language": full,
144
+ "data_source": data_source,
145
+ "split": split,
146
+ }
147
+ row_ = json.dumps(row_, ensure_ascii=False)
148
+
149
+ if split == "train":
150
  ftrain.write("{}\n".format(row_))
151
+ elif split == "valid":
152
+ fvalid.write("{}\n".format(row_))
153
+ else:
154
+ raise AssertionError
155
+
156
+ count += 1
 
157
 
158
  return
159
 
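The rewritten plan_1.py collapses the separate train/valid loops into one pass that assigns each sample to a split at random with `split = "train" if random.random() < 0.8 else "valid"`. A seeded sketch of that assignment (the script itself does not seed, so its split is non-deterministic):

```python
import random

random.seed(0)  # seeded here only to make the example reproducible

# Same rule as plan_1.py: each sample independently goes to train with p=0.8.
splits = ["train" if random.random() < 0.8 else "valid" for _ in range(10_000)]
train_ratio = splits.count("train") / len(splits)
```

Over many samples the train fraction concentrates near 0.8, though individual runs vary.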
examples/load_data/plan_2.py ADDED
@@ -0,0 +1,163 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+import json
+import os
+import random
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--train_subset",
+        default="train.jsonl",
+        type=str
+    )
+    parser.add_argument(
+        "--valid_subset",
+        default="valid.jsonl",
+        type=str
+    )
+    args = parser.parse_args()
+    return args
+
+
+s = """
+| ar | arabic | 100000 | iwslt2017 |
+| bg | bulgarian | 100000 | xnli |
+| bn | bengali | 36064 | open_subtitles |
+| bs | bosnian | 10212 | open_subtitles |
+| cs | czech | 100000 | emea |
+| da | danish | 100000 | open_subtitles |
+| de | german | 100000 | iwslt2017 |
+| el | modern greek | 100000 | emea |
+| en | english | 100000 | iwslt2017 |
+| eo | esperanto | 94101 | tatoeba; open_subtitles |
+| es | spanish | 100000 | xnli |
+| et | estonian | 100000 | emea |
+| fi | finnish | 100000 | ecb; kde4 |
+| fo | faroese | 23807 | nordic_langid |
+| fr | french | 100000 | iwslt2017 |
+| ga | irish | 100000 | multi_para_crawl |
+| gl | galician | 3096 | tatoeba |
+| hi | hindi | 100000 | xnli |
+| hi_en | hindi | 7180 | cmu_hinglish_dog |
+| hr | croatian | 95844 | hrenwac_para |
+| hu | hungarian | 3801 | europa_ecdc_tm; europa_eac_tm |
+| hy | armenian | 660 | open_subtitles |
+| id | indonesian | 23940 | id_panl_bppt |
+| is | icelandic | 100000 | multi_para_crawl |
+| it | italian | 100000 | iwslt2017 |
+| ja | japanese | 100000 | iwslt2017 |
+| ko | korean | 100000 | iwslt2017 |
+| lt | lithuanian | 100000 | emea |
+| lv | latvian | 100000 | multi_para_crawl |
+| mr | marathi | 51807 | tatoeba |
+| mt | maltese | 100000 | multi_para_crawl |
+| nl | dutch | 100000 | kde4 |
+| no | norwegian | 100000 | multi_para_crawl |
+| pl | polish | 100000 | para_crawl_en_pl |
+| pt | portuguese | 100000 | para_crawl_en_pt |
+| ro | romanian | 100000 | iwslt2017 |
+| ru | russian | 100000 | xnli |
+| sk | slovak | 100000 | multi_para_crawl |
+| sl | slovenian | 100000 | para_crawl_en_sl |
+| sw | swahili | 100000 | xnli |
+| sv | swedish | 100000 | kde4 |
+| th | thai | 100000 | xnli |
+| tl | tagalog | 97241 | multi_para_crawl |
+| tn | serpeti | 100000 | autshumato |
+| tr | turkish | 100000 | xnli |
+| ts | dzonga | 100000 | autshumato |
+| uk | ukrainian | 88533 | para_pat_en_uk |
+| ur | urdu | 100000 | xnli |
+| vi | vietnamese | 100000 | xnli |
+| yo | yoruba | 9970 | menyo20k_mt |
+| zh | chinese | 100000 | xnli |
+| zu | zulu, south africa | 26801 | autshumato |
+"""
+
+
+def main():
+    args = get_args()
+
+    subset_dataset_dict = dict()
+
+    lines = s.strip().split("\n")
+
+    with open(args.train_subset, "w", encoding="utf-8") as ftrain, open(args.valid_subset, "w", encoding="utf-8") as fvalid:
+        for line in lines:
+            row = str(line).split("|")
+            row = [col.strip() for col in row if len(col) != 0]
+
+            if len(row) != 4:
+                raise AssertionError("not 4 item, line: {}".format(line))
+
+            abbr = row[0]
+            full = row[1]
+            total = int(row[2])
+            subsets = [e.strip() for e in row[3].split(";")]
+
+            count = 0
+            for subset in subsets:
+                if subset in subset_dataset_dict.keys():
+                    dataset_dict = subset_dataset_dict[subset]
+                else:
+                    dataset_dict = load_dataset(
+                        "qgyd2021/language_identification",
+                        name=subset,
+                        cache_dir=args.dataset_cache_dir,
+                        # download_mode=DownloadMode.FORCE_REDOWNLOAD
+                    )
+                    subset_dataset_dict[subset] = dataset_dict
+
+                train_dataset = dataset_dict["train"]
+                for sample in train_dataset:
+                    text = sample["text"]
+                    language = sample["language"]
+                    data_source = sample["data_source"]
+
+                    if count >= total:
+                        break
+
+                    if language != abbr:
+                        continue
+
+                    split = "train" if random.random() < 0.8 else "valid"
+
+                    row_ = {
+                        "text": text,
+                        "label": language,
+                        "language": full,
+                        "data_source": data_source,
+                        "split": split,
+                    }
+                    row_ = json.dumps(row_, ensure_ascii=False)
+
+                    if split == "train":
+                        ftrain.write("{}\n".format(row_))
+                    elif split == "valid":
+                        fvalid.write("{}\n".format(row_))
+                    else:
+                        raise AssertionError
+
+                    count += 1
+
+    return
+
+
+if __name__ == "__main__":
+    main()
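For reference, the table parsing that `main()` performs inline on each row of `s` can be sketched as a standalone helper. The `parse_table_row` name is hypothetical (the script does not define it); the body mirrors the inline logic above:

```python
def parse_table_row(line):
    # Split a markdown-style row such as
    # "| eo | esperanto | 94101 | tatoeba; open_subtitles |"
    # on "|", drop the empty edge cells, and strip whitespace.
    row = [col.strip() for col in line.split("|") if len(col) != 0]
    if len(row) != 4:
        raise AssertionError("not 4 item, line: {}".format(line))
    abbr = row[0]                                   # language code, e.g. "eo"
    full = row[1]                                   # full language name
    total = int(row[2])                             # sample budget for this language
    subsets = [e.strip() for e in row[3].split(";")]  # one or more source subsets
    return abbr, full, total, subsets
```

Given the esperanto row, this yields `("eo", "esperanto", 94101, ["tatoeba", "open_subtitles"])`.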
examples/make_subset_details.py CHANGED
@@ -12,7 +12,7 @@ from project_settings import project_path
 
 def get_args():
     parser = argparse.ArgumentParser()
-    parser.add_argument("--dataset_name", default="tatoeba", type=str)
+    parser.add_argument("--dataset_name", default="para_pat_en_uk", type=str)
     parser.add_argument(
         "--dataset_cache_dir",
         default=(project_path / "hub_datasets").as_posix(),
examples/preprocess/preprocess_para_crawl.py CHANGED
@@ -27,7 +27,7 @@ def get_args():
     )
     parser.add_argument(
         "--output_file",
-        default=(project_path / "data/para_crawl_en_ro.jsonl"),
+        default=(project_path / "data/para_crawl_en_sv.jsonl"),
         type=str
     )
 
@@ -58,12 +58,12 @@
         # "ennl",
         # "enpl",
         # "enpt",
-        "enro",
+        # "enro",
         # "ensk",
-        # "ensl", "ensv"
+        # "ensl",
+        "ensv"
     ]
 
-    # TODO: the dataset is too large to load completely.
     text_set = set()
     counter = defaultdict(int)
     with open(args.output_file, "w", encoding="utf-8") as f:
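The selected `name_list` entry and the `--output_file` default follow one convention: a concatenated pair name such as `"ensv"` corresponds to the output file `data/para_crawl_en_sv.jsonl`. A minimal sketch of that mapping (the `para_crawl_output_name` helper is hypothetical; the script hard-codes both strings by hand):

```python
def para_crawl_output_name(pair):
    # Map a para_crawl pair name like "ensv" to the jsonl filename the
    # preprocess script writes, e.g. "para_crawl_en_sv.jsonl".
    # Assumes two concatenated 2-letter ISO codes, as in the name_list above.
    if len(pair) != 4:
        raise ValueError("expected two concatenated 2-letter codes: {}".format(pair))
    return "para_crawl_{}_{}.jsonl".format(pair[:2], pair[2:])
```

Keeping the commented-out entries and the `--output_file` default in sync by hand is the current workflow; the helper just makes the convention explicit.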
examples/preprocess/preprocess_para_pat.py CHANGED
@@ -26,7 +26,7 @@ def get_args():
     )
     parser.add_argument(
         "--output_file",
-        default=(project_path / "data/para_pat.jsonl"),
+        default=(project_path / "data/para_pat_fr_ru.jsonl"),
         type=str
     )
 
@@ -38,12 +38,25 @@
     args = get_args()
 
     name_list = [
-        "cs-en", "de-en", "de-fr", "el-en", "en-es", "en-fr", "en-hu", "en-ja",
-        "en-ko", "en-pt", "en-ro", "en-ru", "en-sk",
-        "en-uk",
-        "en-zh",
-        "es-fr",
-        "fr-ja", "fr-ko", "fr-ru"
+        # "cs-en",
+        # "de-en",
+        # "de-fr",
+        # "el-en",
+        # "en-es",
+        # "en-fr",
+        # "en-hu",
+        # "en-ja",
+        # "en-ko",
+        # "en-pt",
+        # "en-ro",
+        # "en-ru",
+        # "en-sk",
+        # "en-uk",
+        # "en-zh",
+        # "es-fr",
+        # "fr-ja",
+        # "fr-ko",
+        "fr-ru"
     ]
 
     text_set = set()
language_identification.py CHANGED
@@ -50,7 +50,27 @@ _URLS = {
     "para_crawl_en_pl": "data/para_crawl_en_pl.jsonl",
     "para_crawl_en_pt": "data/para_crawl_en_pt.jsonl",
     "para_crawl_en_ro": "data/para_crawl_en_ro.jsonl",
-    "para_pat": "data/para_pat.jsonl",
+    "para_crawl_en_sk": "data/para_crawl_en_sk.jsonl",
+    "para_crawl_en_sl": "data/para_crawl_en_sl.jsonl",
+    "para_crawl_en_sv": "data/para_crawl_en_sv.jsonl",
+    "para_pat_cs_en": "data/para_pat_cs_en.jsonl",
+    "para_pat_de_en": "data/para_pat_de_en.jsonl",
+    "para_pat_de_fr": "data/para_pat_de_fr.jsonl",
+    "para_pat_el_en": "data/para_pat_el_en.jsonl",
+    "para_pat_en_es": "data/para_pat_en_es.jsonl",
+    "para_pat_en_hu": "data/para_pat_en_hu.jsonl",
+    "para_pat_en_ja": "data/para_pat_en_ja.jsonl",
+    "para_pat_en_ko": "data/para_pat_en_ko.jsonl",
+    "para_pat_en_pt": "data/para_pat_en_pt.jsonl",
+    "para_pat_en_ro": "data/para_pat_en_ro.jsonl",
+    "para_pat_en_ru": "data/para_pat_en_ru.jsonl",
+    "para_pat_en_sk": "data/para_pat_en_sk.jsonl",
+    "para_pat_en_uk": "data/para_pat_en_uk.jsonl",
+    "para_pat_en_zh": "data/para_pat_en_zh.jsonl",
+    "para_pat_es_fr": "data/para_pat_es_fr.jsonl",
+    "para_pat_fr_ja": "data/para_pat_fr_ja.jsonl",
+    "para_pat_fr_ko": "data/para_pat_fr_ko.jsonl",
+    "para_pat_fr_ru": "data/para_pat_fr_ru.jsonl",
     "php": "data/php.jsonl",
     "scandi_langid": "data/scandi_langid.jsonl",
     "stsb_multi_mt": "data/stsb_multi_mt.jsonl",
@@ -177,7 +197,27 @@ class LanguageIdentification(datasets.GeneratorBasedBuilder):
         datasets.BuilderConfig(name="para_crawl_en_pl", version=VERSION, description="para_crawl_en_pl"),
         datasets.BuilderConfig(name="para_crawl_en_pt", version=VERSION, description="para_crawl_en_pt"),
         datasets.BuilderConfig(name="para_crawl_en_ro", version=VERSION, description="para_crawl_en_ro"),
-        datasets.BuilderConfig(name="para_pat", version=VERSION, description="para_pat"),
+        datasets.BuilderConfig(name="para_crawl_en_sk", version=VERSION, description="para_crawl_en_sk"),
+        datasets.BuilderConfig(name="para_crawl_en_sl", version=VERSION, description="para_crawl_en_sl"),
+        datasets.BuilderConfig(name="para_crawl_en_sv", version=VERSION, description="para_crawl_en_sv"),
+        datasets.BuilderConfig(name="para_pat_cs_en", version=VERSION, description="para_pat_cs_en"),
+        datasets.BuilderConfig(name="para_pat_de_en", version=VERSION, description="para_pat_de_en"),
+        datasets.BuilderConfig(name="para_pat_de_fr", version=VERSION, description="para_pat_de_fr"),
+        datasets.BuilderConfig(name="para_pat_el_en", version=VERSION, description="para_pat_el_en"),
+        datasets.BuilderConfig(name="para_pat_en_es", version=VERSION, description="para_pat_en_es"),
+        datasets.BuilderConfig(name="para_pat_en_hu", version=VERSION, description="para_pat_en_hu"),
+        datasets.BuilderConfig(name="para_pat_en_ja", version=VERSION, description="para_pat_en_ja"),
+        datasets.BuilderConfig(name="para_pat_en_ko", version=VERSION, description="para_pat_en_ko"),
+        datasets.BuilderConfig(name="para_pat_en_pt", version=VERSION, description="para_pat_en_pt"),
+        datasets.BuilderConfig(name="para_pat_en_ro", version=VERSION, description="para_pat_en_ro"),
+        datasets.BuilderConfig(name="para_pat_en_ru", version=VERSION, description="para_pat_en_ru"),
+        datasets.BuilderConfig(name="para_pat_en_sk", version=VERSION, description="para_pat_en_sk"),
+        datasets.BuilderConfig(name="para_pat_en_uk", version=VERSION, description="para_pat_en_uk"),
+        datasets.BuilderConfig(name="para_pat_en_zh", version=VERSION, description="para_pat_en_zh"),
+        datasets.BuilderConfig(name="para_pat_es_fr", version=VERSION, description="para_pat_es_fr"),
+        datasets.BuilderConfig(name="para_pat_fr_ja", version=VERSION, description="para_pat_fr_ja"),
+        datasets.BuilderConfig(name="para_pat_fr_ko", version=VERSION, description="para_pat_fr_ko"),
+        datasets.BuilderConfig(name="para_pat_fr_ru", version=VERSION, description="para_pat_fr_ru"),
         datasets.BuilderConfig(name="php", version=VERSION, description="php"),
         datasets.BuilderConfig(name="scandi_langid", version=VERSION, description="scandi_langid"),
         datasets.BuilderConfig(name="stsb_multi_mt", version=VERSION, description="stsb_multi_mt"),
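The 18 new `para_pat_*` keys all repeat one pattern: `para_pat_<src>_<tgt>` mapped to `data/para_pat_<src>_<tgt>.jsonl`. A sketch showing how such entries could be derived from the pair list (hypothetical helper, not used by the loading script, which lists every entry by hand):

```python
# Pairs mirroring the name_list in preprocess_para_pat.py (en-fr has no
# corresponding _URLS entry in this commit, so it is omitted here).
PARA_PAT_PAIRS = [
    "cs-en", "de-en", "de-fr", "el-en", "en-es", "en-hu", "en-ja", "en-ko",
    "en-pt", "en-ro", "en-ru", "en-sk", "en-uk", "en-zh", "es-fr",
    "fr-ja", "fr-ko", "fr-ru",
]


def para_pat_urls(pairs):
    # Build the {config_name: data_file} entries added to _URLS above,
    # e.g. "en-uk" -> {"para_pat_en_uk": "data/para_pat_en_uk.jsonl"}.
    return {
        "para_pat_{}".format(p.replace("-", "_")):
            "data/para_pat_{}.jsonl".format(p.replace("-", "_"))
        for p in pairs
    }
```

The same mapping could generate the `BuilderConfig` names, keeping `_URLS` and `BUILDER_CONFIGS` from drifting apart.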
load_data.md CHANGED
@@ -111,22 +111,23 @@
 | mt | maltese | 100000 | multi_para_crawl |
 | nl | dutch | 100000 | kde4 |
 | no | norwegian | 100000 | multi_para_crawl |
-| pl | polish | - | ecb |
-| pt | portuguese | 10000 | tatoeba |
-| ro | romanian | 10000 | kde4 |
-| ru | russian | 10000 | xnli |
-| sk | slovak | 10000 | multi_para_crawl |
-| sl | slovenian | 4589 | europa_ecdc_tm; europa_eac_tm |
-| sw | swahili | 10000 | xnli |
-| sv | swedish | 10000 | kde4 |
-| th | thai | 10000 | xnli |
-| tl | tagalog | 10000 | multi_para_crawl |
-| tn | serpeti | 10000 | autshumato |
-| tr | turkish | 10000 | xnli |
-| ts | dzonga | 10000 | autshumato |
-| ur | urdu | 10000 | xnli |
-| vi | vietnamese | 10000 | xnli |
+| pl | polish | 100000 | para_crawl_en_pl |
+| pt | portuguese | 100000 | para_crawl_en_pt |
+| ro | romanian | 100000 | iwslt2017 |
+| ru | russian | 100000 | xnli |
+| sk | slovak | 100000 | multi_para_crawl |
+| sl | slovenian | 100000 | para_crawl_en_sl |
+| sw | swahili | 100000 | xnli |
+| sv | swedish | 100000 | kde4 |
+| th | thai | 100000 | xnli |
+| tl | tagalog | 97241 | multi_para_crawl |
+| tn | serpeti | 100000 | autshumato |
+| tr | turkish | 100000 | xnli |
+| ts | dzonga | 100000 | autshumato |
+| uk | ukrainian | 88533 | para_pat_en_uk |
+| ur | urdu | 100000 | xnli |
+| vi | vietnamese | 100000 | xnli |
 | yo | yoruba | 9970 | menyo20k_mt |
-| zh | chinese | 10000 | xnli |
-| zu | zulu, south africa | 10000 | autshumato |
+| zh | chinese | 100000 | xnli |
+| zu | zulu, south africa | 26801 | autshumato |
 