update

- README.md +11 -11
- data/bible_para.jsonl +3 -0
- data/ecb.jsonl +3 -0
- data/emea.jsonl +3 -0
- data/kde4.jsonl +3 -0
- data/multi_para_crawl.jsonl +3 -0
- data/open_subtitles.jsonl +3 -0
- data/para_pat.jsonl +3 -0
- data/php.jsonl +3 -0
- data/pib.jsonl +0 -0
- dataset_details.md +544 -3
- docs/picture/bible_para_text_length.jpg +3 -0
- docs/picture/ecb_text_length.jpg +3 -0
- docs/picture/emea_text_length.jpg +3 -0
- docs/picture/kde4_text_length.jpg +3 -0
- docs/picture/multi_para_crawl_text_length.jpg +3 -0
- docs/picture/open_subtitles_text_length.jpg +3 -0
- docs/picture/php_text_length.jpg +3 -0
- examples/make_subset_details.py +1 -1
- examples/preprocess/preprocess_bible_para.py +89 -0
- examples/preprocess/preprocess_ecb.py +89 -0
- examples/preprocess/preprocess_emea.py +89 -0
- examples/preprocess/preprocess_igbo.py +1 -0
- examples/preprocess/preprocess_kde4.py +89 -0
- examples/preprocess/preprocess_multi_para_crawl.py +96 -0
- examples/preprocess/preprocess_open_subtitles.py +96 -0
- examples/preprocess/preprocess_para_crawl.py +91 -0
- examples/preprocess/preprocess_para_pat.py +5 -2
- examples/preprocess/preprocess_php.py +89 -0
- examples/preprocess/preprocess_pib.py +1 -0
- examples/preprocess/preprocess_poleval2019_mt.py +1 -0
- language_identification.py +23 -0
README.md
CHANGED
@@ -35,7 +35,6 @@ Tips:
| Dataset | Original data / project | Samples | Description | Alternative download |
| :--- | :---: | :---: | :---: | :---: |
-| tatoeba | [tatoeba](https://tatoeba.org/); [Tatoeba Paper](https://arxiv.org/abs/1812.10464v2) | TRAIN: 702895 | Tatoeba is a collection of sentences and translations. | [tatoeba](https://huggingface.co/datasets/tatoeba) |
| bucc2018 | [bucc2018](https://comparable.limsi.fr/bucc2018/bucc2018-task.html) | TRAIN: 2173318, TEST: 2125879 | Shared task: identifying parallel sentences in comparable corpora; languages: de, en, fr, ru, zh | |
| iwslt2017 | [2017.iwslt-1.1.pdf](https://aclanthology.org/2017.iwslt-1.1.pdf) | TRAIN: 2482649, VALID: 11480, TEST: 72470 | The IWSLT 2017 multilingual task addresses text translation across all directions among English, German, Dutch, Italian, and Romanian. | [iwslt2017](https://huggingface.co/datasets/iwslt2017) |
| bsd_ja_en | [2008.01940v1](https://arxiv.org/abs/2008.01940v1) | TRAIN: 35755, VALID: 3636, TEST: 3702 | Although machine translation of written text has seen great progress in recent years thanks to the increasing availability of parallel corpora and corpus-based training techniques, automatic translation of spoken text and dialogue remains challenging even for modern systems. In this paper, we aim to improve the machine translation quality of conversational text by introducing a newly constructed Japanese-English business conversation parallel corpus. | [bsd_ja_en](https://huggingface.co/datasets/bsd_ja_en) |
@@ -51,9 +50,9 @@
| id_panl_bppt | | TRAIN: 47916 | A parallel text corpus for a multi-domain translation system, created by BPPT (the Indonesian Agency for the Assessment and Application of Technology) for the PAN Localization project, a regional initiative to develop local-language computing capacity in Asia. The dataset contains roughly 24K sentences divided across 4 topics (economy, international affairs, science and technology, and sport). | [id_panl_bppt](https://huggingface.co/datasets/id_panl_bppt) |
| igbo | [Igbo-English Machine Translation](https://arxiv.org/abs/2004.00648v1) | | In this work we discuss the effort to build a standard machine translation benchmark dataset for Igbo, one of the three major languages of Nigeria. | [igbo_english_machine_translation](https://huggingface.co/datasets/igbo_english_machine_translation) |
| menyo20k_mt | [menyo20k_mt](https://arxiv.org/abs/2103.08647v3) | TRAIN: 19899, VALID: 6655, TEST: 13148 | MENYO-20k is a multi-domain parallel dataset with texts drawn from news articles, TED talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and by professional translators. | [menyo20k_mt](https://huggingface.co/datasets/menyo20k_mt) |
-| para_pat | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) |
+| para_pat | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | TRAIN: 10242500 | ParaPat: the multi-million-sentence parallel corpus of patent abstracts. | [para_pat](https://huggingface.co/datasets/para_pat) |
| pib | [CVIT-PIB](https://arxiv.org/abs/2008.04860) | | A large-scale sentence-aligned corpus in 11 Indian languages; the CVIT-PIB corpus is the largest multilingual corpus available for Indian languages. | [pib](https://huggingface.co/datasets/pib) |
-| poleval2019_mt | |
+| poleval2019_mt | | | PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. | [poleval2019_mt](https://huggingface.co/datasets/poleval2019_mt) |


Machine translation
@@ -62,14 +61,15 @@ https://opus.nlpl.eu/

| Dataset | Original data / project | Samples | Description | Alternative download |
| :--- | :---: | :---: | :---: | :---: |
-| bible_para | [bible-uedin](https://opus.nlpl.eu/bible-uedin/corpus/version/bible-uedin) |
-| ecb | [ECB](https://opus.nlpl.eu/ECB/corpus/version/ECB); |
-| emea | [EMEA](https://opus.nlpl.eu/EMEA/corpus/version/EMEA); |
-| kde4 | [KDE4](https://opus.nlpl.eu/KDE4/corpus/version/KDE4); [apps.kde.org](https://apps.kde.org/zh-cn/); [opus.nlpl.eu](https://opus.nlpl.eu/) |
-| multi_para_crawl | [ParaCrawl](https://aclanthology.org/2020.acl-main.417/); [paracrawl.eu](http://paracrawl.eu); [MultiParaCrawl](https://opus.nlpl.eu/MultiParaCrawl/corpus/version/MultiParaCrawl) | We report on methods to create the largest publicly available parallel corpora by crawling the web, using open-source software. |
-| open_subtitles | [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles); [L16-1147.pdf](https://aclanthology.org/L16-1147.pdf) |
-| para_crawl | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) |
-| php | [PHP](https://opus.nlpl.eu/PHP/corpus/version/PHP) |
+| bible_para | [bible-uedin](https://opus.nlpl.eu/bible-uedin/corpus/version/bible-uedin) | TRAIN: 245321 | A multilingual parallel corpus created from translations of the Bible compiled by Christos Christodoulopoulos and Mark Steedman. | [bible_para](https://huggingface.co/datasets/bible_para) |
+| ecb | [ECB](https://opus.nlpl.eu/ECB/corpus/version/ECB); | TRAIN: 713510 | | [ecb](https://huggingface.co/datasets/ecb) |
+| emea | [EMEA](https://opus.nlpl.eu/EMEA/corpus/version/EMEA); | TRAIN: 2600773 | | [emea](https://huggingface.co/datasets/emea) |
+| kde4 | [KDE4](https://opus.nlpl.eu/KDE4/corpus/version/KDE4); [apps.kde.org](https://apps.kde.org/zh-cn/); [opus.nlpl.eu](https://opus.nlpl.eu/) | TRAIN: 885030 | | [kde4](https://huggingface.co/datasets/kde4) |
+| multi_para_crawl | [ParaCrawl](https://aclanthology.org/2020.acl-main.417/); [paracrawl.eu](http://paracrawl.eu); [MultiParaCrawl](https://opus.nlpl.eu/MultiParaCrawl/corpus/version/MultiParaCrawl) | TRAIN: 885030 | We report on methods to create the largest publicly available parallel corpora by crawling the web, using open-source software. | [multi_para_crawl](https://huggingface.co/datasets/multi_para_crawl) |
+| open_subtitles | [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles); [L16-1147.pdf](https://aclanthology.org/L16-1147.pdf) | TRAIN: 11662044 | We present a new major release of the OpenSubtitles collection of parallel corpora, compiled from a large database of movie and TV subtitles and comprising a total of 1689 bitexts covering 2.6 billion sentences across 60 languages. The release also incorporates a number of enhancements in subtitle preprocessing and alignment, such as automatic correction of OCR errors and the use of metadata to estimate the quality of each subtitle and to score subtitle pairs. | [open_subtitles](https://huggingface.co/datasets/open_subtitles) |
+| para_crawl | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | | Web-scale parallel corpora for the official European languages. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
+| php | [PHP](https://opus.nlpl.eu/PHP/corpus/version/PHP) | TRAIN: 44007 | A parallel corpus originally extracted from http://se.php.net/download-docs.php. The corpus is rather noisy. | [php](https://huggingface.co/datasets/php) |
+| tatoeba | [Tatoeba](https://opus.nlpl.eu/Tatoeba/corpus/version/Tatoeba); [tatoeba](https://tatoeba.org/); [Tatoeba Paper](https://arxiv.org/abs/1812.10464v2) | TRAIN: 702895 | Tatoeba is a collection of sentences and translations. | [tatoeba](https://huggingface.co/datasets/tatoeba) |


### Reference sources
data/bible_para.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11a0dd7683fda8a4d3fe4f03ecd7d80d3de21ae622ec700b97c187e2e40902f3
+size 56795164

data/ecb.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d3e82ae15215ca00a3163e4f1e86e132bb61d6e93d5b755f535c55cd65db49b
+size 193115404

data/emea.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b294e135f2f53d5193aedbf9aa37046c1a1319da2131b64c1e22f5caa5444a5
+size 533869399

data/kde4.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb426472441dae39f3ea05c2d745be9ef24fa2c8a6895dc1cab97f6d22cf35f5
+size 120914324

data/multi_para_crawl.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cafbd0ebeb291bdd5669c89cd881657fc2b6325e86373f2ddb1a536004f48574
+size 768585465

data/open_subtitles.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a6187bcdad02d2c42018f98af04b410f388f73d714ae3e9f70e73127fc5b5180
+size 1610096121

data/para_pat.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:125ac7672e3164064d3512f359a884150ae9e9e59f37622f0572157407fb689d
+size 13516036688

data/php.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea83f71e75d335fec724f37a6735b98444f6ab853c8e48405ce934187abd337c
+size 7166249

data/pib.jsonl
ADDED
File without changes
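Each of the `data/*.jsonl` entries above is stored as a Git LFS pointer rather than as the data itself: a three-line `key value` text file recording the pointer spec version, the sha256 `oid` of the real payload, and its `size` in bytes. A minimal sketch of reading such a pointer (the parsing helper below is illustrative and not part of this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Turn the 'key value' lines of a Git LFS pointer file into a dict
    with keys like 'version', 'oid', and 'size'."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer stored as data/bible_para.jsonl in this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:11a0dd7683fda8a4d3fe4f03ecd7d80d3de21ae622ec700b97c187e2e40902f3
size 56795164"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # prints 56795164
```

The `size` field makes the payload sizes in the listing above verifiable without downloading anything; the `oid` is the content address under which LFS stores the actual `.jsonl`.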
dataset_details.md
CHANGED
@@ -128,6 +128,78 @@ zu: 26801
![autshumato_text_length.jpg](docs/picture/autshumato_text_length.jpg)


#### bsd_ja_en
All of the following is information about the train split

@@ -292,6 +364,169 @@ en: 5966
![cmu_hinglish_dog_text_length.jpg](docs/picture/cmu_hinglish_dog_text_length.jpg)


#### europa_eac_tm
All of the following is information about the train split

@@ -809,6 +1044,77 @@ de: 203597
![iwslt2017_text_length.jpg](docs/picture/iwslt2017_text_length.jpg)


#### menyo20k_mt
All of the following is information about the train split

@@ -912,7 +1218,6 @@ ru: 3216
| mike0307 | it | La Russia ha ratificato la versione rivista del trattato. |
| mike0307 | it | Un uomo sta sciando in montagna con un cane. |

-
<details>
<summary>Text length</summary>
<pre><code>0-10: 591
@@ -944,6 +1249,89 @@
![mike0307_text_length.jpg](docs/picture/mike0307_text_length.jpg)


#### nbnn
All of the following is information about the train split

@@ -1009,7 +1397,6 @@ is: 33948
fo: 23807
```

-
Sample examples:

| Dataset | Language | Sample |
@@ -1033,7 +1420,6 @@ fo: 23807
| nordic_langid | is | den varmaste månaden är juli då medeltemperaturen är c och den kallaste är januari med c |
| nordic_langid | is | ett tropiskt höglandsklimat råder i trakten |

-
<details>
<summary>Text length</summary>
<pre><code>0-10: 65
@@ -1065,6 +1451,161 @@
![nordic_langid_text_length.jpg](docs/picture/nordic_langid_text_length.jpg)


#### scandi_langid
All of the following is information about the train split
![autshumato_text_length.jpg](docs/picture/autshumato_text_length.jpg)


+#### bible_para
+All of the following is information about the train split
+
+```text
+Language counts:
+en: 61352
+de: 30835
+hi: 30744
+fr: 30619
+es: 30614
+no: 30594
+fi: 30563
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| bible_para | de | Am Anfang schuf Gott Himmel und Erde. |
+| bible_para | de | Und die Erde war wüst und leer, und es war finster auf der Tiefe; und der Geist Gottes schwebte auf dem Wasser. |
+| bible_para | de | Und Gott sprach: Es werde Licht! und es ward Licht. |
+| bible_para | en | In the beginning God created the heavens and the earth. |
+| bible_para | en | Now the earth was formless and empty. Darkness was on the surface of the deep. God's Spirit was hovering over the surface of the waters. |
+| bible_para | en | God said, "Let there be light," and there was light. |
+| bible_para | es | En el principio creó Dios los cielos y la tierra |
+| bible_para | es | Y la tierra estaba sin orden y vacía. Había tinieblas sobre la faz del océano, y el Espíritu de Dios se movía sobre la faz de las aguas |
+| bible_para | es | Entonces dijo Dios: "Sea la luz", y fue la luz |
+| bible_para | fi | Alussa loi Jumala taivaan ja maan. |
+| bible_para | fi | Ja maa oli autio ja tyhjä, ja pimeys oli syvyyden päällä, ja Jumalan Henki liikkui vetten päällä. |
+| bible_para | fi | Ja Jumala sanoi: "Tulkoon valkeus". Ja valkeus tuli. |
+| bible_para | fr | Au commencement, Dieu créa les cieux et la terre. |
+| bible_para | fr | La terre était informe et vide: il y avait des ténèbres à la surface de l`abîme, et l`esprit de Dieu se mouvait au-dessus des eaux. |
+| bible_para | fr | Dieu dit: Que la lumière soit! Et la lumière fut. |
+| bible_para | hi | आदि में परमेश्वर ने आकाश और पृथ्वी की सृष्टि की। |
+| bible_para | hi | और पृथ्वी बेडौल और सुनसान पड़ी थी; और गहरे जल के ऊपर अन्धियारा था: तथा परमेश्वर का आत्मा जल के ऊपर मण्डलाता था। |
+| bible_para | hi | तब परमेश्वर ने कहा, उजियाला हो: तो उजियाला हो गया। |
+| bible_para | no | I begynnelsen skapte Gud himmelen og jorden. |
+| bible_para | no | Og jorden var øde og tom, og det var mørke over det store dyp, og Guds Ånd svevde over vannene. |
+| bible_para | no | Da sa Gud: Det bli lys! Og det blev lys. |
+
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 3
+10-20: 117
+20-30: 1107
+30-40: 1868
+40-50: 4172
+50-60: 8359
+60-70: 14603
+70-80: 19601
+80-90: 20681
+90-100: 19162
+100-110: 18012
+110-120: 17120
+120-130: 16575
+130-140: 15225
+140-150: 14108
+150-160: 12565
+160-170: 11165
+170-180: 9563
+180-190: 8232
+190-200: 6958
+200-210: 26125
+</code></pre>
+</details>
+
+Text length histogram:
+
+![bible_para_text_length.jpg](docs/picture/bible_para_text_length.jpg)
+
+
#### bsd_ja_en
All of the following is information about the train split
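The text-length tables in this file bucket sample lengths into 10-character bins, with everything of length 200 or more collapsed into the final `200-210` row, which is why that last row is disproportionately large. A minimal sketch of that bucketing, assuming plain strings as input (the repo's own statistics script is `examples/make_subset_details.py`, whose exact logic is not shown in this diff):

```python
from collections import Counter

def length_histogram(texts, bin_width=10, cap=200):
    """Count text lengths into fixed-width bins; lengths >= cap all land
    in the final overflow bin, mirroring the oversized last row of the
    tables in this file."""
    hist = Counter()
    for text in texts:
        n = min(len(text), cap)
        lo = n // bin_width * bin_width
        hist[f"{lo}-{lo + bin_width}"] += 1
    return hist

h = length_histogram(["Am Anfang schuf Gott Himmel und Erde.", "x" * 250])
print(h["30-40"], h["200-210"])  # prints 1 1
```

Under this scheme the per-bin counts sum to the total number of samples, so e.g. the bible_para rows should add up to its TRAIN count.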
![cmu_hinglish_dog_text_length.jpg](docs/picture/cmu_hinglish_dog_text_length.jpg)


+#### ecb
+All of the following is information about the train split
+
+```text
+Language counts:
+en: 134237
+nl: 110712
+de: 94650
+fr: 89368
+el: 78247
+it: 75944
+cs: 60797
+fi: 34964
+pl: 34591
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| ecb | cs | Navigation Path : Home > The European Central Bank > Pro návštěvníky > Zarezervujte si návštěvu |
+| ecb | cs | The European Central Bank |
+| ecb | cs | Press |
+| ecb | en | Navigation Path : Home > The European Central Bank > Visiting the ECB > Book a visit |
+| ecb | en | Book a visit |
+| ecb | en | At least 3 months in advance In view of the large number of requests , visitor groups are kindly asked to book at least 3 months before the planned date of their visit . |
+| ecb | de | Navigation Path : Home > The European Central Bank > Rechtlicher Rahmen > Alle Rechtsakte nach Datum geordnet > Alle Jahre > CON / 1998/24 |
+| ecb | de | Stellungnahme zur Satzung der Bank of England ( CON / 1998/24 ) |
+| ecb | de | Vereinigtes Königreich , 8.5.1998 , pdf 13 kB , en |
+| ecb | fr | Navigation Path : Home > The European Central Bank > Cadre juridique > Par ordre chronologique > Toutes années confondues > CON / 1998/24 |
+| ecb | fr | Avis sur le statut de la Bank of England ( CON / 1998/24 ) |
+| ecb | fr | Royaume-Uni , 8.5.1998 , pdf 13 kB , en |
+| ecb | el | Επίσηµη Εφηµερίδα της Ευρωπαϊκής Ένωσης |
+| ecb | el | ΕΥΡΩΠΑΪΚΗ ΚΕΝΤΡΙΚΗ ΤΡΑΠΕΖΑ ΑΠΟΦΑΣΗ ΤΗΣ ΕΥΡΩΠΑΪΚΗΣ ΚΕΝΤΡΙΚΗΣ ΤΡΑΠΕΖΑΣ της 28ης Νοεµßρίου 2003 σχετικά µε την έγκριση της ποσότητας των κερµάτων που πρόκειται να εκδοθούν το 2004 ( ΕΚΤ / 2003/15 ) ( 2003/860 / ΕΚ ) ΤΟ ∆ΙΟΙΚΗΤΙΚΟ ΣΥΜΒΟΥΛΙΟ ΤΗΣ ΕΥΡΩΠΑΪΚΗΣ ΚΕΝΤΡΙΚΗΣ ΤΡΑΠΕΖΑΣ , |
+| ecb | el | Έχοντας υπόψη τη συνθήκη για την ίδρυση της Ευρωπαϊκής Κοινότητας , και ιδίως το άρθρο 106 παράγραφος 2 , Εκτιµώντας τα ακόλουθα : ( 1 ) |
+| ecb | it | Gazzetta ufficiale dell' Unione europea |
+| ecb | it | BANCA CENTRALE EUROPEA DECISIONE DELLA BANCA CENTRALE EUROPEA del 28 novembre 2003 relativa all' approvazione del volume di conio delle monete metalliche per il 2004 ( BCE / 2003/15 ) ( 2003/860 / CE ) IL CONSIGLIO DIRETTIVO DELLA BANCA CENTRALE EUROPEA , |
+| ecb | it | visto il trattato che istituisce la Comunità europea , in particolare , l' articolo 106 , paragrafo 2 , considerando quanto segue : ( 1 ) |
+| ecb | nl | Bijgaand bericht is opgesteld in overleg met Chris Heemeryck en Willy Scheerlinck . |
+| ecb | nl | Het is duidelijk dat de markten er belang bij hebben dat alle betaalsystemen , ook retailbetalingen , op een veilige en effiënte wijze functioneren . |
+| ecb | nl | Anderzijds is het ook zo dat systeemkritische systemen meer aandacht verdienen gezien de risico 's eraan verbonden waarbij het implementeren van de Core Principles en het eraan verbonden kostenplaatje gemakkelijker te rechtvaardigen is . |
+| ecb | fi | Pyöristyksistä johtuen yhteenlaskut eivät välttämättä täsmää . |
+| ecb | fi | Navigation Path : Home > The European Central Bank > Säädöskokoelma > Kaikki EKP : n lausunnot > CON / 2009/58 |
+| ecb | fi | Lausunto valtion pääomatukijärjestelmästä ( CON / 2009/58 ) |
+| ecb | pl | Poszczególne pozycje mogą nie sumować się ze względu na zaokrąglenia . |
+| ecb | pl | Navigation Path : Home > The European Central Bank > Akty prawne > Wszystkie opinie EBC > CON / 2009/58 |
+| ecb | pl | Opinia w sprawie programu rekapitalizacji realizowanego przez państwo ( CON / 2009/58 ) |
+
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 10481
+10-20: 25366
+20-30: 24446
+30-40: 26315
+40-50: 23151
+50-60: 21774
+60-70: 21877
+70-80: 21334
+80-90: 21933
+90-100: 24920
+100-110: 24543
+110-120: 26321
+120-130: 28123
+130-140: 27011
+140-150: 26237
+150-160: 26409
+160-170: 24372
+170-180: 22429
+180-190: 21672
+190-200: 20193
+200-210: 244603
+</code></pre>
+</details>
+
+Text length histogram:
+
+![ecb_text_length.jpg](docs/picture/ecb_text_length.jpg)
+
+
+#### emea
+All of the following is information about the train split
+
+```text
+Language counts:
+bg: 276277
+fr: 266731
+es: 264491
+el: 263603
+cs: 262326
+mt: 259185
+sk: 256418
+lt: 254590
+et: 252368
+de: 244784
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| emea | bg | European Medicines Agency |
+| emea | bg | EMEA/ H/ C/ 471 |
+| emea | bg | ЕВРОПЕЙСКИ ДОКЛАД ЗА ОБЩЕСТВЕНА ОЦЕНКА (EPAR) |
+| emea | el | ΕΥΡΩΠΑΪΚΗ ∆ΗΜΟΣΙΑ ΕΚΘΕΣΗ ΑΞΙΟΛΟΓΗΣΗΣ (EPAR) |
+| emea | el | Περίληψη EPAR για το κοινό |
+| emea | el | Το παρόν έγγραφο αποτελεί σύνοψη της Ευρωπαϊκής ∆ηµόσιας Έκθεσης Αξιολόγησης (EPAR). |
+| emea | cs | EVROPSKÁ VEŘEJNÁ ZPRÁVA O HODNOCENÍ (EPAR) |
+| emea | cs | Souhrn zprávy EPAR určený pro veřejnost |
+| emea | cs | Tento dokument je souhrnem Evropské veřejné zprávy o hodnocení (European Public Assessment Report, EPAR). |
+| emea | et | EUROOPA AVALIK HINDAMISARUANNE |
+| emea | et | Kokkuvõte üldsusele |
+| emea | et | Käesolev dokument on Euroopa avaliku hindamisaruande kokkuvõte. |
+| emea | de | EMEA/H/C/471 |
+| emea | de | EUROPÄISCHER ÖFFENTLICHER BEURTEILUNGSBERICHT (EPAR) |
+| emea | de | Zusammenfassung des EPAR für die Öffentlichkeit |
+| emea | mt | RAPPORT TA 'VALUTAZZJONI PUBBLIKA EWROPEW (EPAR) |
+| emea | mt | Sommarju ta 'l- EPAR għall- pubbliku |
+| emea | mt | Jispjega kif il- Kumitat għall- Prodotti Mediċinali għall- Użu mill- Bniedem (CHMP) ivvaluta l - istudji mwettqa, sabiex jaslu għar- rakkomandazzjonijiet tagħhom dwar kif tintuża l- mediċina. |
+| emea | es | INFORME PÚBLICO EUROPEO DE EVALUACIÓN (EPAR) |
+| emea | es | Resumen del EPAR para el público general |
+| emea | es | En el presente documento se resume el Informe Público Europeo de Evaluación (EPAR). |
+| emea | lt | EUROPOS VIEŠAS VERTINIMO PROTOKOLAS (EPAR) |
+| emea | lt | EPAR santrauka plačiajai visuomenei |
+| emea | lt | Šis dokumentas yra Europos viešo vertinimo protokolo (EPAR) santrauka. |
+| emea | fr | RAPPORT EUROPÉEN PUBLIC D’ ÉVALUATION (EPAR) |
+| emea | fr | Résumé EPAR à l’ intention du public |
+| emea | fr | Ce document est un résumé du rapport européen public d’ évaluation (EPAR). |
+| emea | sk | EURÓPSKA VEREJNÁ HODNOTIACA SPRÁVA (EPAR) |
+| emea | sk | Súhrn správy EPAR pre verejnosť |
+| emea | sk | Tento dokument je súhrn Európskej verejnej hodnotiacej správy (EPAR). |
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 63240
+10-20: 143391
+20-30: 161354
+30-40: 151837
+40-50: 152739
+50-60: 157749
+60-70: 156395
+70-80: 149905
+80-90: 153075
+90-100: 150674
+100-110: 127012
+110-120: 113108
+120-130: 104354
+130-140: 93558
+140-150: 84809
+150-160: 76358
+160-170: 68717
+170-180: 61172
+180-190: 54007
+190-200: 47322
+200-210: 329997
+</code></pre>
+</details>
+
+Text length histogram:
+
+![emea_text_length.jpg](docs/picture/emea_text_length.jpg)
+
+
#### europa_eac_tm
All of the following is information about the train split
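The "Language counts" blocks in these sections are simple per-language record tallies over each dataset's jsonl file. A hypothetical sketch of such a tally, assuming one JSON object per line with a `language` field (the field name and exact schema of `data/*.jsonl` are not visible in this diff):

```python
import json
from collections import Counter

def language_counts(path: str) -> Counter:
    """Tally records per language in a jsonl file.

    "language" is an assumed field name -- the real schema of the
    data/*.jsonl files is not shown in this commit.
    """
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                counts[json.loads(line)["language"]] += 1
    return counts

# Usage (against a real file), printing in the same "lang: n" shape
# as the blocks in this document:
#   for lang, n in language_counts("data/ecb.jsonl").most_common():
#       print(f"{lang}: {n}")
```

`Counter.most_common()` yields languages in descending count order, which matches how the blocks in this file are sorted.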
![iwslt2017_text_length.jpg](docs/picture/iwslt2017_text_length.jpg)


+#### kde4
+All of the following is information about the train split
+
+```text
+Language counts:
+en: 179069
+fr: 155306
+it: 155154
+nl: 134233
+sv: 132539
+fi: 69132
+ro: 59597
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| kde4 | en | Lauri Watts |
+| kde4 | en | & Lauri. Watts. mail; |
+| kde4 | en | ROLES_OF_TRANSLATORS |
+| kde4 | fr | & traducteurJeromeBlanc; |
+| kde4 | fr | Le module externe Babel pour & konqueror; vous donne un accès rapide au service de traduction Babelfish. |
+| kde4 | fr | Modules externes |
+| kde4 | it | Federico Cozzi federico. cozzi@sns. it Traduzione primordiale Riccardo Iaconelli ruphy@fsfe. org Traduzione finale e revisione completa |
+| kde4 | it | Il plugin Babel di & konqueror; permette di accedere facilmente al servizio di traduzione di Babelfish. |
+| kde4 | it | traduzione |
+| kde4 | fi | & kfind; käyttöohje |
+| kde4 | fi | Mikko Ikola ikola@ iki. fi Suomennos |
+| kde4 | fi | & kfind; on & kde;: n tiedostojenhakutyökalu |
+| kde4 | nl | Het handboek van & kfind; |
+| kde4 | nl | & Niels.Reedijk; Pieter.Hoekstra; |
+| kde4 | nl | & Dirk.Doerflinger; |
+| kde4 | ro | & tradClaudiuCostin; |
+| kde4 | ro | gopher s- a născut ca un serviciu de informaţii distribuit de campus la Universitatea Minnesota. Gopher permite utilizatorului să acceseze informaţii de pe serverele Gopher ce rulează pe maşini din Internet. |
+| kde4 | ro | Gopher este un serviciu Internet de navigare care utilizează o interfaţă bazată pe meniuri. Utilizatorii selectează informaţii din meniuri care pot returna alt meniu sau să afişeze un fişier text. Un item poate exista pe serverul unde a avut loc interogarea sau poate fi pe un alt server Gopher (sau altă maşină gazdă). Gopher poate tunela din alt Gopher fără ca utilizatorul să ştie că serverul şi/ sau maşina gazdă sînt altele. Gopher ascunde utilizatorului locaţia exactă a calculatoarelor, oferind iluzia unui singur set larg de meniuri interconectate. |
+| kde4 | sv | Stefan Asserhäll stefan. asserhall@ comhem. se Översättare |
+| kde4 | sv | 2006- 02- 26 3. 5. 1 |
+| kde4 | sv | Insticksprogrammet Babel för & konqueror; ger snabb tillgång till Babelfisk översättningsservicen. |
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 57182
+10-20: 198010
+20-30: 170452
+30-40: 115210
+40-50: 73206
+50-60: 47558
+60-70: 35365
+70-80: 24621
+80-90: 18324
+90-100: 14390
+100-110: 11957
+110-120: 10158
+120-130: 8643
+130-140: 7556
+140-150: 6789
+150-160: 6163
+160-170: 5573
+170-180: 5070
+180-190: 4741
+190-200: 4645
+200-210: 59417
+</code></pre>
+</details>
+
+Text length histogram:
+
+![kde4_text_length.jpg](docs/picture/kde4_text_length.jpg)
+
+
#### menyo20k_mt
All of the following is information about the train split
1218 |
| mike0307 | it | La Russia ha ratificato la versione rivista del trattato. |
|
1219 |
| mike0307 | it | Un uomo sta sciando in montagna con un cane. |
|
1220 |
|
|
|
1221 |
<details>
|
1222 |
<summary>文本长度</summary>
|
1223 |
<pre><code>0-10: 591
|
|
|
1249 |
![mike0307_text_length.jpg](docs/picture/mike0307_text_length.jpg)
|
1250 |
|
1251 |
|
#### multi_para_crawl

The following statistics are for the train split.

```text
Language counts:
is: 656763
cs: 606771
mt: 446845
lv: 417988
ru: 383922
sk: 381249
ga: 361298
no: 356682
tl: 97241
de: 94976
```

Sample examples:

| Dataset | Language | Sample |
| :---: | :---: | :---: |
| multi_para_crawl | cs | barva květina vinný, šeřík, nachový, růžový, oranžový, červená |
| multi_para_crawl | cs | Výměna ubytování na dovolenou v Valencia de Don Juan |
| multi_para_crawl | cs | Je to báječná věc, kromě případů, kdy jste vstoupil do Demon území. |
| multi_para_crawl | is | blóm lit burgundy, lilac, bleikur, grænt, gulur, appelsína, rauður, hvítur |
| multi_para_crawl | is | blóm lit claret, lilac, fjólublátt, bleikur, appelsína, rauður |
| multi_para_crawl | is | Hús til sölu í Valencia de Don Juan |
| multi_para_crawl | de | Weil Polizei nicht Verstärkersysteme im Bereich zu ermöglichen, hatte die Demonstranten zu nutzen, was sie die "Mikrofon der Menschen" nennen, um sicherzustellen, dass Menschen in den Rücken machen konnte hören, welche Westen sagte. |
| multi_para_crawl | de | None "Gedanken-Wellen", keine Kräfte des Denkens, von lebenden oder toten Organismus, Mensch oder Tier. |
| multi_para_crawl | de | Der Autor von Schrödingers Nachruf in der Times schrieb: |
| multi_para_crawl | tl | Dahil ang pulis hindi papayagan ang paglaki mga system sa lugar, ang mga protesters ay nagkaroon na gamitin ang tinatawag nila ang "mikropono ng mga tao" upang matiyak na ang mga tao sa likod ay maaaring marinig kung ano ang sinasabi West. |
| multi_para_crawl | tl | Wala "naisip-waves", walang mga pwersa ng pag-iisip, ng anumang live na o patay na organismo, tao o hayop. |
| multi_para_crawl | tl | Ang may-akda ng Schrödinger's sa pagkamatay sa Ang Times wrote: |
| multi_para_crawl | ga | Tá na deities go léir ceangailte go bealach leo. |
| multi_para_crawl | ga | Faigh iúl nuair a roinnt tú a shonrú glaonna nó ar a dtugtar. |
| multi_para_crawl | ga | Is féidir léiriú le gníomhais, agus is féidir é a athdhéanamh gach lá. |
| multi_para_crawl | sk | Všetky božstvá sú s nimi nejako spojené. |
| multi_para_crawl | sk | Upozornenie ak číslo zadáte hovory alebo je nazývaný. |
| multi_para_crawl | sk | Je možné dokázať skutkami a je možné, že sa to každý deň opakuje. |
| multi_para_crawl | lv | Pirmais satur izaicinājumu Korāna, kas ir vārds Allah bez cilvēka paužot un burtiem, otrais ir Dievišķās kotācijas, kas ir jēga no Allah izteikts vārdos pravieša, kurā viņš ziņo "Kā Dievs teica:", Trešais ir Pravietiskoscitāti, kas ir iedvesma uz viņa paša daiļrunīgs, unikālas vārdiem pravietis.) |
| multi_para_crawl | lv | Tikmēr brāļi bija vērojot Pravieti (Salla Allahu alihi wa sallam) no attāluma un ir traucēts, kad viņi redzēja Addas respektējot Pravieti (Salla Allahu alihi wa sallam), ko kissing viņam un sacīja viens otram: "Redzi, viņš ir jau izkropļo mūsu vergu! " |
| multi_para_crawl | lv | Daudzvalodības Spēle koordinē treneri darbnīca Muzikālā teātra un veltīts Eiropas Valodu dienas tiek svinēta katru gadu 26 septembris. |
| multi_para_crawl | mt | L-ewwel jinkludi l-isfida ta 'l-Koran li hija l-Kelma ta' Allah, mingħajr ma fformula bniedem u l-ittri, it-tieni hija l-kwotazzjonijiet Divina li huwa l-tifsira mill Allah espressa fil-kliem tal-Profeta li fih jirrapporta "Kif Allah qal", il- tielet huwa l-Prophetickwotazzjonijiet li huwa ispirazzjoni għall-Profeta fil elokwenti, kliem unika tiegħu stess.) |
| multi_para_crawl | mt | Sadanittant, l-aħwa kienu josservaw il-Profeta (salla Allahu alihi wa sallam) minn distanza u huma mfixkla meta raw Addas jirrispettaw il-Profeta (salla Allahu alihi wa sallam) mill kissing lilu u qal lil xulxin, "Ħares, huwa diġà corrupting iskjavi tagħna! " Meta Addas luraminnhom talbu għaliex hu kien aġixxa kif kien jagħmel. |
| multi_para_crawl | mt | Il-logħba multilingwiżmu koordinata mill-ħarrieġa Workshop tal mużikali Teatru u ddedikati għall-Jum Ewropew tal-Lingwi huwa ċċelebrat kull sena dwar 26 settembru. |
| multi_para_crawl | no | -gir beskyttelse mot handlingen av solstråling farlig; |
| multi_para_crawl | no | Fra flatvannet i skalaen til loftet i leiligheten på 36m2, minimum høyde på 230cm med 1 king size soverom, marmorbad, stue, air condition, en stor overbygget terrasse på 36m2 med grill og fantastisk havutsikt. |
| multi_para_crawl | no | Lovelight spesielle Blomst Bukett Levering (6 rosa roser, 6 hvite liljer) |
| multi_para_crawl | ru | -дает защиту на действие солнечного излучения опасные; |
| multi_para_crawl | ru | От плоской воды шкалы до чердака квартиры 36 м2, минимальная высота 230 см с 1 кроватью размера “king-size”, мраморной ванной комнатой, гостиной, кондиционером, большой крытой террасой 36 м2 с барбекю и великолепным видом на море. |
| multi_para_crawl | ru | Lovelight Специальный Цветок Букет Доставка (6 розовых роз, 6 белых лилий) |

<details>
<summary>Text length</summary>
<pre><code>0-10: 3619
10-20: 81839
20-30: 244630
30-40: 274877
40-50: 293878
50-60: 309856
60-70: 289749
70-80: 280666
80-90: 252994
90-100: 233978
100-110: 205016
110-120: 183785
120-130: 160696
130-140: 138655
140-150: 120384
150-160: 105893
160-170: 87708
170-180: 74917
180-190: 68654
190-200: 54071
200-210: 337870
</code></pre>
</details>

Text length distribution plot:

![multi_para_crawl_text_length.jpg](docs/picture/multi_para_crawl_text_length.jpg)

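The per-language counts above can be reproduced from the emitted JSONL: each line written by the preprocessing scripts is an object with `text`, `language`, `data_source`, and `split` keys. A minimal sketch of tallying a `data/*.jsonl` file (the helper name `language_counts` is ours, not part of the repo):

```python
import json
from collections import Counter


def language_counts(jsonl_path):
    # Tally samples per language in a data/*.jsonl file, where each
    # line is {"text": ..., "language": ..., "data_source": ..., "split": ...}.
    counter = Counter()
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            counter[row["language"]] += 1
    return counter
```
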
#### nbnn
The following statistics are for the train split.

```text
fo: 23807
```

Sample examples:

| Dataset | Language | Sample |
| :---: | :---: | :---: |
| nordic_langid | is | den varmaste månaden är juli då medeltemperaturen är c och den kallaste är januari med c |
| nordic_langid | is | ett tropiskt höglandsklimat råder i trakten |

<details>
<summary>Text length</summary>
<pre><code>0-10: 65
</code></pre>
</details>

![nordic_langid_text_length.jpg](docs/picture/nordic_langid_text_length.jpg)

#### open_subtitles

The following statistics are for the train split.

```text
Language counts:
ru: 5909183
da: 5509522
hi: 77565
en: 73451
bn: 36064
is: 34643
bs: 10212
eo: 10088
hy: 660
fr: 656
```

Sample examples:

| Dataset | Language | Sample |
| :---: | :---: | :---: |
| open_subtitles | bn | হবিটোস কাছে কোথাও আছে. |
| open_subtitles | bn | বিষ এখনো তাজা, তিন দিন ধরে. |
| open_subtitles | bn | তারা আমাদের পিছু নিয়েছে. |
| open_subtitles | is | Eitrið er enn öflugt. |
| open_subtitles | is | Þriggja daga gamalt. Þeir veita okkur eftirför. |
| open_subtitles | is | Ef þeir vissu að við erum hér væru þeir búnir að drepa okkur. |
| open_subtitles | bs | Gospodine Borgard... |
| open_subtitles | bs | Imam odgovor za vas iz New Orleansa. |
| open_subtitles | bs | Šta kažu? |
| open_subtitles | eo | Alvenis la respondo por vi el Nov-Orleano. |
| open_subtitles | eo | Kion ili diris? |
| open_subtitles | eo | " Ŝipo "Sundowner" ekironta 21an - PUNKTO " |
| open_subtitles | da | Hver epoke skaber sin efterfølger - Jules Michelet |
| open_subtitles | da | For tiden er vi som nation tæt på at røre himmelen. |
| open_subtitles | da | Det er mig en stor glæde at bekendtgøre kulminationen på menneskehedens videnskabelige præstationer! |
| open_subtitles | ru | Каждая эпоха грезит о преемнике. |
| open_subtitles | ru | В настоящий момент мы как нация, вскоре достанем до неба! |
| open_subtitles | ru | Я взволнован, потому что имею честь объявить о кульминации научных достижений человечества! |
| open_subtitles | en | THE BICYCLE THIEF |
| open_subtitles | en | Is Ricci there? |
| open_subtitles | en | Are you deaf? Come on! |
| open_subtitles | hi | साइकिल चोर |
| open_subtitles | hi | रिच्ची? |
| open_subtitles | hi | रिच्ची है क्या? |
| open_subtitles | fr | A quand rendez-vous prochain ? |
| open_subtitles | fr | Sous éclairs, foudre ou crachin ? Quand charivari achevé. |
| open_subtitles | fr | - Avant le coucher du soleil. - Le lieu ? La lande secrète. |
| open_subtitles | hy | Վեհը զազրելի է, զազրելին` վեհ է: |
| open_subtitles | hy | Մեգը ճեղքենք ու ժանտաշունչ օդում ճախրենք: |
| open_subtitles | hy | Ե՞րբ պիտի երեքով կրկին խմբվենք, Փայլակ-ամպրոպին և կամ` անձրևին: |

<details>
<summary>Text length</summary>
<pre><code>0-10: 224206
10-20: 1864976
20-30: 3055671
30-40: 2419857
40-50: 1399106
50-60: 953915
60-70: 642223
70-80: 378740
80-90: 231502
90-100: 159793
100-110: 108198
110-120: 72629
120-130: 48165
130-140: 32445
140-150: 21669
150-160: 14946
160-170: 10023
170-180: 7017
180-190: 4915
190-200: 3541
200-210: 8507
</code></pre>
</details>

Text length distribution plot:

![open_subtitles_text_length.jpg](docs/picture/open_subtitles_text_length.jpg)

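The text-length histograms in these sections bucket character lengths into 10-character bins, and the final `200-210` row appears to absorb everything of length 200 and above, which is why it is disproportionately large. A small sketch of that bucketing (our reading of the tables, not code from the repo):

```python
def length_bucket(text):
    # 10-character bins; lengths of 200 or more appear to be
    # clamped into the final "200-210" bucket.
    n = min(len(text), 200)
    lo = (n // 10) * 10
    return "{}-{}".format(lo, lo + 10)
```
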
#### php
The following statistics are for the train split.

```text
Language counts:
en: 16703
fr: 15166
it: 7760
ro: 1425
nl: 1166
fi: 896
sv: 891
```

Sample examples:

| Dataset | Language | Sample |
| :---: | :---: | :---: |
| php | en | PHP Manual |
| php | en | Prev |
| php | en | Appendix K. |
| php | fr | Manuel PHP |
| php | fr | Précédent |
| php | fr | Annexe K. |
| php | it | Manuale PHP |
| php | it | Indietro |
| php | it | Appendice K. |
| php | fi | PHP Käsikirja |
| php | fi | Edellinen |
| php | fi | Liite K. |
| php | nl | PHP Handleiding |
| php | nl | Terug |
| php | nl | Aanhangsel K. |
| php | ro | Manual PHP |
| php | ro | Înapoi |
| php | ro | Anexa K. |
| php | sv | PHP-manual |
| php | sv | Föregående |
| php | sv | Hem |

<details>
<summary>Text length</summary>
<pre><code>0-10: 964
10-20: 3140
20-30: 3424
30-40: 3933
40-50: 3513
50-60: 3208
60-70: 3207
70-80: 3117
80-90: 2885
90-100: 2609
100-110: 2209
110-120: 1931
120-130: 1587
130-140: 1365
140-150: 1058
150-160: 951
160-170: 778
170-180: 586
180-190: 500
190-200: 394
200-210: 2648
</code></pre>
</details>

Text length distribution plot:

![php_text_length.jpg](docs/picture/php_text_length.jpg)

#### scandi_langid
The following statistics are for the train split.

docs/picture/bible_para_text_length.jpg (ADDED, Git LFS)
docs/picture/ecb_text_length.jpg (ADDED, Git LFS)
docs/picture/emea_text_length.jpg (ADDED, Git LFS)
docs/picture/kde4_text_length.jpg (ADDED, Git LFS)
docs/picture/multi_para_crawl_text_length.jpg (ADDED, Git LFS)
docs/picture/open_subtitles_text_length.jpg (ADDED, Git LFS)
docs/picture/php_text_length.jpg (ADDED, Git LFS)
examples/make_subset_details.py (CHANGED)

```diff
@@ -12,7 +12,7 @@ from project_settings import project_path
 
 def get_args():
     parser = argparse.ArgumentParser()
-    parser.add_argument("--dataset_name", default="
+    parser.add_argument("--dataset_name", default="tatoeba", type=str)
     parser.add_argument(
         "--dataset_cache_dir",
         default=(project_path / "hub_datasets").as_posix(),
```
examples/preprocess/preprocess_bible_para.py (ADDED)

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
from collections import defaultdict
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, "../../"))

from datasets import load_dataset, DownloadMode
from tqdm import tqdm

from language_identification import LANGUAGE_MAP
from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", default="bible_para", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/bible_para.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    name_list = [
        "de-en", "en-es", "en-fi", "en-fr", "en-hi", "en-no"
    ]

    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
        for name in name_list:
            dataset_dict = load_dataset(
                path=args.dataset_path,
                name=name,
                cache_dir=args.dataset_cache_dir,
                # download_mode=DownloadMode.FORCE_REDOWNLOAD
            )
            for k, v in dataset_dict.items():
                split = k
                if split not in ("train", "validation", "test"):
                    print("skip split: {}".format(split))
                    continue

                for sample in tqdm(v):
                    translation = sample["translation"]
                    for language, text in translation.items():
                        text = text.strip()

                        if text in text_set:
                            continue
                        text_set.add(text)

                        if language not in LANGUAGE_MAP.keys():
                            raise AssertionError("language: {}, text: {}".format(language, text))

                        row = {
                            "text": text,
                            "language": language,
                            "data_source": "bible_para",
                            "split": split
                        }
                        row = json.dumps(row, ensure_ascii=False)
                        f.write("{}\n".format(row))
                        counter[split] += 1

    print("counter: {}".format(counter))

    return


if __name__ == "__main__":
    main()
```
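The script deduplicates on the raw sentence text across every config it loads (the shared `text_set`), so an output file should never contain the same `text` value twice. A quick invariant check for an output JSONL (the helper name `assert_unique_texts` is ours, not part of the repo):

```python
import json


def assert_unique_texts(jsonl_path):
    # Verify the dedup invariant of the preprocessing scripts:
    # no "text" value may appear twice in the output JSONL.
    seen = set()
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            text = json.loads(line)["text"]
            if text in seen:
                raise AssertionError("duplicate text: {}".format(text))
            seen.add(text)
```
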
examples/preprocess/preprocess_ecb.py (ADDED)

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
from collections import defaultdict
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, "../../"))

from datasets import load_dataset, DownloadMode
from tqdm import tqdm

from language_identification import LANGUAGE_MAP
from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", default="ecb", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/ecb.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    name_list = [
        "cs-en", "de-fr", "el-it", "en-nl", "fi-pl"
    ]

    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
        for name in name_list:
            dataset_dict = load_dataset(
                path=args.dataset_path,
                name=name,
                cache_dir=args.dataset_cache_dir,
                # download_mode=DownloadMode.FORCE_REDOWNLOAD
            )
            for k, v in dataset_dict.items():
                split = k
                if split not in ("train", "validation", "test"):
                    print("skip split: {}".format(split))
                    continue

                for sample in tqdm(v):
                    translation = sample["translation"]
                    for language, text in translation.items():
                        text = text.strip()

                        if text in text_set:
                            continue
                        text_set.add(text)

                        if language not in LANGUAGE_MAP.keys():
                            raise AssertionError("language: {}, text: {}".format(language, text))

                        row = {
                            "text": text,
                            "language": language,
                            "data_source": "ecb",
                            "split": split
                        }
                        row = json.dumps(row, ensure_ascii=False)
                        f.write("{}\n".format(row))
                        counter[split] += 1

    print("counter: {}".format(counter))

    return


if __name__ == "__main__":
    main()
```
examples/preprocess/preprocess_emea.py (ADDED)

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
from collections import defaultdict
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, "../../"))

from datasets import load_dataset, DownloadMode
from tqdm import tqdm

from language_identification import LANGUAGE_MAP
from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", default="emea", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/emea.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    name_list = [
        "bg-el", "cs-et", "de-mt", "es-lt", "fr-sk"
    ]

    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
        for name in name_list:
            dataset_dict = load_dataset(
                path=args.dataset_path,
                name=name,
                cache_dir=args.dataset_cache_dir,
                # download_mode=DownloadMode.FORCE_REDOWNLOAD
            )
            for k, v in dataset_dict.items():
                split = k
                if split not in ("train", "validation", "test"):
                    print("skip split: {}".format(split))
                    continue

                for sample in tqdm(v):
                    translation = sample["translation"]
                    for language, text in translation.items():
                        text = text.strip()

                        if text in text_set:
                            continue
                        text_set.add(text)

                        if language not in LANGUAGE_MAP.keys():
                            raise AssertionError("language: {}, text: {}".format(language, text))

                        row = {
                            "text": text,
                            "language": language,
                            "data_source": "emea",
                            "split": split
                        }
                        row = json.dumps(row, ensure_ascii=False)
                        f.write("{}\n".format(row))
                        counter[split] += 1

    print("counter: {}".format(counter))

    return


if __name__ == "__main__":
    main()
```
examples/preprocess/preprocess_igbo.py (CHANGED)

```diff
@@ -44,6 +44,7 @@ def main():
     )
     print(dataset_dict)
 
+    # TODO: fails
     text_set = set()
     counter = defaultdict(int)
     with open(args.output_file, "w", encoding="utf-8") as f:
```
examples/preprocess/preprocess_kde4.py (ADDED)

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
from collections import defaultdict
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, "../../"))

from datasets import load_dataset, DownloadMode
from tqdm import tqdm

from language_identification import LANGUAGE_MAP
from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", default="kde4", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/kde4.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    name_list = [
        "en-fr", "en-it", "fi-nl", "it-ro", "nl-sv"
    ]

    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
        for name in name_list:
            dataset_dict = load_dataset(
                path=args.dataset_path,
                name=name,
                cache_dir=args.dataset_cache_dir,
                # download_mode=DownloadMode.FORCE_REDOWNLOAD
            )
            for k, v in dataset_dict.items():
                split = k
                if split not in ("train", "validation", "test"):
                    print("skip split: {}".format(split))
                    continue

                for sample in tqdm(v):
                    translation = sample["translation"]
                    for language, text in translation.items():
                        text = text.strip()

                        if text in text_set:
                            continue
                        text_set.add(text)

                        if language not in LANGUAGE_MAP.keys():
                            raise AssertionError("language: {}, text: {}".format(language, text))

                        row = {
                            "text": text,
                            "language": language,
                            "data_source": "kde4",
                            "split": split
                        }
                        row = json.dumps(row, ensure_ascii=False)
                        f.write("{}\n".format(row))
                        counter[split] += 1

    print("counter: {}".format(counter))

    return


if __name__ == "__main__":
    main()
```
examples/preprocess/preprocess_multi_para_crawl.py (ADDED)

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
from collections import defaultdict
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, "../../"))

from datasets import load_dataset, DownloadMode
from tqdm import tqdm

from language_identification import LANGUAGE_MAP
from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", default="multi_para_crawl", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/multi_para_crawl.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    name_list = [
        "cs-is", "de-tl", "ga-sk", "lv-mt", "nb-ru"
    ]

    language_map = {
        "nb": "no"
    }

    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
        for name in name_list:
            dataset_dict = load_dataset(
                path=args.dataset_path,
                name=name,
                cache_dir=args.dataset_cache_dir,
                # download_mode=DownloadMode.FORCE_REDOWNLOAD
            )
            for k, v in dataset_dict.items():
                split = k
                if split not in ("train", "validation", "test"):
                    print("skip split: {}".format(split))
                    continue

                for sample in tqdm(v):
                    translation = sample["translation"]
                    for language, text in translation.items():
                        text = text.strip()

                        if text in text_set:
                            continue
                        text_set.add(text)

                        if language in language_map.keys():
                            language = language_map[language]

                        if language not in LANGUAGE_MAP.keys():
                            raise AssertionError("language: {}, text: {}".format(language, text))

                        row = {
                            "text": text,
                            "language": language,
                            "data_source": "multi_para_crawl",
                            "split": split
                        }
                        row = json.dumps(row, ensure_ascii=False)
                        f.write("{}\n".format(row))
                        counter[split] += 1

    print("counter: {}".format(counter))

    return


if __name__ == "__main__":
    main()
```
examples/preprocess/preprocess_open_subtitles.py (ADDED)

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
from collections import defaultdict
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, "../../"))

from datasets import load_dataset, DownloadMode
from tqdm import tqdm

from language_identification import LANGUAGE_MAP
from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", default="open_subtitles", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/open_subtitles.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    name_list = [
        "bn-is", "bs-eo", "da-ru", "en-hi", "fr-hy"
    ]

    language_map = {
        "nb": "no"
    }

    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
        for name in name_list:
            dataset_dict = load_dataset(
                path=args.dataset_path,
                name=name,
                cache_dir=args.dataset_cache_dir,
                # download_mode=DownloadMode.FORCE_REDOWNLOAD
            )
            for k, v in dataset_dict.items():
                split = k
                if split not in ("train", "validation", "test"):
                    print("skip split: {}".format(split))
                    continue

                for sample in tqdm(v):
                    translation = sample["translation"]
                    for language, text in translation.items():
                        text = text.strip()

                        if text in text_set:
                            continue
                        text_set.add(text)

                        # if language in language_map.keys():
                        #     language = language_map[language]

                        if language not in LANGUAGE_MAP.keys():
                            raise AssertionError("language: {}, text: {}".format(language, text))

                        row = {
                            "text": text,
                            "language": language,
                            "data_source": "open_subtitles",
                            "split": split
                        }
                        row = json.dumps(row, ensure_ascii=False)
                        f.write("{}\n".format(row))
                        counter[split] += 1

    print("counter: {}".format(counter))

    return


if __name__ == "__main__":
    main()
```
examples/preprocess/preprocess_para_crawl.py
ADDED
@@ -0,0 +1,91 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
from collections import defaultdict
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, "../../"))

from datasets import load_dataset, DownloadMode
from tqdm import tqdm

from language_identification import LANGUAGE_MAP
from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", default="para_crawl", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/para_crawl.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    name_list = [
        "enbg", "encs", "enda", "ende", "enel", "enes", "enet", "enfi", "enfr", "enga", "enhr", "enhu", "enit",
        "enlt", "enlv", "enmt", "ennl", "enpl", "enpt", "enro", "ensk", "ensl", "ensv"
    ]

    # TODO: the dataset is too large to finish loading.
    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
        for name in name_list:
            dataset_dict = load_dataset(
                path=args.dataset_path,
                name=name,
                cache_dir=args.dataset_cache_dir,
                # download_mode=DownloadMode.FORCE_REDOWNLOAD
            )
            for k, v in dataset_dict.items():
                split = k
                if split not in ("train", "validation", "test"):
                    print("skip split: {}".format(split))
                    continue

                for sample in tqdm(v):

                    translation = sample["translation"]
                    for language, text in translation.items():
                        text = text.strip()

                        if text in text_set:
                            continue
                        text_set.add(text)

                        if language not in LANGUAGE_MAP.keys():
                            raise AssertionError("language: {}, text: {}".format(language, text))

                        row = {
                            "text": text,
                            "language": language,
                            "data_source": "para_crawl",
                            "split": split
                        }
                        row = json.dumps(row, ensure_ascii=False)
                        f.write("{}\n".format(row))
                        counter[split] += 1

    print("counter: {}".format(counter))

    return


if __name__ == "__main__":
    main()
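The strip/deduplicate/validate/write loop is repeated verbatim across these scripts. A hedged refactoring sketch (the helper name `iter_unique_rows` is mine, not from the repo) that reproduces the loop's behavior on in-memory samples:

```python
import json

def iter_unique_rows(samples, data_source, split, language_map, seen=None):
    # Mirrors the inner loop above: strip, skip duplicate texts, validate
    # the language code, and yield one JSON line per surviving pair.
    seen = set() if seen is None else seen
    for sample in samples:
        for language, text in sample["translation"].items():
            text = text.strip()
            if text in seen:
                continue
            seen.add(text)
            if language not in language_map:
                raise AssertionError("language: {}, text: {}".format(language, text))
            yield json.dumps(
                {"text": text, "language": language, "data_source": data_source, "split": split},
                ensure_ascii=False,
            )

samples = [
    {"translation": {"en": "Hello.", "de": "Hallo."}},
    {"translation": {"en": "Hello.", "de": "Guten Tag."}},  # duplicate "Hello." is dropped
]
lines = list(iter_unique_rows(samples, "para_crawl", "train", {"en": "english", "de": "german"}))
print(len(lines))  # 3
```

For the size problem flagged in the TODO, `datasets.load_dataset(..., streaming=True)` returns an iterable that a helper like this could consume without materializing a full split in memory.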
examples/preprocess/preprocess_para_pat.py
CHANGED
@@ -39,7 +39,10 @@ def main():
 
     name_list = [
         "cs-en", "de-en", "de-fr", "el-en", "en-es", "en-fr", "en-hu", "en-ja",
-        "en-ko", "en-pt", "en-ro", "en-ru", "en-sk",
+        "en-ko", "en-pt", "en-ro", "en-ru", "en-sk",
+        "en-uk",
+        "en-zh",
+        "es-fr",
         "fr-ja", "fr-ko", "fr-ru"
     ]
 
@@ -75,7 +78,7 @@ def main():
                         text_set.add(text)
 
                         if language not in LANGUAGE_MAP.keys():
-                            raise AssertionError(language)
+                            raise AssertionError("language: {}, text: {}".format(language, text))
 
                         row = {
                             "text": text,
examples/preprocess/preprocess_php.py
ADDED
@@ -0,0 +1,89 @@
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import argparse
from collections import defaultdict
import json
import os
import sys

pwd = os.path.abspath(os.path.dirname(__file__))
sys.path.append(os.path.join(pwd, "../../"))

from datasets import load_dataset, DownloadMode
from tqdm import tqdm

from language_identification import LANGUAGE_MAP
from project_settings import project_path


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", default="php", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
        type=str
    )
    parser.add_argument(
        "--output_file",
        default=(project_path / "data/php.jsonl"),
        type=str
    )

    args = parser.parse_args()
    return args


def main():
    args = get_args()

    name_list = [
        "en-fr", "en-it", "fi-nl", "it-ro", "nl-sv"
    ]

    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
        for name in name_list:
            dataset_dict = load_dataset(
                path=args.dataset_path,
                name=name,
                cache_dir=args.dataset_cache_dir,
                # download_mode=DownloadMode.FORCE_REDOWNLOAD
            )
            for k, v in dataset_dict.items():
                split = k
                if split not in ("train", "validation", "test"):
                    print("skip split: {}".format(split))
                    continue

                for sample in tqdm(v):

                    translation = sample["translation"]
                    for language, text in translation.items():
                        text = text.strip()

                        if text in text_set:
                            continue
                        text_set.add(text)

                        if language not in LANGUAGE_MAP.keys():
                            raise AssertionError("language: {}, text: {}".format(language, text))

                        row = {
                            "text": text,
                            "language": language,
                            "data_source": "php",
                            "split": split
                        }
                        row = json.dumps(row, ensure_ascii=False)
                        f.write("{}\n".format(row))
                        counter[split] += 1

    print("counter: {}".format(counter))

    return


if __name__ == "__main__":
    main()
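Each preprocessor emits one JSON object per line into `data/*.jsonl`. A quick consumption sketch (the three lines below are synthetic stand-ins, not real `php.jsonl` content):

```python
import io
import json
from collections import Counter

# Synthetic stand-in for a data/*.jsonl file produced by the scripts above.
jsonl = io.StringIO(
    '{"text": "Hello", "language": "en", "data_source": "php", "split": "train"}\n'
    '{"text": "Bonjour", "language": "fr", "data_source": "php", "split": "train"}\n'
    '{"text": "Ciao", "language": "it", "data_source": "php", "split": "train"}\n'
)

# One json.loads per line is the whole read path for this format.
counts = Counter(json.loads(line)["language"] for line in jsonl)
print(counts)
```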
examples/preprocess/preprocess_pib.py
CHANGED
@@ -51,6 +51,7 @@ def main():
         "gu-or", "en-gu", "hi-mr", "mr-ta", "en-mr"
     ]
 
+    # TODO: failed.
    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
examples/preprocess/preprocess_poleval2019_mt.py
CHANGED
@@ -41,6 +41,7 @@ def main():
         "en-pl", "pl-en", "pl-ru", "ru-pl"
     ]
 
+    # TODO: failed.
    text_set = set()
    counter = defaultdict(int)
    with open(args.output_file, "w", encoding="utf-8") as f:
language_identification.py
CHANGED
@@ -11,19 +11,28 @@ import datasets
 _URLS = {
     "amazon_reviews_multi": "data/amazon_reviews_multi.jsonl",
     "autshumato": "data/autshumato.jsonl",
+    "bible_para": "data/bible_para.jsonl",
     "bsd_ja_en": "data/bsd_ja_en.jsonl",
     "bucc2018": "data/bucc2018.jsonl",
     "cmu_hinglish_dog": "data/cmu_hinglish_dog.jsonl",
+    "ecb": "data/ecb.jsonl",
+    "emea": "data/emea.jsonl",
     "europa_eac_tm": "data/europa_eac_tm.jsonl",
     "europa_ecdc_tm": "data/europa_ecdc_tm.jsonl",
     "hind_encorp": "data/hind_encorp.jsonl",
     "hrenwac_para": "data/hrenwac_para.jsonl",
     "id_panl_bppt": "data/id_panl_bppt.jsonl",
     "iwslt2017": "data/iwslt2017.jsonl",
+    "kde4": "data/kde4.jsonl",
     "menyo20k_mt": "data/menyo20k_mt.jsonl",
     "mike0307": "data/mike0307.jsonl",
+    "multi_para_crawl": "data/multi_para_crawl.jsonl",
     "nbnn": "data/nbnn.jsonl",
     "nordic_langid": "data/nordic_langid.jsonl",
+    "open_subtitles": "data/open_subtitles.jsonl",
+    # "para_crawl": "data/para_crawl.jsonl",
+    "para_pat": "data/para_pat.jsonl",
+    "php": "data/php.jsonl",
     "scandi_langid": "data/scandi_langid.jsonl",
     "stsb_multi_mt": "data/stsb_multi_mt.jsonl",
     "tatoeba": "data/tatoeba.jsonl",
@@ -47,6 +56,8 @@ _CITATION = """\
 LANGUAGE_MAP = {
     "ar": "arabic",
     "bg": "bulgarian",
+    "bn": "bengali",
+    "bs": "bosnian",
     "cs": "czech",
     "da": "danish",
     "de": "german",
@@ -64,6 +75,7 @@ LANGUAGE_MAP = {
     "hi_en": "hindi english",
     "hr": "croatian",
     "hu": "hungarian",
+    "hy": "armenian",
     "id": "indonesian",
     "is": "icelandic",
     "it": "italian",
@@ -86,9 +98,11 @@ LANGUAGE_MAP = {
     "sw": "swahili",
     "sv": "swedish",
     "th": "thai",
+    "tl": "tagalog",
     "tn": "sepedi",
     "tr": "turkish",
     "ts": "dzonga",
+    "uk": "ukrainian",
     "ur": "urdu",
     "vi": "vietnamese",
     "yo": "yoruba",
@@ -105,19 +119,28 @@ class LanguageIdentification(datasets.GeneratorBasedBuilder):
     BUILDER_CONFIGS = [
         datasets.BuilderConfig(name="amazon_reviews_multi", version=VERSION, description="amazon_reviews_multi"),
         datasets.BuilderConfig(name="autshumato", version=VERSION, description="autshumato"),
+        datasets.BuilderConfig(name="bible_para", version=VERSION, description="bible_para"),
         datasets.BuilderConfig(name="bsd_ja_en", version=VERSION, description="bsd_ja_en"),
         datasets.BuilderConfig(name="bucc2018", version=VERSION, description="bucc2018"),
         datasets.BuilderConfig(name="cmu_hinglish_dog", version=VERSION, description="cmu_hinglish_dog"),
+        datasets.BuilderConfig(name="ecb", version=VERSION, description="ecb"),
+        datasets.BuilderConfig(name="emea", version=VERSION, description="emea"),
         datasets.BuilderConfig(name="europa_eac_tm", version=VERSION, description="europa_eac_tm"),
         datasets.BuilderConfig(name="europa_ecdc_tm", version=VERSION, description="europa_ecdc_tm"),
         datasets.BuilderConfig(name="hind_encorp", version=VERSION, description="hind_encorp"),
         datasets.BuilderConfig(name="hrenwac_para", version=VERSION, description="hrenwac_para"),
         datasets.BuilderConfig(name="id_panl_bppt", version=VERSION, description="id_panl_bppt"),
         datasets.BuilderConfig(name="iwslt2017", version=VERSION, description="iwslt2017"),
+        datasets.BuilderConfig(name="kde4", version=VERSION, description="kde4"),
         datasets.BuilderConfig(name="menyo20k_mt", version=VERSION, description="menyo20k_mt"),
         datasets.BuilderConfig(name="mike0307", version=VERSION, description="mike0307"),
+        datasets.BuilderConfig(name="multi_para_crawl", version=VERSION, description="multi_para_crawl"),
         datasets.BuilderConfig(name="nbnn", version=VERSION, description="nbnn"),
         datasets.BuilderConfig(name="nordic_langid", version=VERSION, description="nordic_langid"),
+        datasets.BuilderConfig(name="open_subtitles", version=VERSION, description="open_subtitles"),
+        # datasets.BuilderConfig(name="para_crawl", version=VERSION, description="para_crawl"),
+        datasets.BuilderConfig(name="para_pat", version=VERSION, description="para_pat"),
+        datasets.BuilderConfig(name="php", version=VERSION, description="php"),
         datasets.BuilderConfig(name="scandi_langid", version=VERSION, description="scandi_langid"),
         datasets.BuilderConfig(name="stsb_multi_mt", version=VERSION, description="stsb_multi_mt"),
         datasets.BuilderConfig(name="tatoeba", version=VERSION, description="tatoeba"),
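LANGUAGE_MAP doubles as a validation table: every preprocessor raises on a code it does not list, which is presumably how the new entries (`bn`, `bs`, `hy`, `tl`, `uk`) surfaced. A small sketch of that contract, using only the codes visible in this diff as an excerpt:

```python
# Excerpt of LANGUAGE_MAP, limited to the entries added in this commit.
LANGUAGE_MAP_EXCERPT = {
    "bn": "bengali",
    "bs": "bosnian",
    "hy": "armenian",
    "tl": "tagalog",
    "uk": "ukrainian",
}

def language_name(code, language_map):
    # Mirrors the preprocessors: an unknown code raises instead of being kept silently.
    if code not in language_map:
        raise AssertionError("language: {}".format(code))
    return language_map[code]

print(language_name("uk", LANGUAGE_MAP_EXCERPT))  # ukrainian
```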