update
- README.md +3 -3
- data/autshumato.jsonl +3 -0
- data/bsd_ja_en.jsonl +3 -0
- dataset_details.md +110 -0
- docs/picture/autshumato_text_length.jpg +3 -0
- docs/picture/bsd_ja_en_text_length.jpg +3 -0
- examples/make_subset_details.py +1 -1
- examples/preprocess/preprocess_autshumato.py +93 -0
- examples/preprocess/preprocess_bsd_ja_en.py +86 -0
- language_identification.py +7 -0
README.md
CHANGED
@@ -38,9 +38,8 @@ Tips:
 | tatoeba | [tatoeba](https://tatoeba.org/); [Tatoeba Paper](https://arxiv.org/abs/1812.10464v2) | TRAIN: 702895 | Tatoeba is a collection of sentences and their translations. | [tatoeba](https://huggingface.co/datasets/tatoeba) |
 | bucc2018 | [bucc2018](https://comparable.limsi.fr/bucc2018/bucc2018-task.html) | TRAIN: 2173318, TEST: 2125879 | Shared task: identifying parallel sentences in comparable corpora. Languages: de, en, fr, ru, zh. | |
 | iwslt2017 | [2017.iwslt-1.1.pdf](https://aclanthology.org/2017.iwslt-1.1.pdf) | TRAIN: 2482649, VALID: 11480, TEST: 72470 | The IWSLT 2017 multilingual task addresses text translation across all directions among English, German, Dutch, Italian and Romanian. | [iwslt2017](https://huggingface.co/datasets/iwslt2017) |
-| autshumato | | sample count | One of the aims of the Autshumato project is to develop machine translation systems for three South African language pairs. | [autshumato](https://huggingface.co/datasets/autshumato) |
+| bsd_ja_en | [2008.01940v1](https://arxiv.org/abs/2008.01940v1) | TRAIN: 35755, VALID: 3636, TEST: 3702 | Although machine translation of written text has made great progress in recent years thanks to the growing availability of parallel corpora and corpus-based training techniques, automatic translation of spoken text and dialogue remains challenging even for modern systems. This paper aims to boost machine translation quality for conversational text by introducing a newly constructed Japanese-English business conversation parallel corpus. | [bsd_ja_en](https://huggingface.co/datasets/bsd_ja_en) |
+| autshumato | | TRAIN: 652824 | One of the aims of the Autshumato project is to develop machine translation systems for three South African language pairs. | [autshumato](https://huggingface.co/datasets/autshumato) |
 | chr_en | [2010.04791](https://arxiv.org/abs/2010.04791) | sample count | ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English. ChrEn is extremely low-resource, containing 14k sentence pairs in total, split in ways that support both in-domain and out-of-domain evaluation. ChrEn also contains 5k Cherokee monolingual sentences to enable semi-supervised learning. | [chr_en](https://huggingface.co/datasets/chr_en) |
 | cmu_hinglish_dog | [CMU_DoG](https://github.com/festvox/datasets-CMU_DoG); [1809.07358](https://arxiv.org/abs/1809.07358) | sample count | A collection of text conversations in Hinglish (code-mixing between Hindi and English) together with their corresponding English versions. Can be used for translation between the two. Provided by Prof. Alan Black's team at CMU. | [cmu_hinglish_dog](https://huggingface.co/datasets/cmu_hinglish_dog) |
 | europa_eac_tm | [EAC-Translation Memory](https://joint-research-centre.ec.europa.eu/language-technology-resources/eac-translation-memory_en) | sample count | A corpus of manual translations from English into up to 25 languages, released in 2012 by the European Commission's Directorate-General for Education and Culture (EAC). | [europa_eac_tm](https://huggingface.co/datasets/europa_eac_tm) |
@@ -69,6 +68,7 @@ https://opus.nlpl.eu/
 | ecb | [ECB](https://opus.nlpl.eu/ECB/corpus/version/ECB); | sample count | | [ecb](https://huggingface.co/datasets/ecb) |
 | emea | [EMEA](https://opus.nlpl.eu/EMEA/corpus/version/EMEA); | sample count | | [emea](https://huggingface.co/datasets/emea) |
 | kde4 | [KDE4](https://opus.nlpl.eu/KDE4/corpus/version/KDE4); [apps.kde.org](https://apps.kde.org/zh-cn/); [opus.nlpl.eu](https://opus.nlpl.eu/) | sample count | | [kde4](https://huggingface.co/datasets/kde4) |
+| open_subtitles | [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles); [L16-1147.pdf](https://aclanthology.org/L16-1147.pdf) | sample count | We present a new major release of the OpenSubtitles collection of parallel corpora. It is compiled from a large database of movie and TV subtitles and comprises a total of 1689 bitexts covering 2.6 billion sentences across 60 languages. The release also incorporates a number of enhancements to subtitle preprocessing and alignment, such as automatic correction of OCR errors and the use of metadata to estimate the quality of each subtitle and score subtitle pairs. | [open_subtitles](https://huggingface.co/datasets/open_subtitles) |
 | php | [PHP](https://opus.nlpl.eu/PHP/corpus/version/PHP) | sample count | A parallel corpus originally extracted from http://se.php.net/download-docs.php. The corpus is rather noisy. | [php](https://huggingface.co/datasets/php) |
data/autshumato.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:016a32cb21c39511f8b541d298407dedafc13d20da8f65f63b28c5591fad1a96
+size 101604992
data/bsd_ja_en.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce59ca66051b94a699450ca16b233f27f021f9b083eddefe5d14b1c65167fa02
+size 5985101
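The committed `.jsonl` files are Git LFS pointers rather than the data itself. A minimal sketch of parsing such a pointer into its fields (the pointer text is the one committed for `data/bsd_ja_en.jsonl`):

```python
# Parse a Git LFS pointer file into its key/value fields.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ce59ca66051b94a699450ca16b233f27f021f9b083eddefe5d14b1c65167fa02
size 5985101
"""

# Each line is "<key> <value>"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
# fields["size"] is the byte size of the real data/bsd_ja_en.jsonl payload.
```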
dataset_details.md
CHANGED
@@ -69,6 +69,116 @@ zh-cn: 196260
 
 ![amazon_reviews_multi_text_length.jpg](docs/picture/amazon_reviews_multi_text_length.jpg)
 
 
+#### autshumato
+The statistics below are for the train split.
+
+```text
+Language counts:
+en: 292326
+ts: 207845
+tn: 125852
+zu: 26801
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| autshumato | en | Good progress has been made with the staff composition project in each of the 15 faculties at the NWU . |
+| autshumato | en | The rector , Prof Thanyani Mariba , congratulated the newcomers on their choice to further their studies at the campus and emphasised the importance of choice and responsibility - both in terms of academic commitments and social endeavours . |
+| autshumato | en | Complaints against Correctional Services staff , court officials and members of the South African National Defence Force . |
+| autshumato | tn | Lo tla lemoga gore Thulaganyo ya Setheo ya 2012-2014 e e dirwang mo mafapheng otlhe ka tsamaiso ya ditumalano tsa go dira tiro ke ya gore YBB e fitlhelele maikemisetso a yone kgato ka kgato . |
+| autshumato | tn | Moreketoro , Mop Thanyani Mariba , o ne a akgolela batlabošeng tlhopho e ba e dirileng ya go tla go tswelela dithuto tsa bone mo khamphaseng eno mme o ne a gatelela botlhokwa jwa tlhopho le maikarabelo - malebana le go ineela ga bone mo dithutong le mo botshelong jwa bone jwa go tsalana le ba bangwe . |
+| autshumato | tn | Dingongorego kgatlhanong le badiredi ba Tirelo ya Ditshiamiso batlhankedi ba kgotlatshekelo le ditokololo tsa Mophato wa Phemelo wa Bosetšhaba wa Aforikaborwa . |
+| autshumato | zu | inkululeko yokwakha izinto ngokusebenzisa ubuciko; |
+| autshumato | zu | Lezozakhiwo ezithathwa njengabantu ngumthetho zingabanamalungelo akuMqulu wamaLungelo kuphela ngendlela edingwa uhlobo lwelungelo kanye nolwaleso sakhiwo esithathwa njengomuntu ngumthetho. |
+| autshumato | zu | Thina, Bantu baseNingizimu Afrika, Siyakukhumbula ukucekelwa phansi kwamalungelo okwenzeka eminyakeni eyadlula; |
+| autshumato | ts | Mahungu ya nkoka ya pfhumba ra dyondzo ra ndyangu wa hina hi lama landzelaka : |
+| autshumato | ts | xikan'we na nhlamuselo ya ntirho na xiyimo laha u nga ta tirha kona na leswaku nkarhi a wu nge hundzi malembe mambirhi |
+| autshumato | ts | loko xi laveka . |
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 20573
+10-20: 47424
+20-30: 58434
+30-40: 61884
+40-50: 73557
+50-60: 70899
+60-70: 57249
+70-80: 42967
+80-90: 33702
+90-100: 26516
+100-110: 21149
+110-120: 18264
+120-130: 16390
+130-140: 14336
+140-150: 12944
+150-160: 11351
+160-170: 9839
+170-180: 8702
+180-190: 7294
+190-200: 6066
+200-210: 33284
+</code></pre>
+</details>
+
+Text length histogram:
+
+![autshumato_text_length.jpg](docs/picture/autshumato_text_length.jpg)
+
+
+#### bsd_ja_en
+The statistics below are for the train split.
+
+```text
+Language counts:
+ja: 18054
+en: 17701
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| bsd_ja_en | en | Hi this is the systems development department of Company K. |
+| bsd_ja_en | en | My name is Takaichi from Company H. |
+| bsd_ja_en | en | Thank you as always. |
+| bsd_ja_en | ja | はい、K社システム開発部です。 |
+| bsd_ja_en | ja | H社の高市と申します。 |
+| bsd_ja_en | ja | いつもお世話になっております。 |
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 1924
+10-20: 7921
+20-30: 7871
+30-40: 5637
+40-50: 3521
+50-60: 2557
+60-70: 1869
+70-80: 1399
+80-90: 944
+90-100: 721
+100-110: 496
+110-120: 324
+120-130: 224
+130-140: 123
+140-150: 85
+150-160: 51
+160-170: 33
+170-180: 19
+180-190: 18
+190-200: 9
+200-210: 9
+</code></pre>
+</details>
+
+Text length histogram:
+
+![bsd_ja_en_text_length.jpg](docs/picture/bsd_ja_en_text_length.jpg)
+
+
 #### bucc2018
 The statistics below are for the train split.
 
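Two quick checks on the statistics added to dataset_details.md: the per-language counts for each train split should sum to the TRAIN sizes listed in the README, and the histogram bins can be reproduced with a small helper. The bucketing rule here is an assumption inferred from the tables, where the final 200-210 bin appears to absorb all texts of length >= 200:

```python
def length_bucket(text, width=10, cap=200):
    # Clamp the lower bound so every long text falls into the last bin.
    lo = min(len(text) // width * width, cap)
    return "{}-{}".format(lo, lo + width)

# Per-language counts from the train splits above.
autshumato_total = sum({"en": 292326, "ts": 207845, "tn": 125852, "zu": 26801}.values())
bsd_total = sum({"ja": 18054, "en": 17701}.values())
# autshumato_total == 652824 and bsd_total == 35755, matching the README's TRAIN sizes.
```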
docs/picture/autshumato_text_length.jpg
ADDED
Git LFS Details

docs/picture/bsd_ja_en_text_length.jpg
ADDED
Git LFS Details
examples/make_subset_details.py
CHANGED
@@ -12,7 +12,7 @@ from project_settings import project_path
 
 def get_args():
     parser = argparse.ArgumentParser()
-    parser.add_argument("--dataset_name", default="
+    parser.add_argument("--dataset_name", default="bsd_ja_en", type=str)
     parser.add_argument(
         "--dataset_cache_dir",
         default=(project_path / "hub_datasets").as_posix(),
examples/preprocess/preprocess_autshumato.py
ADDED
@@ -0,0 +1,93 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="autshumato", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/autshumato.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    name_list = [
+        "autshumato-en-tn", "autshumato-en-zu",
+        "autshumato-en-ts", "autshumato-en-ts-manual",
+        "autshumato-tn", "autshumato-ts"
+    ]
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for name in name_list:
+            dataset_dict = load_dataset(
+                path=args.dataset_path,
+                name=name,
+                cache_dir=args.dataset_cache_dir,
+                # download_mode=DownloadMode.FORCE_REDOWNLOAD
+            )
+            for k, v in dataset_dict.items():
+                split = k
+                if split not in ("train", "validation", "test"):
+                    print("skip split: {}".format(split))
+                    continue
+
+                for sample in tqdm(v):
+                    translation = sample.get("translation")
+                    if translation is None:
+                        break
+
+                    for language, text in translation.items():
+                        text = text.strip()
+
+                        if text in text_set:
+                            continue
+                        text_set.add(text)
+
+                        if language not in LANGUAGE_MAP.keys():
+                            raise AssertionError("language: {}, text: {}".format(language, text))
+
+                        row = {
+                            "text": text,
+                            "language": language,
+                            "data_source": "autshumato",
+                            "split": split
+                        }
+                        row = json.dumps(row, ensure_ascii=False)
+                        f.write("{}\n".format(row))
+                        counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == "__main__":
+    main()
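The core of the script above is a deduplicating flatten: each `translation` dict becomes one output row per unique text, tagged with its language code. A minimal sketch of that loop with toy samples (the `translation` field layout mirrors the HF `autshumato` configs; the function name is ours):

```python
from collections import defaultdict

def flatten_translations(samples, data_source, split="train"):
    # One output row per unique text, mirroring the dedup loop in
    # preprocess_autshumato.py.
    text_set = set()
    counter = defaultdict(int)
    rows = []
    for sample in samples:
        translation = sample.get("translation")
        if translation is None:
            continue
        for language, text in translation.items():
            text = text.strip()
            if text in text_set:
                continue
            text_set.add(text)
            rows.append({"text": text, "language": language,
                         "data_source": data_source, "split": split})
            counter[split] += 1
    return rows, counter

samples = [
    {"translation": {"en": "Hello.", "zu": "Sawubona."}},
    {"translation": {"en": "Hello.", "tn": "Dumela."}},  # "Hello." repeats
]
rows, counter = flatten_translations(samples, "autshumato")
# The repeated English text is emitted only once, so 3 rows remain.
```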
examples/preprocess/preprocess_bsd_ja_en.py
ADDED
@@ -0,0 +1,86 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="bsd_ja_en", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/bsd_ja_en.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    dataset_dict = load_dataset(
+        path=args.dataset_path,
+        cache_dir=args.dataset_cache_dir,
+        # download_mode=DownloadMode.FORCE_REDOWNLOAD
+    )
+    print(dataset_dict)
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for k, v in dataset_dict.items():
+            split = k
+            if split not in ("train", "validation", "test"):
+                print("skip split: {}".format(split))
+                continue
+
+            for sample in tqdm(v):
+                en_sentence = sample["en_sentence"]
+                ja_sentence = sample["ja_sentence"]
+                for language, text in [("en", en_sentence), ("ja", ja_sentence)]:
+                    text = text.strip()
+
+                    if text in text_set:
+                        continue
+                    text_set.add(text)
+
+                    if language not in LANGUAGE_MAP.keys():
+                        raise AssertionError(language)
+
+                    row = {
+                        "text": text,
+                        "language": language,
+                        "data_source": "bsd_ja_en",
+                        "split": split
+                    }
+                    row = json.dumps(row, ensure_ascii=False)
+                    f.write("{}\n".format(row))
+                    counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == '__main__':
+    main()
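This script flattens each en/ja sentence pair into two monolingual JSONL rows. A small sketch of that flattening with toy samples (field names `en_sentence`/`ja_sentence` match the HF `bsd_ja_en` schema used above; `ensure_ascii=False` keeps the Japanese text readable in the output file):

```python
import json
from collections import Counter

samples = [
    {"en_sentence": "Thank you as always.", "ja_sentence": "いつもお世話になっております。"},
    {"en_sentence": "My name is Takaichi from Company H.", "ja_sentence": "H社の高市と申します。"},
]

# Each pair yields one "en" row and one "ja" row with the same schema
# as data/bsd_ja_en.jsonl.
rows = []
for sample in samples:
    for language, text in [("en", sample["en_sentence"]), ("ja", sample["ja_sentence"])]:
        rows.append({"text": text.strip(), "language": language,
                     "data_source": "bsd_ja_en", "split": "train"})

lines = [json.dumps(row, ensure_ascii=False) for row in rows]
counts = Counter(json.loads(line)["language"] for line in lines)
# counts == Counter({"en": 2, "ja": 2})
```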
language_identification.py
CHANGED
@@ -10,6 +10,8 @@ import datasets
 
 _URLS = {
     "amazon_reviews_multi": "data/amazon_reviews_multi.jsonl",
+    "autshumato": "data/autshumato.jsonl",
+    "bsd_ja_en": "data/bsd_ja_en.jsonl",
     "bucc2018": "data/bucc2018.jsonl",
     "iwslt2017": "data/iwslt2017.jsonl",
     "mike0307": "data/mike0307.jsonl",
@@ -64,12 +66,15 @@ LANGUAGE_MAP = {
     "sw": "swahili",
     "sv": "swedish",
     "th": "thai",
+    "tn": "tswana",
     "tr": "turkish",
+    "ts": "tsonga",
     "ur": "urdu",
     "vi": "vietnamese",
     "zh": "chinese",
     "zh-cn": "simplified chinese",
     "zh-tw": "traditional chinese",
+    "zu": "zulu",
 }
 
 
@@ -78,6 +83,8 @@ class LanguageIdentification(datasets.GeneratorBasedBuilder):
 
     BUILDER_CONFIGS = [
         datasets.BuilderConfig(name="amazon_reviews_multi", version=VERSION, description="amazon_reviews_multi"),
+        datasets.BuilderConfig(name="autshumato", version=VERSION, description="autshumato"),
+        datasets.BuilderConfig(name="bsd_ja_en", version=VERSION, description="bsd_ja_en"),
        datasets.BuilderConfig(name="bucc2018", version=VERSION, description="bucc2018"),
        datasets.BuilderConfig(name="iwslt2017", version=VERSION, description="iwslt2017"),
        datasets.BuilderConfig(name="mike0307", version=VERSION, description="mike0307"),
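When adding a subset, the loader needs matching edits in two places: a `_URLS` entry and a `BuilderConfig`. A quick consistency check one might run after such an edit, sketched on a hypothetical miniature of the two tables limited to the names visible in this diff (the real file lists many more subsets):

```python
# Miniature of the loader's registration tables (illustration only).
_URLS = {
    "amazon_reviews_multi": "data/amazon_reviews_multi.jsonl",
    "autshumato": "data/autshumato.jsonl",
    "bsd_ja_en": "data/bsd_ja_en.jsonl",
    "bucc2018": "data/bucc2018.jsonl",
    "iwslt2017": "data/iwslt2017.jsonl",
    "mike0307": "data/mike0307.jsonl",
}
config_names = [
    "amazon_reviews_multi", "autshumato", "bsd_ja_en",
    "bucc2018", "iwslt2017", "mike0307",
]

# Every subset in _URLS should be exposed as a config, and every data file
# should follow the data/<name>.jsonl convention used throughout the repo.
missing_configs = set(_URLS) - set(config_names)
bad_paths = [n for n, p in _URLS.items() if p != "data/{}.jsonl".format(n)]
```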