update
- README.md +2 -2
- data/cmu_hinglish_dog.jsonl +3 -0
- data/europa_eac_tm.jsonl +3 -0
- dataset_details.md +110 -1
- docs/picture/cmu_hinglish_dog_text_length.jpg +3 -0
- docs/picture/europa_eac_tm_text_length.jpg +3 -0
- examples/make_subset_details.py +1 -1
- examples/preprocess/preprocess_cmu_hinglish_dog.py +85 -0
- examples/preprocess/preprocess_europa_eac_tm.py +89 -0
- language_identification.py +5 -0
README.md
CHANGED
@@ -41,8 +41,8 @@ Tips:
 | bsd_ja_en | [2008.01940v1](https://arxiv.org/abs/2008.01940v1) | TRAIN: 35755, VALID: 3636, TEST: 3702 | Although machine translation of written text has made great progress in recent years thanks to the increasing availability of parallel corpora and corpus-based training techniques, automatic translation of spoken text and dialogue remains challenging even for modern systems. In this paper, we aim to improve machine translation quality for conversational text by introducing a newly constructed Japanese-English business conversation parallel corpus. | [bsd_ja_en](https://huggingface.co/datasets/bsd_ja_en) |
 | autshumato | | TRAIN: 652824 | One of the aims of the Autshumato project is to develop machine translation systems for three South African language pairs. | [autshumato](https://huggingface.co/datasets/autshumato) |
 | chr_en | [2010.04791](https://arxiv.org/abs/2010.04791) | Sample counts | ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English. ChrEn is extremely low-resource, containing 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation. ChrEn also contains 5k Cherokee monolingual data to enable semi-supervised learning. | [chr_en](https://huggingface.co/datasets/chr_en) |
-| cmu_hinglish_dog | [CMU_DoG](https://github.com/festvox/datasets-CMU_DoG); [1809.07358](https://arxiv.org/abs/1809.07358) |
+| cmu_hinglish_dog | [CMU_DoG](https://github.com/festvox/datasets-CMU_DoG); [1809.07358](https://arxiv.org/abs/1809.07358) | TRAIN: 13146, VALID: 1645, TEST: 1616 | A collection of text conversations in Hinglish (code-mixing between Hindi and English) and their corresponding English versions. It can be used for translation between the two. The dataset was provided by Prof. Alan Black's team at CMU. | [cmu_hinglish_dog](https://huggingface.co/datasets/cmu_hinglish_dog) |
-| europa_eac_tm | [EAC-Translation Memory](https://joint-research-centre.ec.europa.eu/language-technology-resources/eac-translation-memory_en) |
+| europa_eac_tm | [EAC-Translation Memory](https://joint-research-centre.ec.europa.eu/language-technology-resources/eac-translation-memory_en) | TRAIN: 16543 | A corpus of manual translations from English into up to 25 languages, released in 2012 by the European Commission's Directorate-General for Education and Culture (EAC). | [europa_eac_tm](https://huggingface.co/datasets/europa_eac_tm) |
 | europa_ecdc_tm | [ECDC-Translation Memory](https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en) | Sample counts | In October 2012, the European Union (EU) agency "European Centre for Disease Prevention and Control" (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professional translations, in 25 languages. | [europa_ecdc_tm](https://huggingface.co/datasets/europa_ecdc_tm) |
 | flores | [1902.01382](https://arxiv.org/abs/1902.01382) | Sample counts | Evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. | [flores](https://huggingface.co/datasets/flores) |
 | giga_fren | | Sample counts | | [giga_fren](https://huggingface.co/datasets/giga_fren) |
data/cmu_hinglish_dog.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb50354d75397f1c5f4c16acb82f41be8c2025b394062df1f3474c02c0118226
+size 2466612
data/europa_eac_tm.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ad6d6169125a8e19b1bda9594d625941e69af9097cd408f385c7ee595fdfe63
+size 2520239
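Both `.jsonl` files above are committed as Git LFS pointer stubs: a three-line text file recording the spec version, the SHA-256 object id, and the byte size, while the real data lives in LFS storage. A minimal sketch of reading such a stub (`parse_lfs_pointer` is a hypothetical helper, not part of this repository):

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; split on the first space.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:cb50354d75397f1c5f4c16acb82f41be8c2025b394062df1f3474c02c0118226
size 2466612
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # prints "2466612" (~2.4 MB once fetched with `git lfs pull`)
```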
dataset_details.md
CHANGED
@@ -211,7 +211,6 @@ zh: 94411
 | bucc2018 | zh | 1902年,光緒皇帝接受吏部尚書張百熙建議,頒佈〈欽定學堂章程〉,其中尋常小學課目中,有史學、輿地二項。 |
 | bucc2018 | zh | 張百熙派吳汝綸赴日本考察教育後,1903年,負責教育改革的張百熙、張之洞、榮慶向皇帝建議重訂學堂章程。 |
 
-
 <details>
 <summary>Text length</summary>
 <pre><code>10-20: 9687
@@ -242,6 +241,116 @@ zh: 94411
 ![bucc2018_text_length.jpg](docs/picture/bucc2018_text_length.jpg)
 
 
+#### cmu_hinglish_dog
+The following is information about the train split.
+
+```text
+Language counts:
+hi_en: 7180
+en: 5966
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| cmu_hinglish_dog | en | what moviie did you see |
+| cmu_hinglish_dog | en | hello how are you? Have you heard of Batman Begins? It is a great movie! |
+| cmu_hinglish_dog | en | no tell me more |
+| cmu_hinglish_dog | hi_en | tumne konsi movie dekhi |
+| cmu_hinglish_dog | hi_en | hello tum kaise ho? Kya tumne Batman Begins ke bare mein suna hai? Kya great movie hai! |
+| cmu_hinglish_dog | hi_en | nahi aur batao |
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 474
+10-20: 1228
+20-30: 1800
+30-40: 1801
+40-50: 1501
+50-60: 1207
+60-70: 977
+70-80: 773
+80-90: 641
+90-100: 516
+100-110: 397
+110-120: 348
+120-130: 274
+130-140: 206
+140-150: 165
+150-160: 161
+160-170: 115
+170-180: 98
+180-190: 81
+190-200: 64
+200-210: 319
+</code></pre>
+</details>
+
+Text length histogram:
+
+![cmu_hinglish_dog_text_length.jpg](docs/picture/cmu_hinglish_dog_text_length.jpg)
+
+
+#### europa_eac_tm
+The following is information about the train split.
+
+```text
+Language counts:
+en: 4776
+bg: 3955
+fr: 3949
+es: 3863
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| europa_eac_tm | bg | КАНДИДАТ |
+| europa_eac_tm | bg | Формулярът за кандидатстване ще бъде обработен с помощта на компютър. Всички лични данни (имена, адреси, CV-та и др.) ще бъдат използвани в съответствие с Решение (EC) № 45/2001 на Европейския парламент и на Съвета от 18 Декември 2000г. относно защитата на лицата във връзка с използването на личните им данни от страна на институциите и организациите на Общността и свободното движение на тези данни. Предоставената от кандидатите информация, необходима за оценка на техните предложения за финансиране, ще се ползва от службите по съответните програми единствено и само по предназначение. По искане на кандидата, предоставената лична информация може да му бъде изпратена за допълване или корекции. Всички въпроси, отнасящи се до тези данни, следва да бъдат отправяни към съответната Агенция, до която се изпраща формулярът за кандидатстване. Бенефициентите могат да подават жалби по всяко време до Европейската служба за защита на личните данни във връзка с използването на личните им данни. |
+| europa_eac_tm | bg | ДАТА НА РАЖДАНЕ |
+| europa_eac_tm | en | APPLICANT |
+| europa_eac_tm | en | The grant application will be processed by computer. All personal data (such as names, addresses, CVs, etc.) will be processed in accordance with Regulation (EC) No 45/2001 of the European Parliament and of the Council of 18 December 2000 on the protection of individuals with regard to the processing of personal data by the Community institutions and bodies and on the free movement of such data. Information provided by the applicants necessary in order to assess their grant application will be processed solely for that purpose by the department responsible for the programme concerned. On the applicant's request, personal data may be sent to the applicant to be corrected or completed. Any question relating to these data, should be addressed to the appropriate Agency to which the form must be submitted. Beneficiaries may lodge a complaint against the processing of their personal data with the European Data Protection Supervisor at anytime. |
+| europa_eac_tm | en | DATE OF BIRTH |
+| europa_eac_tm | es | Número de profesores/formadores |
+| europa_eac_tm | es | SOLICITANTE |
+| europa_eac_tm | es | La solicitud de subvención será procesada electrónicamente. Todos los datos personales (como los nombres, direcciones, CV, etc.) serán procesados de acuerdo con la Ley (EC) número 45/2001 del Parlamento Europeo y del Consejo de 18 de diciembre de 2000 sobre la protección de los individuos respecto a los datos personales procesados por parte de las instituciones y organismos de la Comunidad europea y sobre la rectificación de dichos datos. La información proporcionada por los solicitantes se procesará únicamente con el propósito de evaluar la solicitud de subvención por el departamento correspondiente del programa. A petición del solicitante, sus datos personales pueden ser modificados. Cualquier duda relacionada con dichos datos debe dirigirse a la Agencia apropiada. Los beneficiarios pueden interponer una demanda contra el procesamiento de sus datos personales ante el Supervisor de Protección de datos europeo. |
+| europa_eac_tm | fr | Nb enseignants/formateurs |
+| europa_eac_tm | fr | CANDIDAT |
+| europa_eac_tm | fr | Le formulaire de candidature sera traité par ordinateur. Toutes les données à caractère personnel (telles que les noms, adresses, CV, etc.) seront traitées conformément au règlement (CE) N°45/2001 du Parlement européen et du Conseil du 18 décembre 2000 relatif à la protection des personnes physiques à l’égard du traitement des données à caractère personnel par les institutions et organes communautaires et à la libre circulation de ces données. Les renseignements fournis par les candidats qui sont nécessaires pour pouvoir évaluer les demandes de subvention seront traités uniquement à cette fin par le département chargé du programme concerné. Les données à caractère personnel peuvent être envoyées au candidat, à la demande de ce dernier, pour lui permettre de les corriger ou de les compléter. Toute question relative à ces données devrait être adressée à l’Agence nationale pertinente à laquelle le présent formulaire doit être soumis. Les bénéficiaires peuvent à tout moment introduire une plainte contre le traitement de leurs données à caractère personnel auprès du Contrôleur européen de la protection des données. |
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 2117
+10-20: 3050
+20-30: 2661
+30-40: 1851
+40-50: 1284
+50-60: 894
+60-70: 697
+70-80: 572
+80-90: 447
+90-100: 381
+100-110: 300
+110-120: 250
+120-130: 215
+130-140: 193
+140-150: 177
+150-160: 149
+160-170: 132
+170-180: 116
+180-190: 100
+190-200: 93
+200-210: 864
+</code></pre>
+</details>
+
+Text length histogram:
+
+![europa_eac_tm_text_length.jpg](docs/picture/europa_eac_tm_text_length.jpg)
+
+
 #### iwslt2017
 The following is information about the train split.
 
docs/picture/cmu_hinglish_dog_text_length.jpg
ADDED
Git LFS Details
docs/picture/europa_eac_tm_text_length.jpg
ADDED
Git LFS Details
examples/make_subset_details.py
CHANGED
@@ -12,7 +12,7 @@ from project_settings import project_path
 
 def get_args():
     parser = argparse.ArgumentParser()
-    parser.add_argument("--dataset_name", default="
+    parser.add_argument("--dataset_name", default="europa_eac_tm", type=str)
     parser.add_argument(
         "--dataset_cache_dir",
         default=(project_path / "hub_datasets").as_posix(),
examples/preprocess/preprocess_cmu_hinglish_dog.py
ADDED
@@ -0,0 +1,85 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="cmu_hinglish_dog", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/cmu_hinglish_dog.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    dataset_dict = load_dataset(
+        path=args.dataset_path,
+        cache_dir=args.dataset_cache_dir,
+        # download_mode=DownloadMode.FORCE_REDOWNLOAD
+    )
+    print(dataset_dict)
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for k, v in dataset_dict.items():
+            split = k
+            if split not in ("train", "validation", "test"):
+                print("skip split: {}".format(split))
+                continue
+
+            for sample in tqdm(v):
+
+                translation = sample["translation"]
+                for language, text in translation.items():
+                    text = text.strip()
+
+                    if text in text_set:
+                        continue
+                    text_set.add(text)
+
+                    if language not in LANGUAGE_MAP.keys():
+                        raise AssertionError(language)
+
+                    row = {
+                        "text": text,
+                        "language": language,
+                        "data_source": "cmu_hinglish_dog",
+                        "split": split
+                    }
+                    row = json.dumps(row, ensure_ascii=False)
+                    f.write("{}\n".format(row))
+                    counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == '__main__':
+    main()
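The core transformation in the script above is to flatten each sample's `translation` dict into one JSONL row per language, de-duplicating on the stripped text. A self-contained sketch of that logic on toy data (so no `datasets` download is needed; the two toy samples are invented for illustration):

```python
import json
from collections import defaultdict

# Toy stand-ins for dataset samples; real samples come from load_dataset(...).
samples = [
    {"translation": {"en": "no tell me more", "hi_en": "nahi aur batao"}},
    {"translation": {"en": "no tell me more", "hi_en": "nahi aur batao"}},  # duplicate
]

text_set = set()
counter = defaultdict(int)
rows = []
for sample in samples:
    for language, text in sample["translation"].items():
        text = text.strip()
        if text in text_set:  # skip texts already emitted
            continue
        text_set.add(text)
        rows.append(json.dumps({
            "text": text,
            "language": language,
            "data_source": "cmu_hinglish_dog",
            "split": "train",
        }, ensure_ascii=False))
        counter["train"] += 1

print(counter["train"])  # prints 2: the duplicate sample contributes no rows
```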
examples/preprocess/preprocess_europa_eac_tm.py
ADDED
@@ -0,0 +1,89 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="europa_eac_tm", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/europa_eac_tm.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    name_list = [
+        "en2bg", "en2es", "en2fr"
+    ]
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for name in name_list:
+            dataset_dict = load_dataset(
+                path=args.dataset_path,
+                name=name,
+                cache_dir=args.dataset_cache_dir,
+                # download_mode=DownloadMode.FORCE_REDOWNLOAD
+            )
+            for k, v in dataset_dict.items():
+                split = k
+                if split not in ("train", "validation", "test"):
+                    print("skip split: {}".format(split))
+                    continue
+
+                for sample in tqdm(v):
+
+                    translation = sample["translation"]
+                    for language, text in translation.items():
+                        text = text.strip()
+
+                        if text in text_set:
+                            continue
+                        text_set.add(text)
+
+                        if language not in LANGUAGE_MAP.keys():
+                            raise AssertionError(language)
+
+                        row = {
+                            "text": text,
+                            "language": language,
+                            "data_source": "europa_eac_tm",
+                            "split": split
+                        }
+                        row = json.dumps(row, ensure_ascii=False)
+                        f.write("{}\n".format(row))
+                        counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == "__main__":
+    main()
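The "Text length" tables in dataset_details.md appear to bin text lengths into width-10 buckets, with the disproportionately large last bucket suggesting that lengths of 200 and above are folded into `200-210`. A sketch of such a counter under that assumption (the overflow behavior is inferred, not confirmed by the source):

```python
from collections import Counter

def length_buckets(texts, width=10, cap=200):
    """Count texts per length bucket; lengths >= cap all fall into the last bucket."""
    counter = Counter()
    for text in texts:
        start = min(len(text) // width * width, cap)
        counter["{}-{}".format(start, start + width)] += 1
    return counter

buckets = length_buckets(["hi", "hello there", "x" * 250])
print(dict(buckets))  # {'0-10': 1, '10-20': 1, '200-210': 1}
```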
language_identification.py
CHANGED
@@ -13,6 +13,8 @@ _URLS = {
     "autshumato": "data/autshumato.jsonl",
     "bsd_ja_en": "data/bsd_ja_en.jsonl",
     "bucc2018": "data/bucc2018.jsonl",
+    "cmu_hinglish_dog": "data/cmu_hinglish_dog.jsonl",
+    "europa_eac_tm": "data/europa_eac_tm.jsonl",
     "iwslt2017": "data/iwslt2017.jsonl",
     "mike0307": "data/mike0307.jsonl",
     "nbnn": "data/nbnn.jsonl",
@@ -50,6 +52,7 @@ LANGUAGE_MAP = {
     "fr": "french",
     "gl": "galician",
     "hi": "hindi",
+    "hi_en": "hindi (english)",
     "is": "icelandic",
     "it": "italian",
     "ja": "japanese",
@@ -86,6 +89,8 @@ class LanguageIdentification(datasets.GeneratorBasedBuilder):
         datasets.BuilderConfig(name="autshumato", version=VERSION, description="autshumato"),
         datasets.BuilderConfig(name="bsd_ja_en", version=VERSION, description="bsd_ja_en"),
         datasets.BuilderConfig(name="bucc2018", version=VERSION, description="bucc2018"),
+        datasets.BuilderConfig(name="cmu_hinglish_dog", version=VERSION, description="cmu_hinglish_dog"),
+        datasets.BuilderConfig(name="europa_eac_tm", version=VERSION, description="europa_eac_tm"),
         datasets.BuilderConfig(name="iwslt2017", version=VERSION, description="iwslt2017"),
         datasets.BuilderConfig(name="mike0307", version=VERSION, description="mike0307"),
         datasets.BuilderConfig(name="nbnn", version=VERSION, description="nbnn"),
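With the new subsets registered, per-language counts like those shown in dataset_details.md can be regenerated by tallying the JSONL rows. A sketch of that tally (a `StringIO` with two invented rows stands in for `data/cmu_hinglish_dog.jsonl`):

```python
import json
from collections import Counter
from io import StringIO

# Stand-in for open("data/cmu_hinglish_dog.jsonl", encoding="utf-8").
jsonl = StringIO(
    '{"text": "no tell me more", "language": "en", "data_source": "cmu_hinglish_dog", "split": "train"}\n'
    '{"text": "nahi aur batao", "language": "hi_en", "data_source": "cmu_hinglish_dog", "split": "train"}\n'
)

counter = Counter()
for line in jsonl:
    row = json.loads(line)
    if row["split"] == "train":  # the details tables cover the train split only
        counter[row["language"]] += 1

print(dict(counter))  # {'en': 1, 'hi_en': 1}
```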