Datasets: ted_hrlr

Multilinguality: translation
Size Categories: 1M<n<10M
Language Creators: expert-generated
Annotations Creators: crowdsourced
Tags:
License:
system (HF staff) committed
Commit 4940254
0 Parent(s)

Update files from the datasets library (from 1.0.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.0.0

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
Files changed (50); a sketch of the dummy-data layout follows the list:
  1. .gitattributes +27 -0
  2. dataset_infos.json +1 -0
  3. dummy/az_to_en/1.0.0/dummy_data.zip +3 -0
  4. dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/az-tr.train +4 -0
  5. dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/az.dev +4 -0
  6. dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/az.test +4 -0
  7. dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/en.dev +4 -0
  8. dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/en.test +4 -0
  9. dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/en.train +4 -0
  10. dummy/aztr_to_en/1.0.0/dummy_data.zip +3 -0
  11. dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/be.dev +4 -0
  12. dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/be.test +4 -0
  13. dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/be.train +4 -0
  14. dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/en.dev +4 -0
  15. dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/en.test +4 -0
  16. dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/en.train +4 -0
  17. dummy/be_to_en/1.0.0/dummy_data.zip +3 -0
  18. dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/be-ru.train +4 -0
  19. dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/be.dev +4 -0
  20. dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/be.test +4 -0
  21. dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/en.dev +4 -0
  22. dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/en.test +4 -0
  23. dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/en.train +4 -0
  24. dummy/beru_to_en/1.0.0/dummy_data.zip +3 -0
  25. dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/es.dev +4 -0
  26. dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/es.test +4 -0
  27. dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/es.train +4 -0
  28. dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/pt.dev +4 -0
  29. dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/pt.test +4 -0
  30. dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/pt.train +4 -0
  31. dummy/es_to_pt/1.0.0/dummy_data.zip +3 -0
  32. dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/fr.dev +4 -0
  33. dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/fr.test +4 -0
  34. dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/fr.train +4 -0
  35. dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/pt.dev +4 -0
  36. dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/pt.test +4 -0
  37. dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/pt.train +4 -0
  38. dummy/fr_to_pt/1.0.0/dummy_data.zip +3 -0
  39. dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/en.dev +4 -0
  40. dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/en.test +4 -0
  41. dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/en.train +4 -0
  42. dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/gl.dev +4 -0
  43. dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/gl.test +4 -0
  44. dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/gl.train +4 -0
  45. dummy/gl_to_en/1.0.0/dummy_data.zip +3 -0
  46. dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/en.dev +4 -0
  47. dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/en.test +4 -0
  48. dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/en.train +4 -0
  49. dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/gl-pt.train +4 -0
  50. dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/gl.dev +4 -0
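
The dummy-data files above follow a fixed layout: dummy/<config_name>/<version>/dummy_data.zip, plus the extracted per-language text files split into .dev, .test and .train. As a minimal sketch (assuming only the paths listed above; the helper name and root directory are illustrative), the config names and their split files can be grouped from that layout:

```python
from collections import defaultdict
from pathlib import Path

# Minimal sketch: group the dummy text files by config name and split,
# following the dummy/<config>/<version>/... layout shown in the file list.
def index_dummy_files(root: str = "dummy") -> dict:
    index = defaultdict(list)
    for path in Path(root).rglob("*.*"):
        if path.suffix == ".zip":
            continue  # the archives themselves are stored as LFS pointers
        config = path.relative_to(root).parts[0]  # e.g. "az_to_en", "aztr_to_en"
        split = path.suffix.lstrip(".")           # "dev", "test" or "train"
        index[(config, split)].append(path.name)
    return dict(index)

if __name__ == "__main__":
    for (config, split), files in sorted(index_dummy_files().items()):
        print(config, split, files)
```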
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
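
The rules above mark every matching file type (archives, model weights, Arrow/Parquet files, TensorBoard event files, and so on) as Git LFS content, which is why the dummy_data.zip entries later in this diff contain only short LFS pointer text instead of binary data. A rough illustration only (fnmatch is a simplification of real gitattributes matching, and only a subset of the patterns above is repeated here):

```python
from fnmatch import fnmatch

# A subset of the patterns from the .gitattributes hunk above.
# NOTE: fnmatch only approximates gitattributes pattern semantics.
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.gz", "*.parquet", "*.tar.*", "*.zip", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    name = path.rsplit("/", 1)[-1]  # these patterns match on the file name
    return any(fnmatch(name, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("dummy/az_to_en/1.0.0/dummy_data.zip"))  # True
print(is_lfs_tracked("dataset_infos.json"))                   # False
```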
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"az_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["az", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "az", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "az_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 186540, "num_examples": 904, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 1226853, "num_examples": 5947, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 122709, "num_examples": 672, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 1536102, "size_in_bytes": 132542011}, "aztr_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["az_tr", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "az_tr", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "aztr_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 186540, "num_examples": 904, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 39834469, "num_examples": 188397, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 122709, "num_examples": 672, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 40143718, "size_in_bytes": 171149627}, "be_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": 
{"languages": ["be", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "be", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "be_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 186606, "num_examples": 665, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 1176899, "num_examples": 4510, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 59328, "num_examples": 249, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 1422833, "size_in_bytes": 132428742}, "beru_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["be_ru", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "be_ru", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "beru_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 186606, "num_examples": 665, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 59953616, "num_examples": 212615, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 59328, "num_examples": 249, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 60199550, "size_in_bytes": 191205459}, "es_to_pt": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["es", "pt"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "es", "output": "pt"}, "builder_name": "ted_hrlr", "config_name": "es_to_pt", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 343640, "num_examples": 1764, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 8611393, "num_examples": 44939, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 181535, "num_examples": 1017, 
"dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 9136568, "size_in_bytes": 140142477}, "fr_to_pt": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["fr", "pt"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "fr", "output": "pt"}, "builder_name": "ted_hrlr", "config_name": "fr_to_pt", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 311650, "num_examples": 1495, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 8755387, "num_examples": 43874, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 212317, "num_examples": 1132, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 9279354, "size_in_bytes": 140285263}, "gl_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["gl", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "gl", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "gl_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 193213, "num_examples": 1008, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 1961363, "num_examples": 10018, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 137929, "num_examples": 683, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 2292505, "size_in_bytes": 133298414}, "glpt_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, 
Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["gl_pt", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "gl_pt", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "glpt_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 193213, "num_examples": 1008, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 11734254, "num_examples": 61803, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 137929, "num_examples": 683, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 12065396, "size_in_bytes": 143071305}, "he_to_pt": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["he", "pt"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "he", "output": "pt"}, "builder_name": "ted_hrlr", "config_name": "he_to_pt", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 361378, "num_examples": 1624, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 10627615, "num_examples": 48512, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 230725, "num_examples": 1146, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 11219718, "size_in_bytes": 142225627}, "it_to_pt": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["it", "pt"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "it", "output": "pt"}, "builder_name": "ted_hrlr", "config_name": "it_to_pt", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, 
"splits": {"test": {"name": "test", "num_bytes": 324726, "num_examples": 1670, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 8905825, "num_examples": 46260, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 210375, "num_examples": 1163, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 9440926, "size_in_bytes": 140446835}, "pt_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["pt", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "pt", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "pt_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 347803, "num_examples": 1804, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 9772911, "num_examples": 51786, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 207960, "num_examples": 1194, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 10328674, "size_in_bytes": 141334583}, "ru_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["ru", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "ru", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "ru_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1459576, "num_examples": 5477, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 58778442, "num_examples": 208107, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 1318357, "num_examples": 4806, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 61556375, "size_in_bytes": 192562284}, 
"ru_to_pt": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["ru", "pt"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "ru", "output": "pt"}, "builder_name": "ted_hrlr", "config_name": "ru_to_pt", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 409062, "num_examples": 1589, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 11882860, "num_examples": 47279, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 276866, "num_examples": 1185, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 12568788, "size_in_bytes": 143574697}, "tr_to_en": {"description": "Data sets derived from TED talk transcripts for comparing similar language pairs\nwhere one is high resource and the other is low resource.\n", "citation": "@inproceedings{Ye2018WordEmbeddings,\n author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},\n title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},\n booktitle = {HLT-NAACL},\n year = {2018},\n }\n", "homepage": "https://github.com/neulab/word-embeddings-for-nmt", "license": "", "features": {"translation": {"languages": ["tr", "en"], "id": null, "_type": "Translation"}}, "supervised_keys": {"input": "tr", "output": "en"}, "builder_name": "ted_hrlr", "config_name": "tr_to_en", "version": {"version_str": "1.0.0", "description": "", "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 1026406, "num_examples": 5030, "dataset_name": "ted_hrlr"}, "train": {"name": "train", "num_bytes": 38607636, "num_examples": 182451, "dataset_name": "ted_hrlr"}, "validation": {"name": "validation", "num_bytes": 832358, "num_examples": 4046, "dataset_name": "ted_hrlr"}}, "download_checksums": {"http://www.phontron.com/data/qi18naacl-dataset.tar.gz": {"num_bytes": 131005909, "checksum": "216a86c3df4d4f522856fe9b920ff5be6b394d769cc88974ae8f9f5546953bbc"}}, "download_size": 131005909, "dataset_size": 40466400, "size_in_bytes": 171472309}}
dummy/az_to_en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27d6e9bab89ca2bee5491cff3fa0801bc033519acb7c714357ff0b3950c20ba5
+ size 2763
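
The dummy_data.zip entries in this commit are stored as Git LFS pointer files like the one above: a spec version line, the sha256 oid of the real archive, and its size in bytes; the actual zip contents live in LFS storage. A small parsing sketch (plain string handling, not the git-lfs tooling; requires Python 3.9+ for removeprefix):

```python
# Sketch: read the three key/value lines of a Git LFS pointer file,
# such as the dummy_data.zip pointer shown above.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size_bytes": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:27d6e9bab89ca2bee5491cff3fa0801bc033519acb7c714357ff0b3950c20ba5
size 2763
"""
print(parse_lfs_pointer(pointer))  # {'version': '...', 'sha256': '27d6...', 'size_bytes': 2763}
```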
dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/az-tr.train ADDED
@@ -0,0 +1,4 @@
+ zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .
+ razılaşdıq ? hə ?
+ onda başlayaq .
+ uşaqlıq dövrünüzü keçsəniz də fırtıqınızı yediyiniz olub ?
dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/az.dev ADDED
@@ -0,0 +1,4 @@
+ 11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .
+ atam balaca boz radiosunda bbc xəbərlərinə qulaq asırdı .
+ üzündə o vaxt heç alışıq olmadığımız bir təbəssüm vardı , çünki xəbərlər əksərən üzərdi onu .
+ `` `` '' taliban geri çəkildi ! '' '' - deyə qışqırdı atam . ''
dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/az.test ADDED
@@ -0,0 +1,4 @@
+ bugün mən gözlənilməz kəşflər barədə danışacam .
+ mən indi günəş texnologiyası sənayesində çalışıram .
+ və mənim kiçik başlanğıc nöqtəm özümüzü güclə çevrənin içinə atmaq və bunu da ...
+ ... bunun üçün fərqli qaynaqlara nəzər salıb .
dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/en.dev ADDED
@@ -0,0 +1,4 @@
+ when i was 11 , i remember waking up one morning to the sound of joy in my house .
+ my father was listening to bbc news on his small , gray radio .
+ there was a big smile on his face which was unusual then , because the news mostly depressed him .
+ `` `` '' the taliban are gone ! '' '' my father shouted . ''
dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/en.test ADDED
@@ -0,0 +1,4 @@
+ today i 'm going to talk about unexpected discoveries .
+ now i work in the solar technology industry .
+ and my small startup is looking to force ourselves into the environment by paying attention to ...
+ ... paying attention to crowd-sourcing .
dummy/aztr_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/az_tr_to_en/en.train ADDED
@@ -0,0 +1,4 @@
+ please raise your hand if something applies to you .
+ are we agreed ? yes ?
+ then let 's begin .
+ have you ever eaten a booger long past your childhood ?
dummy/aztr_to_en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa750c84d33161590336f4dd638816d19faaf1ccd867a1b1d9ccd05a574734c9
+ size 2811
dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/be.dev ADDED
@@ -0,0 +1,4 @@
+ 11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .
+ atam balaca boz radiosunda bbc xəbərlərinə qulaq asırdı .
+ üzündə o vaxt heç alışıq olmadığımız bir təbəssüm vardı , çünki xəbərlər əksərən üzərdi onu .
+ `` `` '' taliban geri çəkildi ! '' '' - deyə qışqırdı atam . ''
dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/be.test ADDED
@@ -0,0 +1,4 @@
+ bugün mən gözlənilməz kəşflər barədə danışacam .
+ mən indi günəş texnologiyası sənayesində çalışıram .
+ və mənim kiçik başlanğıc nöqtəm özümüzü güclə çevrənin içinə atmaq və bunu da ...
+ ... bunun üçün fərqli qaynaqlara nəzər salıb .
dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/be.train ADDED
@@ -0,0 +1,4 @@
+ zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .
+ razılaşdıq ? hə ?
+ onda başlayaq .
+ uşaqlıq dövrünüzü keçsəniz də fırtıqınızı yediyiniz olub ?
dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/en.dev ADDED
@@ -0,0 +1,4 @@
+ when i was 11 , i remember waking up one morning to the sound of joy in my house .
+ my father was listening to bbc news on his small , gray radio .
+ there was a big smile on his face which was unusual then , because the news mostly depressed him .
+ `` `` '' the taliban are gone ! '' '' my father shouted . ''
dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/en.test ADDED
@@ -0,0 +1,4 @@
+ today i 'm going to talk about unexpected discoveries .
+ now i work in the solar technology industry .
+ and my small startup is looking to force ourselves into the environment by paying attention to ...
+ ... paying attention to crowd-sourcing .
dummy/be_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_to_en/en.train ADDED
@@ -0,0 +1,4 @@
+ please raise your hand if something applies to you .
+ are we agreed ? yes ?
+ then let 's begin .
+ have you ever eaten a booger long past your childhood ?
dummy/be_to_en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc60e565e0304ed35775eb910531082095948c7d85c7558615a3bcf1d957c6a1
+ size 2763
dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/be-ru.train ADDED
@@ -0,0 +1,4 @@
+ zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .
+ razılaşdıq ? hə ?
+ onda başlayaq .
+ uşaqlıq dövrünüzü keçsəniz də fırtıqınızı yediyiniz olub ?
dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/be.dev ADDED
@@ -0,0 +1,4 @@
+ 11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .
+ atam balaca boz radiosunda bbc xəbərlərinə qulaq asırdı .
+ üzündə o vaxt heç alışıq olmadığımız bir təbəssüm vardı , çünki xəbərlər əksərən üzərdi onu .
+ `` `` '' taliban geri çəkildi ! '' '' - deyə qışqırdı atam . ''
dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/be.test ADDED
@@ -0,0 +1,4 @@
+ bugün mən gözlənilməz kəşflər barədə danışacam .
+ mən indi günəş texnologiyası sənayesində çalışıram .
+ və mənim kiçik başlanğıc nöqtəm özümüzü güclə çevrənin içinə atmaq və bunu da ...
+ ... bunun üçün fərqli qaynaqlara nəzər salıb .
dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/en.dev ADDED
@@ -0,0 +1,4 @@
+ when i was 11 , i remember waking up one morning to the sound of joy in my house .
+ my father was listening to bbc news on his small , gray radio .
+ there was a big smile on his face which was unusual then , because the news mostly depressed him .
+ `` `` '' the taliban are gone ! '' '' my father shouted . ''
dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/en.test ADDED
@@ -0,0 +1,4 @@
+ today i 'm going to talk about unexpected discoveries .
+ now i work in the solar technology industry .
+ and my small startup is looking to force ourselves into the environment by paying attention to ...
+ ... paying attention to crowd-sourcing .
dummy/beru_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/be_ru_to_en/en.train ADDED
@@ -0,0 +1,4 @@
+ please raise your hand if something applies to you .
+ are we agreed ? yes ?
+ then let 's begin .
+ have you ever eaten a booger long past your childhood ?
dummy/beru_to_en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2325f3f62d823ba1c5e9e271e0efa9f73157d9a7d1d464ac958ad452d4109d1
+ size 2811
dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/es.dev ADDED
@@ -0,0 +1,4 @@
+ 11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .
+ atam balaca boz radiosunda bbc xəbərlərinə qulaq asırdı .
+ üzündə o vaxt heç alışıq olmadığımız bir təbəssüm vardı , çünki xəbərlər əksərən üzərdi onu .
+ `` `` '' taliban geri çəkildi ! '' '' - deyə qışqırdı atam . ''
dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/es.test ADDED
@@ -0,0 +1,4 @@
+ bugün mən gözlənilməz kəşflər barədə danışacam .
+ mən indi günəş texnologiyası sənayesində çalışıram .
+ və mənim kiçik başlanğıc nöqtəm özümüzü güclə çevrənin içinə atmaq və bunu da ...
+ ... bunun üçün fərqli qaynaqlara nəzər salıb .
dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/es.train ADDED
@@ -0,0 +1,4 @@
+ zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .
+ razılaşdıq ? hə ?
+ onda başlayaq .
+ uşaqlıq dövrünüzü keçsəniz də fırtıqınızı yediyiniz olub ?
dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/pt.dev ADDED
@@ -0,0 +1,4 @@
+ when i was 11 , i remember waking up one morning to the sound of joy in my house .
+ my father was listening to bbc news on his small , gray radio .
+ there was a big smile on his face which was unusual then , because the news mostly depressed him .
+ `` `` '' the taliban are gone ! '' '' my father shouted . ''
dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/pt.test ADDED
@@ -0,0 +1,4 @@
+ today i 'm going to talk about unexpected discoveries .
+ now i work in the solar technology industry .
+ and my small startup is looking to force ourselves into the environment by paying attention to ...
+ ... paying attention to crowd-sourcing .
dummy/es_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/es_to_pt/pt.train ADDED
@@ -0,0 +1,4 @@
+ please raise your hand if something applies to you .
+ are we agreed ? yes ?
+ then let 's begin .
+ have you ever eaten a booger long past your childhood ?
dummy/es_to_pt/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98a296c205b001bb43853e47b108b22d72dbf25330abf6ee8d2299c7ecad59e1
+ size 2763
dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/fr.dev ADDED
@@ -0,0 +1,4 @@
+ 11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .
+ atam balaca boz radiosunda bbc xəbərlərinə qulaq asırdı .
+ üzündə o vaxt heç alışıq olmadığımız bir təbəssüm vardı , çünki xəbərlər əksərən üzərdi onu .
+ `` `` '' taliban geri çəkildi ! '' '' - deyə qışqırdı atam . ''
dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/fr.test ADDED
@@ -0,0 +1,4 @@
+ bugün mən gözlənilməz kəşflər barədə danışacam .
+ mən indi günəş texnologiyası sənayesində çalışıram .
+ və mənim kiçik başlanğıc nöqtəm özümüzü güclə çevrənin içinə atmaq və bunu da ...
+ ... bunun üçün fərqli qaynaqlara nəzər salıb .
dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/fr.train ADDED
@@ -0,0 +1,4 @@
+ zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .
+ razılaşdıq ? hə ?
+ onda başlayaq .
+ uşaqlıq dövrünüzü keçsəniz də fırtıqınızı yediyiniz olub ?
dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/pt.dev ADDED
@@ -0,0 +1,4 @@
+ when i was 11 , i remember waking up one morning to the sound of joy in my house .
+ my father was listening to bbc news on his small , gray radio .
+ there was a big smile on his face which was unusual then , because the news mostly depressed him .
+ `` `` '' the taliban are gone ! '' '' my father shouted . ''
dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/pt.test ADDED
@@ -0,0 +1,4 @@
+ today i 'm going to talk about unexpected discoveries .
+ now i work in the solar technology industry .
+ and my small startup is looking to force ourselves into the environment by paying attention to ...
+ ... paying attention to crowd-sourcing .
dummy/fr_to_pt/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/fr_to_pt/pt.train ADDED
@@ -0,0 +1,4 @@
+ please raise your hand if something applies to you .
+ are we agreed ? yes ?
+ then let 's begin .
+ have you ever eaten a booger long past your childhood ?
dummy/fr_to_pt/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f967919e487e7db91e424fa5b9a52c07f88d16a870edfa8d404208552fdd433
+ size 2763
dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/en.dev ADDED
@@ -0,0 +1,4 @@
+ when i was 11 , i remember waking up one morning to the sound of joy in my house .
+ my father was listening to bbc news on his small , gray radio .
+ there was a big smile on his face which was unusual then , because the news mostly depressed him .
+ `` `` '' the taliban are gone ! '' '' my father shouted . ''
dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/en.test ADDED
@@ -0,0 +1,4 @@
+ today i 'm going to talk about unexpected discoveries .
+ now i work in the solar technology industry .
+ and my small startup is looking to force ourselves into the environment by paying attention to ...
+ ... paying attention to crowd-sourcing .
dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/en.train ADDED
@@ -0,0 +1,4 @@
+ please raise your hand if something applies to you .
+ are we agreed ? yes ?
+ then let 's begin .
+ have you ever eaten a booger long past your childhood ?
dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/gl.dev ADDED
@@ -0,0 +1,4 @@
+ 11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .
+ atam balaca boz radiosunda bbc xəbərlərinə qulaq asırdı .
+ üzündə o vaxt heç alışıq olmadığımız bir təbəssüm vardı , çünki xəbərlər əksərən üzərdi onu .
+ `` `` '' taliban geri çəkildi ! '' '' - deyə qışqırdı atam . ''
dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/gl.test ADDED
@@ -0,0 +1,4 @@
+ bugün mən gözlənilməz kəşflər barədə danışacam .
+ mən indi günəş texnologiyası sənayesində çalışıram .
+ və mənim kiçik başlanğıc nöqtəm özümüzü güclə çevrənin içinə atmaq və bunu da ...
+ ... bunun üçün fərqli qaynaqlara nəzər salıb .
dummy/gl_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_to_en/gl.train ADDED
@@ -0,0 +1,4 @@
+ zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .
+ razılaşdıq ? hə ?
+ onda başlayaq .
+ uşaqlıq dövrünüzü keçsəniz də fırtıqınızı yediyiniz olub ?
dummy/gl_to_en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d3294fd69a3f7c49239a20a03a03b04db5ae28b86b2cf40c6be33adf82a2b39
+ size 2763
dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/en.dev ADDED
@@ -0,0 +1,4 @@
+ when i was 11 , i remember waking up one morning to the sound of joy in my house .
+ my father was listening to bbc news on his small , gray radio .
+ there was a big smile on his face which was unusual then , because the news mostly depressed him .
+ `` `` '' the taliban are gone ! '' '' my father shouted . ''
dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/en.test ADDED
@@ -0,0 +1,4 @@
+ today i 'm going to talk about unexpected discoveries .
+ now i work in the solar technology industry .
+ and my small startup is looking to force ourselves into the environment by paying attention to ...
+ ... paying attention to crowd-sourcing .
dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/en.train ADDED
@@ -0,0 +1,4 @@
+ please raise your hand if something applies to you .
+ are we agreed ? yes ?
+ then let 's begin .
+ have you ever eaten a booger long past your childhood ?
dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/gl-pt.train ADDED
@@ -0,0 +1,4 @@
+ zəhmət olmasa , sizə xitab edən sözlər eşidəndə əlinizi qaldırın .
+ razılaşdıq ? hə ?
+ onda başlayaq .
+ uşaqlıq dövrünüzü keçsəniz də fırtıqınızı yediyiniz olub ?
dummy/glpt_to_en/1.0.0/dummy_data-zip-extracted/dummy_data/datasets/gl_pt_to_en/gl.dev ADDED
@@ -0,0 +1,4 @@
+ 11 yaşımdaydım . səhərin birində , evimizdəki sevinc səslərinə oyandığım indiki kimi yadımdadır .
+ atam balaca boz radiosunda bbc xəbərlərinə qulaq asırdı .
+ üzündə o vaxt heç alışıq olmadığımız bir təbəssüm vardı , çünki xəbərlər əksərən üzərdi onu .
+ `` `` '' taliban geri çəkildi ! '' '' - deyə qışqırdı atam . ''