HoneyTian committed on
Commit
80f22ba
1 Parent(s): b07144f
README.md CHANGED
@@ -55,8 +55,8 @@ Tips:
55
  | menyo20k_mt | [menyo20k_mt](https://arxiv.org/abs/2103.08647v3) | TRAIN: 19899, VALID: 6655, TEST: 13148 | MENYO-20k is a multi-domain parallel dataset whose texts come from news articles, TED talks, movie transcripts, radio transcripts, technical texts, and other short pieces curated from the web and by professional translators. | [menyo20k_mt](https://huggingface.co/datasets/menyo20k_mt) |
56
  | pib | [CVIT-PIB](https://arxiv.org/abs/2008.04860) | | A large-scale sentence-aligned corpus for 11 Indian languages; the CVIT-PIB corpus is the largest multilingual corpus available for Indian languages. | [pib](https://huggingface.co/datasets/pib) |
57
  | poleval2019_mt | | | PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. | [poleval2019_mt](https://huggingface.co/datasets/poleval2019_mt) |
58
- | wmt19 | [statmt.org](https://www.statmt.org/wmt19/translation-task.html) | Sample count | Our aim is to use publicly available data sources wherever possible. The main sources of our training data are the Europarl corpus, the UN corpus, the News Commentary corpus and the ParaCrawl corpus. We also release a monolingual News Crawl corpus. Additional language-specific corpora will be provided. | [wmt/wmt19](https://huggingface.co/datasets/wmt/wmt19) |
59
- | ro_sts_parallel | | Sample count | We present RO-STS-Parallel, a parallel Romanian-English dataset obtained by translating the STS English dataset into Romanian. | [ro_sts_parallel](https://huggingface.co/datasets/ro_sts_parallel) |
60
 
61
 
62
 
@@ -100,9 +100,9 @@ https://opus.nlpl.eu/
100
  | para_crawl | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | | Web-scale parallel corpora for the official European languages. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
101
  | php | [PHP](https://opus.nlpl.eu/PHP/corpus/version/PHP) | TRAIN: 44007 | A parallel corpus originally extracted from http://se.php.net/download-docs.php. The corpus is rather noisy. | [php](https://huggingface.co/datasets/php) |
102
  | tatoeba | [Tatoeba](https://opus.nlpl.eu/Tatoeba/corpus/version/Tatoeba); [tatoeba](https://tatoeba.org/); [Tatoeba Paper](https://arxiv.org/abs/1812.10464v2) | TRAIN: 702895 | Tatoeba is a collection of sentences and translations. | [tatoeba](https://huggingface.co/datasets/tatoeba) |
103
- | qed_amara | [QED](https://opus.nlpl.eu/QED/corpus/version/QED) | Sample count | | [qed_amara](https://huggingface.co/datasets/qed_amara) |
104
- | setimes | [SETIMES](https://opus.nlpl.eu/SETIMES/corpus/version/SETIMES) | Sample count | A parallel corpus of English and South-East European languages. | [setimes](https://huggingface.co/datasets/setimes) |
105
- | spc | [SPC](https://opus.nlpl.eu/SPC/corpus/version/SPC) | Sample count | | [spc](https://huggingface.co/datasets/spc) |
106
  | tanzil | [Tanzil](https://opus.nlpl.eu/Tanzil/corpus/version/Tanzil) | Sample count | | [tanzil](https://huggingface.co/datasets/tanzil) |
107
 
108
 
 
55
  | menyo20k_mt | [menyo20k_mt](https://arxiv.org/abs/2103.08647v3) | TRAIN: 19899, VALID: 6655, TEST: 13148 | MENYO-20k is a multi-domain parallel dataset whose texts come from news articles, TED talks, movie transcripts, radio transcripts, technical texts, and other short pieces curated from the web and by professional translators. | [menyo20k_mt](https://huggingface.co/datasets/menyo20k_mt) |
56
  | pib | [CVIT-PIB](https://arxiv.org/abs/2008.04860) | | A large-scale sentence-aligned corpus for 11 Indian languages; the CVIT-PIB corpus is the largest multilingual corpus available for Indian languages. | [pib](https://huggingface.co/datasets/pib) |
57
  | poleval2019_mt | | | PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. | [poleval2019_mt](https://huggingface.co/datasets/poleval2019_mt) |
58
+ | wmt19 | [statmt.org](https://www.statmt.org/wmt19/translation-task.html) | | Our aim is to use publicly available data sources wherever possible. The main sources of our training data are the Europarl corpus, the UN corpus, the News Commentary corpus and the ParaCrawl corpus. We also release a monolingual News Crawl corpus. Additional language-specific corpora will be provided. | [wmt/wmt19](https://huggingface.co/datasets/wmt/wmt19) |
59
+ | ro_sts_parallel | | TRAIN: 21226, VALID: 5470, TEST: 4693 | We present RO-STS-Parallel, a parallel Romanian-English dataset obtained by translating the STS English dataset into Romanian. | [ro_sts_parallel](https://huggingface.co/datasets/ro_sts_parallel) |
60
 
61
 
62
 
 
100
  | para_crawl | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | | Web-scale parallel corpora for the official European languages. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
101
  | php | [PHP](https://opus.nlpl.eu/PHP/corpus/version/PHP) | TRAIN: 44007 | A parallel corpus originally extracted from http://se.php.net/download-docs.php. The corpus is rather noisy. | [php](https://huggingface.co/datasets/php) |
102
  | tatoeba | [Tatoeba](https://opus.nlpl.eu/Tatoeba/corpus/version/Tatoeba); [tatoeba](https://tatoeba.org/); [Tatoeba Paper](https://arxiv.org/abs/1812.10464v2) | TRAIN: 702895 | Tatoeba is a collection of sentences and translations. | [tatoeba](https://huggingface.co/datasets/tatoeba) |
103
+ | qed_amara | [QED](https://opus.nlpl.eu/QED/corpus/version/QED) | TRAIN: 4183836 | | [qed_amara](https://huggingface.co/datasets/qed_amara) |
104
+ | setimes | [SETIMES](https://opus.nlpl.eu/SETIMES/corpus/version/SETIMES) | | A parallel corpus of English and South-East European languages. | [setimes](https://huggingface.co/datasets/setimes) |
105
+ | spc | [SPC](https://opus.nlpl.eu/SPC/corpus/version/SPC) | TRAIN: 98327 | | [spc](https://huggingface.co/datasets/spc) |
106
  | tanzil | [Tanzil](https://opus.nlpl.eu/Tanzil/corpus/version/Tanzil) | Sample count | | [tanzil](https://huggingface.co/datasets/tanzil) |
107
 
108
 
data/pib.jsonl DELETED
File without changes
data/qed_amara.jsonl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:60618e6fb0434283f8db9b16ef5aed3a26c10274ace68c6f9f1217d2b243666f
3
+ size 655462579
data/ro_sts_parallel.jsonl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b53f8e142dbf2b638d2e5fe7ccab3f56df46c3540637aabb7991017610a1329c
3
+ size 4757386
data/spc.jsonl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cac9e135432501855d0f7a5a03378aae169c371bdf04c8b94abb4cfa7fd8f24d
3
+ size 14645493
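The three `data/*.jsonl` files above are committed as Git LFS pointers (version, oid, size); the actual payloads are JSON Lines written by the preprocessing scripts added later in this commit, one object per line with `text`, `language`, `data_source` and `split` keys. A minimal sketch of peeking at one of them once its LFS payload has been pulled (the local path is the only assumption):

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# Sketch: print the first few rows of one of the new jsonl files.
# Assumes `git lfs pull` has replaced the pointer with the real payload.
import itertools
import json

with open("data/ro_sts_parallel.jsonl", "r", encoding="utf-8") as f:
    for line in itertools.islice(f, 5):
        row = json.loads(line)
        # each row carries: text, language, data_source, split
        print(row["language"], row["split"], row["text"][:60])
```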
dataset_details.md CHANGED
@@ -1811,9 +1811,142 @@ sv: 891
1811
  ![php_text_length.jpg](docs/picture/php_text_length.jpg)
1812
 
1813
 
1814
- #### scandi_langid
1815
  The information below refers to the train split
1816
 
1817
 
1818
  ```text
1819
  Samples per language:
@@ -1822,10 +1955,8 @@ no: 79846
1822
  da: 79844
1823
  ```
1824
 
1825
-
1826
  Sample examples:
1827
 
1828
-
1829
  | Dataset | Language | Sample |
1830
  | :---: | :---: | :---: |
1831
  | scandi_langid | no | Det høres flott ut, men hvem sa at det skal være lett? |
@@ -1872,6 +2003,65 @@ da: 79844
1872
  ![scandi_langid_text_length.jpg](docs/picture/scandi_langid_text_length.jpg)
1873
 
1874
 
1875
  #### stsb_multi_mt
1876
  The information below refers to the train split
1877
 
 
1811
  ![php_text_length.jpg](docs/picture/php_text_length.jpg)
1812
 
1813
 
1814
+ #### qed_amara
1815
  The information below refers to the train split
1816
 
1817
+ ```text
1818
+ Samples per language:
1819
+ ar: 573873
1820
+ ko: 570071
1821
+ ja: 476512
1822
+ en: 469301
1823
+ es: 415296
1824
+ it: 411257
1825
+ de: 374472
1826
+ fr: 372513
1827
+ he: 262705
1828
+ nl: 257836
1829
+ ```
1830
+
1831
+ Sample examples:
1832
+
1833
+ | Dataset | Language | Sample |
1834
+ | :---: | :---: | :---: |
1835
+ | qed_amara | ar | قد تعتقد أنك تعرف |
1836
+ | qed_amara | ar | الكثير عن الأمريكيين الأصليين من الأفلام المشهورة |
1837
+ | qed_amara | ar | والكتب |
1838
+ | qed_amara | ko | 여러분은 미국 원주민에 대해 |
1839
+ | qed_amara | ko | 유명한 영화나 |
1840
+ | qed_amara | ko | 책이나, |
1841
+ | qed_amara | de | Kunst muss schön sein. |
1842
+ | qed_amara | de | Künstler müssen schön sein. |
1843
+ | qed_amara | de | Kunst muss schön sein... |
1844
+ | qed_amara | fr | Art doit être beau. |
1845
+ | qed_amara | fr | Artiste doit être beau. |
1846
+ | qed_amara | fr | L'art doit être beau... (Grommèle) |
1847
+ | qed_amara | es | Así que ahora es el momento para el primer concurso sobre mutación. |
1848
+ | qed_amara | es | En una prueba anterior, definimos los títeres variables para sostener 3 cadenas, cuerdas Moe, Larry y Curly |
1849
+ | qed_amara | es | Sin embargo, en algunas de las películas Stooges, |
1850
+ | qed_amara | it | E' il momento del primo quiz riguardo la mutazione. |
1851
+ | qed_amara | it | Nel quiz precedente abbiamo definito la variabile 'stooges' per contenere tre stringhe: 'Moe' , 'Larry' e 'Curly' . |
1852
+ | qed_amara | it | Ma in alcuni film sugli Stooges |
1853
+ | qed_amara | en | So now it's time for the first quiz about mutation. |
1854
+ | qed_amara | en | In a previous quiz, we defined the variable stooges to hold 3 strings, strings Moe, Larry and Curly. |
1855
+ | qed_amara | en | But in some of the Stooges films, |
1856
+ | qed_amara | ja | 前の小テストでは文字列Moe、Larry、Curlyの |
1857
+ | qed_amara | ja | 3つの文字列を保持するリストstoogesを定義しました しかし「Three Stooges」ではカーリーの代わりに シェンプが登場する回もありました |
1858
+ | qed_amara | ja | なのでこの小テストでは |
1859
+ | qed_amara | he | אתם אולי חושבים שאתם יודעים הרבה על ילידים אמריקאים דרך סרטים פופולריים, ספרים, |
1860
+ | qed_amara | he | אבל מסתבר שהרבה ממה שאנחנו חושבים שאנחנו יודעים על דמויות ילידיות אמריקאיות מפורסמות |
1861
+ | qed_amara | he | לא בדיוק נכון. קחו את סקאג'אוואה לדוגמה. אתם בטח זוכרים אותה |
1862
+ | qed_amara | nl | Je denkt dat je veel weet over inheemse Amerikanen door populaire films, boeken en lessen op school, maar het blijkt dat veel van wat we denken te weten over inheemse Amerikaanse figuren niet helemaal juist is. |
1863
+ | qed_amara | nl | Neem bijvoorbeeld Sacajewea. |
1864
+ | qed_amara | nl | Je herinnert je waarschijnlijk een mooie Indiaanse vrouw die een exotisch leven leidde en diende als de alwetende gids voor de expeditie van Lewis en Clark, toch? |
1865
+
1866
+ <details>
1867
+ <summary>Text length</summary>
1868
+ <pre><code>0-10: 114213
1869
+ 10-20: 563838
1870
+ 20-30: 699924
1871
+ 30-40: 589120
1872
+ 40-50: 447887
1873
+ 50-60: 330009
1874
+ 60-70: 249652
1875
+ 70-80: 196826
1876
+ 80-90: 160826
1877
+ 90-100: 133670
1878
+ 100-110: 110869
1879
+ 110-120: 92294
1880
+ 120-130: 76834
1881
+ 130-140: 64563
1882
+ 140-150: 53572
1883
+ 150-160: 46013
1884
+ 160-170: 38617
1885
+ 170-180: 32304
1886
+ 180-190: 27173
1887
+ 190-200: 22835
1888
+ 200-210: 132797
1889
+ </code></pre>
1890
+ </details>
1891
+
1892
+ Text length histogram:
1893
+
1894
+ ![qed_amara_text_length.jpg](docs/picture/qed_amara_text_length.jpg)
1895
+
1896
+
1897
+ #### ro_sts_parallel
1898
+ The information below refers to the train split
1899
+
1900
+ ```text
1901
+ Samples per language:
1902
+ ro: 10687
1903
+ en: 10539
1904
+ ```
1905
+
1906
+ Sample examples:
1907
+
1908
+ | Dataset | Language | Sample |
1909
+ | :---: | :---: | :---: |
1910
+ | ro_sts_parallel | en | A plane is taking off. |
1911
+ | ro_sts_parallel | en | An air plane is taking off. |
1912
+ | ro_sts_parallel | en | A man is playing a large flute. |
1913
+ | ro_sts_parallel | ro | Un avion decolează. |
1914
+ | ro_sts_parallel | ro | Un avion este în curs de decolare. |
1915
+ | ro_sts_parallel | ro | Un bărbat cântă la un flaut mare. |
1916
+
1917
+ <details>
1918
+ <summary>Text length</summary>
1919
+ <pre><code>0-10: 1
1920
+ 10-20: 177
1921
+ 20-30: 1996
1922
+ 30-40: 3553
1923
+ 40-50: 4020
1924
+ 50-60: 3211
1925
+ 60-70: 2033
1926
+ 70-80: 1378
1927
+ 80-90: 931
1928
+ 90-100: 728
1929
+ 100-110: 557
1930
+ 110-120: 527
1931
+ 120-130: 413
1932
+ 130-140: 347
1933
+ 140-150: 314
1934
+ 150-160: 248
1935
+ 160-170: 235
1936
+ 170-180: 175
1937
+ 180-190: 112
1938
+ 190-200: 86
1939
+ 200-210: 184
1940
+ </code></pre>
1941
+ </details>
1942
+
1943
+ Text length histogram:
1944
+
1945
+ ![ro_sts_parallel_text_length.jpg](docs/picture/ro_sts_parallel_text_length.jpg)
1946
+
1947
+
1948
+ #### scandi_langid
1949
+ The information below refers to the train split
1950
 
1951
  ```text
1952
  Samples per language:
 
1955
  da: 79844
1956
  ```
1957
 
 
1958
  Sample examples:
1959
 
 
1960
  | Dataset | Language | Sample |
1961
  | :---: | :---: | :---: |
1962
  | scandi_langid | no | Det høres flott ut, men hvem sa at det skal være lett? |
 
2003
  ![scandi_langid_text_length.jpg](docs/picture/scandi_langid_text_length.jpg)
2004
 
2005
 
2006
+ #### spc
2007
+ The information below refers to the train split
2008
+
2009
+ ```text
2010
+ Samples per language:
2011
+ en: 54132
2012
+ af: 35214
2013
+ el: 6784
2014
+ zh: 2197
2015
+ ```
2016
+
2017
+ Sample examples:
2018
+
2019
+ | Dataset | Language | Sample |
2020
+ | :---: | :---: | :---: |
2021
+ | spc | af | STAATSKOERANT, 1 JULIE 2008 No. 31197 3 |
2022
+ | spc | af | van die |
2023
+ | spc | af | President van die Republiek van Suid-Afrika |
2024
+ | spc | en | Government Gazettee, 1 JULY 2008 No. 31197 3 |
2025
+ | spc | en | by the |
2026
+ | spc | en | President of the Republic of South Africa |
2027
+ | spc | el | Το Συµβούλιο είναι ο κύριος φορέας λήψης των πολιτικών αποφάσεων της Ευρωπαϊκής Ένωσης. |
2028
+ | spc | el | Οι υπουργοί των κρατών µελών συνεδριάζουν στο Συµβούλιο της Ευρωπαϊκής Ένωσης. |
2029
+ | spc | el | Ανάλογα µε τα θέµατα της ηµερήσιας διάταξης, κάθε χώρα εκπροσωπείται από τον αρµόδιο για το εκάστοτε θέµα υπουργό (εξωτερικών υποθέσεων, οικονοµικών, κοινωνικών υποθέσεων, µεταφορών, γεωργίας, κλπ.). |
2030
+ | spc | zh | 中国 证券 监督 管理 委员会 令 第53 号 |
2031
+ | spc | zh | < 上市 公司 重大 资产 重组 管理 办法 > 已经 2008 年 3月 24日 中国 证券 监督 管理 委员会 第224 次 主席 办公会议 审议 通过 , 现 予 公布 , 自 2008 年 5月 18日 起 施行 . |
2032
+ | spc | zh | 中国 证券 监督 管理 委员会 主席 : 尚福林 |
2033
+
2034
+ <details>
2035
+ <summary>Text length</summary>
2036
+ <pre><code>0-10: 10809
2037
+ 10-20: 26199
2038
+ 20-30: 14787
2039
+ 30-40: 9025
2040
+ 40-50: 5463
2041
+ 50-60: 3631
2042
+ 60-70: 2934
2043
+ 70-80: 3240
2044
+ 80-90: 2907
2045
+ 90-100: 1547
2046
+ 100-110: 1347
2047
+ 110-120: 1209
2048
+ 120-130: 1169
2049
+ 130-140: 1111
2050
+ 140-150: 986
2051
+ 150-160: 957
2052
+ 160-170: 941
2053
+ 170-180: 834
2054
+ 180-190: 784
2055
+ 190-200: 728
2056
+ 200-210: 7719
2057
+ </code></pre>
2058
+ </details>
2059
+
2060
+ Text length histogram:
2061
+
2062
+ ![spc_text_length.jpg](docs/picture/spc_text_length.jpg)
2063
+
2064
+
2065
  #### stsb_multi_mt
2066
  The information below refers to the train split
2067
 
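The text-length tables above bucket the train-split texts into 10-character bins; the final `200-210` row is far larger than its neighbours for every dataset, so it presumably collects everything of length 200 and above. A minimal sketch of how such a histogram could be rebuilt from one of the jsonl files, under that clamping assumption:

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# Sketch: rebuild a text-length histogram like the ones above.
# Assumption: lengths are character counts and everything >= 200 is
# clamped into the final "200-210" bucket (which would explain its size).
import json
from collections import defaultdict

buckets = defaultdict(int)
with open("data/qed_amara.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        if row["split"] != "train":
            continue
        length = min(len(row["text"]), 200)
        lower = (length // 10) * 10
        buckets["{}-{}".format(lower, lower + 10)] += 1

for bucket in sorted(buckets, key=lambda b: int(b.split("-")[0])):
    print("{}: {}".format(bucket, buckets[bucket]))
```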
docs/picture/qed_amara_text_length.jpg ADDED

Git LFS Details

  • SHA256: 3685fb1cefb89ba1224f0ba78ddd84f1ad0ab071ad867a63aca4758da82b3c46
  • Pointer size: 130 Bytes
  • Size of remote file: 19.1 kB
docs/picture/ro_sts_parallel_text_length.jpg ADDED

Git LFS Details

  • SHA256: e36c89c49f3f65b42fe6d2d88d62815cca35db75e66fc258dd5fa9787a6d72ea
  • Pointer size: 130 Bytes
  • Size of remote file: 18.4 kB
docs/picture/spc_text_length.jpg ADDED

Git LFS Details

  • SHA256: 342129cec1eb95d93b04fdc19a586e7358aba2fc728c052782d25cf4b4ca788a
  • Pointer size: 130 Bytes
  • Size of remote file: 17.2 kB
examples/make_subset_details.py CHANGED
@@ -12,7 +12,7 @@ from project_settings import project_path
12
 
13
  def get_args():
14
  parser = argparse.ArgumentParser()
15
- parser.add_argument("--dataset_name", default="para_pat_en_uk", type=str)
16
  parser.add_argument(
17
  "--dataset_cache_dir",
18
  default=(project_path / "hub_datasets").as_posix(),
 
12
 
13
  def get_args():
14
  parser = argparse.ArgumentParser()
15
+ parser.add_argument("--dataset_name", default="spc", type=str)
16
  parser.add_argument(
17
  "--dataset_cache_dir",
18
  default=(project_path / "hub_datasets").as_posix(),
examples/preprocess/preprocess_pib.py CHANGED
@@ -38,20 +38,22 @@ def main():
38
  args = get_args()
39
 
40
  name_list = [
41
- "or-ur", "ml-or", "bn-ta", "gu-mr", "hi-or",
42
- "en-or", "mr-ur", "en-ta", "hi-ta", "bn-en",
43
- "bn-or", "ml-ta", "gu-ur", "bn-ml", "ml-pa",
44
- "en-pa", "bn-hi", "hi-pa", "gu-te", "pa-ta",
45
- "hi-ml", "or-te", "en-ml", "en-hi", "bn-pa",
46
- "mr-te", "mr-pa", "bn-te", "gu-hi", "ta-ur",
47
- "te-ur", "or-pa", "gu-ml", "gu-pa", "hi-te",
48
- "en-te", "ml-te", "pa-ur", "hi-ur", "mr-or",
49
- "en-ur", "ml-ur", "bn-mr", "gu-ta", "pa-te",
50
- "bn-gu", "bn-ur", "ml-mr", "or-ta", "ta-te",
 
 
51
  "gu-or", "en-gu", "hi-mr", "mr-ta", "en-mr"
52
  ]
53
 
54
- # TODO: failed
55
  text_set = set()
56
  counter = defaultdict(int)
57
  with open(args.output_file, "w", encoding="utf-8") as f:
 
38
  args = get_args()
39
 
40
  name_list = [
41
+ "or-ur",
42
+ "ml-or",
43
+ "bn-ta", "gu-mr", "hi-or",
44
+ "en-or", "mr-ur", "en-ta", "hi-ta", "bn-en",
45
+ "bn-or", "ml-ta", "gu-ur", "bn-ml", "ml-pa",
46
+ "en-pa", "bn-hi", "hi-pa", "gu-te", "pa-ta",
47
+ "hi-ml", "or-te", "en-ml", "en-hi", "bn-pa",
48
+ "mr-te", "mr-pa", "bn-te", "gu-hi", "ta-ur",
49
+ "te-ur", "or-pa", "gu-ml", "gu-pa", "hi-te",
50
+ "en-te", "ml-te", "pa-ur", "hi-ur", "mr-or",
51
+ "en-ur", "ml-ur", "bn-mr", "gu-ta", "pa-te",
52
+ "bn-gu", "bn-ur", "ml-mr", "or-ta", "ta-te",
53
  "gu-or", "en-gu", "hi-mr", "mr-ta", "en-mr"
54
  ]
55
 
56
+ # TODO: failed, cannot download the file http://preon.iiit.ac.in/~jerin/resources/datasets/pib_v1.3.tar.gz
57
  text_set = set()
58
  counter = defaultdict(int)
59
  with open(args.output_file, "w", encoding="utf-8") as f:
examples/preprocess/preprocess_qed_amara.py ADDED
@@ -0,0 +1,99 @@
1
+ #!/usr/bin/python3
2
+ # -*- coding: utf-8 -*-
3
+ import argparse
4
+ from collections import defaultdict
5
+ import json
6
+ import os
7
+ import sys
8
+
9
+ pwd = os.path.abspath(os.path.dirname(__file__))
10
+ sys.path.append(os.path.join(pwd, "../../"))
11
+
12
+ import datasets
13
+ from datasets import load_dataset, DownloadMode
14
+ from tqdm import tqdm
15
+
16
+ from language_identification import LANGUAGE_MAP
17
+ from project_settings import project_path
18
+
19
+
20
+ def get_args():
21
+ parser = argparse.ArgumentParser()
22
+ parser.add_argument("--dataset_path", default="qed_amara", type=str)
23
+ parser.add_argument(
24
+ "--dataset_cache_dir",
25
+ default=(project_path / "hub_datasets").as_posix(),
26
+ type=str
27
+ )
28
+ parser.add_argument(
29
+ "--output_file",
30
+ default=(project_path / "data/qed_amara.jsonl"),
31
+ type=str
32
+ )
33
+
34
+ args = parser.parse_args()
35
+ return args
36
+
37
+
38
+ def main():
39
+ args = get_args()
40
+
41
+ name_list = [
42
+ "ar-ko",
43
+ "de-fr",
44
+ "es-it",
45
+ "en-ja",
46
+ "he-nl"
47
+ ]
48
+
49
+ text_set = set()
50
+ counter = defaultdict(int)
51
+ with open(args.output_file, "w", encoding="utf-8") as f:
52
+ for name in name_list:
53
+ try:
54
+ dataset_dict = load_dataset(
55
+ path=args.dataset_path,
56
+ name=name,
57
+ cache_dir=args.dataset_cache_dir,
58
+ # download_mode=DownloadMode.FORCE_REDOWNLOAD
59
+ )
60
+ except datasets.builder.DatasetGenerationError:
61
+ print("skip subset: {}".format(name))
62
+ continue
63
+ for k, v in dataset_dict.items():
64
+ split = k
65
+ if split not in ("train", "validation", "test"):
66
+ print("skip split: {}".format(split))
67
+ continue
68
+
69
+ for sample in tqdm(v):
70
+ translation = sample["translation"]
71
+ for language, text in translation.items():
72
+ text = text.strip()
73
+ text = text.replace(" ", " ")
74
+ text = text.replace("­", "-")
75
+
76
+ if text in text_set:
77
+ continue
78
+ text_set.add(text)
79
+
80
+ if language not in LANGUAGE_MAP.keys():
81
+ raise AssertionError("language: {}, text: {}".format(language, text))
82
+
83
+ row = {
84
+ "text": text,
85
+ "language": language,
86
+ "data_source": "qed_amara",
87
+ "split": split
88
+ }
89
+ row = json.dumps(row, ensure_ascii=False)
90
+ f.write("{}\n".format(row))
91
+ counter[split] += 1
92
+
93
+ print("counter: {}".format(counter))
94
+
95
+ return
96
+
97
+
98
+ if __name__ == "__main__":
99
+ main()
examples/preprocess/preprocess_ro_sts_parallel.py ADDED
@@ -0,0 +1,86 @@
1
+ #!/usr/bin/python3
2
+ # -*- coding: utf-8 -*-
3
+ import argparse
4
+ from collections import defaultdict
5
+ import json
6
+ import os
7
+ import sys
8
+
9
+ pwd = os.path.abspath(os.path.dirname(__file__))
10
+ sys.path.append(os.path.join(pwd, "../../"))
11
+
12
+ from datasets import load_dataset, DownloadMode
13
+ from tqdm import tqdm
14
+
15
+ from language_identification import LANGUAGE_MAP
16
+ from project_settings import project_path
17
+
18
+
19
+ def get_args():
20
+ parser = argparse.ArgumentParser()
21
+ parser.add_argument("--dataset_path", default="ro_sts_parallel", type=str)
22
+ parser.add_argument(
23
+ "--dataset_cache_dir",
24
+ default=(project_path / "hub_datasets").as_posix(),
25
+ type=str
26
+ )
27
+ parser.add_argument(
28
+ "--output_file",
29
+ default=(project_path / "data/ro_sts_parallel.jsonl"),
30
+ type=str
31
+ )
32
+
33
+ args = parser.parse_args()
34
+ return args
35
+
36
+
37
+ def main():
38
+ args = get_args()
39
+
40
+ dataset_dict = load_dataset(
41
+ path=args.dataset_path,
42
+ cache_dir=args.dataset_cache_dir,
43
+ # download_mode=DownloadMode.FORCE_REDOWNLOAD
44
+ )
45
+ print(dataset_dict)
46
+
47
+ text_set = set()
48
+ counter = defaultdict(int)
49
+ with open(args.output_file, "w", encoding="utf-8") as f:
50
+ for k, v in dataset_dict.items():
51
+ split = k
52
+ if split not in ("train", "validation", "test"):
53
+ print("skip split: {}".format(split))
54
+ continue
55
+
56
+ for sample in tqdm(v):
57
+ translation = sample["translation"]
58
+ for language, text in translation.items():
59
+ text = text.strip()
60
+ text = text.replace(" ", " ")
61
+ text = text.replace("­", "-")
62
+
63
+ if text in text_set:
64
+ continue
65
+ text_set.add(text)
66
+
67
+ if language not in LANGUAGE_MAP.keys():
68
+ raise AssertionError("language: {}, text: {}".format(language, text))
69
+
70
+ row = {
71
+ "text": text,
72
+ "language": language,
73
+ "data_source": "ro_sts_parallel",
74
+ "split": split
75
+ }
76
+ row = json.dumps(row, ensure_ascii=False)
77
+ f.write("{}\n".format(row))
78
+ counter[split] += 1
79
+
80
+ print("counter: {}".format(counter))
81
+
82
+ return
83
+
84
+
85
+ if __name__ == '__main__':
86
+ main()
examples/preprocess/preprocess_setimes.py ADDED
@@ -0,0 +1,140 @@
1
+ #!/usr/bin/python3
2
+ # -*- coding: utf-8 -*-
3
+ import argparse
4
+ from collections import defaultdict
5
+ import json
6
+ import os
7
+ import sys
8
+
9
+ pwd = os.path.abspath(os.path.dirname(__file__))
10
+ sys.path.append(os.path.join(pwd, "../../"))
11
+
12
+ import datasets
13
+ from datasets import load_dataset, DownloadMode
14
+ from tqdm import tqdm
15
+
16
+ from language_identification import LANGUAGE_MAP
17
+ from project_settings import project_path
18
+
19
+
20
+ def get_args():
21
+ parser = argparse.ArgumentParser()
22
+ parser.add_argument("--dataset_path", default="setimes", type=str)
23
+ parser.add_argument(
24
+ "--dataset_cache_dir",
25
+ default=(project_path / "hub_datasets").as_posix(),
26
+ type=str
27
+ )
28
+ parser.add_argument(
29
+ "--output_file",
30
+ default=(project_path / "data/setimes.jsonl"),
31
+ type=str
32
+ )
33
+
34
+ args = parser.parse_args()
35
+ return args
36
+
37
+
38
+ def main():
39
+ args = get_args()
40
+
41
+ name_list = [
42
+ "bg-bs",
43
+ "bg-el",
44
+ "bg-en",
45
+ "bg-hr",
46
+ "bg-mk",
47
+ "bg-ro",
48
+ "bg-sq",
49
+ "bg-sr",
50
+ "bg-tr",
51
+ "bs-el",
52
+ "bs-en",
53
+ "bs-hr",
54
+ "bs-mk",
55
+ "bs-ro",
56
+ "bs-sq",
57
+ "bs-sr",
58
+ "bs-tr",
59
+ "el-en",
60
+ "el-hr",
61
+ "el-mk",
62
+ "el-ro",
63
+ "el-sq",
64
+ "el-sr",
65
+ "el-tr",
66
+ "en-hr",
67
+ "en-mk",
68
+ "en-ro",
69
+ "en-sq",
70
+ "en-sr",
71
+ "en-tr",
72
+ "hr-mk",
73
+ "hr-ro",
74
+ "hr-sq",
75
+ "hr-sr",
76
+ "hr-tr",
77
+ "mk-ro",
78
+ "mk-sq",
79
+ "mk-sr",
80
+ "mk-tr",
81
+ "ro-sq",
82
+ "ro-sr",
83
+ "ro-tr",
84
+ "sq-sr",
85
+ "sq-tr",
86
+ "sr-tr",
87
+ ]
88
+
89
+ # TODO: http://nlp.ffzg.hr is unreachable.
90
+ text_set = set()
91
+ counter = defaultdict(int)
92
+ with open(args.output_file, "w", encoding="utf-8") as f:
93
+ for name in name_list:
94
+ try:
95
+ dataset_dict = load_dataset(
96
+ path=args.dataset_path,
97
+ name=name,
98
+ cache_dir=args.dataset_cache_dir,
99
+ # download_mode=DownloadMode.FORCE_REDOWNLOAD
100
+ )
101
+ except datasets.builder.DatasetGenerationError:
102
+ print("skip subset: {}".format(name))
103
+ continue
104
+ for k, v in dataset_dict.items():
105
+ split = k
106
+ if split not in ("train", "validation", "test"):
107
+ print("skip split: {}".format(split))
108
+ continue
109
+
110
+ for sample in tqdm(v):
111
+ translation = sample["translation"]
112
+ for language, text in translation.items():
113
+ text = text.strip()
114
+ text = text.replace(" ", " ")
115
+ text = text.replace("­", "-")
116
+
117
+ if text in text_set:
118
+ continue
119
+ text_set.add(text)
120
+
121
+ if language not in LANGUAGE_MAP.keys():
122
+ raise AssertionError("language: {}, text: {}".format(language, text))
123
+
124
+ row = {
125
+ "text": text,
126
+ "language": language,
127
+ "data_source": "setimes",
128
+ "split": split
129
+ }
130
+ row = json.dumps(row, ensure_ascii=False)
131
+ f.write("{}\n".format(row))
132
+ counter[split] += 1
133
+
134
+ print("counter: {}".format(counter))
135
+
136
+ return
137
+
138
+
139
+ if __name__ == "__main__":
140
+ main()
examples/preprocess/preprocess_spc.py ADDED
@@ -0,0 +1,99 @@
1
+ #!/usr/bin/python3
2
+ # -*- coding: utf-8 -*-
3
+ import argparse
4
+ from collections import defaultdict
5
+ import json
6
+ import os
7
+ import sys
8
+
9
+ pwd = os.path.abspath(os.path.dirname(__file__))
10
+ sys.path.append(os.path.join(pwd, "../../"))
11
+
12
+ import datasets
13
+ from datasets import load_dataset, DownloadMode
14
+ from tqdm import tqdm
15
+
16
+ from language_identification import LANGUAGE_MAP
17
+ from project_settings import project_path
18
+
19
+
20
+ def get_args():
21
+ parser = argparse.ArgumentParser()
22
+ parser.add_argument("--dataset_path", default="spc", type=str)
23
+ parser.add_argument(
24
+ "--dataset_cache_dir",
25
+ default=(project_path / "hub_datasets").as_posix(),
26
+ type=str
27
+ )
28
+ parser.add_argument(
29
+ "--output_file",
30
+ default=(project_path / "data/spc.jsonl"),
31
+ type=str
32
+ )
33
+
34
+ args = parser.parse_args()
35
+ return args
36
+
37
+
38
+ def main():
39
+ args = get_args()
40
+
41
+ name_list = [
42
+ "af-en",
43
+ "el-en",
44
+ "en-zh",
45
+ ]
46
+
47
+ text_set = set()
48
+ counter = defaultdict(int)
49
+ with open(args.output_file, "w", encoding="utf-8") as f:
50
+ for name in name_list:
51
+ try:
52
+ dataset_dict = load_dataset(
53
+ path=args.dataset_path,
54
+ name=name,
55
+ cache_dir=args.dataset_cache_dir,
56
+ # download_mode=DownloadMode.FORCE_REDOWNLOAD
57
+ )
58
+ except datasets.builder.DatasetGenerationError:
59
+ print("skip subset: {}".format(name))
60
+ continue
61
+ for k, v in dataset_dict.items():
62
+ split = k
63
+ if split not in ("train", "validation", "test"):
64
+ print("skip split: {}".format(split))
65
+ continue
66
+
67
+ for sample in tqdm(v):
68
+ translation = sample["translation"]
69
+ if len(set(translation.values())) != len(translation.values()):
70
+ continue
71
+ for language, text in translation.items():
72
+ text = text.strip()
73
+ text = text.replace(" ", " ")
74
+ text = text.replace("­", "-")
75
+
76
+ if text in text_set:
77
+ continue
78
+ text_set.add(text)
79
+
80
+ if language not in LANGUAGE_MAP.keys():
81
+ raise AssertionError("language: {}, text: {}".format(language, text))
82
+
83
+ row = {
84
+ "text": text,
85
+ "language": language,
86
+ "data_source": "spc",
87
+ "split": split
88
+ }
89
+ row = json.dumps(row, ensure_ascii=False)
90
+ f.write("{}\n".format(row))
91
+ counter[split] += 1
92
+
93
+ print("counter: {}".format(counter))
94
+
95
+ return
96
+
97
+
98
+ if __name__ == "__main__":
99
+ main()
examples/preprocess/preprocess_tanzil.py ADDED
@@ -0,0 +1,99 @@
1
+ #!/usr/bin/python3
2
+ # -*- coding: utf-8 -*-
3
+ import argparse
4
+ from collections import defaultdict
5
+ import json
6
+ import os
7
+ import sys
8
+
9
+ pwd = os.path.abspath(os.path.dirname(__file__))
10
+ sys.path.append(os.path.join(pwd, "../../"))
11
+
12
+ import datasets
13
+ from datasets import load_dataset, DownloadMode
14
+ from tqdm import tqdm
15
+
16
+ from language_identification import LANGUAGE_MAP
17
+ from project_settings import project_path
18
+
19
+
20
+ def get_args():
21
+ parser = argparse.ArgumentParser()
22
+ parser.add_argument("--dataset_path", default="tanzil", type=str)
23
+ parser.add_argument(
24
+ "--dataset_cache_dir",
25
+ default=(project_path / "hub_datasets").as_posix(),
26
+ type=str
27
+ )
28
+ parser.add_argument(
29
+ "--output_file",
30
+ default=(project_path / "data/tanzil.jsonl"),
31
+ type=str
32
+ )
33
+
34
+ args = parser.parse_args()
35
+ return args
36
+
37
+
38
+ def main():
39
+ args = get_args()
40
+
41
+ name_list = [
42
+ "af-en",
43
+ "el-en",
44
+ "en-zh",
45
+ ]
46
+
47
+ text_set = set()
48
+ counter = defaultdict(int)
49
+ with open(args.output_file, "w", encoding="utf-8") as f:
50
+ for name in name_list:
51
+ try:
52
+ dataset_dict = load_dataset(
53
+ path=args.dataset_path,
54
+ name=name,
55
+ cache_dir=args.dataset_cache_dir,
56
+ # download_mode=DownloadMode.FORCE_REDOWNLOAD
57
+ )
58
+ except datasets.builder.DatasetGenerationError:
59
+ print("skip subset: {}".format(name))
60
+ continue
61
+ for k, v in dataset_dict.items():
62
+ split = k
63
+ if split not in ("train", "validation", "test"):
64
+ print("skip split: {}".format(split))
65
+ continue
66
+
67
+ for sample in tqdm(v):
68
+ translation = sample["translation"]
69
+ if len(set(translation.values())) != len(translation.values()):
70
+ continue
71
+ for language, text in translation.items():
72
+ text = text.strip()
73
+ text = text.replace(" ", " ")
74
+ text = text.replace("­", "-")
75
+
76
+ if text in text_set:
77
+ continue
78
+ text_set.add(text)
79
+
80
+ if language not in LANGUAGE_MAP.keys():
81
+ raise AssertionError("language: {}, text: {}".format(language, text))
82
+
83
+ row = {
84
+ "text": text,
85
+ "language": language,
86
+ "data_source": "tanzil",
87
+ "split": split
88
+ }
89
+ row = json.dumps(row, ensure_ascii=False)
90
+ f.write("{}\n".format(row))
91
+ counter[split] += 1
92
+
93
+ print("counter: {}".format(counter))
94
+
95
+ return
96
+
97
+
98
+ if __name__ == "__main__":
99
+ main()
examples/preprocess/preprocess_wmt19.py ADDED
@@ -0,0 +1,104 @@
1
+ #!/usr/bin/python3
2
+ # -*- coding: utf-8 -*-
3
+ import argparse
4
+ from collections import defaultdict
5
+ import json
6
+ import os
7
+ import sys
8
+
9
+ pwd = os.path.abspath(os.path.dirname(__file__))
10
+ sys.path.append(os.path.join(pwd, "../../"))
11
+
12
+ import datasets
13
+ from datasets import load_dataset, DownloadMode
14
+ from tqdm import tqdm
15
+
16
+ from language_identification import LANGUAGE_MAP
17
+ from project_settings import project_path
18
+
19
+
20
+ def get_args():
21
+ parser = argparse.ArgumentParser()
22
+ parser.add_argument("--dataset_path", default="wmt/wmt19", type=str)
23
+ parser.add_argument(
24
+ "--dataset_cache_dir",
25
+ default=(project_path / "hub_datasets").as_posix(),
26
+ type=str
27
+ )
28
+ parser.add_argument(
29
+ "--output_file",
30
+ default=(project_path / "data/wmt19.jsonl"),
31
+ type=str
32
+ )
33
+
34
+ args = parser.parse_args()
35
+ return args
36
+
37
+
38
+ def main():
39
+ args = get_args()
40
+
41
+ name_list = [
42
+ "cs-en",
43
+ "de-en",
44
+ "fi-en",
45
+ "fr-de",
46
+ "gu-en",
47
+ "kk-en",
48
+ "lt-en",
49
+ "ru-en",
50
+ "zh-en",
51
+ ]
52
+
53
+ # TODO: loading fails for every subset.
54
+ text_set = set()
55
+ counter = defaultdict(int)
56
+ with open(args.output_file, "w", encoding="utf-8") as f:
57
+ for name in name_list:
58
+ try:
59
+ dataset_dict = load_dataset(
60
+ path=args.dataset_path,
61
+ name=name,
62
+ cache_dir=args.dataset_cache_dir,
63
+ # download_mode=DownloadMode.FORCE_REDOWNLOAD
64
+ )
65
+ except datasets.builder.DatasetGenerationError:
66
+ print("skip subset: {}".format(name))
67
+ continue
68
+ for k, v in dataset_dict.items():
69
+ split = k
70
+ if split not in ("train", "validation", "test"):
71
+ print("skip split: {}".format(split))
72
+ continue
73
+
74
+ for sample in tqdm(v):
75
+ translation = sample["translation"]
76
+ for language, text in translation.items():
77
+ text = text.strip()
78
+ text = text.replace(" ", " ")
79
+ text = text.replace("­", "-")
80
+
81
+ if text in text_set:
82
+ continue
83
+ text_set.add(text)
84
+
85
+ if language not in LANGUAGE_MAP.keys():
86
+ raise AssertionError("language: {}, text: {}".format(language, text))
87
+
88
+ row = {
89
+ "text": text,
90
+ "language": language,
91
+ "data_source": "wmt19",
92
+ "split": split
93
+ }
94
+ row = json.dumps(row, ensure_ascii=False)
95
+ f.write("{}\n".format(row))
96
+ counter[split] += 1
97
+
98
+ print("counter: {}".format(counter))
99
+
100
+ return
101
+
102
+
103
+ if __name__ == "__main__":
104
+ main()
language_identification.py CHANGED
@@ -72,7 +72,10 @@ _URLS = {
72
  "para_pat_fr_ko": "data/para_pat_fr_ko.jsonl",
73
  "para_pat_fr_ru": "data/para_pat_fr_ru.jsonl",
74
  "php": "data/php.jsonl",
 
 
75
  "scandi_langid": "data/scandi_langid.jsonl",
 
76
  "stsb_multi_mt": "data/stsb_multi_mt.jsonl",
77
  "tatoeba": "data/tatoeba.jsonl",
78
  "xnli": "data/xnli.jsonl",
@@ -93,6 +96,7 @@ _CITATION = """\
93
 
94
 
95
  LANGUAGE_MAP = {
 
96
  "ar": "arabic",
97
  "bg": "bulgarian",
98
  "bn": "bengali",
@@ -110,6 +114,8 @@ LANGUAGE_MAP = {
110
  "fr": "french",
111
  "ga": "irish",
112
  "gl": "galician",
 
 
113
  "hi": "hindi",
114
  "hi_en": "hindi english",
115
  "hr": "croatian",
@@ -120,6 +126,7 @@ LANGUAGE_MAP = {
120
  "it": "italian",
121
  "ja": "japanese",
122
  "ko": "korean",
 
123
  "lt": "lithuanian",
124
  "lv": "latvian",
125
  "mr": "marathi",
@@ -219,7 +226,10 @@ class LanguageIdentification(datasets.GeneratorBasedBuilder):
219
  datasets.BuilderConfig(name="para_pat_fr_ko", version=VERSION, description="para_pat_fr_ko"),
220
  datasets.BuilderConfig(name="para_pat_fr_ru", version=VERSION, description="para_pat_fr_ru"),
221
  datasets.BuilderConfig(name="php", version=VERSION, description="php"),
 
 
222
  datasets.BuilderConfig(name="scandi_langid", version=VERSION, description="scandi_langid"),
 
223
  datasets.BuilderConfig(name="stsb_multi_mt", version=VERSION, description="stsb_multi_mt"),
224
  datasets.BuilderConfig(name="tatoeba", version=VERSION, description="tatoeba"),
225
  datasets.BuilderConfig(name="xnli", version=VERSION, description="xnli"),
 
72
  "para_pat_fr_ko": "data/para_pat_fr_ko.jsonl",
73
  "para_pat_fr_ru": "data/para_pat_fr_ru.jsonl",
74
  "php": "data/php.jsonl",
75
+ "qed_amara": "data/qed_amara.jsonl",
76
+ "ro_sts_parallel": "data/ro_sts_parallel.jsonl",
77
  "scandi_langid": "data/scandi_langid.jsonl",
78
+ "spc": "data/spc.jsonl",
79
  "stsb_multi_mt": "data/stsb_multi_mt.jsonl",
80
  "tatoeba": "data/tatoeba.jsonl",
81
  "xnli": "data/xnli.jsonl",
 
96
 
97
 
98
  LANGUAGE_MAP = {
99
+ "af": "boolean (afrikaans)",
100
  "ar": "arabic",
101
  "bg": "bulgarian",
102
  "bn": "bengali",
 
114
  "fr": "french",
115
  "ga": "irish",
116
  "gl": "galician",
117
+ "gu": "gujarati",
118
+ "he": "hebrew",
119
  "hi": "hindi",
120
  "hi_en": "hindi english",
121
  "hr": "croatian",
 
126
  "it": "italian",
127
  "ja": "japanese",
128
  "ko": "korean",
129
+ "kk": "kazakh",
130
  "lt": "lithuanian",
131
  "lv": "latvian",
132
  "mr": "marathi",
 
226
  datasets.BuilderConfig(name="para_pat_fr_ko", version=VERSION, description="para_pat_fr_ko"),
227
  datasets.BuilderConfig(name="para_pat_fr_ru", version=VERSION, description="para_pat_fr_ru"),
228
  datasets.BuilderConfig(name="php", version=VERSION, description="php"),
229
+ datasets.BuilderConfig(name="qed_amara", version=VERSION, description="qed_amara"),
230
+ datasets.BuilderConfig(name="ro_sts_parallel", version=VERSION, description="ro_sts_parallel"),
231
  datasets.BuilderConfig(name="scandi_langid", version=VERSION, description="scandi_langid"),
232
+ datasets.BuilderConfig(name="spc", version=VERSION, description="spc"),
233
  datasets.BuilderConfig(name="stsb_multi_mt", version=VERSION, description="stsb_multi_mt"),
234
  datasets.BuilderConfig(name="tatoeba", version=VERSION, description="tatoeba"),
235
  datasets.BuilderConfig(name="xnli", version=VERSION, description="xnli"),
load_data.md CHANGED
@@ -23,6 +23,7 @@
23
  | fr | french | 10000 | iwslt2017 |
24
  | ga | irish | 10000 | multi_para_crawl |
25
  | gl | galician | 3096 | tatoeba |
 
26
  | hi | hindi | 10000 | open_subtitles |
27
  | hi_en | hindi | 7180 | cmu_hinglish_dog |
28
  | hr | croatian | 10000 | hrenwac_para |
@@ -32,6 +33,7 @@
32
  | is | icelandic | 2973 | europa_ecdc_tm; europa_eac_tm |
33
  | it | italian | 10000 | iwslt2017 |
34
  | ja | japanese | 10000 | iwslt2017 |
 
35
  | ko | korean | 10000 | iwslt2017 |
36
  | lt | lithuanian | 10000 | emea |
37
  | lv | latvian | 4595 | europa_ecdc_tm; europa_eac_tm |
@@ -78,6 +80,7 @@
78
 
79
  | Language | Language (full name) | Sample count | Data source |
80
  | :--- | :---: | :---: | :---: |
 
81
  | ar | arabic | 100000 | iwslt2017 |
82
  | bg | bulgarian | 100000 | xnli |
83
  | bn | bengali | 36064 | open_subtitles |
@@ -86,7 +89,7 @@
86
  | da | danish | 100000 | open_subtitles |
87
  | de | german | 100000 | iwslt2017 |
88
  | el | modern greek | 100000 | emea |
89
- | en | english | 100000 | iwslt2017 |
90
  | eo | esperanto | 94101 | tatoeba; open_subtitles |
91
  | es | spanish | 100000 | xnli |
92
  | et | estonian | 100000 | emea |
@@ -95,6 +98,7 @@
95
  | fr | french | 100000 | iwslt2017 |
96
  | ga | irish | 100000 | multi_para_crawl |
97
  | gl | galician | 3096 | tatoeba |
 
98
  | hi | hindi | 100000 | xnli |
99
  | hi_en | hindi | 7180 | cmu_hinglish_dog |
100
  | hr | croatian | 95844 | hrenwac_para |
@@ -104,6 +108,7 @@
104
  | is | icelandic | 100000 | multi_para_crawl |
105
  | it | italian | 100000 | iwslt2017 |
106
  | ja | japanese | 100000 | iwslt2017 |
 
107
  | ko | korean | 100000 | iwslt2017 |
108
  | lt | lithuanian | 100000 | emea |
109
  | lv | latvian | 100000 | multi_para_crawl |
@@ -128,6 +133,6 @@
128
  | ur | urdu | 100000 | xnli |
129
  | vi | vietnamese | 100000 | xnli |
130
  | yo | yoruba | 9970 | menyo20k_mt |
131
- | zh | chinese | 100000 | xnli |
132
  | zu | zulu, south africa | 26801 | autshumato |
133
 
 
23
  | fr | french | 10000 | iwslt2017 |
24
  | ga | irish | 10000 | multi_para_crawl |
25
  | gl | galician | 3096 | tatoeba |
26
+ | gu | gujarati | - | - |
27
  | hi | hindi | 10000 | open_subtitles |
28
  | hi_en | hindi | 7180 | cmu_hinglish_dog |
29
  | hr | croatian | 10000 | hrenwac_para |
 
33
  | is | icelandic | 2973 | europa_ecdc_tm; europa_eac_tm |
34
  | it | italian | 10000 | iwslt2017 |
35
  | ja | japanese | 10000 | iwslt2017 |
36
+ | kk | kazakh | - | - |
37
  | ko | korean | 10000 | iwslt2017 |
38
  | lt | lithuanian | 10000 | emea |
39
  | lv | latvian | 4595 | europa_ecdc_tm; europa_eac_tm |
 
80
 
81
  | Language | Language (full name) | Sample count | Data source |
82
  | :--- | :---: | :---: | :---: |
83
+ | af | afrikaans | 35214 | spc |
84
  | ar | arabic | 100000 | iwslt2017 |
85
  | bg | bulgarian | 100000 | xnli |
86
  | bn | bengali | 36064 | open_subtitles |
 
89
  | da | danish | 100000 | open_subtitles |
90
  | de | german | 100000 | iwslt2017 |
91
  | el | modern greek | 100000 | emea |
92
+ | en | english | 200000 | iwslt2017 |
93
  | eo | esperanto | 94101 | tatoeba; open_subtitles |
94
  | es | spanish | 100000 | xnli |
95
  | et | estonian | 100000 | emea |
 
98
  | fr | french | 100000 | iwslt2017 |
99
  | ga | irish | 100000 | multi_para_crawl |
100
  | gl | galician | 3096 | tatoeba |
101
+ | gu | gujarati | - | - |
102
  | hi | hindi | 100000 | xnli |
103
  | hi_en | hindi | 7180 | cmu_hinglish_dog |
104
  | hr | croatian | 95844 | hrenwac_para |
 
108
  | is | icelandic | 100000 | multi_para_crawl |
109
  | it | italian | 100000 | iwslt2017 |
110
  | ja | japanese | 100000 | iwslt2017 |
111
+ | kk | kazakh | - | - |
112
  | ko | korean | 100000 | iwslt2017 |
113
  | lt | lithuanian | 100000 | emea |
114
  | lv | latvian | 100000 | multi_para_crawl |
 
133
  | ur | urdu | 100000 | xnli |
134
  | vi | vietnamese | 100000 | xnli |
135
  | yo | yoruba | 9970 | menyo20k_mt |
136
+ | zh | chinese | 200000 | xnli |
137
  | zu | zulu, south africa | 26801 | autshumato |
138