holylovenia committed on
Commit 6aec44d
1 Parent(s): 85b30ea

Upload README.md with huggingface_hub

Files changed (1): README.md (+113, -0)
---
license: other
language:
- eng
- vie
- tha
- mya
- jav
- ind
- tgl
- zlm
- ceb
- fil
- khm
- lao
- mad
- pam
pretty_name: Qed
task_categories:
- machine-translation
- self-supervised-pretraining
tags:
- machine-translation
- self-supervised-pretraining
---

QED - The QCRI Educational Domain Corpus (formerly the QCRI AMARA Corpus) is an open multilingual collection of subtitles for educational videos and lectures, collaboratively transcribed and translated on the AMARA web-based platform. It was developed by the Arabic Language Technologies Group at the Qatar Computing Research Institute. Along with English, it covers several Southeast Asian languages, such as vie (Vietnamese), mya (Burmese), jav (Javanese), ind (Indonesian), tha (Thai), tgl (Tagalog), and zlm (Malay).

## Languages

eng, vie, tha, mya, jav, ind, tgl, zlm, ceb, fil, khm, lao, mad, pam

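For reference, the ISO 639-3 codes above can be mapped to language names in code. The mapping below is a convenience sketch (names follow the ISO 639-3 registry) and is not part of the dataset itself:

```python
# Convenience mapping of the ISO 639-3 codes listed above to language names.
QED_SEA_LANGUAGES = {
    "eng": "English",
    "vie": "Vietnamese",
    "tha": "Thai",
    "mya": "Burmese",
    "jav": "Javanese",
    "ind": "Indonesian",
    "tgl": "Tagalog",
    "zlm": "Malay",
    "ceb": "Cebuano",
    "fil": "Filipino",
    "khm": "Khmer",
    "lao": "Lao",
    "mad": "Madurese",
    "pam": "Kapampangan",
}

def language_name(code: str) -> str:
    """Return the language name for an ISO 639-3 code, or the code itself if unknown."""
    return QED_SEA_LANGUAGES.get(code, code)
```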
## Supported Tasks

Machine Translation, Self-Supervised Pretraining

## Dataset Usage
### Using `datasets` library
```
from datasets import load_dataset

# `load_dataset` is imported directly, so it is called without the module prefix
dset = load_dataset("SEACrowd/qed", trust_remote_code=True)
```
### Using `seacrowd` library
```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("qed", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("qed"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```
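As an illustration of working with the loaded data, the sketch below assumes a SEACrowd text-to-text (`t2t`)-style schema in which each example is a dict with `text_1` (source) and `text_2` (target) fields; the field names are an assumption here and should be checked against the config you actually load:

```python
def to_translation_pairs(examples):
    """Collect (source, target) sentence pairs from examples that follow a
    t2t-style schema with 'text_1' (source) and 'text_2' (target) fields.
    Pairs with an empty side are skipped."""
    pairs = []
    for ex in examples:
        src = ex.get("text_1", "").strip()
        tgt = ex.get("text_2", "").strip()
        if src and tgt:
            pairs.append((src, tgt))
    return pairs

# Hypothetical examples mimicking the assumed schema:
sample = [
    {"id": "0", "text_1": "Hello.", "text_2": "Xin chào."},
    {"id": "1", "text_1": "", "text_2": "Cảm ơn."},  # skipped: empty source
]
print(to_translation_pairs(sample))  # [('Hello.', 'Xin chào.')]
```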

More details on how to use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

## Dataset Homepage

[https://opus.nlpl.eu/QED/corpus/version/QED](https://opus.nlpl.eu/QED/corpus/version/QED)

## Dataset Version

Source: 2.0.0. SEACrowd: 2024.06.20.

## Dataset License

Other License (others)

## Citation

If you are using the **Qed** dataloader in your work, please cite the following:
```
@inproceedings{abdelali-etal-2014-amara,
    title = "The {AMARA} Corpus: Building Parallel Language Resources for the Educational Domain",
    author = "Abdelali, Ahmed and
      Guzman, Francisco and
      Sajjad, Hassan and
      Vogel, Stephan",
    editor = "Calzolari, Nicoletta and
      Choukri, Khalid and
      Declerck, Thierry and
      Loftsson, Hrafn and
      Maegaard, Bente and
      Mariani, Joseph and
      Moreno, Asuncion and
      Odijk, Jan and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)",
    month = may,
    year = "2014",
    address = "Reykjavik, Iceland",
    publisher = "European Language Resources Association (ELRA)",
    url = "http://www.lrec-conf.org/proceedings/lrec2014/pdf/877_Paper.pdf",
    pages = "1856--1862",
    abstract = "This paper presents the AMARA corpus of on-line educational content: a new parallel corpus of educational video subtitles, multilingually aligned for 20 languages, i.e. 20 monolingual corpora and 190 parallel corpora. This corpus includes both resource-rich languages such as English and Arabic, and resource-poor languages such as Hindi and Thai. In this paper, we describe the gathering, validation, and preprocessing of a large collection of parallel, community-generated subtitles. Furthermore, we describe the methodology used to prepare the data for Machine Translation tasks. Additionally, we provide a document-level, jointly aligned development and test sets for 14 language pairs, designed for tuning and testing Machine Translation systems. We provide baseline results for these tasks, and highlight some of the challenges we face when building machine translation systems for educational content.",
}

@article{lovenia2024seacrowd,
    title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
    author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
    year={2024},
    eprint={2406.10118},
    journal={arXiv preprint arXiv: 2406.10118}
}
```