---
license: cc-by-nc-nd-4.0
language:
- eng
- ind
pretty_name: Oil
---

The Online Indonesian Learning (OIL) corpus currently contains lessons from three Indonesian teachers who have posted content on YouTube.

## Languages

eng, ind

## Supported Tasks

## Dataset Usage

### Using `datasets` library

```
from datasets import load_dataset

dset = load_dataset("SEACrowd/oil", trust_remote_code=True)
```

### Using `seacrowd` library

```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("oil", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("oil"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```

More details on how to install and use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

## Dataset Homepage

[https://huggingface.co/datasets/ZMaxwell-Smith/OIL](https://huggingface.co/datasets/ZMaxwell-Smith/OIL)

## Dataset Version

Source: 1.0.0. SEACrowd: 2024.06.20.

## Dataset License

Creative Commons Attribution Non Commercial No Derivatives 4.0 (cc-by-nc-nd-4.0)

## Citation

If you use the **OIL** dataloader in your work, please cite the following:
```
@inproceedings{maxwelll-smith-foley-2023-automated,
    title = "Automated speech recognition of {I}ndonesian-{E}nglish language lessons on {Y}ou{T}ube using transfer learning",
    author = "Maxwell-Smith, Zara and Foley, Ben",
    editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Le Ferrand, Eric and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav",
    booktitle = "Proceedings of the Second Workshop on NLP Applications to Field Linguistics",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.fieldmatters-1.1",
    doi = "10.18653/v1/2023.fieldmatters-1.1",
    pages = "1--16",
    abstract = "Experiments to fine-tune large multilingual models with limited data from a specific domain or setting has potential to improve automatic speech recognition (ASR) outcomes. This paper reports on the use of the Elpis ASR pipeline to fine-tune two pre-trained base models, Wav2Vec2-XLSR-53 and Wav2Vec2-Large-XLSR-Indonesian, with various mixes of data from 3 YouTube channels teaching Indonesian with English as the language of instruction. We discuss our results inferring new lesson audio (22-46% word error rate) in the context of speeding data collection in diverse and specialised settings. This study is an example of how ASR can be used to accelerate natural language research, expanding ethically sourced data in low-resource settings.",
}

@article{lovenia2024seacrowd,
    title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
    author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
    year={2024},
    eprint={2406.10118},
    journal={arXiv preprint arXiv:2406.10118}
}
```