---
task_categories:
- automatic-speech-recognition
pretty_name: MangoSpeech

configs:
- config_name: rozdympodcast
  data_files: "data/rozdympodcast.parquet"
- config_name: opodcast
  data_files: "data/opodcast.parquet"
- config_name: test
  data_files: "data/test.parquet"

---
# The list of all subsets in the dataset
Each subset is generated by splitting videos from a particular Ukrainian YouTube channel.
All subsets are in the test split; each subset can also be loaded by its config name, as sketched below the list.

- "opodcast" subset is from channel "О! ПОДКАСТ"
- "rozdympodcast" subset is from channel "Роздум | Подкаст" 
- "test" subset is just a small subset of samples


# Loading a particular subset
```
>>> from datasets import load_dataset
>>> data_files = {"train": "data/<your_subset>.parquet"}
>>> data = load_dataset("Zarakun/youtube_ua_subtitles_test", data_files=data_files)
>>> data
DatasetDict({
    train: Dataset({
        features: ['audio', 'rate', 'duration', 'sentence'],
        num_rows: <some_number>
    })
})
```
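
Once loaded, individual samples can be read like any other `datasets` split. A minimal sketch, assuming `rate` is the sampling rate in Hz and `duration` is the clip length in seconds (the units are not stated above), and without assuming how the `audio` column is encoded:
```
>>> sample = data["train"][0]
>>> sample["sentence"]       # transcript text for the clip
>>> sample["rate"]           # sampling rate (assumed Hz)
>>> sample["duration"]       # clip length (assumed seconds)
>>> audio = sample["audio"]  # audio payload; exact format depends on how the column was stored
```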