---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  splits:
  - name: validation
    num_bytes: 180166870.0
    num_examples: 8
  - name: test
    num_bytes: 285107770.0
    num_examples: 11
  download_size: 284926490
  dataset_size: 465274640.0
---
# Dataset Card for "tedlium-long-form"

This dataset merges the TED-LIUM Release 3 validation and test utterances into a single long-form recording and transcript per speaker. To create the dataset:
```python
import os
import numpy as np
from datasets import load_dataset, DatasetDict, Dataset, Audio
import soundfile as sf
from tqdm import tqdm

tedlium = load_dataset("LIUM/tedlium", "release3")
merged_dataset = DatasetDict()

validation_speaker_ids = [
    "Al_Gore",
    "Barry_Schwartz",
    "Blaise_Agueray_Arcas",
    "Brian_Cox",
    "Craig_Venter",
    "David_Merrill",
    "Elizabeth_Gilbert",
    "Wade_Davis",
]
validation_dataset_merged = {speaker_id: {"audio": [], "text": ""} for speaker_id in validation_speaker_ids}

test_speaker_ids = [
    "AimeeMullins",
    "BillGates",
    "DanBarber",
    "DanBarber_2010_S103",
    "DanielKahneman",
    "EricMead_2009P_EricMead",
    "GaryFlake",
    "JamesCameron",
    "JaneMcGonigal",
    "MichaelSpecter",
    "RobertGupta",
]
test_dataset_merged = {speaker_id: {"audio": [], "text": ""} for speaker_id in test_speaker_ids}

for split, dataset in zip(["validation", "test"], [validation_dataset_merged, test_dataset_merged]):
    sampling_rate = tedlium[split].features["audio"].sampling_rate

    for sample in tqdm(tedlium[split]):
        if sample["speaker_id"] in dataset:
            # append this utterance's samples and transcript to the speaker's running merge
            dataset[sample["speaker_id"]]["audio"].extend(sample["audio"]["array"])
            dataset[sample["speaker_id"]]["text"] += " " + sample["text"]

    # write one merged long-form WAV per speaker
    audio_paths = []
    os.makedirs(split, exist_ok=True)
    for speaker in dataset:
        path = os.path.join(split, f"{speaker}-merged.wav")
        audio_paths.append(path)
        sf.write(path, np.asarray(dataset[speaker]["audio"]), samplerate=sampling_rate)

    merged_dataset[split] = Dataset.from_dict({"audio": audio_paths}).cast_column("audio", Audio())
    # remove spaced apostrophes (e.g. it 's -> it's)
    merged_dataset[split] = merged_dataset[split].add_column("text", [dataset[speaker]["text"].replace(" '", "'") for speaker in dataset])
    merged_dataset[split] = merged_dataset[split].add_column("speaker_id", list(dataset.keys()))

```
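The transcript cleanup in the script collapses the spaced apostrophes used by the TED-LIUM transcripts (e.g. `it 's` becomes `it's`). A minimal sketch of that normalization step in isolation (`normalize_apostrophes` is a hypothetical helper name, not part of the script above):

```python
def normalize_apostrophes(text: str) -> str:
    # TED-LIUM transcripts put a space before apostrophes, e.g. "it 's";
    # rejoin them so contractions read naturally
    return text.replace(" '", "'")

print(normalize_apostrophes("it 's time and they 're here"))
# it's time and they're here
```

Note this is a simple string replacement, so it assumes the only occurrences of a space followed by an apostrophe are split contractions, which holds for the TED-LIUM transcript format.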