All eight datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
```python
from datasets import load_dataset

librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="train")
```
- `"esc-benchmark"`: the repository namespace. This is fixed for all ESC datasets.
- `"librispeech"`: the dataset name. This can be changed to any one of the eight datasets in ESC to download that dataset.
- `split="train"`: the split. Set this to one of `train`/`validation`/`test` to generate a specific split. Omit the `split` argument to generate all splits for a dataset.
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
Dataset Information
A data point can be accessed by indexing the dataset object loaded through `load_dataset`:

```python
print(librispeech[0])
```
A typical data point comprises the path to the audio file and its transcription. Also included is information on the dataset from which the sample derives and a unique identifier name:
```python
{
    'dataset': 'librispeech',
    'audio': {'path': '/home/esc-bencher/.cache/huggingface/datasets/downloads/extracted/d2da1969fe9e7d06661b5dc370cf2e3c119a14c35950045bcb76243b264e4f01/374-180298-0000.flac',
              'array': array([ 7.01904297e-04,  7.32421875e-04,  7.32421875e-04, ...,
                              -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
              'sampling_rate': 16000},
    'text': 'chapter sixteen i might have told you of the beginning of this liaison in a few lines but i wanted you to see every step by which we came i to agree to whatever marguerite wished',
    'id': '374-180298-0000'
}
```
Data Fields
- `dataset`: name of the ESC dataset from which the sample is taken.
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text`: the transcription of the audio file.
- `id`: unique id of the data sample.
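As a minimal illustration of how these fields fit together, the clip duration can be recovered from the decoded array and its sampling rate. The sample below is synthetic (the array values and path are placeholders, not real LibriSpeech data):

```python
# Synthetic data point mirroring the ESC schema above; values are
# illustrative stand-ins, not taken from the real dataset.
sample = {
    "dataset": "librispeech",
    "audio": {
        "path": "/path/to/374-180298-0000.flac",   # placeholder path
        "array": [0.0] * 32000,                    # stand-in for the decoded waveform
        "sampling_rate": 16000,
    },
    "text": "chapter sixteen",
    "id": "374-180298-0000",
}

# Duration in seconds = number of samples / sampling rate
duration = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(f"{sample['id']}: {duration:.1f}s, transcript: {sample['text']!r}")
```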
Data Preparation
Audio
The audio for all ESC datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face Datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required for use in training/evaluation scripts.
Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time. It is therefore important to query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only the necessary 'error correction' steps, such as removing junk tokens (`<unk>`) or converting spelled-out punctuation to symbolic form (`<comma>` to `,`). As such, no further preparation of the transcriptions is required for use in training/evaluation scripts.
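As a hedged sketch of what such 'error correction' involves, the steps above can be expressed as simple string operations. Only `<unk>` and `<comma>` are mentioned in this card; the `<period>` token below is an assumed analogue, and the exact normalisation applied upstream may differ:

```python
import re

# Hypothetical error-correction pass: drop junk tokens and map spelled-out
# punctuation to symbolic form, as described above.
PUNCT = {"<comma>": ",", "<period>": "."}  # <period> is an assumed analogue

def error_correct(text: str) -> str:
    for token, symbol in PUNCT.items():
        # Attach the punctuation mark directly to the preceding word.
        text = text.replace(" " + token, symbol)
    text = re.sub(r"<unk>\s*", "", text)       # remove junk tokens
    return re.sub(r"\s+", " ", text).strip()   # tidy whitespace

print(error_correct("hello <comma> world <unk> goodbye"))
```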
Transcriptions are provided for the training and validation splits. Transcriptions are not provided for the test splits: the ESC benchmark requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esc-benchmark/esc for scoring.
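The official submission format is defined on the ESC space linked above; as an illustrative sketch only, test predictions could be collected as a mapping from each sample's `id` (see Data Fields) to the predicted transcription and serialised to disk before upload. The file name and JSON layout here are assumptions, not the official format:

```python
import json

# Hypothetical predictions keyed by the unique sample id. Replace with real
# model outputs; this JSON layout is an assumption, not the official ESC
# submission format.
predictions = {
    "374-180298-0000": "chapter sixteen i might have told you",
    "374-180298-0001": "of the beginning of this liaison",
}

with open("librispeech_test_predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)

# Reload to sanity-check the round trip.
with open("librispeech_test_predictions.json") as f:
    assert json.load(f) == predictions
```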
Access
All eight of the datasets in ESC are accessible, and licensing is freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the relevant datasets' pages:
- Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
- GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
- SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the LibriVox project. It is licensed under CC-BY-4.0.
Example usage:

```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech")
```
Train/validation splits:
- `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`)
- `validation.clean`
- `validation.other`

Test splits:
- `test.clean`
- `test.other`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", subconfig="clean.100")
```
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset
Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset contains approximately 1,400 hours of audio data from speakers of various nationalities and accents, recorded under different conditions. It is licensed under CC0-1.0.
Example usage:
```python
common_voice = load_dataset("esc-benchmark/esc-datasets", "common_voice", use_auth_token=True)
```
Training/validation splits:
- `train`
- `validation`

Test splits:
- `test`
VoxPopuli
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech, largely from non-native English speakers. It is licensed under CC0.
Example usage:
```python
voxpopuli = load_dataset("esc-benchmark/esc-datasets", "voxpopuli")
```
Training/validation splits:
- `train`
- `validation`

Test splits:
- `test`
TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
```python
tedlium = load_dataset("esc-benchmark/esc-datasets", "tedlium")
```
Training/validation splits:
- `train`
- `validation`

Test splits:
- `test`
GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
```python
gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (2,500 h))
- `validation`

Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)
SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
```python
spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (~5,000 h))
- `validation`

Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
- `s`: small subset of training data (~200 h)
- `m`: medium subset of training data (~1,000 h)
Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
```python
earnings22 = load_dataset("esc-benchmark/esc-datasets", "earnings22")
```
Training/validation splits:
- `train`
- `validation`

Test splits:
- `test`
AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
```python
ami = load_dataset("esc-benchmark/esc-datasets", "ami")
```
Training/validation splits:
- `train`
- `validation`

Test splits:
- `test`