
As part of the ESB benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, covering a range of domains and speaking styles. The transcriptions are annotated according to a consistent style guide in two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB datasets, grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance across a range of speech recognition conditions.

The diagnostic dataset can be downloaded and prepared in much the same way as the ESB datasets:

from datasets import load_dataset

esb_diagnostic_ami = load_dataset("esb/diagnostic-dataset", "ami")

Data Selection

Audio

To provide an adequate representation of all ESB datasets, we selected at least 1 hour of audio from the validation set of each of the 8 constituent ESB datasets. Following the convention of LibriSpeech, we then used a public ASR model to split each dataset into clean/other subsets based on word error rate (WER). (Note that for LibriSpeech we kept the existing clean/other splits.) The clean subset contains the 'easier' 50% of samples (lower WER), and the other subset the more difficult 50%.
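As a rough illustration of this selection step, the sketch below scores each sample with a proxy ASR system and assigns the lower-WER half to clean. The `transcribe` callable and the sample fields are hypothetical; the actual split was produced with a public ASR model, not this exact procedure.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + (r != h))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

def clean_other_split(samples, transcribe):
    """Rank samples by the WER of a proxy ASR model; lower half -> 'clean'."""
    scored = sorted(samples, key=lambda s: wer(s["text"], transcribe(s)))
    mid = len(scored) // 2
    return scored[:mid], scored[mid:]
```

A median split like this guarantees the two subsets are the same size, matching the 50/50 clean/other convention described above.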

To obtain the clean diagnostic-subset of AMI, either "slice" the clean/other split:

ami_diagnostic_clean = esb_diagnostic_ami["clean"]

Or download the clean subset standalone:

ami_diagnostic_clean = load_dataset("esb/diagnostic-dataset", "ami", split="clean")

Transcriptions

First, the transcriptions were produced by a human annotator without sight of the original transcript, avoiding any bias from it. The transcriptions follow a strict orthographic and verbatim style guide, in which every word, disfluency and partial word is transcribed. Punctuation and formatting follow standard English print orthography (e.g. ‘July 10th in 2021.’). Breaks in thought and partial words are indicated with ‘--’. In addition to the orthographic transcriptions, a normalised format was produced, with all punctuation removed and non-standard words such as dates, currencies and abbreviations verbalised exactly as they are spoken (e.g. ‘july tenth in twenty twenty one’).

Although great care was taken to standardise the orthography, some ambiguity remains, especially around the use of commas and whether to introduce a sentence break for utterances starting with ‘And’. Each sample was therefore checked by a second human with access to both the original ground truth and the independently produced style-consistent transcript. The two versions were merged to produce new, high-quality ground truths in both the normalised and orthographic formats.
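A minimal sketch of the normalisation described above, covering only lowercasing and punctuation removal; verbalising dates, currencies and abbreviations is a much harder step and is deliberately omitted here:

```python
import re

def normalise(text: str) -> str:
    """Lowercase and strip punctuation from an orthographic transcript.

    Note: full normalisation also verbalises numbers, dates and
    abbreviations before this step; that is not handled here.
    """
    text = text.lower()
    text = text.replace("--", " ")           # break-in-thought marker
    text = re.sub(r"[^a-z'\s]", "", text)    # drop punctuation, keep apostrophes
    return " ".join(text.split())            # collapse whitespace
```

Applied to the orthographic example shown under Dataset Information below, this reproduces the corresponding norm_transcript, since that sample contains no numbers or abbreviations.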

Dataset Information

A data point can be accessed by indexing the dataset object loaded through load_dataset:

print(ami_diagnostic_clean[0])

A typical data point comprises the path to the audio file and its transcription. Also included is information on the dataset from which the sample derives and a unique identifier:

{
    'audio': {'path': None,
        'array': array([ 7.01904297e-04,  7.32421875e-04,  7.32421875e-04, ...,
               -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
        'sampling_rate': 16000},
    'ortho_transcript': 'So, I guess we have to reflect on our experiences with remote controls to decide what, um, we would like to see in a convenient practical',
    'norm_transcript': 'so i guess we have to reflect on our experiences with remote controls to decide what um we would like to see in a convenient practical',
    'id': 'AMI_ES2011a_H00_FEE041_0062835_0064005',
    'dataset': 'ami',
}

Data Fields

  • audio: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.

  • ortho_transcript: the orthographic transcription of the audio file.

  • norm_transcript: the normalised transcription of the audio file.

  • id: unique id of the data sample.

  • dataset: the string name of the dataset the sample belongs to.

We encourage participants to train their ASR system on the AMI dataset, the smallest of the 8 ESB datasets, and then evaluate it on the ortho_transcript for all of the datasets in the diagnostic dataset. This indicates how the system is likely to fare on other audio domains. The predictions can then be normalised by removing casing and punctuation, converting numbers to their spelled-out form and expanding abbreviations, and assessed against the norm_transcript. This shows the effect of orthography on system performance.
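The per-domain evaluation suggested above can be sketched as follows. The `wer_by_dataset` helper and the (dataset, reference, prediction) triples are illustrative conventions of ours, not part of any ESB tooling; errors and reference lengths are pooled per dataset before dividing, as is standard for corpus-level WER.

```python
from collections import defaultdict

def _word_errors(reference: str, hypothesis: str):
    """Return (word-level edit distance, reference length)."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (r != h))
        prev = curr
    return prev[-1], len(ref)

def wer_by_dataset(results):
    """Corpus-level WER per source dataset.

    `results` is an iterable of (dataset, reference, prediction) triples,
    e.g. built by zipping the diagnostic set's 'dataset' and
    'ortho_transcript' fields with a system's predictions.
    """
    totals = defaultdict(lambda: [0, 0])  # dataset -> [errors, ref words]
    for dataset, reference, prediction in results:
        errs, n = _word_errors(reference, prediction)
        totals[dataset][0] += errs
        totals[dataset][1] += n
    return {d: e / max(n, 1) for d, (e, n) in totals.items()}
```

Running this once on the ortho_transcript references and again on normalised predictions against norm_transcript gives the two views of performance described above.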

Access

All eight of the ESB datasets are publicly accessible and freely licensed. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:

Contributions

We extend our sincere thanks to Georg Kucsko, Keenan Freyberg and Michael Shulman of Suno.ai for creating and annotating the diagnostic dataset.
