---
configs:
  - config_name: default
    data_files:
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: speaker
      dtype: string
    - name: text
      dtype: string
    - name: accent
      dtype: string
    - name: raw_accent
      dtype: string
    - name: gender
      dtype: string
    - name: l1
      dtype: string
    - name: audio
      dtype: audio
  splits:
    - name: validation
      num_bytes: 2615574877.928
      num_examples: 9848
    - name: test
      num_bytes: 4926549782.438
      num_examples: 9289
  download_size: 6951164322
  dataset_size: 7542124660.365999
task_categories:
  - automatic-speech-recognition
  - audio-classification
---

## Dataset Description

**EdAcc: The Edinburgh International Accents of English Corpus**

The Edinburgh International Accents of English Corpus (EdAcc) is a new automatic speech recognition (ASR) dataset composed of 40 hours of English dyadic conversations between speakers with a diverse set of accents. EdAcc includes a wide range of first and second-language varieties of English and a linguistic background profile of each speaker.

### Supported Tasks and Leaderboards

- **Automatic Speech Recognition (ASR):** the model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard, which can be found at https://groups.inf.ed.ac.uk/edacc/leaderboard.html and ranks models based on their WER scores on the dev and test sets.
- **Audio Classification:** the model is presented with an audio file and asked to predict the accent or gender of the speaker. The most common evaluation metric is percentage accuracy.
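At its core, the WER mentioned above is the word-level edit distance between the hypothesis and the reference, normalised by the number of reference words. A minimal standard-library sketch follows (real evaluations typically normalise the text first and use an established implementation such as `jiwer`; the phrases below are illustrative, not drawn from the dataset):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("c eleven dash p one", "see eleven dash p one"))  # 0.2
```

One substitution out of five reference words gives a WER of 0.2, i.e. 20%.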

### How to use

The `datasets` library allows you to load and pre-process EdAcc in just two lines of code. The dataset can be downloaded from the Hugging Face Hub and pre-processed using the `load_dataset` function.

For example, the following code cell loads and pre-processes the EdAcc dataset, and subsequently returns the first sample in the validation (dev) set:

```python
from datasets import load_dataset

edacc = load_dataset("edinburghcstr/edacc")
sample = edacc["validation"][0]
```

Using the `datasets` library, you can also stream the dataset on the fly by passing `streaming=True` to the `load_dataset` call. Loading a dataset in streaming mode loads individual samples one at a time, rather than downloading the entire dataset to disk. The only change is that you can no longer access individual samples using Python indexing (i.e. `edacc["validation"][0]`). Instead, you have to iterate over the dataset, for example with a `for` loop:

```python
from datasets import load_dataset

edacc = load_dataset("edinburghcstr/edacc", streaming=True)
sample = next(iter(edacc["validation"]))
```

For more information, refer to the blog post A Complete Guide to Audio Datasets.

## Dataset Structure

### Data Instances

A typical data point comprises the loaded audio sample, usually called `audio`, and its transcription, called `text`. Additional information about the speaker's gender, accent, and native language (L1) is also provided:

```python
{'speaker': 'EDACC-C06-A',
 'text': 'C ELEVEN DASH P ONE',
 'accent': 'Southern British English',
 'raw_accent': 'English',
 'gender': 'male',
 'l1': 'Southern British English',
 'audio': {'path': 'EDACC-C06-1.wav',
  'array': array([ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00, ...,
         -3.05175781e-05, -3.05175781e-05, -6.10351562e-05]),
  'sampling_rate': 32000}}
```
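Given a decoded `audio` dict like the one above, the clip duration follows directly from the array length and the sampling rate (`clip_duration_seconds` is a hypothetical helper for illustration, not part of the dataset API):

```python
def clip_duration_seconds(audio: dict) -> float:
    # audio: dict with "array" (decoded samples) and "sampling_rate" (Hz)
    return len(audio["array"]) / audio["sampling_rate"]

# 64,000 samples at 32 kHz correspond to a 2-second clip
audio = {"array": [0.0] * 64000, "sampling_rate": 32000}
print(clip_duration_seconds(audio))  # 2.0
```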

### Data Fields

- `speaker`: the speaker id
- `text`: the target transcription of the audio file
- `accent`: the speaker's accent as annotated by a trained linguist. These accents are standardised into common categories, as opposed to `raw_accent`, which holds a free-form description of the speaker's accent
- `raw_accent`: the speaker's accent as described by the speaker themselves
- `gender`: the gender of the speaker
- `l1`: the native language (L1) of the speaker, standardised by the trained linguist
- `audio`: a dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
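To illustrate what a sampling-rate change means for the decoded array, here is a naive integer-factor downsampling sketch. This is only a conceptual illustration: proper resampling, as performed by the `datasets` audio decoder, low-pass filters the signal first to avoid aliasing.

```python
def decimate(samples: list, factor: int) -> list:
    # Naive downsampling: keep every `factor`-th sample.
    # Halving a 32 kHz signal this way yields a 16 kHz signal
    # (without the anti-aliasing filter a real resampler applies).
    return samples[::factor]

clip = [0.0] * 64000        # 2 seconds of audio at 32 kHz
half = decimate(clip, 2)    # 32,000 samples, i.e. the same 2 seconds at 16 kHz
```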

## Dataset Creation

The data collection process for EdAcc is structured to elicit natural speech. Participants conducted relaxed conversations over Zoom and completed a comprehensive questionnaire to gather further metadata. The questionnaire captures detailed information on participants' linguistic backgrounds, including their first and second languages, the age at which they began learning English, language use across different life domains, residential history, the nature of their relationship with their conversation partner, and self-perceptions of their English accent. It also collects social demographics such as age, gender, ethnic background, and education level. The resulting conversations were transcribed by professional transcribers, with each speaker's turn, along with any overlaps, environmental sounds, laughter, and hesitations, accurately documented, contributing to the richness and authenticity of the dataset.

## Licensing Information

Creative Commons Attribution-ShareAlike International Public License (CC BY-SA)

## Citation Information

```bibtex
@inproceedings{sanabria23edacc,
  title = {{The Edinburgh International Accents of English Corpus: Towards the Democratization of English ASR}},
  author = {Sanabria, Ramon and Bogoychev, Nikolay and Markl, Nina and Carmantini, Andrea and Klejch, Ondrej and Bell, Peter},
  booktitle = {ICASSP 2023},
  year = {2023},
}
```

## Contributions

Thanks to @sanchit-gandhi for adding this dataset.