---
dataset_info:
  - config_name: ihm
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: timestamps_start
        sequence: float64
      - name: timestamps_end
        sequence: float64
      - name: speakers
        sequence: string
    splits:
      - name: train
        num_bytes: 9326329826
        num_examples: 136
      - name: validation
        num_bytes: 1113896048
        num_examples: 18
      - name: test
        num_bytes: 1044169059
        num_examples: 16
    download_size: 10267627474
    dataset_size: 11484394933
  - config_name: sdm
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: timestamps_start
        sequence: float64
      - name: timestamps_end
        sequence: float64
      - name: speakers
        sequence: string
    splits:
      - name: train
        num_bytes: 9208897240
        num_examples: 134
      - name: validation
        num_bytes: 1113930821
        num_examples: 18
      - name: test
        num_bytes: 1044187355
        num_examples: 16
    download_size: 10679615636
    dataset_size: 11367015416
configs:
  - config_name: ihm
    data_files:
      - split: train
        path: ihm/train-*
      - split: validation
        path: ihm/validation-*
      - split: test
        path: ihm/test-*
  - config_name: sdm
    data_files:
      - split: train
        path: sdm/train-*
      - split: validation
        path: sdm/validation-*
      - split: test
        path: sdm/test-*
license: cc-by-4.0
language:
  - en
tags:
  - speaker-diarization
  - voice-activity-detection
  - speaker-segmentation
---

# Dataset Card for the AMI dataset for speaker diarization

The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers.

**Note**: this dataset has been preprocessed with [diarizers](https://github.com/huggingface/diarizers) so that it can be used directly with the diarizers library to fine-tune [pyannote](https://github.com/pyannote/pyannote-audio) segmentation models.
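
Concretely, each row pairs one full meeting recording with three parallel sequences (`timestamps_start`, `timestamps_end`, `speakers`), one entry per annotated speech segment. As a minimal sketch of what that format maps onto (assuming `pyannote.core` is installed; `row_to_annotation` is an illustrative helper, not part of the diarizers API), the triples can be folded into a `pyannote.core.Annotation`:

```python
# Minimal sketch: fold one row's parallel annotation lists into a
# pyannote.core.Annotation. `row_to_annotation` is an illustrative
# helper, not part of the diarizers API.
from pyannote.core import Annotation, Segment

def row_to_annotation(row: dict) -> Annotation:
    annotation = Annotation()
    for start, end, speaker in zip(
        row["timestamps_start"], row["timestamps_end"], row["speakers"]
    ):
        # One labelled segment per (start, end, speaker) triple.
        annotation[Segment(start, end)] = speaker
    return annotation
```

An `Annotation` built this way can serve as the reference when scoring a model with pyannote's diarization metrics.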

## Example Usage

```python
from datasets import load_dataset

ds = load_dataset("diarizers-community/ami", "ihm")

print(ds)
```

gives:

```
DatasetDict({
    train: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 136
    })
    validation: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 18
    })
    test: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 16
    })
})
```
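
Each example can then be inspected directly. A quick illustrative look at the first training row (the `audio` column decodes through the standard `datasets` `Audio` feature):

```python
# Inspect the first training example.
example = ds["train"][0]

# "audio" decodes to a dict holding the waveform as a NumPy array,
# its sampling rate, and the source file path.
audio = example["audio"]
print(audio["sampling_rate"])  # 16000
print(audio["array"].shape)    # one long waveform per meeting

# The reference diarization: one (start, end, speaker) triple per segment.
segments = zip(
    example["timestamps_start"], example["timestamps_end"], example["speakers"]
)
for start, end, speaker in list(segments)[:5]:
    print(f"{speaker}: {start:.2f}s -> {end:.2f}s")
```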

## Dataset source

The original AMI Meeting Corpus is distributed by the University of Edinburgh: https://groups.inf.ed.ac.uk/ami/corpus/

## Citation

```bibtex
@inproceedings{mccowan2005ami,
  author    = {Mccowan, Iain and Carletta, J and Kraaij, Wessel and Ashby, Simone and Bourban, S and Flynn, M and Guillemot, M and Hain, Thomas and Kadlec, J and Karaiskos, V and Kronenthal, M and Lathoud, Guillaume and Lincoln, Mike and Lisowska Masson, Agnes and Post, Wilfried and Reidsma, Dennis and Wellner, P},
  title     = {The AMI meeting corpus},
  booktitle = {Int'l. Conf. on Methods and Techniques in Behavioral Research},
  year      = {2005}
}
```

## Contribution

Thanks to @kamilakesbi and @sanchit-gandhi for adding this dataset.