---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: timestamps_start
      sequence: float64
    - name: timestamps_end
      sequence: float64
    - name: speakers
      sequence: string
  splits:
    - name: train
      num_bytes: 186825817
      num_examples: 61
  download_size: 172710554
  dataset_size: 186825817
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
tags:
  - speaker diarization
  - speaker-segmentation
  - voice-activity-detection
language:
  - fr
license: mit
---

# Dataset Card for the Simsamu dataset

This repository contains recordings of simulated medical dispatch dialogs in French, annotated for diarization and transcription. It is published under the MIT license.

These dialogs were recorded as part of the training of emergency medicine interns, which consisted of simulating a medical dispatch call, with the interns taking turns playing the caller and the regulating doctor.

Each situation (e.g., road accident, chest pain, burns) was decided randomly in advance and kept hidden from whoever was playing the medical dispatcher. The affiliation between the caller and the patient (family, friend, colleague, etc.) and the caller's communication mode were then randomly selected. The caller had to adapt his or her performance to the communication mode associated with the situation. Seven communication modes were defined: shy, procedural, angry, cooperative, frightened, impassive, and incomprehensible.

Regarding sound quality, the voice of the regulating doctor is directly picked up by a microphone, whereas the voice of the caller is transmitted through the phone network and re-emitted by a phone speaker before being picked up by the microphone. This leads to different acoustic characteristics between the caller's voice and the regulator's, the latter often being much clearer. This phenomenon is also present in recordings from actual dispatch services, where the regulator's voice is recorded directly in a quiet room, whereas the caller is often calling from a noisier environment and their voice is altered by the phone network's compression.

The dataset is composed of 61 audio recordings with a total duration of 3 hours and 15 minutes, and an average duration per recording of 3 minutes and 11 seconds.

Note: This dataset has been preprocessed using diarizers, which makes it compatible with the diarizers library for fine-tuning pyannote segmentation models.

## Example Usage

```python
from datasets import load_dataset

ds = load_dataset("diarizers-community/simsamu")

print(ds)
```

gives:

```
DatasetDict({
    train: Dataset({
        features: ['audio', 'timestamps_start', 'timestamps_end', 'speakers'],
        num_rows: 61
    })
})
```
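
Each example pairs the audio with parallel lists of segment start times, end times, and speaker labels. The snippet below is a minimal sketch of how one recording's annotations can be inspected, using only the features listed above; the exact structure of the decoded audio may vary with your `datasets` version.

```python
# Inspect the first recording and its diarization annotations
example = ds["train"][0]

audio = example["audio"]       # typically a dict with "array" and "sampling_rate"
print(audio["sampling_rate"])  # 16000

# Segment i runs from timestamps_start[i] to timestamps_end[i]
# and is attributed to speakers[i]
for start, end, speaker in zip(
    example["timestamps_start"], example["timestamps_end"], example["speakers"]
):
    print(f"{speaker}: {start:.2f}s -> {end:.2f}s")
```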

## Dataset source

## Contribution

Thanks to @kamilakesbi and @sanchit-gandhi for adding this dataset.