---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: 'no'
      dtype: int64
    - name: en_speaker
      dtype: string
    - name: en_sentence
      dtype: string
    - name: en_spkid
      dtype: string
    - name: en_wav
      dtype: string
    - name: en_spk_gender
      dtype: string
    - name: en_spk_state
      dtype: string
    - name: scenario_id
      dtype: string
    - name: scenario_tag
      dtype: string
    - name: scenario_title
      dtype: string
    - name: scenario_original_language
      dtype: string
    - name: ja_speaker
      dtype: string
    - name: ja_sentence
      dtype: string
    - name: ja_spkid
      dtype: string
    - name: ja_wav
      dtype: string
    - name: ja_spk_gender
      dtype: string
    - name: ja_spk_prefecture
      dtype: string
  splits:
    - name: train
      num_bytes: 4857393677
      num_examples: 40000
    - name: validation
      num_bytes: 508034664.324
      num_examples: 4102
    - name: test
      num_bytes: 569876258.76
      num_examples: 4240
  download_size: 5807088419
  dataset_size: 5935304600.084001
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: cc-by-nc-sa-4.0
task_categories:
  - translation
language:
  - ja
  - en
pretty_name: SpeechBSD
extra_gated_prompt: >-
  Please enter your affiliation and agree to the following terms to use the
  dataset.
extra_gated_fields:
  Affiliation: text
  I accept the license: checkbox
  I agree to not attempt to determine the identity of speakers in this dataset: checkbox
---

# SpeechBSD Dataset

This is an extension of the BSD corpus, a Japanese-English dialogue translation corpus, with audio files and speaker attribute information. Although the primary intended use is speech-to-text translation, it can also be viewed as a text or speech Japanese/English/cross-language dialogue corpus and used for various tasks.

## Dataset Statistics

|                                     | Train       | Dev.        | Test        |
| ----------------------------------- | ----------- | ----------- | ----------- |
| Scenarios                           | 670         | 69          | 69          |
| Sentences                           | 20,000      | 2,051       | 2,120       |
| En audio (h)                        | 20.1        | 2.1         | 2.1         |
| Ja audio (h)                        | 25.3        | 2.7         | 2.7         |
| En audio gender (male % / female %) | 47.2 / 52.8 | 50.1 / 49.9 | 44.4 / 55.6 |
| Ja audio gender (male % / female %) | 68.0 / 32.0 | 62.3 / 37.7 | 69.0 / 31.0 |

## Data Structure

We also provide the dataset at the GitHub repository, where the data structure is similar to that of the original BSD corpus. Here, the data is instead represented in JSONL format, where each instance contains one audio file.

### Data Instances

There are two types of instances: one that contains an English wav file, and one that contains a Japanese wav file.

A typical instance that contains an English wav file looks like this:

```python
{
  'audio': {'path': '/path/to/speech-bsd-hf/data/dev/190315_E001_17_spk0_no10_en.wav', 'array': array([ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00, ...,
       -6.10351562e-05, -1.83105469e-04,  6.10351562e-05]), 'sampling_rate': 16000},
  'no': 10,
  'en_speaker': 'Mr. Ben Sherman',
  'en_sentence': 'You can also check out major institutions like banks, accounting companies, and market research companies.',
  'en_spkid': '190315_E001_17_spk0_en',
  'en_wav': '190315_E001_17_spk0_no10_en.wav',
  'en_spk_gender': 'M',
  'en_spk_state': 'CA',
  'scenario_id': '190315_E001_17',
  'scenario_tag': 'training',
  'scenario_title': 'Training: How to do research',
  'scenario_original_language': 'en',
  'ja_speaker': None,
  'ja_sentence': None,
  'ja_spkid': None,
  'ja_wav': None,
  'ja_spk_gender': None,
  'ja_spk_prefecture': None
}
```

In the corresponding instance that contains a Japanese wav file, en_speaker, en_sentence, en_spkid, en_wav, en_spk_gender, and en_spk_state are None, and the corresponding Japanese fields are filled instead.
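Because every utterance appears once as an English instance and once as a Japanese instance, the two sides can be matched by scenario_id and utterance number. The following is a minimal sketch in plain Python over JSONL-style records (the helper name and sample records are hypothetical, not part of the dataset):

```python
def pair_instances(instances):
    """Group English and Japanese instances of the same utterance.

    The two instances of one utterance share the same scenario_id and
    utterance number `no`; the side is identified by which wav field
    is filled.
    """
    pairs = {}
    for ex in instances:
        key = (ex["scenario_id"], ex["no"])
        side = "en" if ex["en_wav"] is not None else "ja"
        pairs.setdefault(key, {})[side] = ex
    return pairs

# Illustration with two hypothetical records of the same utterance:
en = {"scenario_id": "190315_E001_17", "no": 10,
      "en_wav": "190315_E001_17_spk0_no10_en.wav", "en_sentence": "Hello.",
      "ja_wav": None, "ja_sentence": None}
ja = {"scenario_id": "190315_E001_17", "no": 10,
      "en_wav": None, "en_sentence": None,
      "ja_wav": "190315_E001_17_spk0_no10_ja.wav", "ja_sentence": "こんにちは。"}

pair = pair_instances([en, ja])[("190315_E001_17", 10)]
# pair["en"]["en_sentence"] and pair["ja"]["ja_sentence"] form a translation pair.
```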

### Data Fields

Each utterance is a part of a scenario.

The scenario information is given by scenario_id, scenario_tag, scenario_title, and scenario_original_language, which correspond to id, tag, title, and original_language in the original BSD corpus, respectively.

The utterance-specific fields are the following:

- no, ja_speaker, en_speaker, ja_sentence, and en_sentence are identical to those of the BSD corpus.
- ja_spkid and en_spkid are speaker IDs that are consistent throughout a conversation.
- ja_wav and en_wav are the wav file names.
- ja_spk_gender and en_spk_gender give the gender of the speaker of the corresponding wav file (either "M" or "F").
- ja_spk_prefecture and en_spk_state give the speaker's region of origin.
- As with other Hugging Face audio datasets, audio contains the full path to the audio file in your environment, the decoded audio array, and the sampling rate (16000 Hz). The librosa and soundfile packages are required to access this field.
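For instance, the duration of an utterance can be computed directly from the decoded field. A minimal sketch (the dictionary below only mimics the shape of the audio field shown above):

```python
def audio_duration_seconds(audio):
    # A decoded audio field holds the sample array and its sampling rate,
    # so the clip length in seconds is len(array) / sampling_rate.
    return len(audio["array"]) / audio["sampling_rate"]

# Two seconds of silence at the dataset's 16 kHz sampling rate:
dummy = {"array": [0.0] * 32000, "sampling_rate": 16000}
print(audio_duration_seconds(dummy))  # 2.0
```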

Here are some additional notes:

- Speakers are different if the speaker IDs are different. For example, if a conversation is spoken by two speakers taking turns, there are 4 speakers in total (2 Japanese speakers and 2 English speakers). However, utterances with different speaker IDs may actually be spoken by the same person because of the way the audio was collected.

- The gender of a voice in the audio does not necessarily match the one inferable from the text. For example, even if en_speaker is "Mr. Sam Lee", the audio may contain a female voice. This is because no explicit gender information is given in the original BSD corpus.

- Japanese speech was collected from Japanese speakers who are from Japan.
  - ja_spk_prefecture is one of the 47 prefectures or "不明" (unknown). Japanese prefecture names end in one of four characters: "県", "府", "都", and "道".
    - Prefectures ending in "県" or "府" are written without those characters (e.g., "神奈川", "京都").
    - Tokyo is written "東京" without "都".
    - Hokkaido is written "北海道".
- English speech was collected from English speakers who are from the US.
  - en_spk_state is one of the 50 states, written as its postal abbreviation.
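The prefecture naming convention above can be undone to recover full official names. A hedged sketch (the function name is hypothetical; the suffix rules simply encode the convention stated above, plus the fact that 大阪 and 京都 are the two "府" prefectures):

```python
def full_prefecture_name(pref):
    """Restore the suffix stripped from ja_spk_prefecture values.

    "北海道" and "不明" (unknown) are stored as-is, "東京" lacks "都",
    "大阪" and "京都" lack "府", and all other prefectures lack "県".
    """
    if pref in ("北海道", "不明"):
        return pref
    if pref == "東京":
        return pref + "都"
    if pref in ("大阪", "京都"):
        return pref + "府"
    return pref + "県"

print(full_prefecture_name("神奈川"))  # 神奈川県
print(full_prefecture_name("京都"))    # 京都府
```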

## Citation

If you find the dataset useful, please cite our ACL 2023 Findings paper: Towards Speech Dialogue Translation Mediating Speakers of Different Languages.

```bibtex
@inproceedings{shimizu-etal-2023-towards,
  title = "Towards Speech Dialogue Translation Mediating Speakers of Different Languages",
  author = "Shimizu, Shuichiro  and
    Chu, Chenhui  and
    Li, Sheng  and
    Kurohashi, Sadao",
  booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
  month = jul,
  year = "2023",
  address = "Toronto, Canada",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2023.findings-acl.72",
  pages = "1122--1134",
  abstract = "We present a new task, speech dialogue translation mediating speakers of different languages. We construct the SpeechBSD dataset for the task and conduct baseline experiments. Furthermore, we consider context to be an important aspect that needs to be addressed in this task and propose two ways of utilizing context, namely monolingual context and bilingual context. We conduct cascaded speech translation experiments using Whisper and mBART, and show that bilingual context performs better in our settings.",
}
```

## License

This dataset is licensed under CC-BY-NC-SA 4.0.