Datasets:
annotations_creators:
- expert-generated
language:
- es
language_creators:
- other
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: >-
CIEMPIESS LIGHT CORPUS: Audio and Transcripts of Mexican Spanish Broadcast
Conversations.
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- ciempiess
- spanish
- mexican spanish
- ciempiess project
- ciempiess-unam project
task_categories:
- automatic-speech-recognition
task_ids: []
Dataset Card for ciempiess_light
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: CIEMPIESS-UNAM Project
- Repository: CIEMPIESS LIGHT at LDC
- Paper: CIEMPIESS: A New Open-Sourced Mexican Spanish Radio Corpus
- Point of Contact: Carlos Mena
Dataset Summary
The CIEMPIESS LIGHT is a Radio Corpus designed to create acoustic models for automatic speech recognition, and it is made up of recordings of spontaneous conversations in Mexican Spanish between a radio moderator and his guests. It is an enhanced version of the CIEMPIESS Corpus (LDC item LDC2015S07).
CIEMPIESS LIGHT is "light" because it does not include many of the files of the first version of CIEMPIESS, and it is "enhanced" because it incorporates many improvements, some of them suggested by our community of users, that make this version more convenient for modern speech recognition engines.
The CIEMPIESS LIGHT Corpus was created at the Laboratorio de Tecnologías del Lenguaje of the Facultad de Ingeniería (FI) at the Universidad Nacional Autónoma de México (UNAM) between 2015 and 2016 by Carlos Daniel Hernández Mena, supervised by José Abel Herrera Camacho, head of the Laboratory.
CIEMPIESS is the acronym for:
"Corpus de Investigación en Español de México del Posgrado de Ingeniería Eléctrica y Servicio Social".
Example Usage
The CIEMPIESS LIGHT contains only the train split:
from datasets import load_dataset
ciempiess_light = load_dataset("ciempiess/ciempiess_light")
It is also valid to do:
from datasets import load_dataset
ciempiess_light = load_dataset("ciempiess/ciempiess_light", split="train")
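For quick inspection without a full download, the corpus can also be read in streaming mode. The following is a minimal sketch using the standard streaming option of the datasets library:
from datasets import load_dataset

# Stream the corpus instead of downloading and extracting the full archive
ciempiess_light = load_dataset("ciempiess/ciempiess_light", split="train", streaming=True)

# Take the first example; audio is decoded lazily in streaming mode
first_example = next(iter(ciempiess_light))
print(first_example["normalized_text"])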
Supported Tasks
automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
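As an illustration of the metric, a hypothetical reference/hypothesis pair can be scored with the jiwer package (an assumption on our part; any WER implementation works the same way):
import jiwer

# Reference transcript from the corpus and a hypothetical model output
reference = "estamos con el profesor javier estejel vargas"
hypothesis = "estamos con el profesor javier vargas"

# WER = (substitutions + deletions + insertions) / number of reference words
print(jiwer.wer(reference, hypothesis))  # 0.1428... (1 deletion out of 7 words)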
Languages
The language of the corpus is Spanish with the accent of Central Mexico.
Dataset Structure
Data Instances
{'audio_id': 'CMPL_F_32_11ANG_00003',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/5acd9ef350f022d5acb7f2a4f9de90371ffd5552c8d1bf849ca16a83e582fe4b/train/female/F_32/CMPL_F_32_11ANG_00003.flac',
'array': array([ 6.1035156e-05, -2.1362305e-04, -4.8828125e-04, ...,
3.3569336e-04, 6.1035156e-04, 0.0000000e+00], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'F_32',
'gender': 'female',
'duration': 3.256999969482422,
'normalized_text': 'estamos con el profesor javier estejel vargas'
}
Data Fields
- audio_id (string) - id of the audio segment
- audio (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
- speaker_id (string) - id of the speaker
- gender (string) - gender of the speaker (male or female)
- duration (float32) - duration of the audio file in seconds
- normalized_text (string) - normalized transcription of the audio segment
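A minimal sketch of how these fields are accessed, and of how the audio column can be cast to another sampling rate with the standard Audio feature of the datasets library (the target rate below is only illustrative):
from datasets import load_dataset, Audio

ciempiess_light = load_dataset("ciempiess/ciempiess_light", split="train")

# The audio column decodes to a dict with "path", "array" and "sampling_rate"
sample = ciempiess_light[0]
print(sample["audio_id"], sample["gender"], sample["duration"])
print(sample["audio"]["sampling_rate"])  # 16000

# Cast the audio column if a model expects a different sampling rate
ciempiess_light = ciempiess_light.cast_column("audio", Audio(sampling_rate=8000))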
Data Splits
The corpus contains only the train split, which has a total of 16663 speech files from 53 male speakers and 34 female speakers, with a total duration of 18 hours and 25 minutes.
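These figures can be checked directly from the metadata columns; a minimal sketch:
from collections import Counter
from datasets import load_dataset

ciempiess_light = load_dataset("ciempiess/ciempiess_light", split="train")

# Number of segments and total duration in hours, from the duration column
print(len(ciempiess_light))                               # 16663
print(round(sum(ciempiess_light["duration"]) / 3600, 2))  # about 18.4

# Number of distinct speakers per gender
speaker_gender = dict(zip(ciempiess_light["speaker_id"], ciempiess_light["gender"]))
print(Counter(speaker_gender.values()))                   # 53 male, 34 female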
Dataset Creation
Curation Rationale
The CIEMPIESS LIGHT (CL) Corpus has the following characteristics:
- The CL has a total of 16663 audio files from 53 male speakers and 34 female speakers. It has a total duration of 18 hours and 25 minutes.
- The total number of audio files that come from male speakers is 12521, with a total duration of 12 hours and 41 minutes. The total number of audio files that come from female speakers is 4142, with a total duration of 5 hours and 44 minutes. So, the CL is not balanced in gender.
- Every audio file in the CL has a duration of approximately 2 to 10 seconds.
- Data in the CL is classified by gender and also by speaker, so one can easily select audios from a particular set of speakers to do experiments (see the sketch after this list).
- Audio files in the CL and the first CIEMPIESS are all of the same type: in both, speakers talk about legal issues as well as topics related to UNAM and its Facultad de Derecho.
- As in the first CIEMPIESS Corpus, transcriptions in the CL were made by humans.
- Speakers in the CL are not present in any other CIEMPIESS dataset.
- Audio files in the CL are distributed in 16 kHz, 16-bit mono format.
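Because every segment carries speaker_id and gender, subsets can be selected with the standard filter method of the datasets library; a minimal sketch (the speaker ids below are only illustrative):
from datasets import load_dataset

ciempiess_light = load_dataset("ciempiess/ciempiess_light", split="train")

# Keep only segments from female speakers
female_subset = ciempiess_light.filter(lambda example: example["gender"] == "female")

# Keep only segments from a chosen set of speakers (ids here are illustrative)
chosen_speakers = {"F_32", "F_07"}
chosen_subset = ciempiess_light.filter(lambda ex: ex["speaker_id"] in chosen_speakers)

print(len(female_subset), len(chosen_subset))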
Source Data
Initial Data Collection and Normalization
The CIEMPIESS LIGHT is a Radio Corpus designed to train acoustic models for automatic speech recognition, and it is made out of recordings of spontaneous conversations in Spanish between a radio moderator and his guests. The recordings were taken as MP3 files from PODCAST UNAM; they were produced by RADIO-IUS, a radio station that belongs to UNAM, and by Mirador Universitario, a TV program that also belongs to UNAM.
Annotations
Annotation process
The annotation process is as follows (a rough segmentation sketch follows the list):
- A whole podcast is manually segmented keeping just the portions containing good quality speech.
- A second pass of segmentation is performed, this time to separate the speakers and put them in different folders.
- The resulting speech files, between 2 and 10 seconds long, are transcribed by students from different departments (computing, engineering, linguistics). Most of them are native speakers, but they have no particular training as transcribers.
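The actual segmentation was done manually, but the following sketch illustrates the idea (and the target 16 kHz, 16-bit mono format) using the pydub package; the input file name and silence thresholds are assumptions, not part of the original pipeline:
from pydub import AudioSegment
from pydub.silence import split_on_silence

# Illustrative only: load a hypothetical podcast episode and split it on silences
podcast = AudioSegment.from_mp3("podcast_episode.mp3")
chunks = split_on_silence(podcast, min_silence_len=500, silence_thresh=podcast.dBFS - 16)

for i, chunk in enumerate(chunks):
    # Keep only segments in the 2 to 10 second range used by the corpus
    if 2000 <= len(chunk) <= 10000:
        # Export in the corpus format: 16 kHz, 16-bit, mono
        chunk.set_frame_rate(16000).set_channels(1).set_sample_width(2).export(
            f"segment_{i:05d}.flac", format="flac")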
Who are the annotators?
The CIEMPIESS LIGHT Corpus was created between 2015 and 2016 by Carlos Daniel Hernández Mena, head of the social service program "Desarrollo de Tecnologías del Habla" of the "Facultad de Ingeniería" (FI) at the "Universidad Nacional Autónoma de México" (UNAM).
Personal and Sensitive Information
The dataset could contain names revealing the identity of some speakers; on the other hand, the recordings come from publicly available podcasts, so the participants have no real expectation of anonymity. In any case, by using this dataset you agree not to attempt to determine the identity of its speakers.
Considerations for Using the Data
Social Impact of Dataset
This dataset is valuable because it contains spontaneous speech.
Discussion of Biases
The dataset is not gender balanced: it comprises 53 male speakers and 34 female speakers, and the vocabulary is limited to legal issues.
Other Known Limitations
"CIEMPIESS LIGHT CORPUS" by Carlos Daniel Hernández Mena and Abel Herrera is licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Dataset Curators
The dataset was collected by students belonging to the social service program "Desarrollo de Tecnologías del Habla". It was curated by Carlos Daniel Hernández Mena in 2016.
Licensing Information
CC BY-SA 4.0
Citation Information
@misc{carlosmenaciempiesslight2017,
title={CIEMPIESS LIGHT CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.},
ldc_catalog_no={LDC2017S23},
DOI={https://doi.org/10.35111/64rg-yk97},
author={Hernandez Mena, Carlos Daniel and Herrera, Abel},
journal={Linguistic Data Consortium, Philadelphia},
year={2017},
url={https://catalog.ldc.upenn.edu/LDC2017S23},
}
Contributions
The authors want to thank Alejandro V. Mena, Elena Vera and Angélica Gutiérrez for their support of the social service program "Desarrollo de Tecnologías del Habla". We also thank the social service students for all their hard work.