---
license: mit
language:
  - multilingual
tags:
  - wav2vec2
  - automatic-speech-recognition
---

Model Card for vakyansh-wav2vec2-indian-english-enm-700

Model Details

Model Description

The model creators note in the associated paper:

The model is a self-supervised-learning-based audio pre-trained model which learns cross-lingual speech representations from raw audio across 23 Indic languages. It is built on top of wav2vec 2.0, which is trained by solving a contrastive task over masked latent speech representations while jointly learning a quantization of latents shared across all languages.
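
For context, the contrastive objective of wav2vec 2.0 (Baevski et al., 2020) that the quote refers to can be written as follows; the notation is that paper's, not anything specific to this checkpoint:

$$
\mathcal{L}_m = -\log \frac{\exp\big(\operatorname{sim}(\mathbf{c}_t, \mathbf{q}_t)/\kappa\big)}{\sum_{\tilde{\mathbf{q}} \in \mathbf{Q}_t} \exp\big(\operatorname{sim}(\mathbf{c}_t, \tilde{\mathbf{q}})/\kappa\big)}
$$

where $\mathbf{c}_t$ is the context-network output at a masked time step $t$, $\mathbf{q}_t$ the true quantized latent, $\mathbf{Q}_t$ a candidate set containing $\mathbf{q}_t$ plus $K$ distractors, $\operatorname{sim}$ cosine similarity, and $\kappa$ a temperature. A codebook-diversity term is added to the loss so that all quantizer entries are used.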

  • Developed by: Harveen Singh Chadha

  • Shared by [Optional]: Harveen Singh Chadha

  • Model type: Automatic Speech Recognition

  • Language(s) (NLP): English (Indian English, per the model name)

  • License: MIT

  • Parent Model: Wav2Vec2

  • Resources for more information: the associated paper, Vakyansh: ASR Toolkit for Low Resource Indic languages (arXiv:2203.16512)

Uses

Direct Use

This model can be used for the task of automatic speech recognition.
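
A minimal usage sketch via the transformers pipeline API; the audio filename is a hypothetical placeholder, and input audio should be 16 kHz mono to match how the training data was prepared:

from transformers import pipeline

# Build an ASR pipeline around this checkpoint (weights download on first use).
asr = pipeline(
    "automatic-speech-recognition",
    model="Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700",
)

# "sample.wav" is a hypothetical local recording (16 kHz mono recommended).
print(asr("sample.wav")["text"])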

Downstream Use [Optional]

More information needed

Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

Training Details

Training Data

The model creators note in the associated paper:

All our data has been processed through the open-sourced framework called Vakyansh. The basic steps of the process are:

1. Download and convert audio to wav format with a sample rate of 16000, one channel, and a bit rate per sample of 16.
2. Split the audio into voiced chunks using voice activity detection, making sure that all voiced chunks lie between 1 and 30 seconds.
3. Detect and reject noisy samples using a signal-to-noise ratio (SNR) approach described by [Kim and Stern, 2008]. Any audio sample below an SNR value of 25 is considered noise and is not included in the training data.
4. Perform speaker and gender identification on the audio data. A high-level representation of voice is learnt using a voice encoder based on [Wan et al., 2020]. For each audio sample, the voice encoder creates a 256-dimensional encoding that summarizes characteristics of the spoken voice. For gender identification, a support vector machine is trained on the embeddings with manually labelled data.

Our goal for speaker identification was to get a sense of the number of speakers in a particular audio source. To estimate this, we use a hierarchical clustering approach that clusters similar embeddings in the sense of cosine similarity. The number of speakers is then the number of clusters.
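
The filtering and clustering steps above can be illustrated with a short sketch. This is a reconstruction under stated assumptions, not the actual Vakyansh code: estimate_snr is a crude energy-based placeholder for the Kim and Stern (2008) estimator, and the 0.3 cosine-distance cut-off for the clustering is invented for illustration.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

SAMPLE_RATE = 16_000           # 16 kHz mono, as described above
MIN_SEC, MAX_SEC = 1.0, 30.0   # allowed voiced-chunk duration (step 2)
SNR_THRESHOLD_DB = 25.0        # chunks below this SNR are rejected (step 3)

def estimate_snr(chunk: np.ndarray) -> float:
    """Crude energy-ratio placeholder for the Kim and Stern (2008) SNR estimator."""
    frames = np.array_split(chunk, max(1, len(chunk) // 400))  # ~25 ms frames
    energies = np.array([float(np.mean(f ** 2)) + 1e-12 for f in frames])
    return 10.0 * np.log10(energies.max() / np.percentile(energies, 10))

def keep_chunk(chunk: np.ndarray) -> bool:
    """Apply the duration and SNR filters from steps 2 and 3 above."""
    duration = len(chunk) / SAMPLE_RATE
    return MIN_SEC <= duration <= MAX_SEC and estimate_snr(chunk) >= SNR_THRESHOLD_DB

def estimate_num_speakers(embeddings: np.ndarray, cut: float = 0.3) -> int:
    """Hierarchically cluster (n, 256) voice-encoder embeddings by cosine distance;
    the number of clusters is taken as the number of speakers."""
    labels = fcluster(linkage(embeddings, method="average", metric="cosine"),
                      t=cut, criterion="distance")
    return int(labels.max())   # fcluster labels start at 1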

Training Procedure

Preprocessing

More information needed

Speeds, Sizes, Times

More information needed

Evaluation

Testing Data, Factors & Metrics

Testing Data

More information needed

Factors

More information needed

Metrics

More information needed

Results

More information needed

Model Examination

More information needed

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019); a rough back-of-the-envelope application is sketched after the list below.

  • Hardware Type: 8 Tesla V100 GPUs
  • Hours used: 10,000
  • Cloud Provider: More information needed
  • Compute Region: More information needed
  • Carbon Emitted: More information needed
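
A minimal sketch of that estimate for the hardware figures above; the per-GPU power draw, PUE, and grid carbon intensity are assumptions, not reported values, so the output is only an order-of-magnitude figure.

# Back-of-the-envelope estimate in the spirit of Lacoste et al. (2019).
GPUS = 8                   # from the hardware figures above
HOURS = 10_000             # from the hardware figures above
GPU_POWER_KW = 0.3         # assumption: ~300 W TDP per Tesla V100 (SXM2)
PUE = 1.5                  # assumption: datacentre power usage effectiveness
KG_CO2_PER_KWH = 0.4       # assumption: grid carbon intensity

energy_kwh = GPUS * HOURS * GPU_POWER_KW * PUE
print(f"Energy: {energy_kwh:,.0f} kWh")                           # 36,000 kWh
print(f"Emissions: {energy_kwh * KG_CO2_PER_KWH:,.0f} kg CO2eq")  # 14,400 kg CO2eq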

Technical Specifications [optional]

Model Architecture and Objective

More information needed

Compute Infrastructure

More information needed

Hardware

8 Tesla V100 GPUs (see Environmental Impact above)

Software

More information needed

Citation

BibTeX:

@misc{chadha2022vakyansh,
    title={Vakyansh: ASR Toolkit for Low Resource Indic languages},
    author={Harveen Singh Chadha and Anirudh Gupta and Priyanshi Shah and Neeraj Chhimwal and Ankur Dhuriya and Rishabh Gaur and Vivek Raghavan},
    year={2022},
    eprint={2203.16512},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Glossary [optional]

More information needed

More Information [optional]

More information needed

Model Card Authors [optional]

Harveen Singh Chadha in collaboration with Ezi Ozoani and the Hugging Face team

Model Card Contact

More information needed

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoProcessor, AutoModelForCTC

processor = AutoProcessor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700")

model = AutoModelForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700")
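
To actually transcribe audio with the loaded processor and model, a sketch along these lines should work; torch and soundfile are assumed to be installed, and "sample.wav" is a hypothetical 16 kHz mono recording:

import torch
import soundfile as sf

# Read raw audio; the model expects 16 kHz mono input.
speech, sample_rate = sf.read("sample.wav")

# Convert the waveform to model inputs and run CTC inference.
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: most likely token per frame, then collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])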