---
datasets:
- voidful/NMSQA
language:
- en
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
# Model Card for wav2vec2-base-960h-nmsqa-asr

<!-- Provide a quick summary of what the model is/does. -->

This model was fine-tuned from the facebook/wav2vec2-base-960h model on the NMSQA dataset. The task is Automatic Speech Recognition (ASR), using both the question and context sentences.  
This checkpoint reaches a WER of 10.58 on the dev set.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

The model inputs come from the NMSQA dataset. The dataset's original task is Spoken QA, but here the sentences are used for ASR.
The input audio comes from both the context passages and the questions. This ASR model was trained on the training and dev sets of NMSQA.

- **Developed by:** Merve Menevse
- **Model type:** Supervised ML
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** facebook/wav2vec2-base-960h


## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to be used as a fine-tuned wav2vec2 model for English ASR.

## How to Get Started with the Model

     from transformers import AutoProcessor, AutoModelForCTC

     processor = AutoProcessor.from_pretrained("menevsem/wav2vec2-base-960h-nmsqa-asr")
     model = AutoModelForCTC.from_pretrained("menevsem/wav2vec2-base-960h-nmsqa-asr")
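
A minimal transcription sketch, assuming a 16 kHz mono recording; the file name and the use of soundfile for loading are placeholders:

     import torch
     import soundfile as sf

     # load a 16 kHz mono waveform (the path is a placeholder)
     speech, sample_rate = sf.read("sample.wav")

     # featurize and run the CTC model
     inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
     with torch.no_grad():
         logits = model(inputs.input_values).logits

     # greedy CTC decoding back to text
     predicted_ids = torch.argmax(logits, dim=-1)
     transcription = processor.batch_decode(predicted_ids)[0]
     print(transcription)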

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model was trained using the voidful/NMSQA train and dev sets.
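
A minimal sketch of loading the data with the Hugging Face Datasets library; the exact split names should be checked against the NMSQA dataset card:

     from datasets import load_dataset

     # loads all available splits; split names are defined by the dataset card
     nmsqa = load_dataset("voidful/NMSQA")
     print(nmsqa)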

## Evaluation

For evaluation, the WER (word error rate) metric is computed on the dev set.
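
A minimal sketch of the WER computation, assuming the Hugging Face Evaluate library and that `predictions` and `references` are lists of predicted and reference transcripts:

     import evaluate

     wer_metric = evaluate.load("wer")

     # placeholder transcripts; in practice these come from model outputs and the dev set
     predictions = ["the question is spoken aloud"]
     references = ["the question was spoken aloud"]

     wer = wer_metric.compute(predictions=predictions, references=references)
     print(f"WER: {wer:.4f}")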