Model Card: LEVI Whisper Large-v2 Fine-Tuned Model
Model Information
Model Name: levicu/LEVI_whisper_large-v2
Description: This model is a fine-tuned version of OpenAI's Whisper Large-v2, adapted for speech recognition on the LEVI v2 dataset, which consists of classroom audiovisual recordings.
Model Architecture: openai/whisper-large-v2
Dataset: LEVI_LoFi_v2/TRAIN (per-utterance transcripts with 16 kHz WAV audio)
Both student and tutor speech were used.
Manifest: LEVI_LoFi_v2_TRAIN_punc+cased.csv
Training Details
Training Procedure:
LoRA (Low-Rank Adaptation) parameter-efficient fine-tuning with the following configuration:
r=32
lora_alpha=64
target_modules=["q_proj", "v_proj"]
lora_dropout=0.05
bias="none"
INT8 quantization
Trained for 6 epochs with a learning rate of 1e-4, 100 warmup steps, and no gradient accumulation.
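The training setup above corresponds roughly to the following peft configuration. This is a minimal sketch, not the card's actual training script; the use of prepare_model_for_kbit_training for the INT8 path is an assumption.

```python
# Assumed sketch of the LoRA + INT8 setup described above: attach rank-32
# adapters to Whisper's attention q_proj / v_proj matrices on top of an
# 8-bit quantized base model. Not the exact training script from this card.
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",
    load_in_8bit=True,       # INT8 quantization of the base weights
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```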
Evaluation Metrics: Word Error Rate (WER)
Evaluation
Testing Data
Test Data 1: LoFi Students (LEVI_LoFi_v2_TEST_punc+cased_student)
Test Data 2: LoFi Tutors (LEVI_LoFi_v2_TEST_punc+cased_tutor)
Test Data 3: HiFi Students (LEVI_orig11_HiFi_punc+cased_student)
Test Data 4: HiFi Tutors (LEVI_orig11_HiFi_punc+cased_tutor)
Metric
Word Error Rate (WER)
Results
Test Data 1 (LoFi Students): 39.9% WER
Test Data 2 (LoFi Tutors): 13.7% WER
Test Data 3 (HiFi Students): 42.8% WER
Test Data 4 (HiFi Tutors): 19.7% WER
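For reference, WER is the word-level edit distance (substitutions + insertions + deletions) divided by the number of words in the reference transcript. A minimal self-contained sketch of the metric (the card's actual evaluation tooling is not specified):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance divided by
    the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# 1 substitution ("sat" -> "sit") + 1 deletion ("the") over 6 reference words
print(wer("the cat sat on the mat", "the cat sit on mat"))
```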
Usage
Usage: The model performs automatic speech recognition; inputs are audio files (16 kHz), and the model outputs text transcriptions.
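A minimal inference sketch, assuming the repository ships PEFT adapter weights on top of the openai/whisper-large-v2 base model; the file name "audio.wav" and the use of librosa for loading are illustrative assumptions.

```python
# Hypothetical inference sketch: load the LoRA adapters from this repo on top
# of the Whisper Large-v2 base model, then transcribe a 16 kHz audio file.
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "levicu/LEVI_whisper_large-v2")
model.eval()

audio, _ = librosa.load("audio.wav", sr=16000)  # resample to 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(input_features=inputs.input_features)
text = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(text)
```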
Limitations and Ethical Considerations
Limitations: None provided.
Ethical Considerations: Consider the ethical implications of deploying this model; the training data consists of classroom recordings, which may contain sensitive or private information about students and tutors.