---
language: hr
datasets:
- parlaspeech-hr
tags:
- audio
- automatic-speech-recognition
- parlaspeech
widget:
- example_title: example 1
  src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/SBiNG.wav
- example_title: example 2
  src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020578b.flac.wav

---

# wav2vec2-xls-r-parlaspeech-hr

This model for Croatian ASR is based on the [facebook/wav2vec2-xls-r-300m model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) and was fine-tuned on 72 hours of recordings and transcripts from the Croatian parliament. The training dataset is an early result of the second iteration of the [ParlaMint project](https://www.clarin.eu/content/parlamint-towards-comparable-parliamentary-corpora), within which it will be extended and published under the name ParlaSpeech-HR under an open licence.

The efforts resulting in this model were coordinated by Nikola Ljubešić; the rough manual data alignment was performed by Ivo-Pavao Jazbec; the method for fine automatic data alignment from [Plüss et al.](https://arxiv.org/abs/2010.02810) was applied by Vuk Batanović and Lenka Bajčetić; the transcripts were normalised by Danijel Korzinek; and the final modelling was performed by Peter Rupnik.

An initial evaluation on partially noisy data yielded a word error rate (WER) of 13.68% and a character error rate (CER) of 4.56%.
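
For reference, WER and CER can be computed with the [jiwer](https://pypi.org/project/jiwer/) library. A minimal sketch; the reference and hypothesis strings below are illustrative placeholders, not taken from the actual evaluation data:

```python
import jiwer

# illustrative strings, not from the evaluation set
reference = "veliki broj poslovnih subjekata posluje sa minusom"
hypothesis = "veliki broj poslovnih subjekata posluje sa minusom velik dio"

print(f"WER: {jiwer.wer(reference, hypothesis):.2%}")  # word error rate
print(f"CER: {jiwer.cer(reference, hypothesis):.2%}")  # character error rate
```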

## Usage in `transformers`

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
import os

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# load the processor and model, and move the model to the target device
processor = Wav2Vec2Processor.from_pretrained(
    "classla/wav2vec2-xls-r-parlaspeech-hr")
model = Wav2Vec2ForCTC.from_pretrained(
    "classla/wav2vec2-xls-r-parlaspeech-hr").to(device)


# download an example wav file:
os.system("wget https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav")

# read the wav file 
speech, sample_rate = sf.read("00020570a.flac.wav")
input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values.to(device)

# remove the raw wav file
os.system("rm 00020570a.flac.wav")

# retrieve logits without tracking gradients
with torch.no_grad():
    logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()

# transcription: 'veliki broj poslovnih subjekata posluje sa minusom velik dio'
```
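
Like other wav2vec2 models, this model expects 16 kHz mono input; the example file above is already sampled at 16 kHz. If your own recordings use a different sampling rate, resample them first, building on the snippet above. A sketch using `torchaudio`, where `my_audio.wav` is a placeholder path:

```python
import torchaudio

# load a recording and resample to the 16 kHz rate the model expects;
# "my_audio.wav" is a placeholder for your own file
waveform, orig_rate = torchaudio.load("my_audio.wav")
if orig_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=orig_rate, new_freq=16_000)
speech = waveform.squeeze(0).numpy()
input_values = processor(speech, sampling_rate=16_000, return_tensors="pt").input_values.to(device)
```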

## Training hyperparameters

In fine-tuning, the following arguments were used:

| arg                           | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 16    |
| `gradient_accumulation_steps` | 4     |
| `num_train_epochs`            | 8     |
| `learning_rate`               | 3e-4  |
| `warmup_steps`                | 500   |
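
These values map directly onto `transformers.TrainingArguments`. A sketch; everything other than the five values in the table (the output directory in particular) is an assumption for illustration, not the original training configuration:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-parlaspeech-hr",  # assumed, not from the original run
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    num_train_epochs=8,
    learning_rate=3e-4,
    warmup_steps=500,
)
```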