---
dataset_info:
  features:
  - name: name
    dtype: string
  - name: speaker_embeddings
    sequence: float32
  splits:
  - name: validation
    num_bytes: 634175
    num_examples: 305
  download_size: 979354
  dataset_size: 634175
license: mit
language:
- ar
size_categories:
- n<1K
task_categories:
- text-to-speech
- audio-to-audio
pretty_name: Arabic(M) Speaker Embeddings
---

# Arabic Speaker Embeddings extracted from ASC and ClArTTS

There is one speaker embedding for each utterance in the validation set of both datasets. The speaker embeddings are 512-element X-vectors.

[Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus) has 100 files for a single male speaker and [ClArTTS](https://huggingface.co/datasets/MBZUAI/ClArTTS) has 205 files for a single male speaker.

The X-vectors were extracted using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), which uses the `speechbrain/spkrec-xvect-voxceleb` model.

Usage:

```python
import torch
from datasets import load_dataset

embeddings_dataset = load_dataset("herwoww/arabic_xvector_embeddings", split="validation")
# unsqueeze(0) adds a batch dimension, giving a (1, 512) tensor.
speaker_embedding = torch.tensor(embeddings_dataset[1]["speaker_embeddings"]).unsqueeze(0)
```
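The stored embeddings are plain lists of 512 floats, while TTS models typically expect a batched tensor of shape `(1, 512)`. If you want to check the conversion without downloading the dataset, a minimal sketch using a placeholder vector (`dummy_embedding` is hypothetical, standing in for `embeddings_dataset[i]["speaker_embeddings"]`):

```python
import torch

# Placeholder 512-element X-vector; in practice this comes from
# embeddings_dataset[i]["speaker_embeddings"].
dummy_embedding = [0.0] * 512

# unsqueeze(0) adds the batch dimension expected by models like SpeechT5.
speaker_embedding = torch.tensor(dummy_embedding).unsqueeze(0)
print(speaker_embedding.shape)  # torch.Size([1, 512])
```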