dusdn committed on
Commit 7a141ea (1 parent: ea025df)

Upload 4 files

Files changed (4)
  1. README.md +90 -0
  2. avg_model.pt +3 -0
  3. config.yaml +78 -0
  4. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,90 @@
---
license: cc-by-4.0
language:
- en
pipeline_tag: summarization
tags:
- speaker embedding
- wespeaker
- speaker modelling
---

Official model provided by the [Wespeaker](https://github.com/wenet-e2e/wespeaker) project: a ResNet293-based r-vector (after large-margin fine-tuning).

The model is trained on the VoxCeleb2 dev set, which contains 5,994 speakers.

## Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/wenet-e2e/wespeaker
- **Paper:** https://arxiv.org/pdf/2210.17016.pdf
- **Demo:** https://huggingface.co/spaces/wenet/wespeaker_demo

## Results on VoxCeleb

All numbers are equal error rates (EER, %); LM denotes large-margin fine-tuning and AS-Norm adaptive score normalization.

| Model | Params | Flops | LM | AS-Norm | vox1-O-clean | vox1-E-clean | vox1-H-clean |
|:------|:------:|:------|:--:|:-------:|:------------:|:------------:|:------------:|
| ResNet293-TSTP-emb256 | 28.62M | 28.10G | × | × | 0.595 | 0.756 | 1.433 |
| | | | × | √ | 0.537 | 0.701 | 1.276 |
| | | | √ | × | 0.532 | 0.707 | 1.311 |
| | | | √ | √ | **0.447** | **0.657** | **1.183** |

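As a refresher on how the EER metric in the table is computed, here is a minimal NumPy sketch (illustrative only, not wespeaker's own scoring code): EER is the operating point where the false-acceptance rate on non-target trials equals the false-rejection rate on target trials.

``` python
import numpy as np

def compute_eer(scores, labels):
    """Equal error rate: threshold where false-acceptance rate (FAR)
    equals false-rejection rate (FRR). labels: 1=target, 0=non-target."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thresholds = np.unique(scores)
    far = np.array([np.mean(scores[labels == 0] >= t) for t in thresholds])
    frr = np.array([np.mean(scores[labels == 1] < t) for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))  # closest crossing point
    return (far[i] + frr[i]) / 2

# Perfectly separated trial scores give an EER of 0
eer = compute_eer([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

The reported numbers are this quantity times 100, evaluated over the official VoxCeleb1 trial lists.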
## Install Wespeaker

``` sh
pip install git+https://github.com/wenet-e2e/wespeaker.git
```

For a development install:

``` sh
git clone https://github.com/wenet-e2e/wespeaker.git
cd wespeaker
pip install -e .
```


### Command line Usage

``` sh
$ wespeaker -p resnet293_download_dir --task embedding --audio_file audio.wav --output_file embedding.txt
$ wespeaker -p resnet293_download_dir --task embedding_kaldi --wav_scp wav.scp --output_file /path/to/embedding
$ wespeaker -p resnet293_download_dir --task similarity --audio_file audio.wav --audio_file2 audio2.wav
$ wespeaker -p resnet293_download_dir --task diarization --audio_file audio.wav
```

### Python Programming Usage

``` python
import wespeaker

model = wespeaker.load_model_local('resnet293_download_dir')
# set_gpu enables CUDA inference; a negative number selects the CPU
model.set_gpu(0)

# embedding/embedding_kaldi/similarity/diarization
embedding = model.extract_embedding('audio.wav')
utt_names, embeddings = model.extract_embedding_list('wav.scp')
similarity = model.compute_similarity('audio1.wav', 'audio2.wav')
diar_result = model.diarize('audio.wav')

# register and recognize
model.register('spk1', 'spk1_audio1.wav')
model.register('spk2', 'spk2_audio1.wav')
model.register('spk3', 'spk3_audio1.wav')
result = model.recognize('spk1_audio2.wav')
```
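Speaker similarity between two embeddings is conventionally scored with cosine similarity. A minimal sketch of that scoring step, using random vectors as hypothetical stand-ins for `extract_embedding` outputs (this model's embeddings are 256-dimensional, per `config.yaml`); wespeaker's own `compute_similarity` may post-process the raw cosine value:

``` python
import numpy as np

def cosine_score(a, b):
    # Cosine similarity in [-1, 1]; higher means more likely the same speaker
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb1 = rng.standard_normal(256)  # stand-in for model.extract_embedding('audio1.wav')
emb2 = rng.standard_normal(256)  # stand-in for model.extract_embedding('audio2.wav')
score = cosine_score(emb1, emb2)
```

A decision is then made by thresholding the score, optionally after score normalization such as the AS-Norm used in the results table.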

## Citation
```bibtex
@inproceedings{wang2023wespeaker,
  title={Wespeaker: A research and production oriented speaker embedding learning toolkit},
  author={Wang, Hongji and Liang, Chengdong and Wang, Shuai and Chen, Zhengyang and Zhang, Binbin and Xiang, Xu and Deng, Yanlei and Qian, Yanmin},
  booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}
```
avg_model.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e7b8447ea84345e26432461b4dda224d1af34a3985a438268e0dfbbde059b1ce
size 133998630
config.yaml ADDED
@@ -0,0 +1,78 @@
data_type: shard
dataloader_args:
  batch_size: 32
  drop_last: true
  num_workers: 16
  pin_memory: false
  prefetch_factor: 8
dataset_args:
  aug_prob: 0.6
  fbank_args:
    dither: 1.0
    frame_length: 25
    frame_shift: 10
    num_mel_bins: 80
  num_frms: 200
  shuffle: true
  shuffle_args:
    shuffle_size: 2500
  spec_aug: false
  spec_aug_args:
    max_f: 8
    max_t: 10
    num_f_mask: 1
    num_t_mask: 1
    prob: 0.6
  speed_perturb: true
exp_dir: exp/ResNet293-TSTP-emb256-fbank80-num_frms200-aug0.6-spTrue-saFalse-ArcMargin-SGD-epoch150
gpus:
- 0
- 1
log_batch_interval: 100
loss: CrossEntropyLoss
loss_args: {}
margin_scheduler: MarginScheduler
margin_update:
  epoch_iter: 17062
  final_margin: 0.2
  fix_start_epoch: 40
  increase_start_epoch: 20
  increase_type: exp
  initial_margin: 0.0
  update_margin: true
model: ResNet293
model_args:
  embed_dim: 256
  feat_dim: 80
  pooling_func: TSTP
  two_emb_layer: false
model_init: null
noise_data: data/musan/lmdb
num_avg: 2
num_epochs: 150
optimizer: SGD
optimizer_args:
  lr: 0.1
  momentum: 0.9
  nesterov: true
  weight_decay: 0.0001
projection_args:
  easy_margin: false
  embed_dim: 256
  num_class: 17982
  project_type: arc_margin
  scale: 32.0
reverb_data: data/rirs/lmdb
save_epoch_interval: 5
scheduler: ExponentialDecrease
scheduler_args:
  epoch_iter: 17062
  final_lr: 5.0e-05
  initial_lr: 0.1
  num_epochs: 150
  scale_ratio: 1.0
  warm_from_zero: true
  warm_up_epoch: 6
seed: 42
train_data: data/vox2_dev/shard.list
train_label: data/vox2_dev/utt2spk
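The `scheduler_args` imply a total step budget of `epoch_iter` (17062 optimizer steps per epoch) times `num_epochs` (150). A quick sketch of that arithmetic, with the learning-rate curve approximated as a plain exponential interpolation from `initial_lr` to `final_lr` (this ignores the 6-epoch warm-up, and wespeaker's exact `ExponentialDecrease` implementation may differ):

``` python
# Values copied from config.yaml above
epoch_iter = 17062          # optimizer steps per epoch
num_epochs = 150
initial_lr, final_lr = 0.1, 5.0e-05

total_steps = epoch_iter * num_epochs   # steps the scheduler decays over

def lr_at(step):
    # Exponential interpolation: lr goes from initial_lr at step 0
    # to final_lr at total_steps
    return initial_lr * (final_lr / initial_lr) ** (step / total_steps)

print(total_steps)  # 2559300
```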
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dbb1ccc7754caff552ebc46347a51aaee2669bb24efc740e665d1a1133d20e98
size 114336285
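Both binary files in this commit are Git LFS pointers; after fetching the real content (e.g. with `git lfs pull`), each checkpoint can be inspected as an ordinary PyTorch state dict (`avg_model.pt` is presumably the average of the last `num_avg` = 2 checkpoints, per `config.yaml`). A sketch of the load step, round-tripping a dummy dict since the real file is not present here:

``` python
import os
import tempfile
import torch

# In practice: state_dict = torch.load('avg_model.pt', map_location='cpu')
# Demonstrate the same round trip with a dummy state dict:
path = os.path.join(tempfile.mkdtemp(), 'demo.pt')
torch.save({'layer.weight': torch.zeros(2, 3)}, path)
state_dict = torch.load(path, map_location='cpu')
print(sorted(state_dict))  # ['layer.weight']
```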