anton-l (HF staff) and nazneen committed
Commit
eca583f
1 Parent(s): 0a1a74d

model documentation (#3)


- model documentation (8aaa52ca4ae1b7794e1a93d21ff9cc9aa341552d)
- Update README.md (96ea6b9690c241df88186f98e4abef7303e56501)


Co-authored-by: Nazneen Rajani <nazneen@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +197 -0
README.md ADDED
@@ -0,0 +1,197 @@
---
language: en
datasets:
- superb
tags:
- speech
- audio
- wav2vec2
- audio-classification
license: apache-2.0
---

# Model Card for wav2vec2-base-superb-sv

# Model Details

## Model Description

- **Developed by:** Shu-wen Yang et al.
- **Shared by:** Anton Lozhkov
- **Model type:** Wav2Vec2 with an XVector head
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:**
  - **Parent Model:** wav2vec2-large-lv60
- **Resources for more information:**
  - [GitHub Repo](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1)
  - [Associated Paper](https://arxiv.org/abs/2105.01051)

# Uses

## Direct Use

This is a ported version of
[S3PRL's Wav2Vec2 for the SUPERB Speaker Verification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1).

The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

For more information, refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051).
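
Since the model expects 16 kHz input, here is a minimal sketch (not part of the original card) for resampling audio before inference; `torchaudio` and the file name are assumptions:

```python
# Hedged sketch: bring arbitrary audio to the 16 kHz this model expects.
# "speaker1.wav" is a hypothetical file; torchaudio is an assumed dependency.
import torchaudio

waveform, sample_rate = torchaudio.load("speaker1.wav")
if sample_rate != 16000:
    # Resample to 16 kHz; any good resampler works equally well here
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16000)
```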

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

See the [superb dataset card](https://huggingface.co/datasets/superb).
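
For a quick look at the data, a hedged sketch using the `datasets` library; the `si` (speaker identification) config is shown only as a stand-in, since the `sv` task's VoxCeleb1 audio is distributed separately:

```python
# Hedged sketch: browsing the superb dataset. The "si" config is an
# illustrative assumption, not the exact sv (speaker verification) data.
from datasets import load_dataset

dataset = load_dataset("superb", "si", split="test")
print(dataset[0]["file"], dataset[0]["label"])
```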

## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

See the [superb dataset card](https://huggingface.co/datasets/superb).

### Factors

More information needed

### Metrics

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed

# Citation

**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2006.11477,
  doi = {10.48550/ARXIV.2006.11477},
  url = {https://arxiv.org/abs/2006.11477},
  author = {Baevski, Alexei and Zhou, Henry and Mohamed, Abdelrahman and Auli, Michael},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
  title = {wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations},
  publisher = {arXiv},
  year = {2020},
}

@misc{https://doi.org/10.48550/arxiv.2105.01051,
  doi = {10.48550/ARXIV.2105.01051},
  url = {https://arxiv.org/abs/2105.01051},
  author = {Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y. and Liu, Andy T. and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and Huang, Tzu-Hsien and Tseng, Wei-Cheng and Lee, Ko-tik and Liu, Da-Rong and Huang, Zili and Dong, Shuyan and Li, Shang-Wen and Watanabe, Shinji and Mohamed, Abdelrahman and Lee, Hung-yi},
  keywords = {Computation and Language (cs.CL), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
  title = {SUPERB: Speech processing Universal PERformance Benchmark},
  publisher = {arXiv},
  year = {2021},
}
```

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Anton Lozhkov, in collaboration with Ezi Ozoani and the Hugging Face team.

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoProcessor, AutoModelForAudioXVector

# Load the feature extractor and the x-vector model from the Hub
processor = AutoProcessor.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model = AutoModelForAudioXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")
```
</details>
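
Beyond loading, a hedged usage sketch (not part of the original card) for scoring two utterances against each other: the random waveforms stand in for real 16 kHz mono recordings, and the 0.86 threshold is illustrative rather than an official operating point.

```python
import numpy as np
import torch
from transformers import AutoProcessor, AutoModelForAudioXVector

processor = AutoProcessor.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model = AutoModelForAudioXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")

# Placeholder inputs: in practice, load two real utterances sampled at 16 kHz.
audio1 = np.random.randn(16000).astype(np.float32)
audio2 = np.random.randn(16000).astype(np.float32)

inputs = processor([audio1, audio2], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
    embeddings = model(**inputs).embeddings  # one x-vector per utterance

# Cosine similarity between the two L2-normalized x-vectors
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)

# Illustrative threshold only; tune on a held-out verification set.
print("same speaker" if similarity > 0.86 else "different speakers")
```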