Commit 16fdb17 (parent 5a9b5b1) by Xmm: Create README.md
---
language:
- multilingual
- af
- sq
- ar
- an
- hy
- ast
- az
- ba
- eu
- bar
- be
- bn
- inc
- bs
- br
- bg
- my
- ca
- ceb
- ce
- zh
- cv
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- ka
- de
- el
- gu
- ht
- he
- hi
- hu
- is
- io
- id
- ga
- it
- ja
- jv
- kn
- kk
- ky
- ko
- la
- lv
- lt
- roa
- nds
- lmo
- mk
- mg
- ms
- ml
- mr
- mn
- min
- ne
- new
- nb
- nn
- oc
- fa
- pms
- pl
- pt
- pa
- ro
- ru
- sco
- sr
- scn
- sk
- sl
- aze
- es
- su
- sw
- sv
- tl
- tg
- th
- ta
- tt
- te
- tr
- uk
- ur
- uz
- vi
- vo
- war
- cy
- fry
- pnb
- yo
license: apache-2.0
datasets:
- wikipedia
---

# Model Card for DistilBERT base multilingual (cased)

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

This model is a distilled version of the [BERT base multilingual model](https://huggingface.co/bert-base-multilingual-cased/). The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). This model is cased: it does make a difference between english and English.

The model is trained on the concatenation of Wikipedia in 104 different languages listed [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).
The model has 6 layers, a hidden size of 768, and 12 attention heads, totaling 134M parameters (compared to 177M parameters for mBERT-base).
On average, this model, referred to as DistilmBERT, is twice as fast as mBERT-base.

We encourage potential users of this model to check out the [BERT base multilingual model card](https://huggingface.co/bert-base-multilingual-cased) to learn more about usage, limitations and potential biases.

- **Developed by:** Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf (Hugging Face)
- **Model type:** Transformer-based language model
- **Language(s) (NLP):** 104 languages; see the full list [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages)
- **License:** Apache 2.0
- **Related Models:** [BERT base multilingual model](https://huggingface.co/bert-base-multilingual-cased)
- **Resources for more information:**
  - [GitHub Repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)
  - [Associated Paper](https://arxiv.org/abs/1910.01108)

# Uses

## Direct Use and Downstream Use

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation you should look at models like GPT-2.
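
As a minimal sketch of the downstream-use path, the checkpoint can be loaded with a token-classification head ready for fine-tuning. The label set below is hypothetical, purely for illustration; a real fine-tuning run would define labels matching its dataset:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label set for illustration only (not part of this card).
labels = ["O", "B-PER", "I-PER"]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-multilingual-cased",
    num_labels=len(labels),
)

# One forward pass; the logits carry one score per token per label.
inputs = tokenizer("Hello I'm a model.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)  # (1, sequence_length, 3)
```

The classification head is randomly initialized here (the base checkpoint carries no task head), so the logits are meaningless until the model is fine-tuned.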

## Out of Scope Use

The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for its abilities.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

# Training Details

- The model was pretrained with the supervision of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the concatenation of Wikipedia in 104 different languages.
- The model has 6 layers, a hidden size of 768, and 12 attention heads, totaling 134M parameters.
- Further information about the training procedure and data is included in the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model card.

# Evaluation

The model developers report the following accuracy results for DistilmBERT (see [GitHub Repo](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)):

> Here are the results on the test sets for 6 of the languages available in XNLI. The results are computed in the zero-shot setting (trained on the English portion and evaluated on the target language portion):

| Model | English | Spanish | Chinese | German | Arabic | Urdu |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| mBERT base cased (computed)   | 82.1 | 74.6 | 69.1 | 72.3 | 66.4 | 58.5 |
| mBERT base uncased (reported) | 81.4 | 74.3 | 63.8 | 70.5 | 62.1 | 58.3 |
| DistilmBERT                   | 78.2 | 69.1 | 64.0 | 66.3 | 59.1 | 54.7 |

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Citation

```bibtex
@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}
```

APA
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

# How to Get Started With the Model

You can use the model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-multilingual-cased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'score': 0.040800247341394424,
  'sequence': "Hello I'm a virtual model.",
  'token': 37859,
  'token_str': 'virtual'},
 {'score': 0.020015988498926163,
  'sequence': "Hello I'm a big model.",
  'token': 22185,
  'token_str': 'big'},
 {'score': 0.018680453300476074,
  'sequence': "Hello I'm a Hello model.",
  'token': 31178,
  'token_str': 'Hello'},
 {'score': 0.017396586015820503,
  'sequence': "Hello I'm a model model.",
  'token': 13192,
  'token_str': 'model'},
 {'score': 0.014229810796678066,
  'sequence': "Hello I'm a perfect model.",
  'token': 43477,
  'token_str': 'perfect'}]
```
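
If you want raw contextual features rather than fill-mask predictions, the checkpoint can also be loaded as a plain encoder. A minimal sketch; the input sentence is just a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")

# Encode any text and read out the final-layer hidden states.
encoded = tokenizer("Hello I'm a model.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)
print(output.last_hidden_state.shape)  # (1, sequence_length, 768)
```

The 768-dimensional per-token vectors from `last_hidden_state` are the features a downstream head would consume.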