onnxport committed on
Commit 0574a55
1 Parent(s): 5176333

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +220 -0

README.md ADDED
---
language: en
inference: false
tags:
- onnx
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# ONNX export of distilbert-base-uncased

This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). This model is uncased: it does
not make a difference between english and English.

## Model description

DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:

- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which
usually see the words one after the other, or from autoregressive models like GPT, which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the
BERT base model.

This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference and downstream tasks.
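
To make the combination of these objectives concrete, here is a minimal PyTorch sketch of how the three loss terms could be computed. It is illustrative only: the actual distillation code lives in the linked `research_projects/distillation` folder, and names such as `temperature` and the assumed tensor shapes are this sketch's own choices.

```python
# Illustrative sketch of the three pretraining objectives (NOT the exact
# training code). Assumes logits of shape (batch, seq_len, vocab_size),
# hidden states of shape (batch, seq_len, hidden_size) and MLM labels
# where positions that are not masked are set to -100.
import torch
import torch.nn.functional as F

def distillation_objectives(student_logits, teacher_logits,
                            student_hidden, teacher_hidden,
                            mlm_labels, temperature=2.0):
    # 1) Distillation loss: match the teacher's (softened) output distribution.
    loss_distill = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # 2) Masked language modeling loss on the original hard labels.
    loss_mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )

    # 3) Cosine embedding loss: pull the student's hidden states toward the teacher's.
    s = student_hidden.view(-1, student_hidden.size(-1))
    t = teacher_hidden.view(-1, teacher_hidden.size(-1))
    loss_cos = F.cosine_embedding_loss(s, t, torch.ones(s.size(0)))

    return loss_distill + loss_mlm + loss_cos
```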

## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream
task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for fine-tuned versions on a task
that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
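
As a rough illustration of that fine-tuning starting point, the snippet below loads the checkpoint with a freshly initialized sequence-classification head. It is a sketch only: `num_labels=2` and the example sentence are placeholders, not part of this repository.

```python
# Minimal sketch: DistilBERT with an untrained sequence-classification head,
# as a starting point for fine-tuning. num_labels=2 is a placeholder.
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, num_labels); meaningless until fine-tuned
```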

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a role model. [SEP]",
  'score': 0.05292855575680733,
  'token': 2535,
  'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.03968575969338417,
  'token': 4827,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a business model. [SEP]",
  'score': 0.034743521362543106,
  'token': 2449,
  'token_str': 'business'},
 {'sequence': "[CLS] hello i'm a model model. [SEP]",
  'score': 0.03462274372577667,
  'token': 2944,
  'token_str': 'model'},
 {'sequence': "[CLS] hello i'm a modeling model. [SEP]",
  'score': 0.018145186826586723,
  'token': 11643,
  'token_str': 'modeling'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
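
Because this repository hosts an ONNX export, the graph can also be run directly with ONNX Runtime. The sketch below is not from the original card: it assumes the exported file is named `model.onnx` and exposes the usual `input_ids` / `attention_mask` inputs, so check the actual file and input names in this repository before relying on it.

```python
# Hedged sketch: feature extraction through ONNX Runtime.
# Assumes the export is stored as model.onnx with int64 inputs named
# input_ids and attention_mask -- verify against the actual export.
import numpy as np
import onnxruntime as ort
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
session = ort.InferenceSession("model.onnx")

encoded = tokenizer("Replace me by any text you'd like.", return_tensors="np")
onnx_inputs = {
    "input_ids": encoded["input_ids"].astype(np.int64),
    "attention_mask": encoded["attention_mask"].astype(np.int64),
}
# The first output is typically the last hidden state: (batch, seq_len, hidden_size).
last_hidden_state = session.run(None, onnx_inputs)[0]
print(last_hidden_state.shape)
```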

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")

[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
  'score': 0.1235365942120552,
  'token': 20987,
  'token_str': 'blacksmith'},
 {'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
  'score': 0.10142576694488525,
  'token': 10533,
  'token_str': 'carpenter'},
 {'sequence': '[CLS] the white man worked as a farmer. [SEP]',
  'score': 0.04985016956925392,
  'token': 7500,
  'token_str': 'farmer'},
 {'sequence': '[CLS] the white man worked as a miner. [SEP]',
  'score': 0.03932540491223335,
  'token': 18594,
  'token_str': 'miner'},
 {'sequence': '[CLS] the white man worked as a butcher. [SEP]',
  'score': 0.03351764753460884,
  'token': 14998,
  'token_str': 'butcher'}]

>>> unmasker("The Black woman worked as a [MASK].")

[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
  'score': 0.13283951580524445,
  'token': 13877,
  'token_str': 'waitress'},
 {'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
  'score': 0.12586183845996857,
  'token': 6821,
  'token_str': 'nurse'},
 {'sequence': '[CLS] the black woman worked as a maid. [SEP]',
  'score': 0.11708822101354599,
  'token': 10850,
  'token_str': 'maid'},
 {'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
  'score': 0.11499975621700287,
  'token': 19215,
  'token_str': 'prostitute'},
 {'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
  'score': 0.04722772538661957,
  'token': 22583,
  'token_str': 'housekeeper'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

DistilBERT was pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
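
As a quick, purely illustrative check of this format (using the original `distilbert-base-uncased` tokenizer rather than anything specific to this ONNX export), encoding a sentence pair reproduces the layout above:

```python
# Encoding a sentence pair shows the [CLS] ... [SEP] ... [SEP] layout.
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'sentence', 'a', '[SEP]', 'sentence', 'b', '[SEP]']
```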

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases sentence B is another random sentence from the corpus. Note that what is considered a sentence here is
a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following (a short sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.
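
A minimal sketch of that 80/10/10 rule, written here only to illustrate the description above and not taken from the actual pretraining code:

```python
# Illustrative BERT-style masking: 15% of tokens selected; of those,
# 80% -> [MASK], 10% -> random token, 10% left unchanged.
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    masked = list(token_ids)
    labels = [-100] * len(token_ids)        # -100 = position not predicted
    for i, token_id in enumerate(token_ids):
        if random.random() >= mlm_prob:
            continue
        labels[i] = token_id                # the model must predict the original token
        roll = random.random()
        if roll < 0.8:
            masked[i] = mask_token_id       # 80%: replace with [MASK]
        elif roll < 0.9:
            masked[i] = random.randrange(vocab_size)  # 10%: random token
        # remaining 10%: keep the original token
    return masked, labels
```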

### Pretraining

The model was trained on eight 16 GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameter
details.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
|      | 82.2 | 88.5 | 89.2 | 91.3  | 51.3 | 85.8  | 87.5 | 59.9 |


### BibTeX entry and citation info

```bibtex
@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}
```

<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>