1 ---
2 language: en
3 license: apache-2.0
4 datasets:
5 - bookcorpus
6 - wikipedia
7 ---
8
9 # BERT large model (uncased) whole word masking
10
11 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
12 [this paper](https://arxiv.org/abs/1810.04805) and first released in
13 [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
14 between english and English.
15
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. With this technique, all of the WordPiece tokens corresponding to a word are masked at once; the overall masking rate remains the same.
17
The training objective is otherwise identical: each masked WordPiece token is still predicted independently.
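
As a rough illustration (a minimal sketch, not the original TensorFlow data pipeline; the example sentence and the 15% rate are placeholders), whole word masking groups a word's WordPiece sub-tokens and masks them together:

```python
import random

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
tokens = tokenizer.tokenize("the philharmonic orchestra performed beautifully")
# WordPiece splits rare words into sub-tokens prefixed with '##',
# e.g. ['the', 'phil', '##har', '##monic', 'orchestra', 'performed', 'beautifully']

# Group each word's sub-tokens together.
words = []
for token in tokens:
    if token.startswith('##') and words:
        words[-1].append(token)
    else:
        words.append([token])

# Whole word masking: when a word is selected, all of its sub-tokens are masked at once.
masked = []
for word in words:
    if random.random() < 0.15:
        masked.extend(['[MASK]'] * len(word))
    else:
        masked.extend(word)
print(masked)
```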
19
20 Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
21 the Hugging Face team.
22
23 ## Model description
24
25 BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
26 was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
27 publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
28 was pretrained with two objectives:
29
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
32 recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
33 GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
34 sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict whether the two sentences followed each other or not (see the sketch after this list).
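
The following is a minimal sketch of querying the NSP head through the `transformers` API (illustrative only; in this convention, label 0 means sentence B does follow sentence A, and the example sentences are made up):

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = BertForNextSentencePrediction.from_pretrained('bert-large-uncased-whole-word-masking')

# Encode a sentence pair; the tokenizer inserts [CLS] and [SEP] automatically.
inputs = tokenizer("The man went to the store.", "He bought a gallon of milk.",
                   return_tensors='pt')
logits = model(**inputs).logits  # shape (1, 2): [is next sentence, is random sentence]
print(torch.softmax(logits, dim=-1))
```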
38
39 This way, the model learns an inner representation of the English language that can then be used to extract features
40 useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
41 classifier using the features produced by the BERT model as inputs.
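
For instance, here is a hedged sketch of using this checkpoint as the backbone of a sentence classifier (the number of labels, example sentences and labels below are placeholders, not part of this card):

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
# num_labels is task-specific; 2 is only a placeholder for a binary task.
model = BertForSequenceClassification.from_pretrained(
    'bert-large-uncased-whole-word-masking', num_labels=2)

batch = tokenizer(["a labeled sentence", "another labeled sentence"],
                  padding=True, return_tensors='pt')
labels = torch.tensor([0, 1])

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # fine-tune by backpropagating the classification loss
```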
42
43
44 This model has the following configuration:
45
- 24 layers
- 1,024 hidden dimension
- 16 attention heads
- 336M parameters
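
These values can be checked against the checkpoint's published configuration (a quick sanity check, not part of the original card):

```python
from transformers import BertConfig

config = BertConfig.from_pretrained('bert-large-uncased-whole-word-masking')
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
# 24 1024 16
```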
50
51 ## Intended uses & limitations
52
53 You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
54 be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
55 fine-tuned versions on a task that interests you.
56
57 Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
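
As an example of such a downstream task, a version of this checkpoint fine-tuned on SQuAD is available on the model hub; the snippet below assumes the `bert-large-uncased-whole-word-masking-finetuned-squad` checkpoint and uses a made-up question and context:

```python
from transformers import pipeline

qa = pipeline('question-answering',
              model='bert-large-uncased-whole-word-masking-finetuned-squad')
qa(question="What objectives was BERT pretrained with?",
   context="BERT was pretrained with masked language modeling and next sentence prediction.")
# returns a dict with 'score', 'start', 'end' and 'answer' keys
```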
60
61 ### How to use
62
63 You can use this model directly with a pipeline for masked language modeling:
64
65 ```python
66 >>> from transformers import pipeline
67 >>> unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking')
68 >>> unmasker("Hello I'm a [MASK] model.")
69 [
70 {
71 'sequence': "[CLS] hello i'm a fashion model. [SEP]",
72 'score': 0.15813860297203064,
73 'token': 4827,
74 'token_str': 'fashion'
75 }, {
76 'sequence': "[CLS] hello i'm a cover model. [SEP]",
77 'score': 0.10551052540540695,
78 'token': 3104,
79 'token_str': 'cover'
80 }, {
81 'sequence': "[CLS] hello i'm a male model. [SEP]",
82 'score': 0.08340442180633545,
83 'token': 3287,
84 'token_str': 'male'
85 }, {
86 'sequence': "[CLS] hello i'm a super model. [SEP]",
87 'score': 0.036381796002388,
88 'token': 3565,
89 'token_str': 'super'
90 }, {
91 'sequence': "[CLS] hello i'm a top model. [SEP]",
92 'score': 0.03609578311443329,
93 'token': 2327,
94 'token_str': 'top'
95 }
96 ]
97 ```
98
99 Here is how to use this model to get the features of a given text in PyTorch:
100
101 ```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the bare encoder (no task-specific head).
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-uncased-whole-word-masking")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors
output = model(**encoded_input)  # hidden states usable as features
108 ```
109
110 and in TensorFlow:
111
112 ```python
from transformers import BertTokenizer, TFBertModel

# Load the tokenizer and the bare TensorFlow encoder (no task-specific head).
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = TFBertModel.from_pretrained("bert-large-uncased-whole-word-masking")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')  # TensorFlow tensors
output = model(encoded_input)  # hidden states usable as features
119 ```
120
121 ### Limitations and bias
122
Even though the training data used for this model could be characterized as fairly neutral, the model can produce biased
124 predictions:
125
126 ```python
127 >>> from transformers import pipeline
128 >>> unmasker = pipeline('fill-mask', model='bert-large-uncased-whole-word-masking')
129 >>> unmasker("The man worked as a [MASK].")
130 [
131 {
132 "sequence":"[CLS] the man worked as a waiter. [SEP]",
133 "score":0.09823174774646759,
134 "token":15610,
135 "token_str":"waiter"
136 },
137 {
138 "sequence":"[CLS] the man worked as a carpenter. [SEP]",
139 "score":0.08976428955793381,
140 "token":10533,
141 "token_str":"carpenter"
142 },
143 {
144 "sequence":"[CLS] the man worked as a mechanic. [SEP]",
145 "score":0.06550426036119461,
146 "token":15893,
147 "token_str":"mechanic"
148 },
149 {
150 "sequence":"[CLS] the man worked as a butcher. [SEP]",
151 "score":0.04142395779490471,
152 "token":14998,
153 "token_str":"butcher"
154 },
155 {
156 "sequence":"[CLS] the man worked as a barber. [SEP]",
157 "score":0.03680137172341347,
158 "token":13362,
159 "token_str":"barber"
160 }
161 ]
162
163 >>> unmasker("The woman worked as a [MASK].")
164 [
165 {
166 "sequence":"[CLS] the woman worked as a waitress. [SEP]",
167 "score":0.2669651508331299,
168 "token":13877,
169 "token_str":"waitress"
170 },
171 {
172 "sequence":"[CLS] the woman worked as a maid. [SEP]",
173 "score":0.13054853677749634,
174 "token":10850,
175 "token_str":"maid"
176 },
177 {
178 "sequence":"[CLS] the woman worked as a nurse. [SEP]",
179 "score":0.07987703382968903,
180 "token":6821,
181 "token_str":"nurse"
182 },
183 {
184 "sequence":"[CLS] the woman worked as a prostitute. [SEP]",
185 "score":0.058545831590890884,
186 "token":19215,
187 "token_str":"prostitute"
188 },
189 {
190 "sequence":"[CLS] the woman worked as a cleaner. [SEP]",
191 "score":0.03834161534905434,
192 "token":20133,
193 "token_str":"cleaner"
194 }
195 ]
196 ```
197
198 This bias will also affect all fine-tuned versions of this model.
199
200 ## Training data
201
202 The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
203 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
204 headers).
205
206 ## Training procedure
207
208 ### Preprocessing
209
210 The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
211 then of the form:
212
213 ```
214 [CLS] Sentence A [SEP] Sentence B [SEP]
215 ```
216
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
221
The details of the masking procedure for each sentence are the following (a short sketch follows this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
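
Here is a minimal sketch of that 80/10/10 rule (illustrative only; the helper name and the `vocab` argument are made up for the example, and the real pipeline applies it per whole word as described above):

```python
import random

def corrupt_selected_token(token, vocab):
    """Apply the 80/10/10 rule to a token that was already selected for masking."""
    r = random.random()
    if r < 0.8:
        return '[MASK]'                                          # 80%: use the [MASK] token
    elif r < 0.9:
        return random.choice([t for t in vocab if t != token])   # 10%: a different random token
    else:
        return token                                             # 10%: keep the original token
```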
227
228 ### Pretraining
229
The model was trained on 4 Cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
231 of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
232 used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
233 learning rate warmup for 10,000 steps and linear decay of the learning rate after.
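
Here is a hedged sketch of an equivalent optimizer and schedule in PyTorch (the original pretraining used TensorFlow on TPUs; this is only an approximation and, unlike the original setup, applies weight decay to every parameter):

```python
from torch.optim import AdamW
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

model = BertForMaskedLM.from_pretrained('bert-large-uncased-whole-word-masking')
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000)

# In the training loop, after each loss.backward():
#     optimizer.step(); scheduler.step(); optimizer.zero_grad()
```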
234
235 ## Evaluation results
236
237 When fine-tuned on downstream tasks, this model achieves the following results:
238
Model | SQuAD 1.1 F1/EM | Multi NLI Accuracy
240 ---------------------------------------- | :-------------: | :----------------:
241 BERT-Large, Uncased (Whole Word Masking) | 92.8/86.7 | 87.07
242
243
244 ### BibTeX entry and citation info
245
246 ```bibtex
247 @article{DBLP:journals/corr/abs-1810-04805,
248 author = {Jacob Devlin and
249 Ming{-}Wei Chang and
250 Kenton Lee and
251 Kristina Toutanova},
252 title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
253 Understanding},
254 journal = {CoRR},
255 volume = {abs/1810.04805},
256 year = {2018},
257 url = {http://arxiv.org/abs/1810.04805},
258 archivePrefix = {arXiv},
259 eprint = {1810.04805},
260 timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
261 biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
262 bibsource = {dblp computer science bibliography, https://dblp.org}
263 }
264 ```