Dongsung committed on
Commit 8d00106
1 Parent(s): 387585b

Update model description

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
@@ -15,7 +15,11 @@ This model is a pre-trained BERT-Large trained in two phases on the [Graphcore/w
 
  ## Model description
 
- Pre-trained BERT Large model trained on Wikipedia data.
+ BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabeled text. It enables easy and fast fine-tuning for downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice, and MaskedLM.
+
+ It was pretrained with two objectives: masked language modeling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which reads words one after another, MLM lets the model learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
+
+ The pre-trained representations reduce the need for heavily engineered task-specific architectures, and the model achieves state-of-the-art results on a broad suite of sentence-level and token-level tasks.
 
 
  ## Training and evaluation data
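
As a quick illustration of the fine-tuning workflow the new description mentions, here is a minimal sketch of loading the checkpoint with a task-specific head using the Hugging Face transformers library. The repo id `Graphcore/bert-large-uncased` is an assumption for illustration; substitute this model's actual id on the Hub.

```python
# Minimal sketch: load the pre-trained encoder with a downstream head.
# The repo id below is hypothetical; replace it with the model's actual id.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Graphcore/bert-large-uncased"  # assumed id, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Attaches a freshly initialised classification head on top of the
# pre-trained BERT encoder; the head is then trained during fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
```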
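And a minimal sketch of the MLM objective in action via the `fill-mask` pipeline, again under the same assumed repo id:

```python
# Minimal sketch of masked language modeling at inference time.
from transformers import pipeline

# Assumed repo id, as above.
unmasker = pipeline("fill-mask", model="Graphcore/bert-large-uncased")

# BERT predicts the token behind [MASK] from both the left and right
# context, which is what "bidirectional representation" means in practice.
print(unmasker("Paris is the [MASK] of France."))
```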