JeswinMS4 committed
Commit 8902020
1 Parent(s): 79ecd2a

Update README.md

Files changed (1)
  1. README.md +38 -1
README.md CHANGED
@@ -5,4 +5,41 @@ datasets:
  - hate_speech_offensive
  tags:
  - finance
- ---
+ language:
+ - en
+ ---
+ # BERT base model (uncased)
+
+ Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in
+ [this paper](https://arxiv.org/abs/1810.04805) and first released in
+ [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
+ between english and English.
+
+ ## Model Details
+
+ ### Model Description
+
+ BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
+ was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
+ publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
+ was pretrained with two objectives:
+
+ - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
+ the entire masked sentence through the model and has to predict the masked words. This is different from traditional
+ recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
+ GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
+ sentence (see the fill-mask sketch after this list).
+ - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
+ they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
+ predict if the two sentences were following each other or not.
+
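+ A minimal sketch of the MLM objective in practice, assuming the Transformers library and the base
+ `bert-base-uncased` checkpoint (this illustrates the pretraining task, not the fine-tuned classifier described below):
+
+ ```python
+ from transformers import pipeline
+
+ # Supply a sentence containing the [MASK] token; the model predicts the most
+ # likely words for that position, i.e. the masked language modeling objective.
+ unmasker = pipeline("fill-mask", model="bert-base-uncased")
+
+ for prediction in unmasker("Customer support is a [MASK] priority."):
+     print(prediction["token_str"], round(prediction["score"], 4))
+ ```
+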
+ This way, the model learns an inner representation of the English language that can then be used to extract features
+ useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
+ classifier using the features produced by the BERT model as inputs.
+
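+ A minimal sketch of this feature-extraction use, assuming Transformers and PyTorch; taking the final hidden state of
+ the `[CLS]` token is one common choice of sentence feature (an assumption here, not something the card prescribes):
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ # Encode one sentence and keep the [CLS] hidden state as a fixed-size feature
+ # vector; such vectors can feed any standard downstream classifier.
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+ model = AutoModel.from_pretrained("bert-base-uncased")
+
+ inputs = tokenizer("I would like to check my account balance.", return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ features = outputs.last_hidden_state[:, 0, :]  # shape: (1, 768)
+ print(features.shape)
+ ```
+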
+ - **Developed by:** [Jeswin MS, Venkatesh R, Kushal S Ballari]
+ - **Model type:** [Intent Classification]
+ - **Language(s) (NLP):** [English]
+ - **License:** []
+ - **Finetuned from model [optional]:** [bert-base-uncased]
+
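+ A minimal usage sketch for the fine-tuned intent classifier, assuming Transformers; the repository id below is a
+ placeholder (the card does not state it), and the returned labels depend on how the model was fine-tuned:
+
+ ```python
+ from transformers import pipeline
+
+ # Placeholder Hub id: replace with this model's actual repository id.
+ MODEL_ID = "<this-model-repo-id>"
+
+ # Text-classification pipeline over the fine-tuned BERT intent classifier.
+ intent_classifier = pipeline("text-classification", model=MODEL_ID)
+ print(intent_classifier("I want to transfer money to my savings account."))
+ ```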