espejelomar committed
Commit
5847a94
1 Parent(s): b883354

Update README.md

Files changed (1)
  1. README.md +0 -74
README.md CHANGED
@@ -30,80 +30,6 @@ learn = cnn_learner(dls, resnet34, metrics=error_rate)
  learn.fine_tune(2)
  ```
 
- BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
- was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
- publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
- was pretrained with two objectives:
-
- - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
- the entire masked sentence through the model and has to predict the masked words. This is different from traditional
- recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
- GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
- sentence (see the sketch below).
- - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
- they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
- predict whether the two sentences were following each other or not.
-
- This way, the model learns an inner representation of the English language that can then be used to extract features
- useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
- classifier using the features produced by the BERT model as inputs.
-
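A minimal sketch of the MLM setup described above (not part of this repository, and simplified relative to BERT's actual pretraining recipe, which also sometimes keeps the selected token unchanged or substitutes a random one): it masks roughly 15% of the tokens in a sentence and keeps the original ids as the prediction targets, so inputs and labels are derived from raw text with no human annotation. The example sentence is an arbitrary placeholder.

```python
# Illustrative only: generate one MLM-style training example from raw text.
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
encoded = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors='pt')

input_ids = encoded['input_ids'].clone()   # what the model will see
labels = input_ids.clone()                 # what it should predict

# Never mask special tokens such as [CLS] and [SEP].
special = torch.tensor(
    tokenizer.get_special_tokens_mask(input_ids[0].tolist(), already_has_special_tokens=True),
    dtype=torch.bool,
)

# Select roughly 15% of the remaining positions and replace them with [MASK].
masked = (torch.rand(input_ids.shape) < 0.15) & ~special
input_ids[masked] = tokenizer.mask_token_id

# The loss is computed only on the masked positions (-100 is ignored by the loss).
labels[~masked] = -100
```

During pretraining, the model would be trained to predict `labels` at the masked positions only.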
- ## Intended uses & limitations
-
- You can fine-tune this model further on tasks related to classifying animals; however, note that it is primarily intended to illustrate how easily fastai-trained models can be integrated into the HuggingFace Hub. For pretrained image classification models, see the [HuggingFace Hub](https://huggingface.co/models?pipeline_tag=image-classification&sort=downloads) and select Image Classification from the task menu.
-
- ### How to use
-
- You can use this model directly with a pipeline for masked language modeling:
-
- ```python
- >>> from transformers import pipeline
- >>> unmasker = pipeline('fill-mask', model='bert-base-cased')
- >>> unmasker("Hello I'm a [MASK] model.")
-
- [{'sequence': "[CLS] Hello I'm a fashion model. [SEP]",
-   'score': 0.09019174426794052,
-   'token': 4633,
-   'token_str': 'fashion'},
-  {'sequence': "[CLS] Hello I'm a new model. [SEP]",
-   'score': 0.06349995732307434,
-   'token': 1207,
-   'token_str': 'new'},
-  {'sequence': "[CLS] Hello I'm a male model. [SEP]",
-   'score': 0.06228214129805565,
-   'token': 2581,
-   'token_str': 'male'},
-  {'sequence': "[CLS] Hello I'm a professional model. [SEP]",
-   'score': 0.0441727414727211,
-   'token': 1848,
-   'token_str': 'professional'},
-  {'sequence': "[CLS] Hello I'm a super model. [SEP]",
-   'score': 0.03326151892542839,
-   'token': 7688,
-   'token_str': 'super'}]
- ```
-
- Here is how to use this model to get the features of a given text in PyTorch:
-
- ```python
- from transformers import BertTokenizer, BertModel
- tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
- model = BertModel.from_pretrained("bert-base-cased")
- text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='pt')
- output = model(**encoded_input)
- ```
-
- and in TensorFlow:
-
- ```python
- from transformers import BertTokenizer, TFBertModel
- tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
- model = TFBertModel.from_pretrained("bert-base-cased")
- text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='tf')
- output = model(encoded_input)
- ```
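Building on the feature-extraction snippets above and the earlier note that these features can be used to train a standard classifier, here is a minimal PyTorch sketch of that idea. The two-class linear head is a hypothetical, untrained placeholder and is not part of this model card:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertModel.from_pretrained('bert-base-cased')

# Hypothetical two-class head on top of the pooled [CLS] representation.
classifier = torch.nn.Linear(model.config.hidden_size, 2)

encoded = tokenizer("Replace me by any text you'd like.", return_tensors='pt')
with torch.no_grad():
    features = model(**encoded).pooler_output  # shape: (batch_size, hidden_size)
logits = classifier(features)                  # would be trained on labeled sentences
```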
 
  ## Training data
 