Update README.md
README.md
CHANGED
@@ -5,10 +5,10 @@ license: apache-2.0
 datasets:
 - tweets
 widget:
-- text: "COVID-19 vaccine is
+- text: "COVID-19 vaccine is ineffective to prevent from infection."
 ---
 
-# Disclaimer: This page is
+# Disclaimer: This page is under maintenance. Please DO NOT refer to the information on this page to make any decision yet.
 
 # Vaccinating COVID tweets
 - A part of MDLD for DS class at SNU
@@ -49,164 +49,6 @@ Preprocessing, hardware used, hyperparameters...
 year={2020}
 }
 ```
-------------------------
-
-## Intended uses & limitations
-
-You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
-
-be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
-
-fine-tuned versions on a task that interests you.
-
-Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
-
-to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
-
-generation you should look at model like GPT2.
-
-### How to use
-
-You can use this model directly with a pipeline for masked language modeling:
-
-```python
-
->>> from transformers import pipeline
-
->>> unmasker = pipeline('fill-mask', model='ans/vaccinating-covid-tweets')
-
->>> unmasker("Hello I'm a [MASK] model.")
-
-[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
-
-'score': 0.1073106899857521,
-
-'token': 4827,
-
-'token_str': 'fashion'},
-
-{'sequence': "[CLS] hello i'm a role model. [SEP]",
-
-'score': 0.08774490654468536,
-
-'token': 2535,
-
-'token_str': 'role'},
-
-{'sequence': "[CLS] hello i'm a new model. [SEP]",
-
-'score': 0.05338378623127937,
-
-'token': 2047,
-
-'token_str': 'new'},
-
-{'sequence': "[CLS] hello i'm a super model. [SEP]",
-
-'score': 0.04667217284440994,
-
-'token': 3565,
-
-'token_str': 'super'},
-
-{'sequence': "[CLS] hello i'm a fine model. [SEP]",
-
-'score': 0.027095865458250046,
-
-'token': 2986,
-
-'token_str': 'fine'}]
-
-```
-
-Here is how to use this model to get the features of a given text in PyTorch:
-
-```python
-
-from transformers import BertTokenizer, BertModel
-
-tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
-
-model = BertModel.from_pretrained("bert-base-uncased")
-
-text = "Replace me by any text you'd like."
-
-encoded_input = tokenizer(text, return_tensors='pt')
-
-output = model(**encoded_input)
-
-```
-
-
-### Limitations and bias
-
-Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
-
-This bias will also affect all fine-tuned versions of this model.
-
-
-## Training data
-
-The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
-
-unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
-
-headers).
-
-## Training procedure
-
-### Preprocessing
-
-The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
-
-then of the form:
-
-```
-
-[CLS] Sentence A [SEP] Sentence B [SEP]
-
-```
-
-With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
-
-the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
-
-consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
-
-"sentences" has a combined length of less than 512 tokens.
-
-The details of the masking procedure for each sentence are the following:
-
-- 15% of the tokens are masked.
-
-- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
-
-- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
-
-- In the 10% remaining cases, the masked tokens are left as is.
-
-### Pretraining
-
-The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
-
-of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
-
-used is Adam with a learning rate of 1e-4, \(\beta_{1} = 0.9\) and \(\beta_{2} = 0.999\), a weight decay of 0.01,
-
-learning rate warmup for 10,000 steps and linear decay of the learning rate after.
-
-## Evaluation results
-
-When fine-tuned on downstream tasks, this model achieves the following results:
-
-Glue test results:
-
-| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
-
-|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
-
-| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
-
 # Contributors
 - Ahn, Hyunju
 - An, Jiyong
@@ -214,9 +56,4 @@ Glue test results:
 - Jeong, Seokho
 - Kim, Jungmin
 - Kim, Sangbeom
-- Advisor: Dr. Wen-Syan Li
-
-Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.
-
-
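As a quick local check of the widget sentence added in this commit, here is a minimal sketch using the `fill-mask` pipeline shown in the usage section this commit removes. It assumes a standard `transformers` install, that the `ans/vaccinating-covid-tweets` checkpoint referenced above is available on the Hub, and that masking one word of the widget sentence is a reasonable probe; the exact `[MASK]` placement is illustrative, not part of the card.

```python
# Minimal sketch (assumptions noted above): probe the masked-LM head with the
# new widget sentence. The [MASK] position is chosen for illustration only.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="ans/vaccinating-covid-tweets")

for pred in unmasker("COVID-19 vaccine is [MASK] to prevent from infection."):
    # Each prediction carries 'sequence', 'score', 'token', and 'token_str',
    # matching the example output shown in the removed section.
    print(f"{pred['token_str']:>12}  {pred['score']:.4f}")
```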