ans committed on
Commit
6441201
1 Parent(s): 1ab00c1

Update README.md

Files changed (1)
  1. README.md +44 -54
README.md CHANGED
@@ -2,12 +2,12 @@
 
 language: en
 tags:
- - bertweet
 license: apache-2.0
 datasets:
 - tweets
- - fake new information
-
 ---
 
 # Vaccinating COVID tweets
@@ -15,19 +15,7 @@ datasets:
 
 Fine-tuned model on English language using a masked language modeling (MLM) objective from BERTweet in [this repository](https://github.com/VinAIResearch/BERTweet) for the classification task for false/misleading information about COVID-19 vaccines.
 
- # Contributors
- - Ahn, Hyunju
- - An, Jiyong
- - An, Seungchan
- - Jeong, Seokho
- - Kim, Jungmin
- - Kim, Sangbeom
- - Advisor: Dr. Wen-Syan Li
-
- Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.
-
-
- # BERT base model (uncased)
 
 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
 
@@ -37,13 +25,41 @@ Pretrained model on English language using a masked language modeling (MLM) objective
 
 between english and English.
 
- ## Model description
 
- BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
 
 ## Intended uses & limitations
 
@@ -185,7 +201,7 @@ The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total)
 
 of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
 
- used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
 
 learning rate warmup for 10,000 steps and linear decay of the learning rate after.
 
@@ -201,42 +217,16 @@ Glue test results:
 
 | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
 
- ### BibTeX entry and citation info
-
- ```bibtex
- @article{DBLP:journals/corr/abs-1810-04805,
-   author    = {Jacob Devlin and
-                Ming{-}Wei Chang and
-                Kenton Lee and
-                Kristina Toutanova},
-   title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
-                Understanding},
-   journal   = {CoRR},
-   volume    = {abs/1810.04805},
-   year      = {2018},
-   url       = {http://arxiv.org/abs/1810.04805},
-   archivePrefix = {arXiv},
-   eprint    = {1810.04805},
-   timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
-   biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
- }
- ```
 
 language: en
 tags:
+ - text-classification
 license: apache-2.0
 datasets:
 - tweets
+ widget:
+ - text: "Vaccine is effective"
 ---
 
 # Vaccinating COVID tweets
 
 
 Fine-tuned model on English language using a masked language modeling (MLM) objective from BERTweet in [this repository](https://github.com/VinAIResearch/BERTweet) for the classification task for false/misleading information about COVID-19 vaccines.
 
+ # Vaccinating COVID tweets
 
 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
 
 between english and English.
 
+ ## Model description
 
+ You can embed local or remote images using `![](...)`
 
+ ## Intended uses & limitations
+
+ #### How to use
+
+ ```python
+ # You can include sample code which will be formatted
+ ```
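
A minimal usage sketch for the classifier described above. Both the repository id and the label mapping shown here are assumptions for illustration, not taken from this card:

```python
# Hedged sketch: load the fine-tuned classifier and score the widget example.
from transformers import pipeline

model_id = "ans/vaccinating-covid-tweets"  # hypothetical repo id - replace with the actual one

classifier = pipeline("text-classification", model=model_id, tokenizer=model_id)
print(classifier("Vaccine is effective"))
# e.g. [{'label': 'LABEL_0', 'score': 0.98}] - label names come from the model's config
```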
+
+ #### Limitations and bias
 
+ Provide examples of latent issues and potential remediations.
 
+ ## Training data
 
+ Describe the data you used to train the model.
+ If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
+
+ ## Training procedure
+
+ Preprocessing, hardware used, hyperparameters...
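
A minimal setup sketch for this step, assuming a binary label scheme and the `vinai/bertweet-base` checkpoint from the BERTweet repository linked above; nothing here is taken from the card itself:

```python
# Hedged sketch: prepare the BERTweet base checkpoint for sequence classification.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# normalization=True applies BERTweet's tweet normalization (requires the `emoji` package)
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)

# num_labels=2 is an assumption (false/misleading vs. reliable)
model = AutoModelForSequenceClassification.from_pretrained("vinai/bertweet-base", num_labels=2)

# tokenize one example the way a fine-tuning loop would
encoded = tokenizer("Vaccine is effective", truncation=True, max_length=128, return_tensors="pt")
logits = model(**encoded).logits  # shape (1, 2), from a freshly initialized classification head
```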
+
+ ## Eval results
+
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @inproceedings{...,
+ year={2020}
+ }
+ ```
+ ------------------------
 
 ## Intended uses & limitations
 
 of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
 
+ used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
 
 learning rate warmup for 10,000 steps and linear decay of the learning rate after.
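
In PyTorch terms the stated recipe corresponds roughly to the sketch below; AdamW stands in for Adam with decoupled weight decay, and the total step count is an assumption rather than a value given in this excerpt:

```python
# Hedged sketch of the stated recipe: lr 1e-4, betas (0.9, 0.999),
# weight decay 0.01, 10,000 warmup steps, then linear decay.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 2)  # stand-in module; any nn.Module is handled the same way

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=1_000_000,  # assumed total steps; only the warmup length is stated here
)

# inside the training loop: optimizer.step(); scheduler.step(); optimizer.zero_grad()
```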
 
 | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
 
+ # Contributors
+ - Ahn, Hyunju
+ - An, Jiyong
+ - An, Seungchan
+ - Jeong, Seokho
+ - Kim, Jungmin
+ - Kim, Sangbeom
+ - Advisor: Dr. Wen-Syan Li
 
+ Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.