ans committed
Commit 7720a03
1 Parent(s): 3f0c369

Update README.md

Files changed (1)
README.md +4 -2
README.md CHANGED
@@ -5,9 +5,11 @@ license: apache-2.0
  datasets:
  - tweets
  widget:
- - text: "Vaccine is effective."
+ - text: "COVID-19 vaccine is effective to prevent from infection."
  ---
 
+ # Disclaimer: This page is in maintenance. DO NOT ...
+
  # Vaccinating COVID tweets
  - A part of MDLD for DS class at SNU
@@ -189,7 +191,7 @@ The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total)
 
  of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
 
- used is Adam with a learning rate of 1e-4, \(\beta_{1} = 0.9\) and \(\beta_{2} = 0.999\), a weight decay of 0.01,
+ used is Adam with a learning rate of 1e-4, \(\beta_{1} = 0.9\) and \(\beta_{2} = 0.999\), a weight decay of 0.01,
 
  learning rate warmup for 10,000 steps and linear decay of the learning rate after.
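The `widget:` entry edited in the first hunk only sets the default example text shown by the hosted inference widget. As a minimal sketch of running that same example through `transformers`, assuming a text-classification model and a hypothetical repo id (neither is confirmed by this excerpt):

```python
from transformers import pipeline

# The task and model id below are illustrative assumptions; the diff
# above only shows the widget's default example sentence.
classifier = pipeline("text-classification", model="ans/vaccinating-covid-tweets")
print(classifier("COVID-19 vaccine is effective to prevent from infection."))
```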
 
 
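The optimizer description in the second hunk maps onto a standard PyTorch setup. Below is a minimal sketch, assuming PyTorch plus the `transformers` linear-warmup schedule; the decoupled-weight-decay `AdamW`, the stand-in model, and the total step count are all assumptions, since the excerpt states none of them.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(128, 2)  # stand-in for the actual model

# "Adam with a learning rate of 1e-4, beta1 = 0.9 and beta2 = 0.999,
# a weight decay of 0.01" -- using AdamW for the decay is an assumption
# about how the weight decay is applied.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)

# "learning rate warmup for 10,000 steps and linear decay of the
# learning rate after"; 1_000_000 total steps is a placeholder, not
# stated in this excerpt.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)

for _ in range(3):  # a few dummy steps to show the update order
    loss = model(torch.randn(8, 128)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()  # one scheduler step per optimizer step
    optimizer.zero_grad()
```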