---
language: en
tags:
- text-classification
license: apache-2.0
datasets:
- tweets
widget:
- text: "Vaccine is effective"
---

# Vaccinating COVID tweets

- A part of the MDLD for DS class at SNU

Fine-tuned from BERTweet ([this repository](https://github.com/VinAIResearch/BERTweet)), an English model pretrained with a masked language modeling (MLM) objective, for classifying false/misleading information about COVID-19 vaccines in tweets.
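
As a minimal sketch of the intended classification use (assuming the fine-tuned classification head is exported with this checkpoint; the label names are placeholders, not taken from this card), the model can be queried through the `text-classification` pipeline:

```python
# Hedged example: label names/ids depend on how the checkpoint was saved.
from transformers import pipeline

classifier = pipeline("text-classification", model="ans/vaccinating-covid-tweets")
print(classifier("Vaccine is effective"))
# -> a list with one dict per input, e.g. [{'label': ..., 'score': ...}]
```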
## Model description

Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT2.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='ans/vaccinating-covid-tweets')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.1073106899857521,
  'token': 4827,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]",
  'score': 0.08774490654468536,
  'token': 2535,
  'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.05338378623127937,
  'token': 2047,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]",
  'score': 0.04667217284440994,
  'token': 3565,
  'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]",
  'score': 0.027095865458250046,
  'token': 2986,
  'token_str': 'fine'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in the other cases sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a schematic sketch follows the list):

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
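
A minimal sketch of this 80/10/10 rule (illustrative only; the actual pretraining code handles word pieces, special tokens and batching differently):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    """Illustrative 80/10/10 masking over a list of token strings."""
    out = list(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:  # 15% of tokens are selected for masking
            r = random.random()
            if r < 0.8:                  # 80%: replace with [MASK]
                out[i] = mask_token
            elif r < 0.9:                # 10%: replace with a different random token
                out[i] = random.choice([v for v in vocab if v != tok])
            # remaining 10%: keep the original token unchanged
    return out
```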
### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
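
For reference, the same schedule can be sketched with PyTorch and `transformers` utilities (an approximation, not the original TPU training script):

```python
from torch.optim import AdamW
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# Illustrative setup: a freshly initialized BERT-base model and the hyperparameters quoted above.
model = BertForPreTraining(BertConfig())
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```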
## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task | MNLI-(m/mm) | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
|      | 84.6/83.4   | 71.2 | 90.5 | 93.5  | 52.1 | 85.8  | 88.9 | 66.4 | 79.6    |

# Contributors

- Ahn, Hyunju
- An, Jiyong
- An, Seungchan
- Jeong, Seokho
- Kim, Jungmin
- Kim, Sangbeom
- Advisor: Dr. Wen-Syan Li

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by the Hugging Face team.