# COVID-Twitter-BERT (CT-BERT) v1
A BERT-large-uncased model pretrained on a corpus of Twitter messages about COVID-19.

Find more info on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).

## Overview
This model was trained on a corpus of 160M tweets collected between January 12 and April 16, 2020, each containing at least one of the keywords "wuhan", "ncov", "coronavirus", "covid", or "sars-cov-2". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens), which was used for training.

The model was evaluated on downstream classification tasks, but it can be used for any other NLP task that can leverage contextual embeddings.

To achieve the best results, make sure to use the same text preprocessing that we used for pretraining. This involves replacing user mentions, URLs, and emojis. You can find a preprocessing script in our project's [GitHub repo](https://github.com/digitalepidemiologylab/covid-twitter-bert).
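For illustration, a minimal sketch of this kind of preprocessing is shown below. The placeholder tokens and regular expressions here are assumptions, not the official implementation; use the script from the repo to reproduce the pretraining preprocessing exactly.

```python
# Minimal preprocessing sketch (illustrative only). The placeholder tokens
# "@user" and "http" are assumptions; the official script in the GitHub repo
# defines the exact replacements used for pretraining.
import re

import emoji  # pip install emoji


def preprocess_tweet(text: str) -> str:
    text = re.sub(r"@\w+", "@user", text)         # replace user mentions
    text = re.sub(r"https?://\S+", "http", text)  # replace URLs
    return emoji.demojize(text)                   # emojis -> text aliases


print(preprocess_tweet("Stay safe 😷 @WHO https://www.who.int"))
# Stay safe :face_with_medical_mask: @user http
```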
## Example usage
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("digitalepidemiologylab/covid-twitter-bert")
model = AutoModel.from_pretrained("digitalepidemiologylab/covid-twitter-bert")
```
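For example, assuming a recent version of the `transformers` library and PyTorch, token-level contextual embeddings can be extracted as follows (the input sentence is just an illustration):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("digitalepidemiologylab/covid-twitter-bert")
model = AutoModel.from_pretrained("digitalepidemiologylab/covid-twitter-bert")

# Tokenize an example sentence and run a forward pass without gradients.
inputs = tokenizer("Wearing a mask helps slow the spread of COVID-19.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings for each token: (batch_size, sequence_length, hidden_size);
# hidden_size is 1024 for a BERT-large model.
print(outputs.last_hidden_state.shape)
```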
## References
[1] Martin Müller, Marcel Salathé, Per E Kummervold. "COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter." arXiv preprint arXiv:2005.07503 (2020).