julien-c committed
Commit 603a5d2
1 Parent(s): 19798c8

Migrate model card from transformers-repo

Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/vinai/bertweet-base/README.md

Files changed (1)
  1. README.md +80 -0
README.md ADDED
@@ -0,0 +1,80 @@
# <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets

- BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained following the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure, using the same model configuration as [BERT-base](https://github.com/google-research/bert).
- The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens, ~80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic.
- BERTweet outperforms its competitors RoBERTa-base and [XLM-R-base](https://arxiv.org/abs/1911.02116), as well as previous state-of-the-art models, on three downstream Tweet NLP tasks: part-of-speech tagging, named-entity recognition and text classification.

The general architecture and experimental results of BERTweet can be found in our [paper](https://arxiv.org/abs/2005.10200):

```
@inproceedings{bertweet,
    title     = {{BERTweet: A pre-trained language model for English Tweets}},
    author    = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
    booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
    year      = {2020}
}
```

**Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software.

For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!
19
+
20
+ ### <a name="install2"></a> Installation
21
+
22
+ - Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
23
+ - Install `transformers`:
24
+ - `git clone https://github.com/huggingface/transformers.git`
25
+ - `cd transformers`
26
+ - `pip3 install --upgrade .`
27
+ - Install `emoji`: `pip3 install emoji`
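
Once installed, a quick sanity check (a minimal sketch; the printed version strings will vary with your environment):

```python
import emoji
import torch
import transformers

# Confirm the environment meets the requirements above
print("torch:", torch.__version__)  # expect 1.1.0 or newer
print("transformers:", transformers.__version__)
print("emoji:", emoji.__version__)
```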

### <a name="models2"></a> Pre-trained models

Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/bertweet-base` | 135M | base | 845M English Tweets (cased)
`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)

The two models `vinai/bertweet-covid19-base-cased` and `vinai/bertweet-covid19-base-uncased` were obtained by further pre-training `vinai/bertweet-base` on a corpus of 23M COVID-19 English Tweets for 40 epochs.
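
The COVID-19 variants are drop-in replacements for the base model (a minimal sketch, using the model IDs from the table above):

```python
from transformers import AutoModel, AutoTokenizer

# The cased variant works identically; swap in its model ID instead
bertweet_covid = AutoModel.from_pretrained("vinai/bertweet-covid19-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-uncased")
```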

### <a name="usage2"></a> Example usage

```python
import torch
from transformers import AutoModel, AutoTokenizer

bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")

# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"

input_ids = torch.tensor([tokenizer.encode(line)])

with torch.no_grad():
    features = bertweet(input_ids)  # Model outputs are now tuples

## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
```
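
The first element of the returned tuple holds the token-level features (a minimal sketch, assuming the tuple outputs of `transformers` versions of that era; newer versions return a model-output object whose `last_hidden_state` attribute carries the same tensor):

```python
# Token-level features: shape (batch_size, sequence_length, hidden_size),
# with hidden_size = 768 for the base architecture
last_hidden_states = features[0]
print(last_hidden_states.shape)
```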

### <a name="preprocess"></a> Normalize raw input Tweets

Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/url links into the special tokens `@USER` and `HTTPURL`, respectively. It is therefore recommended to apply the same pre-processing step to raw input Tweets in BERTweet-based downstream applications. BERTweet provides this pre-processing step via the `normalization` argument.

```python
import torch
from transformers import AutoTokenizer

# Load the AutoTokenizer with a normalization mode if the input Tweet is raw
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)

# from transformers import BertweetTokenizer
# tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)

line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"

input_ids = torch.tensor([tokenizer.encode(line)])
```
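
For reference, the normalization described above can be approximated by hand with NLTK and `emoji` (a rough sketch of the steps; `normalize_tweet` is a hypothetical helper, not the exact implementation used to build the pre-training corpus):

```python
from emoji import demojize
from nltk.tokenize import TweetTokenizer

tweet_tokenizer = TweetTokenizer()

def normalize_tweet(tweet: str) -> str:
    tokens = tweet_tokenizer.tokenize(tweet)
    normalized = []
    for token in tokens:
        if token.startswith("@"):
            normalized.append("@USER")    # user mentions -> @USER
        elif token.lower().startswith(("http", "www.")):
            normalized.append("HTTPURL")  # web/url links -> HTTPURL
        else:
            normalized.append(demojize(token))  # emotion icons -> text strings
    return " ".join(normalized)
```

See [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet) for the authors' own normalization code.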