# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-large")
```
### <a name="preprocess"></a> Normalize raw input Tweets
Before applying BPE to the pre-training corpus of English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/url links into the special tokens `@USER` and `HTTPURL`, respectively. Thus, it is recommended to apply the same pre-processing step to the raw input Tweets in BERTweet-based downstream applications.
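
The normalization above can be sketched roughly as follows. This is a simplified, stdlib-only approximation (the actual pipeline uses NLTK's `TweetTokenizer` and the `emoji` package, so token boundaries and emoji handling differ); the `normalize_tweet` helper name is illustrative, not part of the released code:

```python
import re

def normalize_tweet(tweet: str) -> str:
    # Simplified stand-in for NLTK's TweetTokenizer: split on whitespace only.
    normalized = []
    for token in tweet.split():
        if token.startswith("@") and len(token) > 1:
            normalized.append("@USER")      # user mention -> special token
        elif re.match(r"^(https?://|www\.)", token):
            normalized.append("HTTPURL")    # web/url link -> special token
        else:
            normalized.append(token)        # real pipeline also demojizes here
    return " ".join(normalized)

print(normalize_tweet("@vinai BERTweet is out! Details at https://example.com"))
# -> "@USER BERTweet is out! Details at HTTPURL"
```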
Please find examples of normalizing raw input Tweets at [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet#preprocess)!