---
language: en
tags:
- timelms
- twitter
license: mit
datasets:
- twitter-api
---
# Twitter 2022 154M (RoBERTa-base, 154M - full update)
This is a RoBERTa-base model trained on 154M tweets posted up to the end of December 2022, starting from the original RoBERTa checkpoint (no incremental updates). A larger model trained on the same data is available here.
These 154M tweets result from filtering 220M tweets obtained exclusively from the Twitter Academic API, covering every month between 2018-01 and 2022-12. Filtering and preprocessing details are available in the TimeLMs paper.
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.
For other models trained on tweets up to different periods, check this table.
## Preprocess Text
Replace usernames and links with the placeholders "@user" and "http". If you're interested in keeping the verified users that were also retained during training, you may keep the usernames listed here.
```python
def preprocess(text):
    preprocessed_text = []
    for t in text.split():
        if len(t) > 1:
            t = '@user' if t[0] == '@' and t.count('@') == 1 else t
            t = 'http' if t.startswith('http') else t
        preprocessed_text.append(t)
    return ' '.join(preprocessed_text)
```
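For example, a tweet with a single user mention and a link (an illustrative input, not from the original card) is normalized as follows:

```python
print(preprocess("@username loved this thread https://example.com 😊"))
# @user loved this thread http 😊
```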
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer

MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def pprint(candidates, n):
    for i in range(n):
        token = tokenizer.decode(candidates[i]['token'])
        score = candidates[i]['score']
        print("%d) %.5f %s" % (i+1, score, token))

texts = [
    "So glad I'm <mask> vaccinated.",
    "I keep forgetting to bring a <mask>.",
    "Looking forward to watching <mask> Game tonight!",
]

for text in texts:
    t = preprocess(text)
    print(f"{'-'*30}\n{t}")
    candidates = fill_mask(t)
    pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.26251 not
2) 0.25460 a
3) 0.12611 in
4) 0.11036 the
5) 0.04210 getting
------------------------------
I keep forgetting to bring a <mask>.
1) 0.09274 charger
2) 0.04727 lighter
3) 0.04469 mask
4) 0.04395 drink
5) 0.03644 camera
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.57683 Squid
2) 0.17419 The
3) 0.04198 the
4) 0.00970 Spring
5) 0.00921 Big
```
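The fill-mask pipeline also accepts a `targets` argument for scoring specific candidate words rather than only the top predictions. A minimal sketch (the candidate words are illustrative; if a word does not map to a single vocabulary token, the pipeline logs a warning and scores its first subword):

```python
t = preprocess("I keep forgetting to bring a <mask>.")
candidates = fill_mask(t, targets=[" charger", " umbrella"])  # leading spaces match RoBERTa's BPE tokens
pprint(candidates, 2)
```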
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter

def get_embedding(text):  # naive approach for demonstration
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    features = features[0].detach().cpu().numpy()
    return np.mean(features[0], axis=0)

MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
          "The movie was great",
          "What time is the next game?",
          "Just finished reading 'Embeddings in NLP'"]

sims = Counter()
for tweet in tweets:
    sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
    sims[tweet] = sim

print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
    print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to:  The book was awesome
------------------------------
1) 0.99403 The movie was great
2) 0.98006 Just finished reading 'Embeddings in NLP'
3) 0.97314 What time is the next game?
4) 0.92448 I just ordered fried chicken 🐣
```
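The `get_embedding` function above is deliberately naive: it averages over every position, including special tokens, and processes one tweet at a time. Below is a minimal sketch of a batched variant that mean-pools only over non-padding positions (the helper name and batching choices are our own, not part of the original card):

```python
import torch

def get_embeddings_masked(texts):  # hypothetical batched variant of get_embedding
    # Encode several tweets at once and mean-pool only over non-padding positions.
    encoded = tokenizer([preprocess(t) for t in texts], return_tensors='pt', padding=True)
    with torch.no_grad():
        hidden = model(**encoded)[0]                        # (batch, seq_len, hidden)
    mask = encoded['attention_mask'].unsqueeze(-1).float()  # (batch, seq_len, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()
```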
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np

MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)

# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)

# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
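Either way, the pooled features form a single vector with one entry per hidden dimension (768 for RoBERTa-base), which can be used directly as a tweet representation:

```python
print(features_mean.shape)  # (768,)
```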
## BibTeX entry and citation info
Please cite the reference paper if you use this model.
```bibtex
@article{loureiro2023tweet,
  title={Tweet Insights: A Visualization Platform to Extract Temporal Insights from Twitter},
  author={Loureiro, Daniel and Rezaee, Kiamehr and Riahi, Talayeh and Barbieri, Francesco and Neves, Leonardo and Anke, Luis Espinosa and Camacho-Collados, Jose},
  journal={arXiv preprint arXiv:2308.02142},
  year={2023}
}
```