---
language: en
tags:
- timelms
- twitter
license: mit
datasets:
- twitter-api
---

# Twitter March 2021 (RoBERTa-base, 111M)

This is a RoBERTa-base model trained on 111.26M tweets until the end of March 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).

Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).

For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).

## Preprocess Text 
Replace usernames and links with the placeholders "@user" and "http".
If you would like to keep verified users, which were also retained during training, you may preserve the usernames listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
    preprocessed_text = []
    for t in text.split():
        if len(t) > 1:
            t = '@user' if t[0] == '@' and t.count('@') == 1 else t  # replace single-mention usernames
            t = 'http' if t.startswith('http') else t                # replace links
        preprocessed_text.append(t)
    return ' '.join(preprocessed_text)
```
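
For illustration (the tweet below is made up), the function maps a raw tweet to its placeholder form:
```python
print(preprocess("Thanks @bob for sharing https://example.com with everyone!"))
# Thanks @user for sharing http with everyone!
```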

## Example Masked Language Model 

```python
from transformers import pipeline, AutoTokenizer

MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def pprint(candidates, n):
    for i in range(n):
        token = tokenizer.decode(candidates[i]['token'])
        score = candidates[i]['score']
        print("%d) %.5f %s" % (i+1, score, token))

texts = [
    "So glad I'm <mask> vaccinated.",
    "I keep forgetting to bring a <mask>.",
    "Looking forward to watching <mask> Game tonight!",
]

for text in texts:
    t = preprocess(text)
    print(f"{'-'*30}\n{t}")
    candidates = fill_mask(t)
    pprint(candidates, 5)
```

Output: 

```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.42688  getting
2) 0.30230  not
3) 0.07375  fully
4) 0.03619  already
5) 0.03055  being
------------------------------
I keep forgetting to bring a <mask>.
1) 0.07603  mask
2) 0.04933  book
3) 0.04029  knife
4) 0.03461  laptop
5) 0.03069  bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.53945  the
2) 0.27647  The
3) 0.03881  End
4) 0.01711  this
5) 0.00831  Championship
```
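
The pipeline returns five candidates by default. If you want to inspect more, you can request a larger number via the pipeline's `top_k` argument (named `topk` in some older `transformers` releases), for example:

```python
candidates = fill_mask(preprocess("So glad I'm <mask> vaccinated."), top_k=10)
pprint(candidates, 10)
```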

## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter

def get_embedding(text):  # naive approach for demonstration: mean of all token embeddings
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    features = features[0].detach().cpu().numpy()
    return np.mean(features[0], axis=0)


MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣", 
          "The movie was great",
          "What time is the next game?",
          "Just finished reading 'Embeddings in NLP'"]

sims = Counter()
for tweet in tweets:
    sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
    sims[tweet] = sim

print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
    print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output: 

```
Most similar to:  The book was awesome
------------------------------
1) 0.99106 The movie was great
2) 0.96662 Just finished reading 'Embeddings in NLP'
3) 0.96150 I just ordered fried chicken 🐣
4) 0.95560 What time is the next game?
```
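
The `get_embedding` function above encodes one tweet at a time. As a minimal sketch (not part of the TimeLMs codebase), the same embeddings can also be computed in a single batched forward pass, reusing the `tokenizer`, `model`, and `preprocess` defined above and mean-pooling over the attention mask so that padding tokens are ignored:

```python
import torch

def get_embeddings_batched(texts):
    # Sketch: batch-encode tweets and mean-pool over non-padding tokens.
    texts = [preprocess(t) for t in texts]
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        features = model(**encoded_input)[0]              # (batch, seq_len, hidden)
    mask = encoded_input['attention_mask'].unsqueeze(-1)  # (batch, seq_len, 1)
    summed = (features * mask).sum(dim=1)                 # (batch, hidden)
    counts = mask.sum(dim=1)                              # (batch, 1)
    return (summed / counts).numpy()

embeddings = get_embeddings_batched(tweets)               # one row per tweet
```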

## Example Feature Extraction 

```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np

MODEL = "cardiffnlp/twitter-roberta-base-mar2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

text = "Good night 😊"
text = preprocess(text)

# PyTorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy() 
features_mean = np.mean(features[0], axis=0) 
#features_max = np.max(features[0], axis=0)

# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0) 
# #features_max = np.max(features[0], axis=0)
```
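
Mean pooling is only one option; another common choice (again just a sketch, not something prescribed by this model card) is to take the hidden state of the first token, RoBERTa's `<s>` equivalent of `[CLS]`:

```python
# Alternative pooling: embedding of the first (<s>) token
features_cls = features[0][0]
```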