EconBERTa - a RoBERTa model further pre-trained on 4 GB of uncompressed text sourced from economics books.

Example usage for masked language modeling (MLM):

```python
from transformers import RobertaTokenizer, RobertaForMaskedLM
from transformers import pipeline

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')  # EconBERTa keeps the roberta-base tokenizer
model = RobertaForMaskedLM.from_pretrained('models').cpu()    # 'models' is the local checkpoint directory
model.eval()
mlm = pipeline('fill-mask', model=model, tokenizer=tokenizer)
test = "ECB - euro, FED - <mask>, BoJ - yen"
print(mlm(test)[:2])

# Output:
# [{'sequence': 'ECB - euro, FED - dollar, BoJ - yen',
#   'score': 0.7342271208763123,
#   'token': 1404,
#   'token_str': ' dollar'},
#  {'sequence': 'ECB - euro, FED - dollars, BoJ - yen',
#   'score': 0.10828445851802826,
#   'token': 1932,
#   'token_str': ' dollars'}]
```
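
Beyond fill-mask, the same weights can be loaded as a plain encoder to produce contextual embeddings for downstream economics tasks. The sketch below is not part of the original card; it assumes the checkpoint lives in the same local `models` directory used above and simply mean-pools the final hidden states into a sentence vector.

```python
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
encoder = RobertaModel.from_pretrained('models')  # assumed local EconBERTa checkpoint
encoder.eval()

inputs = tokenizer("The central bank raised interest rates.", return_tensors='pt')
with torch.no_grad():
    outputs = encoder(**inputs)

# Mean-pool the last hidden state into a single sentence embedding.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768]) for a roberta-base sized model
```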