Update README.md
README.md CHANGED
@@ -11,7 +11,7 @@ A greek pre-trained language model based on [RoBERTa](https://arxiv.org/abs/1907
 
 The model is pre-trained on a corpus of 458,293 documents collected from greek social media (Twitter, Instagram, Facebook and YouTube). A RoBERTa tokenizer trained from scratch on the same corpus is also included.
 
-The corpus has been provided by [Palo LTD](http://www.paloservices.com/)
+The corpus has been provided by [Palo LTD](http://www.paloservices.com/).
 
 ## Requirements
 
@@ -50,7 +50,7 @@ tokenizer = AutoTokenizer.from_pretrained("pchatz/palobert-base-greek-social-media")
 
 model = AutoModelForMaskedLM.from_pretrained("pchatz/palobert-base-greek-social-media")
 ```
 
-You can use this model directly with a pipeline for masked language modeling
+You can use this model directly with a pipeline for masked language modeling:
 
 ```python
 from transformers import pipeline