More precisely, it was pretrained with the Masked language modeling (MLM) objective.

This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
**Here is how to use this model with a pipeline in Transformers:**

```py
from transformers import pipeline

pipe = pipeline(
    "token-classification",
    model="tejakota/finetuned-xlm-roberta",
    aggregation_strategy="simple",
)
result = pipe("David is going to New York tomorrow")
print(result)
```
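With `aggregation_strategy="simple"`, the pipeline returns one dictionary per merged entity span rather than one per sub-token. A minimal sketch of grouping that output by label (the entity labels and scores below are illustrative assumptions, since the actual label set depends on the data this checkpoint was fine-tuned on):

```python
# Example output shape from a token-classification pipeline with
# aggregation_strategy="simple" (scores are illustrative, and the label
# names are assumptions -- they depend on the fine-tuning dataset):
result = [
    {"entity_group": "PER", "score": 0.99, "word": "David", "start": 0, "end": 5},
    {"entity_group": "LOC", "score": 0.98, "word": "New York", "start": 18, "end": 26},
]

# Group the detected spans by label.
entities = {}
for item in result:
    entities.setdefault(item["entity_group"], []).append(item["word"])

print(entities)  # {'PER': ['David'], 'LOC': ['New York']}
```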
**Here is how to use this model to get the features of a given text in PyTorch:**

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("tejakota/finetuned-xlm-roberta")
model = AutoModelForMaskedLM.from_pretrained("tejakota/finetuned-xlm-roberta")

# prepare input
text = "Replace me by any text you'd like."

# encode the text and run the forward pass
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
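For a masked-language-model head such as this one, the forward pass (`output = model(**encoded_input)`) returns `output.logits` of shape `[batch_size, seq_len, vocab_size]`; predicting a masked token amounts to an argmax over the vocabulary axis at the mask position. A toy illustration with stand-in numbers (real XLM-RoBERTa logits cover a vocabulary of roughly 250k entries):

```python
import numpy as np

# Stand-in for output.logits: batch of 1, a 3-token sequence, and a toy
# 4-entry vocabulary (real logits are far larger along the last axis).
logits = np.array([[[0.1, 0.2, 0.3, 0.4],
                    [2.0, 0.1, 0.1, 0.1],
                    [0.0, 5.0, 1.0, 0.0]]])

# In practice, find this position via (input_ids == tokenizer.mask_token_id).
mask_position = 2

predicted_id = int(logits[0, mask_position].argmax())
print(predicted_id)  # 1 -- decode it with tokenizer.decode([predicted_id])
```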