
How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")

RuPERTa-base (Spanish RoBERTa) + NER 🎃🏷

This model is a version of RuPERTa-base fine-tuned on the NER-C dataset for the NER downstream task.

Details of the downstream task (NER) - Dataset

Dataset  # Examples
Train    329 K
Dev      40 K

Labels covered by the dataset:
B-LOC, B-MISC, B-ORG, B-PER, I-LOC, I-MISC, I-ORG, I-PER, O
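The B-/I- prefixes above follow the IOB2 tagging scheme: B- marks the first word of an entity, I- continues it, and O marks words outside any entity. As an illustration (not part of the model card's own code), a minimal pure-Python decoder that groups such labels into entity spans might look like this:

```python
def group_entities(words, labels):
    """Group IOB2-labelled words into (entity_text, entity_type) spans."""
    entities, current_words, current_type = [], [], None
    for word, label in zip(words, labels):
        if label.startswith("B-"):  # start of a new entity
            if current_words:
                entities.append((" ".join(current_words), current_type))
            current_words, current_type = [word], label[2:]
        elif label.startswith("I-") and current_type == label[2:]:
            current_words.append(word)  # continuation of the same entity
        else:  # "O" or an I- tag that doesn't match the open entity
            if current_words:
                entities.append((" ".join(current_words), current_type))
            current_words, current_type = [], None
    if current_words:
        entities.append((" ".join(current_words), current_type))
    return entities

words = ["Julien", "es", "CEO", "de", "HF", "en", "Francia"]
labels = ["B-PER", "O", "O", "O", "B-ORG", "O", "B-LOC"]
print(group_entities(words, labels))
# → [('Julien', 'PER'), ('HF', 'ORG'), ('Francia', 'LOC')]
```

Libraries such as seqeval implement this grouping (and entity-level metrics) more robustly; the sketch above only shows the idea.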

Metrics on the evaluation set 🧪

Metric     Score
F1         77.55
Precision  75.53
Recall     79.68
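F1 is the harmonic mean of precision and recall, so the three numbers in the table are mutually consistent:

```python
precision, recall = 75.53, 79.68  # scores from the table above

# Harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # → 77.55, matching the reported F1
```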

Model in action 🔨

Example of usage:

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")

id2label = {
    "0": "B-LOC",
    "1": "B-MISC",
    "2": "B-ORG",
    "3": "B-PER",
    "4": "I-LOC",
    "5": "I-MISC",
    "6": "I-ORG",
    "7": "I-PER",
    "8": "O"
}

text = "Julien, CEO de HF, nació en Francia."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)

outputs = model(input_ids)
logits = outputs[0]  # per-token classification logits

# Skip the special <s> token at index 0 and print one prediction per
# whitespace-separated word (a naive alignment that assumes one token per word)
for m in logits:
  for index, n in enumerate(m):
    if index > 0 and index <= len(text.split(" ")):
      print(text.split(" ")[index - 1] + ": " + id2label[str(torch.argmax(n).item())])

'''
Output:
--------
Julien,: I-PER
CEO: O
de: O
HF,: B-ORG
nació: I-PER
en: I-PER
Francia.: I-LOC
'''

Yeah! Not too bad 🎉

Created by Manuel Romero/@mrm8488

Made with ♥ in Spain