How to use this model directly from the 🤗/transformers library:

tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
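
Alternatively, a minimal sketch of running the same checkpoint through the high-level pipeline API, which wraps tokenization, inference, and label decoding in one call (the example sentence is the one used further below):

from transformers import pipeline

nlp_ner = pipeline(
    "ner",
    model="mrm8488/RuPERTa-base-finetuned-ner",
    tokenizer="mrm8488/RuPERTa-base-finetuned-ner",
)

print(nlp_ner("Julien, CEO de HF, nació en Francia."))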

RuPERTa-base (Spanish RoBERTa) + NER 🎃🏷

This model is a version of RuPERTa-base fine-tuned on the NER-C dataset for the NER downstream task.

Details of the downstream task (NER) - Dataset

| Dataset | # Examples |
| ------- | ---------- |
| Train   | 329 K      |
| Dev     | 40 K       |

Labels covered:

B-LOC
B-MISC
B-ORG
B-PER
I-LOC
I-MISC
I-ORG
I-PER
O
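
These labels follow the standard BIO scheme: B- marks the first token of an entity, I- a continuation token, and O a token outside any entity. As an illustration (a made-up sentence, not taken from the dataset), "Manuel Romero vive en Madrid" would be tagged B-PER I-PER O O B-LOC.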

Metrics on evaluation set 🧾

| Metric    | Score |
| --------- | ----- |
| F1        | 77.55 |
| Precision | 75.53 |
| Recall    | 79.68 |
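
As a sanity check, F1 is the harmonic mean of precision and recall, and the reported scores are self-consistent: 2 × 75.53 × 79.68 / (75.53 + 79.68) ≈ 77.55.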

Model in action 🔨

Example of usage:

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")

id2label = {
    "0": "B-LOC",
    "1": "B-MISC",
    "2": "B-ORG",
    "3": "B-PER",
    "4": "I-LOC",
    "5": "I-MISC",
    "6": "I-ORG",
    "7": "I-PER",
    "8": "O"
}

text = "Julien, CEO de HF, nació en Francia."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)  # batch of size 1

outputs = model(input_ids)
logits = outputs[0]  # per-token classification scores, shape (1, seq_len, num_labels)

# Rough alignment: skip the leading <s> token and assume each
# whitespace-separated word maps to exactly one subword token.
for m in logits:
    for index, n in enumerate(m):
        if 0 < index <= len(text.split(" ")):
            print(text.split(" ")[index - 1] + ": " + id2label[str(torch.argmax(n).item())])

'''
Output:
--------
Julien,: I-PER
CEO: O
de: O
HF,: B-ORG
nació: I-PER
en: I-PER
Francia.: I-LOC
'''

Yeah! Not too bad 🎉
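
The word-level alignment above is a rough heuristic: it assumes every whitespace-separated word becomes exactly one subword token. A minimal alternative sketch that simply prints one prediction per subword token, using the id2label mapping stored in the checkpoint's config (assuming the config ships one, as fine-tuned token-classification checkpoints normally do):

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")

text = "Julien, CEO de HF, nació en Francia."
encoding = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding)[0]  # (1, seq_len, num_labels)

predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())

# One line per subword token, including special tokens such as <s> and </s>
for token, label_id in zip(tokens, predictions):
    print(f"{token}: {model.config.id2label[label_id]}")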

Created by Manuel Romero/@mrm8488

Made with ♥ in Spain