# RuPERTa-base (Spanish RoBERTa) + NER
This model is a version of RuPERTa-base fine-tuned on the NER-C dataset for the NER downstream task.
## Details of the downstream task (NER) - Dataset
| Dataset | Examples |
| ------- | -------- |
| Train   | 329 K    |
| Dev     | 40 K     |
Labels covered:

```
B-LOC
B-MISC
B-ORG
B-PER
I-LOC
I-MISC
I-ORG
I-PER
O
```
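These tags follow the standard IOB scheme: `B-` marks the first token of an entity span, `I-` continues the same span, and `O` marks tokens outside any entity. A hand-tagged toy example (not model output, sentence invented for illustration):

```python
# Hand-tagged toy example illustrating the IOB scheme.
tagged = [
    ("Julien", "B-PER"),   # single-token person entity
    ("trabaja", "O"),      # outside any entity
    ("en", "O"),
    ("Hugging", "B-ORG"),  # first token of a multi-word organization...
    ("Face", "I-ORG"),     # ...continued with I-ORG
]
```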
## Metrics on evaluation set
| Metric    | Score |
| --------- | ----- |
| F1        | 77.55 |
| Precision | 75.53 |
| Recall    | 79.68 |
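As a sanity check, F1 is the harmonic mean of precision and recall, which matches the reported score:

```python
precision, recall = 75.53, 79.68
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 77.55
```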
## Model in action
Example of usage (the checkpoint id in the snippet is assumed from the author's hub handle; adjust it if it differs):
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

id2label = {
    "0": "B-LOC",
    "1": "B-MISC",
    "2": "B-ORG",
    "3": "B-PER",
    "4": "I-LOC",
    "5": "I-MISC",
    "6": "I-ORG",
    "7": "I-PER",
    "8": "O"
}

# Checkpoint id assumed from the author's hub handle; adjust if it differs.
tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("mrm8488/RuPERTa-base-finetuned-ner")

text = "Julien, CEO de HF, nació en Francia."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)  # batch of size 1

outputs = model(input_ids)
last_hidden_states = outputs[0]  # token-classification logits, shape (1, seq_len, num_labels)

# Naive word alignment: assumes one sub-token per whitespace-separated word
# and skips the leading special token.
for m in last_hidden_states:
    for index, n in enumerate(m):
        if index > 0 and index <= len(text.split(" ")):
            print(text.split(" ")[index - 1] + ": " + id2label[str(torch.argmax(n).item())])

'''
Output:
--------
Julien,: I-PER
CEO: O
de: O
HF,: B-ORG
nació: I-PER
en: I-PER
Francia.: I-LOC
'''
```
Yeah! Not too bad!
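For a shorter route, the same checkpoint should also work with the transformers `pipeline` API; a minimal sketch, again assuming the repo id used above:

```python
from transformers import pipeline

# Checkpoint id assumed; swap in the actual hub id if it differs.
nlp_ner = pipeline(
    "ner",
    model="mrm8488/RuPERTa-base-finetuned-ner",
    tokenizer="mrm8488/RuPERTa-base-finetuned-ner",
)

print(nlp_ner("Julien, CEO de HF, nació en Francia."))
```

On recent transformers versions, passing `aggregation_strategy="simple"` to the pipeline groups sub-tokens back into whole entities.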
Created by Manuel Romero/@mrm8488
Made with ♥ in Spain