RuPERTa-base (Spanish RoBERTa) + POS 🎃🏷

This model is a version of RuPERTa-base fine-tuned on CoNLL corpora for the POS (part-of-speech) tagging downstream task.
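
The model and tokenizer can be loaded directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base-finetuned-pos")
model = AutoModelForTokenClassification.from_pretrained("mrm8488/RuPERTa-base-finetuned-pos")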

Details of the downstream task (POS) - Dataset

| Dataset | # Examples |
|---------|------------|
| Train   | 445 K      |
| Dev     | 55 K       |
Labels covered: ADJ, ADP, ADV, AUX, CCONJ, DET, INTJ, NOUN, NUM, PART, PRON, PROPN, PUNCT, SCONJ, SYM, VERB
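
For reference, the id2label mapping used in the example below (id 0 is the "O" label, followed by the tags above in the same order) can also be built programmatically; a minimal sketch:

pos_tags = ["ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN",
            "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB"]

# "O" takes id 0; the POS tags follow in the order listed above,
# matching the id2label dictionary in the usage example below.
id2label = {str(i): tag for i, tag in enumerate(["O"] + pos_tags)}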

Metrics on evaluation set 🧾

| Metric    | Score |
|-----------|-------|
| F1        | 97.39 |
| Precision | 97.47 |
| Recall    | 97.32 |
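
As a quick consistency check, the reported F1 is the harmonic mean of the reported precision and recall:

precision, recall = 97.47, 97.32
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 97.39, matching the F1 reported above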

Model in action 🔨

Example of usage

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mrm8488/RuPERTa-base-finetuned-pos')
model = AutoModelForTokenClassification.from_pretrained('mrm8488/RuPERTa-base-finetuned-pos')

id2label = {
    "0": "O",
    "1": "ADJ",
    "2": "ADP",
    "3": "ADV",
    "4": "AUX",
    "5": "CCONJ",
    "6": "DET",
    "7": "INTJ",
    "8": "NOUN",
    "9": "NUM",
    "10": "PART",
    "11": "PRON",
    "12": "PROPN",
    "13": "PUNCT",
    "14": "SCONJ",
    "15": "SYM",
    "16": "VERB"
}

text ="Mis amigos están pensando viajar a Londres este verano."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)

outputs = model(input_ids)
last_hidden_states = outputs[0]

for m in last_hidden_states:
  for index, n in enumerate(m):
    if(index > 0 and index <= len(text.split(" "))):
      print(text.split(" ")[index-1] + ": " + id2label[str(torch.argmax(n).item())])

'''
Output:
--------
Mis: NUM
amigos: PRON
están: AUX
pensando: ADV
viajar: VERB
a: ADP
Londres: PROPN
este: DET
verano..: NOUN
'''

Yeah! Not too bad 🎉
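
Note that the word-level mapping in the loop above is approximate, because the BPE tokenizer can split a word into several sub-word tokens. As an alternative (a sketch, not part of the original example), recent versions of 🤗/transformers expose a generic token-classification pipeline that handles tokenization for you; how readable the returned labels are depends on the id2label mapping stored in this checkpoint's config:

from transformers import pipeline

# Sketch only: assumes a transformers version that supports the
# "token-classification" pipeline task (older releases call it "ner").
pos_tagger = pipeline("token-classification", model="mrm8488/RuPERTa-base-finetuned-pos")

for pred in pos_tagger("Mis amigos están pensando viajar a Londres este verano."):
    # Each prediction carries the sub-word token, its predicted label and a score.
    print(pred["word"], pred["entity"], f"{pred['score']:.3f}")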

Created by Manuel Romero/@mrm8488 | LinkedIn

Made with ♥ in Spain