ROBERTA BASE (cased) trained on a private Bulgarian sentiment-analysis dataset

This is a multilingual RoBERTa model.

This model is cased: it makes a difference between bulgarian and Bulgarian.

How to use

Here is how to use this model in PyTorch:

>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> 
>>> model_id = "rmihaylov/roberta-base-sentiment-bg"
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>>
>>> # The two inputs mean "This is smart." and "This is stupid."
>>> inputs = tokenizer.batch_encode_plus(['Това е умно.', 'Това е тъпо.'], return_tensors='pt')
>>> outputs = model(**inputs)
>>> torch.softmax(outputs, dim=1).tolist()

[[0.0004746630438603461, 0.9995253086090088],
 [0.9986956715583801, 0.0013043134240433574]]
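Each row of the output is a probability pair for one input sentence. A minimal sketch of turning these probabilities into labels follows; note that the label order (index 0 = negative, index 1 = positive) is inferred from the example outputs above ("Това е умно." scores high at index 1, "Това е тъпо." at index 0), not from official model metadata.

```python
# Probabilities copied from the example output above.
probs = [[0.0004746630438603461, 0.9995253086090088],
         [0.9986956715583801, 0.0013043134240433574]]

# Assumed label ordering, inferred from the example sentences.
labels = ['negative', 'positive']

# Pick the label with the highest probability for each sentence.
predictions = [labels[max(range(len(p)), key=lambda i: p[i])] for p in probs]
print(predictions)  # ['positive', 'negative']
```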
