DeBERTa for Sentiment Analysis

This is a DeBERTa model fine-tuned on over 1 million reviews from the Amazon multi-reviews dataset. It predicts three sentiment classes: negative, neutral, and positive.

How to use the model

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = 'RashidNLP/Amazon-Deberta-Base-Sentiment'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the fine-tuned model and tokenizer, moving the model to GPU if available
bert_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3).to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def get_sentiment(sentence):
    """Return negative/neutral/positive probabilities for a sentence."""
    bert_dict = {}
    # Tokenize the input and run it through the model
    vectors = tokenizer(sentence, return_tensors='pt').to(device)
    outputs = bert_model(**vectors).logits
    # Convert logits to probabilities over the three classes
    probs = torch.nn.functional.softmax(outputs, dim=1)[0]
    bert_dict['neg'] = round(probs[0].item(), 3)
    bert_dict['neu'] = round(probs[1].item(), 3)
    bert_dict['pos'] = round(probs[2].item(), 3)
    return bert_dict

get_sentiment("This is quite a mess you have made")
