
Sentiment at aequa-tech

Model Description

This model is a fine-tuned version of the Italian AlBERTo model, adapted for sentiment analysis.

Training Details

Training Data

Training Hyperparameters

  • learning_rate: 2e-5
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam
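
These values map naturally onto a transformers TrainingArguments object. The sketch below is only illustrative, assuming the usual Trainer setup; the output directory and the exact Adam variant (Trainer defaults to AdamW) are assumptions not stated in this card.

from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above; output_dir and the
# AdamW variant are assumptions, not values taken from this card.
training_args = TrainingArguments(
    output_dir="sentiment-it-finetuned",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",  # Adam-family optimizer; exact variant assumed
)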

Evaluation

Testing Data

The model was evaluated on the SENTIPOLC 2016 test set.
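
A minimal sketch of how such an evaluation could be run, assuming placeholder texts and gold labels; the actual SENTIPOLC 2016 data and the model's label names are not reproduced in this card.

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
from sklearn.metrics import f1_score

# Placeholder examples; the real evaluation used the SENTIPOLC 2016 test set.
texts = ["Che bella giornata!", "Servizio pessimo, non ci torno."]
gold = ["positive", "negative"]  # assumed label names; the model's id2label mapping may differ

model = AutoModelForSequenceClassification.from_pretrained("aequa-tech/sentiment-it", num_labels=3, ignore_mismatched_sizes=True)
tokenizer = AutoTokenizer.from_pretrained("m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

pred = [out["label"] for out in classifier(texts)]
print(f1_score(gold, pred, average="macro"))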

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.1.2
  • Datasets 2.19.0
  • Accelerate 0.30.0

How to use this model:

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Load the fine-tuned classifier together with the AlBERTo tokenizer it was trained with
model = AutoModelForSequenceClassification.from_pretrained("aequa-tech/sentiment-it", num_labels=3, ignore_mismatched_sizes=True)
tokenizer = AutoTokenizer.from_pretrained("m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)
classifier("L'insostenibile leggerezza dell'essere")