
Model Name: XLM-RoBERTa-German-Sentiment

Overview

XLM-RoBERTa-German-Sentiment performs sentiment analysis in eight languages, with a particular focus on German.
The model is built on the XLM-RoBERTa architecture, chosen for its multilingual capabilities and inspired by RoBERTa's strong performance over BERT across numerous benchmarks.
Tailored for German, the model was fine-tuned on over 200,000 German-language sentiment analysis samples; more on the training of the model can be found in the paper.
The training dataset, available at this GitHub repository, was developed by Oliver Guhr, and we extend our gratitude to him for making it open source. The dataset was instrumental in refining the model's accuracy and its responsiveness to the nuances of German sentiment. Our model and fine-tuning are based on the xlm-t sentiment analysis model [https://arxiv.org/abs/2104.12250].

Model Details

  • Architecture: XLM-RoBERTa
  • Model size: 278M parameters
  • Performance: 87% weighted F1 score
  • Limitations: The model was trained and tested only on German; it can handle the other supported languages, but with lower accuracy.
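
The reported score is a weighted F1: the F1 score is computed per class and then averaged with weights proportional to each class's support, so frequent classes count more. A minimal, self-contained sketch of the metric (the toy labels below are illustrative, not from the actual test set):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class support."""
    classes = sorted(set(y_true))
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += f1 * support[c] / total
    return score

y_true = ["negative", "neutral", "positive", "positive"]
y_pred = ["negative", "positive", "positive", "positive"]
print(round(weighted_f1(y_true, y_pred), 3))  # prints 0.65
```

This matches scikit-learn's f1_score with average='weighted'.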

How to Use

A Python desktop application for inference is available in my repository.
To use this model, install the Hugging Face Transformers library and PyTorch using pip:

pip install torch transformers

from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained('ssary/XLM-RoBERTa-German-sentiment')
tokenizer = AutoTokenizer.from_pretrained('ssary/XLM-RoBERTa-German-sentiment')

text = "Erneuter Streik in der S-Bahn"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)
probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)

sentiment_classes = ['negative', 'neutral', 'positive']
print(sentiment_classes[probabilities.argmax()])  # class with the highest probability
print(probabilities)                              # probability of each class
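
To classify many texts, tokenizing and running the model in batches is considerably faster than looping over single sentences. A minimal sketch, assuming the `model` and `tokenizer` objects loaded above; the `chunked` helper and the `batch_size` default are illustrative, not part of the model card's API:

```python
def chunked(seq, size):
    """Yield successive slices of at most `size` items from `seq`."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def predict_sentiments(texts, model, tokenizer, batch_size=32):
    """Classify a list of texts in batches; returns one label per text.

    Uses the same label order as the snippet above:
    negative, neutral, positive.
    """
    import torch  # imported here so the chunking helper works without torch

    sentiment_classes = ['negative', 'neutral', 'positive']
    labels = []
    for batch in chunked(texts, batch_size):
        inputs = tokenizer(list(batch), return_tensors="pt", padding=True,
                           truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**inputs).logits
        labels.extend(sentiment_classes[i] for i in logits.argmax(dim=-1).tolist())
    return labels
```

Call it as `predict_sentiments(texts, model, tokenizer)` with the model and tokenizer loaded as shown above; `padding=True` pads each batch to its longest sequence so the inputs form a rectangular tensor.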

Acknowledgments

This model was developed by Sary Nasser at HTW-Berlin under supervision of Martin Steinicke.

References

  • Barbieri, F., Espinosa Anke, L., & Camacho-Collados, J. XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond. https://arxiv.org/abs/2104.12250
