|
---
license: mit
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
language:
- en
library_name: transformers
tags:
- roberta
- sentiment-analysis
widget:
- text: This product is really great!
- text: This product is really bad!
---
|
|
|
# Fine-tuned RoBERTa for Sentiment Analysis on Reviews

This is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the [Amazon Reviews dataset](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews) for sentiment analysis.

## Model Details

- **Model Name:** `AnkitAI/reviews-roberta-base-sentiment-analysis`
- **Base Model:** `cardiffnlp/twitter-roberta-base-sentiment-latest`
- **Dataset:** [Amazon Reviews](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews)
- **Fine-tuning:** The model was fine-tuned for binary sentiment classification (positive and negative) with a sequence-classification head.
|
|
|
## Training

The model was trained with the following hyperparameters:

- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Epochs:** 3
- **Weight Decay:** 0.01
- **Evaluation Strategy:** epoch
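The hyperparameters above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch of the configuration, not the exact training script: dataset loading and the `Trainer` setup are omitted, the output directory name is illustrative, and the `evaluation_strategy` argument name may differ across `transformers` versions.

```python
from transformers import TrainingArguments

# Sketch of the fine-tuning configuration described above.
training_args = TrainingArguments(
    output_dir="reviews-roberta-base-sentiment-analysis",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    evaluation_strategy="epoch",
)
```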
|
|
|
|
|
### Training Details

- **Train Loss:** 0.0858
- **Train Runtime:** 110070.6349 seconds
- **Train Samples/Second:** 78.495
- **Train Steps/Second:** 2.453
- **Epochs:** 3.0
- **Eval Loss:** 0.1049
- **Eval Runtime:** 3177.538 seconds
- **Eval Samples/Second:** 226.591
- **Eval Steps/Second:** 7.081
- **Eval Accuracy:** 97.19%
- **Eval Precision:** 97.9%
- **Eval Recall:** 97.18%
- **Eval F1 Score:** 97.19%
|
|
|
|
|
## Usage

You can use this model directly with the Hugging Face `transformers` library:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

model_name = "AnkitAI/reviews-roberta-base-sentiment-analysis"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)

# Example usage
inputs = tokenizer("This product is great!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Get sentiment: label 1 is positive, label 0 is negative
logits = outputs.logits
predicted_class = logits.argmax(dim=-1).item()
print("positive" if predicted_class == 1 else "negative")
```
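The model returns raw logits; to interpret them as confidence scores you can apply a softmax. A minimal sketch in plain Python (the logit values below are illustrative, not actual model output):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability, then normalize exponentials.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits in [negative, positive] order.
logits = [-2.1, 3.4]
probs = softmax(logits)
label = "positive" if probs[1] > probs[0] else "negative"
print(label, [round(p, 4) for p in probs])
```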
|
|
|
|
|
## License

This model is licensed under the [MIT License](LICENSE).