---
license: mit
pipeline_tag: text-classification
---
# Fine-tuned RoBERTa for Sentiment Analysis on Amazon Reviews

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest), trained on the [Amazon Reviews dataset](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews) for sentiment analysis.
## Model Details

- **Model Name:** AnkitAI/reviews-roberta-base-sentiment-analysis
- **Base Model:** cardiffnlp/twitter-roberta-base-sentiment-latest
- **Dataset:** [Amazon Reviews](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews)
- **Fine-tuning:** Fine-tuned for binary sentiment classification (positive and negative) using a sequence-classification head.
## Training

The model was trained with the following hyperparameters:

- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Epochs:** 3
- **Weight Decay:** 0.01
- **Evaluation Strategy:** epoch
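As a rough sketch, these settings map onto the `transformers` `TrainingArguments` as shown below. The output directory is hypothetical, and model/dataset loading is omitted; note that older `transformers` releases spell the `eval_strategy` argument as `evaluation_strategy`.

```python
# Hyperparameter sketch only; model and dataset loading are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",          # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    eval_strategy="epoch",           # "evaluation_strategy" on older versions
)
```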
## Usage

You can use this model directly with the Hugging Face `transformers` library:

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

model_name = "AnkitAI/reviews-roberta-base-sentiment-analysis"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)

# Example usage
inputs = tokenizer("This product is great!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
```
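The model returns raw logits rather than probabilities. A softmax converts them into class probabilities; the sketch below uses only the standard library, and the logit values are hypothetical:

```python
import math

def softmax(logits):
    """Convert a list of raw logits into probabilities that sum to 1."""
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a binary (negative, positive) classification head
probs = softmax([-2.0, 3.5])
```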
## License

This model is licensed under the MIT License.