# Fine-tuned DistilBERT for Sentiment Analysis
## Model Description
This model is a fine-tuned version of DistilBERT for sentiment analysis. It was trained on the IMDB dataset to classify movie reviews as positive or negative, and is suited to applications that require text sentiment analysis, such as social media monitoring or customer feedback analysis.
- Model Architecture: DistilBERT (transformer-based model)
- Task: Sentiment Analysis
- Labels:
  - Positive
  - Negative
## Training Details
- Dataset: IMDB movie reviews dataset
- Training Data Size: 20,000 samples for training and 5,000 samples for evaluation
- Epochs: 3
- Batch Size: 16
- Learning Rate: 2e-5
- Optimizer: AdamW with weight decay
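The optimizer listed above, AdamW, applies weight decay decoupled from the gradient-based Adam update. A minimal sketch of a single AdamW step is shown below; the learning rate matches the value above, while the beta, epsilon, and weight-decay values are common defaults and are not confirmed from this training run.

```python
def adamw_step(theta, grad, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One AdamW update for a single scalar parameter theta."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (v_hat ** 0.5 + eps)  # Adam update from gradients
    theta -= lr * weight_decay * theta          # decoupled weight decay
    return theta, m, v

# One step on a toy parameter with a positive gradient.
theta, m, v = adamw_step(theta=0.5, grad=0.1, m=0.0, v=0.0, t=1)
```

In a real run the Hugging Face `Trainer` applies this update to every model parameter; the sketch only illustrates the decoupled weight-decay term, which shrinks the parameter directly rather than being added to the gradient.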
## Evaluation Metrics
The model was evaluated on a held-out test set using the following metrics:
- Accuracy: 0.95
- F1 Score: 0.94
- Precision: 0.93
- Recall: 0.92
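For reference, the four metrics above can be computed from a model's binary predictions as follows; this is a plain-Python sketch on toy data, not the evaluation script used for this model.

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy example: 1 = positive review, 0 = negative review.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```

In practice the same numbers are usually obtained with `sklearn.metrics` (`accuracy_score`, `precision_recall_fscore_support`).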
## Usage

### Example Code
To use this sentiment analysis model with the Hugging Face Transformers library:
```python
from transformers import pipeline

# Load the model from the Hugging Face Hub
sentiment_pipeline = pipeline("sentiment-analysis", model="Beehzod/smart_sentiment_analysis")

# Example predictions
text = "This movie was fantastic! I really enjoyed it."
results = sentiment_pipeline(text)

for result in results:
    print(f"Label: {result['label']}, Score: {result['score']:.4f}")
```
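The `score` field returned by the pipeline is the softmax probability of the predicted class, computed over the classifier's two output logits. A minimal sketch of that final step, using hypothetical logit values rather than real model output:

```python
import math

def softmax(logits):
    """Convert raw logits into probabilities that sum to 1."""
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw logits for [NEGATIVE, POSITIVE] from the classifier head.
logits = [-1.2, 3.4]
probs = softmax(logits)
label = "POSITIVE" if probs[1] > probs[0] else "NEGATIVE"
print(f"Label: {label}, Score: {max(probs):.4f}")
```

The pipeline performs this conversion internally and reports only the highest-probability label with its score.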