---
license: mit
metrics:
- accuracy
pipeline_tag: text-classification
library_name: transformers
---
# Sentiment Analysis Model
This repository contains a fine-tuned sentiment analysis model based on the DistilBERT architecture. The model classifies input text as expressing either positive or negative sentiment.
## Model Information
- Model Name: rohansb10/sentiment_analysis_model (used in the quick-start sketch after this list)
- Base Model: DistilBERT
- Task: Binary Sentiment Classification (Positive/Negative)
- Training Data: IMDB Dataset (sample of 1000 reviews)
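
For a quick sanity check that the model loads from the Hub, the transformers pipeline API can be used directly. This is a minimal sketch that assumes the repository above contains both the model weights and the tokenizer; depending on the model's config, the returned labels may appear as generic "LABEL_0"/"LABEL_1" (negative/positive) rather than human-readable names.

```python
# Quick-start sketch using the transformers pipeline API.
# Label names depend on the model's id2label config and may be "LABEL_0"/"LABEL_1".
from transformers import pipeline

classifier = pipeline("text-classification", model="rohansb10/sentiment_analysis_model")
print(classifier("I absolutely loved the movie! It was fantastic."))
```

The Usage section below shows the equivalent lower-level code with explicit tokenization and device handling.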
## Installation
To use this model, you'll need to install the following dependencies:
```bash
pip install transformers torch
```
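
If you want to confirm the environment is set up correctly before loading the model, a quick check along these lines can help (a sketch only; the exact versions printed will depend on your installation):

```python
# Verify that the dependencies import and report whether a GPU is visible.
import torch
import transformers

print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```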
## Usage
Here's sample code for using the sentiment analysis model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the model and tokenizer from the Hugging Face Hub
model_name = "rohansb10/sentiment_analysis_model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Set device to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
model.eval()  # disable dropout for inference

# Function to predict sentiment
def predict_sentiment(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = torch.argmax(probabilities, dim=-1).item()
    return predicted_class, probabilities[0].tolist()

# Test the model with some sample texts
sample_texts = [
    "I absolutely loved the movie! It was fantastic.",
    "The product did not meet my expectations. Very disappointing.",
    "It's okay, not great but not terrible either.",
]

# Run predictions on the sample texts
for text in sample_texts:
    predicted_class, probabilities = predict_sentiment(text)
    sentiment = "positive" if predicted_class == 1 else "negative"
    print(f"Text: {text}")
    print(f"Predicted Sentiment: {sentiment}, Probabilities: {probabilities}\n")
```
## Example Output
When you run the code above, you should see output similar to this:
```
Text: I absolutely loved the movie! It was fantastic.
Predicted Sentiment: positive, Probabilities: [0.01234, 0.98766]

Text: The product did not meet my expectations. Very disappointing.
Predicted Sentiment: negative, Probabilities: [0.99876, 0.00124]

Text: It's okay, not great but not terrible either.
Predicted Sentiment: negative, Probabilities: [0.67890, 0.32110]
```
## Model Performance
The model was trained on a sample of 1000 reviews from the IMDB dataset. For detailed performance metrics, including accuracy, precision, recall, and F1-score, please refer to the model card on the Hugging Face Hub.
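
If you want to estimate an accuracy number yourself, a small evaluation loop like the one below can be used. This is a sketch only: it assumes the datasets library is installed (pip install datasets), reuses the predict_sentiment function from the Usage section, assumes the model follows the IMDB label convention (0 = negative, 1 = positive) as in the usage example, and samples 200 test reviews purely for illustration.

```python
from datasets import load_dataset

# Small, shuffled sample of the IMDB test split (200 reviews is an arbitrary choice).
test_sample = load_dataset("imdb", split="test").shuffle(seed=42).select(range(200))

correct = 0
for example in test_sample:
    predicted_class, _ = predict_sentiment(example["text"])
    correct += int(predicted_class == example["label"])  # IMDB labels: 0 = negative, 1 = positive

print(f"Accuracy on {len(test_sample)} sampled reviews: {correct / len(test_sample):.2%}")
```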
## Limitations
- The model was trained on a small sample of movie reviews, which may limit its generalization to other domains.
- It performs binary classification (positive/negative) and does not handle neutral sentiment explicitly; one way to flag uncertain cases is sketched after this list.
- Performance may vary on texts that are significantly different from movie reviews.
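
One pragmatic workaround for the missing neutral class is to treat low-confidence predictions as inconclusive. The sketch below reuses predict_sentiment from the Usage section; the 0.75 threshold is an illustrative choice and should be tuned on data from your own domain.

```python
# Sketch of confidence thresholding to flag "neutral-ish" or ambiguous inputs.
def predict_with_threshold(text, threshold=0.75):
    predicted_class, probabilities = predict_sentiment(text)
    confidence = probabilities[predicted_class]
    if confidence < threshold:
        return "uncertain", confidence
    return ("positive" if predicted_class == 1 else "negative"), confidence

print(predict_with_threshold("It's okay, not great but not terrible either."))
```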
## Contributing
Contributions to improve the model or extend its capabilities are welcome. Please feel free to open an issue or submit a pull request.
## License
This model is released under the MIT License, as declared in the metadata at the top of this model card.
## Contact
For any questions or feedback, please open an issue in this repository.