---
license: apache-2.0
datasets:
- chawki17/Electronic_Product_Reviews_data
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
base_model:
- distilbert-base-uncased
pipeline_tag: text-classification
library_name: transformers
---

# Sentiment Analysis Model

This model is a fine-tuned version of DistilBERT for sentiment analysis. It classifies text into three categories: **Positive**, **Neutral**, and **Negative**.

## Model Details

- **Model Type**: DistilBERT (`distilbert-base-uncased`)
- **Fine-Tuning Task**: Sentiment Analysis
- **Classes**: 3 (Positive, Neutral, Negative)
- **Dataset**: Custom sentiment dataset with text labeled "Positive", "Neutral", and "Negative".

## Intended Use

This model classifies text as **Positive**, **Neutral**, or **Negative**. It is suited to applications that require sentiment classification, such as customer feedback analysis, product reviews, or social media monitoring.

## Model Performance

The model was fine-tuned on a custom sentiment dataset. Its performance metrics:

- **Accuracy**: 0.91
- **F1-Score**: 0.89
- **Precision**: 0.89
- **Recall**: 0.89

## License

This model is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

## Usage

### Install Hugging Face Transformers

To use this model, install the `transformers` library:

```bash
pip install transformers
```

### Example Code to Use the Model

```python
from transformers import pipeline

# Load the text-classification pipeline with this model
sentiment_analysis = pipeline("text-classification", model="chawki17/my_sentiment_model")

# Example text
text = "I love this product!"

# Predict sentiment
result = sentiment_analysis(text)
print(result)
```

### Inputs and Outputs

- **Input**: A string of text (e.g., a customer review).
- **Output**: A sentiment label (Positive, Neutral, or Negative) with a confidence score.
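The pipeline returns a list with one result dict per input text, so downstream code typically extracts the label and score fields. A minimal sketch of that post-processing (the `top_prediction` helper and the example label strings are illustrative, not part of this model's API):

```python
def top_prediction(pipeline_output):
    """Return (label, score) for the highest-scoring entry in a
    text-classification pipeline result, which is a list of dicts
    such as [{'label': 'Positive', 'score': 0.98}]."""
    best = max(pipeline_output, key=lambda d: d["score"])
    return best["label"], best["score"]

# Example with a hard-coded, pipeline-style result:
label, score = top_prediction([{"label": "Positive", "score": 0.98}])
print(label, score)  # Positive 0.98
```

By default the pipeline returns only the top class; passing `top_k=None` when constructing it makes it return scores for all three classes, in which case the helper above still picks the best one.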
Example output:

```python
[{'label': 'POSITIVE', 'score': 0.98}]
```

## Limitations

- The model may not perform well on text from domains that were not part of the training set.
- It may not generalize well to very short texts or highly domain-specific language.
- The model was trained on English text and may not work well for other languages.

## Model Card and Documentation

For more details on this model and its performance, visit the [model page on Hugging Face](https://huggingface.co/chawki17/my_sentiment_model).

## Acknowledgements

- This model was fine-tuned using the DistilBERT architecture.
- The dataset was custom-built for this sentiment analysis task.