---
license: mit
datasets:
  - yelp_review_full
language:
  - en
metrics:
  - accuracy
  - f1
library_name: transformers
---

# Model Card

Sentiment analysis of restaurant reviews from the Yelp dataset.

## Overview

- Task: Sentiment classification of restaurant reviews from the Yelp dataset.
- Model: Fine-tuned BERT (Bidirectional Encoder Representations from Transformers) for sequence classification.
- Training Dataset: Yelp dataset containing restaurant reviews.
- Training Framework: PyTorch and the Transformers library.

## Model Details

- Pre-trained Model: `bert-base-uncased`.
- Input: Cleaned and preprocessed restaurant reviews.
- Output: Binary classification (positive or negative sentiment).
- Tokenization: BERT tokenizer with a maximum sequence length of 240 tokens.
- Optimizer: AdamW with a learning rate of 3e-5.
- Learning Rate Scheduler: Linear scheduler with no warmup steps.
- Loss Function: CrossEntropyLoss.
- Batch Size: 16.
- Number of Epochs: 2.
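
The training setup above can be sketched in PyTorch as follows. This is a minimal illustration, not the card's training script: the `Linear(768, 2)` head is a lightweight stand-in for `BertForSequenceClassification`, and `total_steps` is an assumed placeholder value.

```python
import torch

# Stand-in classifier head for illustration; the card's actual model is
# BertForSequenceClassification loaded from "bert-base-uncased" with num_labels=2.
model = torch.nn.Linear(768, 2)

# AdamW with the learning rate stated above.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# CrossEntropyLoss over the two sentiment logits.
loss_fn = torch.nn.CrossEntropyLoss()

# Linear decay to zero with no warmup steps; total_steps is an assumed value.
total_steps = 1000
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: max(0.0, 1.0 - step / total_steps)
)
```

Calling `scheduler.step()` after each optimizer step decays the learning rate linearly from 3e-5 to zero over the training run.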

## Data Preprocessing

  1. Loaded the Yelp reviews and business datasets.
  2. Merged the two datasets on the `business_id` column.
  3. Removed unnecessary columns and duplicates.
  4. Translated star ratings into binary sentiment labels (positive or negative).
  5. Upsampled the minority class (negative sentiment) to address imbalanced data.
  6. Cleaned text data by removing non-letters, converting to lowercase, and tokenizing.
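
The steps above can be sketched with pandas. The miniature DataFrames, the ≥ 4-star threshold for a positive label, and the random seed are illustrative assumptions, not values taken from the card.

```python
import re
import pandas as pd

# Hypothetical miniature stand-ins for the Yelp reviews and business files.
reviews = pd.DataFrame({
    "business_id": ["b1", "b1", "b2"],
    "stars": [5, 1, 4],
    "text": ["Great food!!", "Awful service...", "Nice place :)"],
})
business = pd.DataFrame({
    "business_id": ["b1", "b2"],
    "categories": ["Restaurants", "Restaurants"],
})

# Steps 1-3: merge on business_id and drop duplicate reviews.
df = reviews.merge(business, on="business_id").drop_duplicates(subset=["text"])

# Step 4: map star ratings to binary sentiment (assumed threshold: >= 4 stars).
df["label"] = (df["stars"] >= 4).astype(int)

# Step 5: upsample the minority class to match the majority class size.
majority = df[df["label"] == df["label"].mode()[0]]
minority = df[df["label"] != df["label"].mode()[0]]
upsampled = pd.concat([
    majority,
    minority.sample(len(majority), replace=True, random_state=42),
])

# Step 6: clean text by keeping letters only and lowercasing.
upsampled["clean_text"] = (
    upsampled["text"].str.replace(r"[^a-zA-Z]", " ", regex=True).str.lower()
)
```

After upsampling, both sentiment classes contribute the same number of rows to the training pool.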

## Model Training

  1. Split the dataset into training (70%), validation (15%), and test (15%) sets.
  2. Tokenized, padded, and truncated input sequences.
  3. Created attention masks to differentiate real tokens from padding.
  4. Fine-tuned BERT using the specified hyperparameters.
  5. Tracked training and validation accuracy and loss for each epoch.
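
The 70/15/15 split can be sketched with scikit-learn; the synthetic `texts`/`labels` lists and the random seed are placeholders for the real preprocessed data. Tokenization with padding, truncation, and attention masks (steps 2-3) is shown only as comments using the Hugging Face tokenizer API.

```python
from sklearn.model_selection import train_test_split

# Hypothetical cleaned reviews and binary labels standing in for the real data.
texts = [f"review {i}" for i in range(100)]
labels = [i % 2 for i in range(100)]

# 70% train; the remaining 30% is split evenly into validation and test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    texts, labels, test_size=0.30, random_state=42, stratify=labels
)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=42, stratify=y_tmp
)

# Tokenization with padding, truncation, and attention masks would then use
# the Hugging Face tokenizer, e.g.:
#   tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
#   enc = tokenizer(X_train, padding="max_length", truncation=True,
#                   max_length=240, return_attention_mask=True)
```

Stratifying both splits keeps the positive/negative ratio consistent across the three sets.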

## Model Evaluation

  1. Achieved high accuracy and F1 scores on both the validation and test sets.
  2. Test accuracy closely matched validation accuracy, indicating good generalization to unseen reviews.
  3. Validation loss improved across epochs, with no sign of overfitting.
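
The evaluation metrics above can be computed with scikit-learn; the `y_true`/`y_pred` lists here are hypothetical examples, not the model's actual predictions.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and model predictions for illustration.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)  # harmonic mean of precision and recall
```

On the balanced, upsampled data, accuracy and F1 tend to track each other closely, which matches the reported scores.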

## Model Deployment

  1. Saved the trained model and tokenizer.
  2. Published the model and tokenizer to the Hugging Face Model Hub.
  3. Demonstrated how to load and use the model for making predictions.
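
A minimal inference sketch follows. The card does not state the hub repository id or the label order, so model loading is shown only as comments with a `<hub-repo-id>` placeholder, and the negative/positive index mapping is an assumption.

```python
import torch

LABELS = ["negative", "positive"]  # assumed index-to-label order

def predict_label(logits: torch.Tensor) -> str:
    """Map a single example's classifier logits to a sentiment string."""
    probs = torch.softmax(logits, dim=-1)
    return LABELS[int(probs.argmax())]

# In practice the logits would come from the published checkpoint, e.g.:
#   tokenizer = AutoTokenizer.from_pretrained("<hub-repo-id>")
#   model = AutoModelForSequenceClassification.from_pretrained("<hub-repo-id>")
#   logits = model(**tokenizer("Great food!", return_tensors="pt")).logits[0]
print(predict_label(torch.tensor([-1.2, 2.3])))  # -> positive
```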

## Model Performance

- Validation Accuracy: ≈ 97.5% to 97.8%
- Test Accuracy: ≈ 97.8%
- F1 Score: ≈ 97.8% to 97.9%

## Limitations

- Excluding stopwords may reduce contextual understanding, but doing so was necessary to stay within the 240-token sequence limit.
- Performance may vary on reviews in languages other than English.

## Conclusion

The fine-tuned BERT model demonstrates robust sentiment analysis on Yelp restaurant reviews. Its high accuracy and F1 scores indicate effectiveness in capturing sentiment from user-generated content. The model is suitable for deployment in applications requiring sentiment classification for restaurant reviews.