
A DistilBERT model fine-tuned for binary sentiment classification of IMDB movie reviews, labeling each review as positive or negative. The IMDB dataset of 50,000 reviews was preprocessed and tokenized with the Hugging Face transformers library, and DistilBERT, a lightweight distilled version of BERT, was fine-tuned for the binary classification task. Training included hyperparameter optimization and early stopping to prevent overfitting. The model achieves around 90% accuracy and has been deployed in a web app for real-time sentiment analysis of movie reviews.
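A minimal inference sketch using the transformers `pipeline` API. The checkpoint name below is a stand-in (a public DistilBERT sentiment model fine-tuned on SST-2); substitute this repository's own model id when loading the model described above.

```python
from transformers import pipeline

# Stand-in checkpoint; replace with this repo's model id for IMDB sentiment.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Each result carries a binary label and a confidence score.
result = classifier("This movie was a delight from start to finish.")
print(result)
```

The pipeline handles tokenization and softmax internally, returning a list of dicts with `label` and `score` keys for each input string.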
