tashrifmahmud committed: Update README.md
This model is a fine-tuned version of DistilBERT for sentiment analysis. It was trained on the IMDB dataset for binary classification, distinguishing positive from negative sentiment in movie reviews, and then further fine-tuned on the Rotten Tomatoes dataset to improve its generalization and performance on movie-related text.

- **Architecture:** DistilBERT (a distilled version of BERT for faster inference).
- **Task:** Sentiment analysis (binary classification: positive or negative sentiment).
- **Pre-training:** The model was pre-trained on a large corpus (BERT's original training).
- **Fine-tuning:** Fine-tuned on both the IMDB and Rotten Tomatoes datasets.
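A minimal inference sketch, assuming the model is used through the `transformers` pipeline API; the Hub repository id and the `POSITIVE`/`NEGATIVE` label names below are illustrative placeholders, not confirmed by this card:

```python
import math

# Hypothetical Hub id -- replace with this model's actual repository name.
MODEL_ID = "tashrifmahmud/distilbert-sentiment"

def classify(text: str) -> dict:
    """Score one review with the fine-tuned model (requires `pip install transformers`)."""
    from transformers import pipeline  # lazy import so the helper below has no heavy deps
    clf = pipeline("sentiment-analysis", model=MODEL_ID)
    return clf(text)[0]  # a dict like {"label": ..., "score": ...}

def label_from_logits(neg_logit: float, pos_logit: float) -> tuple:
    """How the model's two output logits map to a binary label: softmax, then argmax."""
    p_pos = math.exp(pos_logit) / (math.exp(pos_logit) + math.exp(neg_logit))
    return ("POSITIVE" if p_pos >= 0.5 else "NEGATIVE", p_pos)
```

The two-logit softmax in `label_from_logits` is the standard head for binary classification; the actual label strings returned by the pipeline depend on the model's `id2label` config.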
## Intended uses & limitations

This model may struggle with sarcasm, irony, and nuanced expressions of sentiment.

**Training data:**

- **IMDB dataset:** The model was initially trained on the IMDB movie reviews dataset, which consists of 25,000 reviews labeled as positive or negative.
- **Rotten Tomatoes dataset:** To improve the model's performance and generalization, it was further fine-tuned using the Rotten Tomatoes dataset, which contains movie reviews and ratings.

**Evaluation data:**

- **Test data from Rotten Tomatoes:** The model's evaluation was performed using the test set of the Rotten Tomatoes dataset to assess its ability to generalize to unseen movie reviews.
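Both corpora named above are available on the Hugging Face Hub through the `datasets` library (dataset ids `imdb` and `rotten_tomatoes`); a small loading sketch:

```python
def load_finetuning_data():
    """Fetch the IMDB and Rotten Tomatoes corpora (requires `pip install datasets`)."""
    from datasets import load_dataset  # lazy import; data downloads on first call
    imdb = load_dataset("imdb")               # 25,000 labeled training reviews
    rotten = load_dataset("rotten_tomatoes")  # train / validation / test splits
    return imdb, rotten
```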