naveedui committed
Commit 389f26a
Parent(s): 192284a

Update README.md

Files changed (1):
  1. README.md +9 -4
README.md CHANGED
@@ -8,7 +8,7 @@ model-index:
   results: []
 ---
 
-<!-- This model is a fine-tuned version of distilbert-base-uncased, tailored specifically for sentiment analysis. DistilBERT, a distilled version of the more complex BERT model, offers a good balance between performance and resource efficiency, making it ideal for environments where computational resources are limited. -->
+This model is a fine-tuned version of distilbert-base-uncased, tailored specifically for sentiment analysis. DistilBERT, a distilled version of the more complex BERT model, offers a good balance between performance and resource efficiency, making it ideal for environments where computational resources are limited.
 
 # results
 
@@ -18,17 +18,22 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This model is a fine-tuned version of distilbert-base-uncased, tailored specifically for sentiment analysis. DistilBERT, a distilled version of the more complex BERT model, offers a good balance between performance and resource efficiency, making it ideal for environments where computational resources are limited.
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended for use in NLP applications where sentiment analysis of English movie reviews is required. It can be easily integrated into applications for analyzing customer feedback, conducting market research, or enhancing user experience by understanding sentiments expressed in text.
+
+The current model is specifically tuned for sentiments in movie reviews and may not perform as well when used on texts from other domains. Additionally, the model's performance might vary depending on the nature of the text, such as informal language or idioms that were not prevalent in the training data.
 
 ## Training and evaluation data
 
-More information needed
+The model was fine-tuned using the IMDb movie reviews dataset available through HuggingFace's datasets library. This dataset comprises 50,000 highly polar movie reviews split evenly into training and test sets, providing rich text data for training sentiment analysis models. For the purpose of fine-tuning, only 10% of the training set was used to expedite the training process while maintaining a representative sample of the data.
 
 ## Training procedure
+The fine-tuning was performed on Google Colab, utilizing the pre-configured DistilBERT model loaded from HuggingFace's transformers library. The model was fine-tuned for 3 epochs with a batch size of 8 and a learning rate of 5e-5. Special care was taken to maintain the integrity of the tokenization using DistilBERT's default tokenizer, ensuring that the input data was appropriately pre-processed.
 
 ### Training hyperparameters
 
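The new "Intended uses & limitations" text describes dropping the model into applications that need sentiment analysis of English movie reviews. A minimal inference sketch with the transformers pipeline API is below; the checkpoint id `naveedui/results` is a placeholder, not the confirmed repository id of this model.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual id of this fine-tuned checkpoint.
classifier = pipeline("sentiment-analysis", model="naveedui/results")

reviews = [
    "A gripping story with wonderful performances from the whole cast.",
    "Two hours of my life I will never get back.",
]
for review, prediction in zip(reviews, classifier(reviews)):
    # Label names depend on how labels were configured during fine-tuning
    # (e.g. LABEL_0/LABEL_1 unless an id2label mapping was saved with the model).
    print(prediction["label"], round(prediction["score"], 3), "-", review)
```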
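The updated "Training and evaluation data" section states that fine-tuning used the IMDb reviews dataset from the datasets library, keeping only 10% of the training split. A small sketch of that data selection follows; the card does not say how the subset was drawn, so the slice syntax here is an assumption.

```python
from datasets import load_dataset

# IMDb: 25,000 training and 25,000 test reviews, labelled positive/negative.
full_train = load_dataset("imdb", split="train")
print(len(full_train))  # 25000

# The card says only 10% of the training set was used; one simple way to take
# such a subset is the datasets slice syntax (assumed here, not stated in the card).
small_train = load_dataset("imdb", split="train[:10%]")
print(len(small_train))  # 2500
```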
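The updated "Training procedure" section mentions DistilBERT's default tokenizer and fine-tuning for 3 epochs with a batch size of 8 and a learning rate of 5e-5. A rough reconstruction of that setup with the Trainer API is sketched below; everything beyond those three stated hyperparameters (output directory, evaluation subset, padding/truncation settings) is an assumption.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# 10% of the training split, as described in the card; the evaluation subset
# size is not specified, so 10% of the test split is used here as an assumption.
train_ds = load_dataset("imdb", split="train[:10%]")
eval_ds = load_dataset("imdb", split="test[:10%]")

def tokenize(batch):
    # DistilBERT's default tokenizer; truncate long reviews to the model's max length.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

train_ds = train_ds.map(tokenize, batched=True)
eval_ds = eval_ds.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="results",            # matches the model name in the card
    num_train_epochs=3,              # stated in the card
    per_device_train_batch_size=8,   # stated in the card
    learning_rate=5e-5,              # stated in the card
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```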