hamzawaheed committed
Commit 3973858
1 Parent(s): 7d6c9f3

Model save

Files changed (2)
  1. README.md +44 -107
  2. model.safetensors +1 -1
README.md CHANGED
@@ -3,126 +3,63 @@ library_name: transformers
  license: apache-2.0
  base_model: distilbert-base-uncased
  tags:
- - emotion-classification
- - text-classification
- - distilbert
  metrics:
- - accuracy
  ---

  # emotion-classification-model

- This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased).
  It achieves the following results on the evaluation set:
- - **Loss:** 0.1789
- - **Accuracy:** 0.931
-
- ## Model Description
-
- The **Emotion Classification Model** is a fine-tuned version of the `distilbert-base-uncased` transformer architecture, adapted specifically for classifying text into six distinct emotions. DistilBERT, a distilled version of BERT, offers a lightweight yet powerful foundation, enabling efficient training and inference without significant loss in performance.
-
- This model leverages the pre-trained language understanding capabilities of DistilBERT to accurately categorize textual data into the following emotion classes:
-
- - **Joy**
- - **Sadness**
- - **Anger**
- - **Fear**
- - **Surprise**
- - **Disgust**
-
- By fine-tuning on the `dair-ai/emotion` dataset, the model has been optimized to recognize and differentiate subtle emotional cues in various text inputs, making it suitable for applications that require nuanced sentiment analysis and emotional intelligence.
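
As a quick illustration of how such a classifier is typically used, the sketch below loads the fine-tuned checkpoint with the `transformers` pipeline API. The local path is a placeholder for wherever this checkpoint is stored, and the printed label and score are purely illustrative.

```python
from transformers import pipeline

# Placeholder path: point this at the fine-tuned checkpoint from this repository.
classifier = pipeline("text-classification", model="./emotion-classification-model")

# The pipeline returns the highest-scoring emotion label for each input text.
print(classifier("I can't believe how wonderful this day turned out!"))
# Illustrative output: [{'label': 'joy', 'score': 0.98}]
```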
34
-
35
- ## Intended Uses & Limitations
36
-
37
- ### Intended Uses
38
-
39
- The Emotion Classification Model is designed for a variety of applications where understanding the emotional tone of text is crucial. Suitable use cases include:
40
-
41
- - **Sentiment Analysis:** Gauging customer feedback, reviews, and social media posts to understand emotional responses.
42
- - **Mental Health Monitoring:** Assisting therapists and counselors by analyzing patient communications for emotional indicators.
43
- - **Social Media Analysis:** Tracking and analyzing emotional trends and public sentiment across platforms like Twitter, Facebook, and Instagram.
44
- - **Content Recommendation:** Enhancing recommendation systems by aligning content suggestions with users' current emotional states.
45
- - **Chatbots and Virtual Assistants:** Enabling more empathetic and emotionally aware interactions with users.
46
-
47
- ### Limitations
48
-
49
- While the Emotion Classification Model demonstrates strong performance across various tasks, it has certain limitations:
50
-
51
- - **Bias in Training Data:** The model may inherit biases present in the `dair-ai/emotion` dataset, potentially affecting its performance across different demographics, cultures, or contexts.
52
- - **Contextual Understanding:** The model analyzes text in isolation and may struggle with understanding nuanced emotions that depend on broader conversational context or preceding interactions.
53
- - **Language Constraints:** Currently optimized for English, limiting its effectiveness with multilingual or non-English inputs without further training or adaptation.
54
- - **Emotion Overlap:** Some emotions have overlapping linguistic cues, which may lead to misclassifications in complex or ambiguous text scenarios.
55
- - **Dependence on Text Quality:** The model's performance can degrade with poorly structured, slang-heavy, or highly informal text inputs.
56
-
57
- ## Training and Evaluation Data
58
 
59
- ### Dataset
60
 
61
- The model was trained and evaluated on the [`dair-ai/emotion`](https://huggingface.co/datasets/dair-ai/emotion) dataset, a comprehensive collection of textual data annotated for emotion classification.
62
 
63
- ### Dataset Statistics
64
 
65
- - **Total Samples:** 20,000
66
- - **Training Set:** 16,000 samples
67
- - **Validation Set:** 2,000 samples
68
- - **Test Set:** 2,000 samples
69
- - **Emotion Classes:** 6
70
- - **Joy:** 3,000 samples
71
- - **Sadness:** 3,500 samples
72
- - **Anger:** 2,500 samples
73
- - **Fear:** 2,000 samples
74
- - **Surprise:** 4,000 samples
75
- - **Disgust:** 2,000 samples
76
 
77
- ### Data Preprocessing
78
 
79
- Prior to training, the dataset underwent the following preprocessing steps:
80
 
81
- 1. **Tokenization:** Utilized the `DistilBertTokenizerFast` from the `distilbert-base-uncased` model to tokenize the input text. Each text sample was converted into token IDs, ensuring compatibility with the DistilBERT architecture.
82
- 2. **Padding & Truncation:** Applied padding and truncation to maintain a uniform sequence length of 32 tokens. This step ensures efficient batching and consistent input dimensions for the model.
83
- 3. **Batch Processing:** Employed parallel processing using all available CPU cores minus one to expedite the tokenization process across training, validation, and test sets.
84
- 4. **Format Conversion:** Converted the tokenized datasets into PyTorch tensors to facilitate seamless integration with the PyTorch-based `Trainer` API.
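
A minimal sketch of these preprocessing steps, assuming the `dair-ai/emotion` dataset and the Hugging Face `datasets` API; the helper name `tokenize` and the exact `map` arguments are illustrative rather than taken from the original training script.

```python
import os

from datasets import load_dataset
from transformers import DistilBertTokenizerFast

dataset = load_dataset("dair-ai/emotion")
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Step 2: pad and truncate every example to a fixed length of 32 tokens.
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=32)

# Step 3: tokenize all splits in parallel using all CPU cores minus one.
num_proc = max(1, (os.cpu_count() or 2) - 1)
dataset = dataset.map(tokenize, batched=True, num_proc=num_proc)

# Step 4: expose the columns the Trainer needs as PyTorch tensors.
dataset.set_format("torch", columns=["input_ids", "attention_mask", "label"])
```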

- ### Evaluation Metrics
-
- The model's performance was assessed using the following metrics:
-
- - **Accuracy:** Measures the proportion of correctly predicted samples out of the total samples.
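
For accuracy, a `compute_metrics` hook in the style expected by the `Trainer` might look like the sketch below; this is an assumed implementation using the `evaluate` library, not the original code.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); take the arg-max class per example.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```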
-
- ## Training Procedure
-
- ### Training Hyperparameters

  The following hyperparameters were used during training:
-
- - **Learning Rate:** `6e-05`
- - **Training Batch Size:** `16` per device
- - **Evaluation Batch Size:** `32` per device
- - **Number of Epochs:** `2`
- - **Weight Decay:** `0.01`
- - **Gradient Accumulation Steps:** `2` (effectively simulating a batch size of `32`)
- - **Mixed Precision Training:** Enabled (Native AMP) if CUDA is available
-
- ### Optimization Strategies
-
- - **Mixed Precision Training:** Utilized PyTorch's Native AMP to accelerate training and reduce memory consumption when a CUDA-enabled GPU is available.
- - **Gradient Accumulation:** Implemented gradient accumulation with `2` steps to effectively increase the batch size without exceeding GPU memory limits.
- - **Early Stopping:** Incorporated `EarlyStoppingCallback` with a patience of `2` epochs to halt training if the validation loss does not improve, preventing overfitting.
- - **Checkpointing:** Configured to save model checkpoints at the end of each epoch, retaining only the two most recent checkpoints to manage storage efficiently.
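
Taken together, the hyperparameters and strategies above correspond roughly to a `TrainingArguments`/`Trainer` setup like the following sketch. The values mirror the card, but the output path is a placeholder and `dataset`/`compute_metrics` refer to the earlier preprocessing and metrics sketches, so the exact script remains an assumption.

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

args = TrainingArguments(
    output_dir="./results",               # placeholder output directory
    learning_rate=6e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    weight_decay=0.01,
    gradient_accumulation_steps=2,        # effective batch size of 32
    fp16=torch.cuda.is_available(),       # Native AMP only when CUDA is present
    eval_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=2,                   # keep only the two most recent checkpoints
    load_best_model_at_end=True,          # required for early stopping on validation loss
    logging_dir="./logs",
    logging_steps=10,
    report_to="tensorboard",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],       # tokenized splits from the preprocessing sketch
    eval_dataset=dataset["validation"],
    compute_metrics=compute_metrics,      # accuracy hook from the metrics sketch
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```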
-
- ### Training Duration
-
- - **Total Training Time:** Approximately `2.40` minutes
-
- ### Logging and Monitoring
-
- - **Logging Directory:** `./logs`
- - **Logging Steps:** Every `10` steps
- - **Reporting To:** TensorBoard
- - **Tools Used:** TensorBoard for real-time visualization of training metrics, including loss and accuracy.
-
- ### Training Results
-
- After training, the model achieved the following performance metrics:
-
- - **Validation Accuracy:** `93.10%`
- - **Test Accuracy:** `93.10%`
  license: apache-2.0
  base_model: distilbert-base-uncased
  tags:
+ - generated_from_trainer
  metrics:
+ - accuracy
+ model-index:
+ - name: emotion-classification-model
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
  # emotion-classification-model

+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.1819
+ - Accuracy: 0.93

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure

+ ### Training hyperparameters

  The following hyperparameters were used during training:
+ - learning_rate: 6e-05
+ - train_batch_size: 16
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
+ - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.2197        | 1.0   | 500  | 0.2142          | 0.918    |
+ | 0.1269        | 2.0   | 1000 | 0.1819          | 0.93     |
+
+
+ ### Framework versions
+
+ - Transformers 4.46.2
+ - Pytorch 2.5.1+cu118
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d524fab86ec1d4aa2295b167981db805b6345df6b619a2f0218bb657c93d949f
  size 267844872

  version https://git-lfs.github.com/spec/v1
+ oid sha256:b79f9836c85434ae7bb12bb806d9716782b568903ff97c1f9b2568474df36beb
  size 267844872