Adriana213 committed on
Commit
83190d4
1 Parent(s): df5f0f0

Update Model Card

Files changed (1)
  1. README.md +25 -7
README.md CHANGED
@@ -2,15 +2,19 @@
  license: apache-2.0
  tags:
  - generated_from_keras_callback
  base_model: distilbert-base-uncased
  model-index:
  - name: emotion-analysis-distilbert
    results: []
  ---

- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
-
  # emotion-analysis-distilbert

  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
@@ -19,15 +23,18 @@ It achieves the following results on the evaluation set:

  ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure

@@ -36,14 +43,25 @@ More information needed
  The following hyperparameters were used during training:
  - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  - training_precision: float32

  ### Training results


  ### Framework versions

  - Transformers 4.40.2
  - TensorFlow 2.15.0
  - Datasets 2.19.1
- - Tokenizers 0.19.1
 
  license: apache-2.0
  tags:
  - generated_from_keras_callback
+ - text-classification
+ - sentiment-analysis
  base_model: distilbert-base-uncased
  model-index:
  - name: emotion-analysis-distilbert
    results: []
+ metrics:
+ - accuracy
+ - f1
+ - confusion_matrix
+ library_name: transformers
  ---

  # emotion-analysis-distilbert

  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.

  ## Model description

+ The model is based on the DistilBERT architecture, a distilled version of BERT that offers efficient inference with only a small reduction in accuracy. This checkpoint has been fine-tuned to predict emotions from text inputs.

  ## Intended uses & limitations

+ This model is intended for text classification tasks, particularly sentiment analysis and emotion recognition, where input texts are categorized into predefined emotion classes. It can be used in applications such as chatbots, social media sentiment analysis, and customer feedback analysis.
+
+ The model's performance may vary with the diversity and complexity of the emotional expressions in the input data, and it may not generalize well to other domains or languages without further adaptation.
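As a minimal sketch of what inference looks like under the hood (not code from this repository): the classifier head emits one logit per emotion, softmax turns the logits into probabilities, and an `id2label` mapping names the argmax. The six labels below are those of the `emotion` dataset in an assumed order, and the logit values are made up for illustration; with `transformers`, `pipeline("text-classification", ...)` performs these steps for you.

```python
import math

# Assumed label order -- check the model's id2label config before relying on it.
ID2LABEL = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable form)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Return the highest-probability emotion label and its score."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[idx], probs[idx]

# Made-up logits for one input sentence; index 1 ("joy") dominates.
label, score = predict_label([-1.2, 4.8, 0.3, -0.5, -2.0, 0.1])
print(label, round(score, 3))
```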

  ## Training and evaluation data

+ The model was trained and evaluated on the "emotion" dataset, which contains text samples labeled with emotion categories, split into training, validation, and test sets.

  ## Training procedure

  The following hyperparameters were used during training:
  - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  - training_precision: float32
+ - Optimizer: Adam with a learning rate of 5e-05, beta1=0.9, beta2=0.999, and epsilon=1e-07.
+ - Batch size: 64
+ - Number of epochs: 3
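For readers unfamiliar with those optimizer fields, here is a pure-Python sketch of the standard Adam update rule using the hyperparameters above (lr=5e-05, beta_1=0.9, beta_2=0.999, epsilon=1e-07). This illustrates the math only; it is not the Keras implementation.

```python
def adam_step(param, grad, m, v, t,
              lr=5e-05, beta1=0.9, beta2=0.999, eps=1e-07):
    """One Adam update for a single scalar parameter.

    m and v are the running first/second moment estimates;
    t is the 1-based step count. Returns (param, m, v).
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment EMA
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# On the first step the bias-corrected update has magnitude close to lr,
# regardless of the raw gradient scale.
p, m, v = adam_step(0.0, 0.5, m=0.0, v=0.0, t=1)
```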

  ### Training results

+ - Accuracy: 0.9305
+ - F1 Score: 0.9300
+
+ ### Evaluation Metrics
+
+ The model's performance was evaluated using the following metrics:
+ - Accuracy: the proportion of correctly predicted labels.
+ - F1 Score: the support-weighted average of per-class F1 scores, which provides a balanced measure for multi-class classification.
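To make the two metrics concrete, a plain-Python sketch of how they are computed on made-up predictions; this mirrors the semantics of `accuracy_score` and `f1_score(average='weighted')` in scikit-learn, which is the usual tooling here.

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Support-weighted mean of per-class F1 scores."""
    support = Counter(y_true)
    total = 0.0
    for cls, n in support.items():
        tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == cls for p in y_pred)
        precision = tp / pred_pos if pred_pos else 0.0
        recall = tp / n
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += n * f1  # weight each class's F1 by its support
    return total / len(y_true)

# Made-up labels over three emotion classes, for illustration only.
y_true = ["joy", "joy", "anger", "fear", "joy", "anger"]
y_pred = ["joy", "anger", "anger", "fear", "joy", "anger"]
print(accuracy(y_true, y_pred), weighted_f1(y_true, y_pred))
```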

  ### Framework versions

  - Transformers 4.40.2
  - TensorFlow 2.15.0
  - Datasets 2.19.1
+ - Tokenizers 0.19.1