christinacdl committed on
Commit b97c019
1 Parent(s): 2fc9414

Update README.md

Files changed (1): README.md (+65 -10)

README.md CHANGED
@@ -5,9 +5,23 @@ tags:
 - generated_from_trainer
 metrics:
 - accuracy
 model-index:
 - name: XLM_RoBERTa-Clickbait-Detection-new
   results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,24 +29,27 @@ should probably proofread and complete it, then remove this comment. -->

 # XLM_RoBERTa-Clickbait-Detection-new

-This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.1071
 - Micro F1: 0.9834
 - Macro F1: 0.9833
 - Accuracy: 0.9834

-## Model description
-
-More information needed

 ## Intended uses & limitations

 More information needed

-## Training and evaluation data
-
-More information needed

 ## Training procedure
@@ -48,9 +65,47 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 4
-
-### Training results
-

 ### Framework versions

@@ -58,4 +113,4 @@ The following hyperparameters were used during training:
 - Transformers 4.36.1
 - Pytorch 2.1.0+cu121
 - Datasets 2.13.1
-- Tokenizers 0.15.0
 
 - generated_from_trainer
 metrics:
 - accuracy
+- f1
+- recall
+- precision
 model-index:
 - name: XLM_RoBERTa-Clickbait-Detection-new
   results: []
+datasets:
+- christinacdl/clickbait_detection_dataset
+language:
+- en
+- el
+- ru
+- ro
+- de
+- it
+- es
+pipeline_tag: text-classification
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You

 # XLM_RoBERTa-Clickbait-Detection-new

+This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the christinacdl/clickbait_detection_dataset dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.1071
 - Micro F1: 0.9834
 - Macro F1: 0.9833
 - Accuracy: 0.9834

+It achieves the following results on the test set:
+- Accuracy: 0.9839
+- Micro-F1 Score: 0.9839
+- Macro-F1 Score: 0.9838
+- Matthews Correlation Coefficient: 0.9677
+
+- Precision per class: [0.9816, 0.9860]
+- Recall per class: [0.9843, 0.9835]
+- F1 score per class: [0.9829, 0.9847]
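For single-label classification, micro-averaged F1 pools true/false positives across all classes, so it is identical to plain accuracy; that is why the Micro F1 and Accuracy figures above coincide. A minimal pure-Python sketch with toy labels (not the model's actual predictions) illustrating how the averages relate:

```python
def per_class_counts(y_true, y_pred, label):
    """True positives, false positives, false negatives for one class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    return tp, fp, fn

def f1(tp, fp, fn):
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def metrics(y_true, y_pred):
    labels = sorted(set(y_true))
    counts = [per_class_counts(y_true, y_pred, lab) for lab in labels]
    # Micro-F1 pools TP/FP/FN over classes; in a single-label task every
    # misclassification is one FP and one FN, so micro-F1 equals accuracy.
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    micro = f1(tp, fp, fn)
    # Macro-F1 gives every class equal weight regardless of its support.
    macro = sum(f1(*c) for c in counts) / len(labels)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return acc, micro, macro

# Toy binary labels (0 = not clickbait, 1 = clickbait), NOT real model output.
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
acc, micro, macro = metrics(y_true, y_pred)
```

With these toy labels both classes have the same F1, so macro-F1 also matches, but in general macro-F1 diverges from accuracy when classes are imbalanced.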
 ## Intended uses & limitations

 More information needed

 ## Training procedure

 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 4
+- early_stopping_patience: 2
+- adam_epsilon: 1e-08
+- gradient_checkpointing: True
+- max_grad_norm: 1.0
+- seed: 42
+- optimizer: adamw_torch_fused
+- max_steps: -1
+- warmup_ratio: 0
+- group_by_length: True
+- max_seq_length: 512
+- save_steps: 1000
+- logging_steps: 500
+- evaluation_strategy: epoch
+- save_strategy: epoch
+- eval_steps: 1000
+- save_total_limit: 2
+
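With `evaluation_strategy: epoch` and `early_stopping_patience: 2`, training halts once the monitored eval metric fails to improve for two consecutive epochs. A minimal sketch of that stopping rule (a generic re-implementation, not the `transformers` `EarlyStoppingCallback` itself, which also supports an improvement threshold):

```python
def early_stop_epoch(eval_losses, patience=2):
    """Return the 1-based epoch at which training stops, or None.

    Stops once the eval loss has failed to improve on the best value
    seen so far for `patience` consecutive evaluations.
    """
    best = float("inf")
    bad_evals = 0
    for epoch, loss in enumerate(eval_losses, start=1):
        if loss < best:
            best = loss
            bad_evals = 0
        else:
            bad_evals += 1
            if bad_evals >= patience:
                return epoch
    return None

# Loss improves, then stalls for two epochs in a row -> stop at epoch 4.
stopped_at = early_stop_epoch([0.30, 0.12, 0.13, 0.14, 0.10])
```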
+
+### All results from Training and Evaluation
+- "epoch": 4.0,
+- "eval_accuracy": 0.9844203855294428,
+- "eval_loss": 0.08027808368206024,
+- "eval_macro_f1": 0.9843695357857132,
+- "eval_micro_f1": 0.9844203855294428,
+- "eval_runtime": 124.9733,
+- "eval_samples": 3787,
+- "eval_samples_per_second": 30.302,
+- "eval_steps_per_second": 1.896,
+- "predict_accuracy": 0.9838922630050172,
+- "predict_loss": 0.07716809958219528,
+- "predict_macro_f1": 0.9838416247418498,
+- "predict_micro_f1": 0.9838922630050172,
+- "predict_runtime": 127.7861,
+- "predict_samples": 3787,
+- "predict_samples_per_second": 29.635,
+- "predict_steps_per_second": 1.855,
+- "train_loss": 0.057462599486458765,
+- "train_runtime": 25253.576,
+- "train_samples": 30296,
+- "train_samples_per_second": 4.799,
+- "train_steps_per_second": 0.15
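The throughput figures in the dump are internally consistent: the Trainer's `train_samples_per_second` counts every pass over the data, so it follows from `train_samples`, `num_epochs`, and `train_runtime`. A quick arithmetic cross-check using the values above:

```python
train_samples = 30296
num_epochs = 4
train_runtime = 25253.576  # seconds

# samples/sec counts all epochs' passes over the data, hence the epoch factor
throughput = train_samples * num_epochs / train_runtime  # ~4.799

# eval set: 3787 samples processed in 124.9733 s
eval_throughput = 3787 / 124.9733  # ~30.302
```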

 ### Framework versions

 - Transformers 4.36.1
 - Pytorch 2.1.0+cu121
 - Datasets 2.13.1
+- Tokenizers 0.15.0