ppsingh committed on
Commit 9305c5f
1 Parent(s): 39b22d8

Update README.md

Files changed (1): README.md (+36 -4)
README.md CHANGED
@@ -6,6 +6,17 @@ tags:
 model-index:
 - name: ADAPMIT-multilabel-climatebert
   results: []
+datasets:
+- GIZ/policy_classification
+co2_eq_emissions:
+  emissions: 23.3572576873636
+  source: codecarbon
+  training_type: fine-tuning
+  on_cloud: true
+  cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
+  ram_total_size: 12.6747894287109
+  hours_used: 0.529
+  hardware_used: 1 x Tesla T4
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
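The co2_eq_emissions block added above was measured with codecarbon, per its source field. As a minimal sketch of how such a figure is collected around a fine-tuning run (the training function below is a placeholder, not part of this commit):

```python
# Minimal sketch: tracking training emissions with codecarbon.
from codecarbon import EmissionsTracker

def training_run():
    # Placeholder for the actual fine-tuning loop (e.g., transformers Trainer.train()).
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker()    # samples CPU/GPU/RAM power draw while it runs
tracker.start()
try:
    training_run()
finally:
    emissions = tracker.stop()  # total emissions in kg CO2-eq
print(f"Estimated emissions: {emissions} kg CO2-eq")
```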
@@ -13,7 +24,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # ADAPMIT-multilabel-climatebert
 
-This model is a fine-tuned version of [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) on the None dataset.
+This model is a fine-tuned version of [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) on the [Policy-Classification](https://huggingface.co/datasets/GIZ/policy_classification) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.3535
 - Precision-micro: 0.8999
@@ -28,7 +39,8 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+The purpose of this model is to predict multiple labels simultaneously for a given input text. Specifically, the model predicts two
+labels - AdaptationLabel and MitigationLabel - indicating whether a passage addresses climate change adaptation and/or mitigation.
 
 ## Intended uses & limitations
 
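Given the two labels named in the description above, a hedged sketch of multilabel inference with this checkpoint; the repo id and the 0.5 decision threshold are assumptions, not taken from this card:

```python
# Sketch: multilabel inference with the fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ppsingh/ADAPMIT-multilabel-climatebert"  # assumed repo id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The ministry will expand flood defences and coastal protection programmes."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel classification: a sigmoid per label, not a softmax across labels.
probs = torch.sigmoid(logits)[0].tolist()
for label_id, p in enumerate(probs):
    label = model.config.id2label[label_id]
    print(f"{label}: {p:.3f} -> {'relevant' if p > 0.5 else 'not relevant'}")
```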
@@ -36,7 +48,21 @@ More information needed
 
 ## Training and evaluation data
 
-More information needed
+- Training Dataset: 10031 examples
+| Class  | Positive Count of Class |
+|:-------|:------------------------|
+| Action | 5416 |
+| Plans  | 2140 |
+| Policy | 1396 |
+| Target | 2911 |
+
+- Validation Dataset: 932 examples
+| Class  | Positive Count of Class |
+|:-------|:------------------------|
+| Action | 513 |
+| Plans  | 198 |
+| Policy | 122 |
+| Target | 256 |
 
 ## Training procedure
 
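The counts added above come from the GIZ/policy_classification dataset referenced in the metadata. A sketch of how one might reproduce such per-class tallies; the split and column names are assumptions, since the card does not show the dataset schema:

```python
# Sketch: tallying positive label counts in the linked dataset.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("GIZ/policy_classification", split="train")  # split name assumed

counts = Counter()
for example in ds:
    for label in example["labels"]:  # column name assumed; inspect ds to confirm
        counts[label] += 1

print(f"{len(ds)} examples")
for label, n in counts.most_common():
    print(f"{label}: {n}")
```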
@@ -62,10 +88,16 @@ The following hyperparameters were used during training:
 | 0.0767 | 4.0 | 3136 | 0.3367 | 0.8999 | 0.8563 | 0.9000 | 0.9173 | 0.8588 | 0.9173 | 0.9085 | 0.8524 | 0.9085 |
 | 0.0475 | 5.0 | 3920 | 0.3535 | 0.8999 | 0.8559 | 0.9001 | 0.9173 | 0.8592 | 0.9173 | 0.9085 | 0.8521 | 0.9085 |
 
+| label  | precision | recall | f1-score | support |
+|:------:|:---------:|:------:|:--------:|:-------:|
+| Action | 0.828     | 0.807  | 0.817    | 513.0   |
+| Plans  | 0.560     | 0.707  | 0.625    | 198.0   |
+| Policy | 0.727     | 0.786  | 0.756    | 122.0   |
+| Target | 0.741     | 0.886  | 0.808    | 256.0   |
 
 ### Framework versions
 
 - Transformers 4.38.1
 - Pytorch 2.1.0+cu121
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
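The per-label table added above follows the layout of scikit-learn's classification report. A minimal sketch of producing such a report from binarized multilabel predictions; the arrays below are random placeholders, not the model's actual outputs:

```python
# Sketch: per-label precision/recall/F1 in the layout of the table above.
# y_true / y_pred are placeholder 0/1 indicator matrices, one column per label.
import numpy as np
from sklearn.metrics import classification_report

label_names = ["Action", "Plans", "Policy", "Target"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(932, 4))  # placeholder ground truth
y_pred = rng.integers(0, 2, size=(932, 4))  # placeholder thresholded predictions

print(classification_report(y_true, y_pred, target_names=label_names, zero_division=0))
```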
 