dstefa committed
Commit 01296cd
1 Parent(s): 39d71ff

Update README.md

Files changed (1)
  1. README.md +69 -14
README.md CHANGED
 
@@ -2,15 +2,46 @@
license: mit
base_model: roberta-base
tags:
- - generated_from_trainer
+ - stress
+ - classification
+ - glassdoor
metrics:
- accuracy
- f1
- precision
- recall
+ widget:
+ - text: >-
+     They also caused so much stress because some leaders valued optics over output.: Stressed
+ - text: >-
+     Way too much work pressure.: Stressed
+ - text: >-
+     Understaffed, lots of deck revisions, unpredictable, terrible technology.: Stressed
+ - text: >-
+     Nice environment, good work life balance.: Not Stressed
model-index:
- - name: roberta-base_stress_classification
-   results: []
+ - name: roberta-base_stress_classification
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: Glassdoor Employee Reviews
+       type: glassdoor
+     metrics:
+     - type: f1
+       name: F1
+       value: 0.97
+     - type: accuracy
+       name: accuracy
+       value: 0.97
+     - type: precision
+       name: precision
+       value: 0.97
+     - type: recall
+       name: recall
+       value: 0.97
+ pipeline_tag: text-classification
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
@@ -18,7 +49,7 @@ should probably proofread and complete it, then remove this comment. -->

# roberta-base_stress_classification

- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
+ This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on a Glassdoor dataset of 100,000 employee reviews.
It achieves the following results on the evaluation set:
- Loss: 0.1800
- Accuracy: 0.9647
 
@@ -26,17 +57,14 @@ It achieves the following results on the evaluation set:
- Precision: 0.9647
- Recall: 0.9647

- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ ## Training data
+
+ The training data was labeled as follows:
+
+ | Class | Description  |
+ |-------|--------------|
+ | 0     | Not Stressed |
+ | 1     | Stressed     |
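
As an illustration of how these class ids surface at inference time, here is a minimal sketch that maps the model's raw prediction back to the labels above via `config.id2label`. It assumes the checkpoint is published as `dstefa/roberta-base_stress_classification` and that its config carries the two labels shown.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint id is dstefa/roberta-base_stress_classification and its
# config.id2label matches the table above (0 -> "Not Stressed", 1 -> "Stressed").
model_id = "dstefa/roberta-base_stress_classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Way too much work pressure.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape (1, 2): one score per class
predicted_id = logits.argmax(dim=-1).item()  # 0 or 1
print(predicted_id, model.config.id2label[predicted_id])
```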
 
 
 
## Training procedure

@@ -63,9 +91,36 @@ The following hyperparameters were used during training:
  | 0.0618 | 5.0 | 40000 | 0.2128 | 0.9536 | 0.9536 | 0.9546 | 0.9536 |
+ ### Model performance
+
+ |              | precision | recall | f1   | support |
+ |--------------|-----------|--------|------|---------|
+ | Not Stressed | 0.96      | 0.97   | 0.97 | 10000   |
+ | Stressed     | 0.97      | 0.96   | 0.97 | 10000   |
+ | accuracy     |           |        | 0.97 | 20000   |
+ | macro avg    | 0.97      | 0.97   | 0.97 | 20000   |
+ | weighted avg | 0.97      | 0.97   | 0.97 | 20000   |
+
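The breakdown above follows the familiar layout of scikit-learn's `classification_report`: support is the number of evaluation examples per class, macro avg is the unweighted mean of the per-class scores, and weighted avg weights them by support. A minimal sketch of producing such a report, with hypothetical `y_true`/`y_pred` lists standing in for the real 20000-review evaluation split:

```python
from sklearn.metrics import classification_report

# Hypothetical gold and predicted labels; the table in the card was computed on the
# model's own 20000-review evaluation split, not on this toy data.
y_true = ["Stressed", "Not Stressed", "Stressed", "Not Stressed"]
y_pred = ["Stressed", "Not Stressed", "Not Stressed", "Not Stressed"]

print(classification_report(y_true, y_pred, digits=2))
```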
+ ### How to use roberta-base_stress_classification with Hugging Face
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
+
+ # Load the fine-tuned stress classifier from the Hub
+ tokenizer = AutoTokenizer.from_pretrained("dstefa/roberta-base_stress_classification")
+ model = AutoModelForSequenceClassification.from_pretrained("dstefa/roberta-base_stress_classification")
+
+ # device=0 runs on the first GPU; drop the argument to run on CPU
+ pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)
+
+ text = "They also caused so much stress because some leaders valued optics over output."
+ pipe(text)
+ # [{'label': 'Stressed', 'score': 0.9959163069725037}]
+ ```
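To get a score for both classes rather than only the top label, the same pipeline can be called with `top_k=None`; a small sketch, again assuming the checkpoint id `dstefa/roberta-base_stress_classification`:

```python
from transformers import pipeline

# top_k=None returns one entry per label instead of only the highest-scoring one.
pipe = pipeline("text-classification", model="dstefa/roberta-base_stress_classification")

pipe(["Nice environment, good work life balance.",
      "Way too much work pressure."], top_k=None)
# e.g. [[{'label': 'Not Stressed', 'score': ...}, {'label': 'Stressed', 'score': ...}],
#       [{'label': 'Stressed', 'score': ...}, {'label': 'Not Stressed', 'score': ...}]]
```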
### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- - Tokenizers 0.13.2
+ - Tokenizers 0.13.2