---
license: cc
language:
- en
library_name: adapter-transformers
pipeline_tag: text-classification
---

# Model Card for orYx-models/finetuned-roberta-leadership-sentiment-analysis 

- **Model Description:** This model is a finetuned version of the RoBERTa text classifier cardiffnlp/twitter-roberta-base-sentiment-latest. It was trained on a dataset of communications from corporate executives to their therapists. Its primary function is to classify a statement from a corporate executive as "Positive," "Negative," or "Neutral," together with a confidence score for the prediction. As a prototype tool by orYx Models, all feedback and insights will be used to refine the model further.

## Model Details

### Model Information

- **Model Type:** Text Classifier
- **Language(s):** English
- **License:** Creative Commons license family
- **Finetuned from Model:** cardiffnlp/twitter-roberta-base-sentiment-latest

### Model Sources

- **HuggingFace Model ID:** cardiffnlp/twitter-roberta-base-2021-124m
- **Paper:** [TimeLMs: Diachronic Language Models from Twitter](https://arxiv.org/abs/2202.03829)

## Uses

- **Use case:** This sentiment analysis tool can analyze text from anyone within an organization, such as executives, employees, or clients, and assign a sentiment to it.
- **Outcomes:** The tool produces a scored sentiment that can feed downstream analysis, for example assessing the likelihood of events associated with a given sentiment, or building a rating system based on the sentiments expressed in texts.

### Direct Use

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "orYx-models/finetuned-roberta-leadership-sentiment-analysis"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

nlp("The results don't match, but the effort seems to be always high")
# [{'label': 'Positive', 'score': 0.9996090531349182}]
```

- Depending on the text, the predicted label is "Positive", "Negative", or "Neutral", each accompanied by its confidence score.

### Recommendations
- **Continuous Monitoring:** Regularly monitor the model's performance on new data to ensure its effectiveness and reliability over time.
- **Error Analysis:** Conduct thorough error analysis to identify common patterns of misclassification and areas for improvement (see the sketch after this list).
- **Fine-Tuning:** Consider fine-tuning the model further based on feedback and insights from users, to enhance its domain-specific performance.
- **Model Interpretability:** Explore techniques for explaining the model's predictions, such as attention mechanisms or feature importance analysis, to increase trust and understanding of its decisions.
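
As a concrete illustration of the error-analysis step above, here is a minimal sketch. It assumes the `nlp` pipeline from the Direct Use section plus hypothetical `texts` and `gold_labels` lists for a held-out set; none of this is part of the released training code.

```python
from sklearn.metrics import classification_report, confusion_matrix

# Hypothetical held-out data: `texts` is a list of strings and
# `gold_labels` the corresponding reference labels.
preds = [out["label"] for out in nlp(texts)]

# Per-class precision/recall and the confusion matrix reveal which
# sentiments the model most often mixes up.
print(classification_report(gold_labels, preds))
print(confusion_matrix(gold_labels, preds, labels=["Negative", "Neutral", "Positive"]))
```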


## Training Details
```python
from sklearn.model_selection import train_test_split

# Stratified 80/20 split preserves the class balance in both sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, stratify=y)
```

- **Train data:** 80% of 4,396 records ≈ 3,516
- **Validation data:** 20% of 4,396 records ≈ 879


### Training Procedure

- **Dataset Split:** Data divided into 80% training and 20% validation sets.
- **Preprocessing:** Input data tokenized into 'input_ids' and 'attention_mask' tensors.
- **Training Hyperparameters:** Set for training, evaluation, and optimization, including batch size, epochs, and logging strategies.
- **Training Execution:** Model trained with specified hyperparameters, monitored with metrics, and logged for evaluation.
- **Evaluation Metrics:** Model evaluated on loss, accuracy, F1 score, precision, and recall for both training and validation sets.

#### Preprocessing
```python
# One tokenized training example (shapes and values illustrative)
{
    'input_ids': tensor([...]),       # token IDs for the encoded text
    'attention_mask': tensor([...]),  # 1 for real tokens, 0 for padding
    'label': tensor(2)                # encoded class index
}
```
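
The card does not show the tokenization code itself; a minimal sketch, assuming the base model's tokenizer and a hypothetical label mapping (the exact encoding is not documented):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment-latest")

# Hypothetical label encoding; the card only shows 'label': tensor(2)
label2id = {"Negative": 0, "Neutral": 1, "Positive": 2}

def encode(text, label):
    # Produces the 'input_ids' and 'attention_mask' tensors shown above
    enc = tokenizer(text, truncation=True, padding="max_length", return_tensors="pt")
    enc["label"] = label2id[label]
    return enc
```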

#### Training Hyperparameters
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",
    do_train = True,
    do_eval = True,
    num_train_epochs = 1,
    per_device_train_batch_size = 4,
    per_device_eval_batch_size = 8,
    warmup_steps = 50,
    weight_decay = 0.01,
    logging_strategy= "steps",
    logging_dir= "logging",
    logging_steps = 50,
    eval_steps = 50,
    save_strategy = "steps",
    fp16 = True,
    #load_best_model_at_end = True
)
```
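
The card reports metrics but omits the `Trainer` wiring. Below is a minimal sketch of how these arguments could be put to use, assuming a `compute_metrics` helper (not shown in the original; the macro averaging is likewise an assumption):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import Trainer

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "macro" averaging is an assumption; the card does not specify it
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="macro")
    return {"Accuracy": accuracy_score(labels, preds),
            "F1": f1, "Precision": precision, "Recall": recall}

trainer = Trainer(
    model=model,                  # the finetuned RoBERTa classifier
    args=args,                    # TrainingArguments defined above
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
```

The metric keys correspond to the `eval_Accuracy`, `eval_F1`, `eval_Precision`, and `eval_Recall` columns reported below; the `Trainer` prefixes each key with `eval_`.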
#### Speeds, Sizes, Times

- **TrainOutput**
```
global_step=879,
training_loss=0.1825900522650848,
```
- **Metrics**
```
'train_runtime': 101.6309,
'train_samples_per_second': 34.596,
'train_steps_per_second': 8.649,
'total_flos': 346915041274368.0,
'train_loss': 0.1825900522650848,
'epoch': 1.0
```

## Evaluation Metrics Results

```python
import pandas as pd

# Evaluate the trained model on both the train and validation splits
q = [trainer.evaluate(eval_dataset=ds) for ds in [train_dataset, val_dataset]]

# Build a DataFrame and keep the first five columns (loss plus the four metrics)
result_df = pd.DataFrame(q, index=["train", "val"]).iloc[:, :5]
print(result_df)
```
```
       eval_loss  eval_Accuracy   eval_F1  eval_Precision  eval_Recall
train   0.049349       0.988908  0.987063        0.982160     0.992357
val     0.108378       0.976136  0.972464        0.965982     0.979861
```

| Metric    | Train            | Validation       |
|-----------|------------------|------------------|
| Loss      | 0.049349         | 0.108378         |
| Accuracy  | 0.988908 (98.9%) | 0.976136 (97.6%) |
| F1        | 0.987063 (98.7%) | 0.972464 (97.2%) |
| Precision | 0.982160 (98.2%) | 0.965982 (96.6%) |
| Recall    | 0.992357 (99.2%) | 0.979861 (98.0%) |




## Environmental Impact


Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** T4 GPU
- **Hours used:** 2 
- **Cloud Provider:** Google
- **Compute Region:** India
- **Carbon Emitted:** No Information Available
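
For a programmatic estimate instead of the calculator, one option is the codecarbon library; this sketch was not part of the original run and is shown only as a possible approach:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()   # detects hardware (e.g. the T4 GPU) automatically
tracker.start()
trainer.train()                # the training run to be measured
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```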


### Compute Infrastructure

Google Colab - T4 GPU


### References 
```bibtex
@inproceedings{camacho-collados-etal-2022-tweetnlp,
    title = "{T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media",
    author = "Camacho-collados, Jose  and
      Rezaee, Kiamehr  and
      Riahi, Talayeh  and
      Ushio, Asahi  and
      Loureiro, Daniel  and
      Antypas, Dimosthenis  and
      Boisson, Joanne  and
      Espinosa Anke, Luis  and
      Liu, Fangyu  and
      Mart{\'\i}nez C{\'a}mara, Eugenio and others",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.5",
    pages = "38--49"
}

```


## Model Card Authors

Vineedhar, relkino

## Model Card Contact

https://khalidalhosni.com/