Vineedhar committed on
Commit 4be05be
1 Parent(s): 9582066

Update README.md

Files changed (1)
  1. README.md +19 -23
README.md CHANGED
@@ -9,11 +9,11 @@ pipeline_tag: text-classification
 
 # Model Card for orYx-models/finetuned-roberta-leadership-sentiment-analysis
 
-<!-- This model is a finetuned version of, roberta text classifier.
-The finetuning has been done on the dataset which includes inputs from corporate executives to their therapist.
-The sole purpose of the model is to determine wether the statement made from the corporate executives is "Positive, Negative, or Neutral" with which we will also see "Confidence level, i.e the percentage of the sentiment involved in a statement.
-The sentiment analysis tool has been particularly built for our client firm called "LDS".
-Since it is prototype tool by orYx Models, all the feedback and insights from LDS will be used to finetune the model further.-->
+- This model is a fine-tuned version of the RoBERTa text classifier.
+- The fine-tuning was done on a dataset of inputs from corporate executives to their therapist.
+- The sole purpose of the model is to determine whether a statement made by a corporate executive is "Positive", "Negative", or "Neutral", together with a confidence level, i.e. the percentage of the sentiment involved in the statement.
+- The sentiment analysis tool has been built specifically for our client firm, "LDS".
+- Since it is a prototype tool by orYx Models, all feedback and insights from LDS will be used to fine-tune the model further.
 
 
 
@@ -21,10 +21,9 @@ Since it is prototype tool by orYx Models, all the feedback and insights from LD
 
 ### Model Description
 
-<!-- This model is finetuned on a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021,
-and finetuned for sentiment analysis with the TweetEval benchmark.
-The original Twitter-based RoBERTa model can be found here and the original reference paper is TweetEval.
-This model is suitable for English. -->
+- This model is fine-tuned from a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and fine-tuned for sentiment analysis with the TweetEval benchmark.
+- The original Twitter-based RoBERTa model can be found here, and the original reference paper is TweetEval.
+- This model is suitable for English.
 
 
 
@@ -37,18 +36,18 @@ This model is suitable for English. -->
 
 ### Model Sources [optional]
 
-<!--This is HuggingFace modelID - cardiffnlp/twitter-roberta-base-2021-124m-->
+This is the Hugging Face model ID - cardiffnlp/twitter-roberta-base-2021-124m
 
 - **Repository:** More Information Needed
 - **Paper [optional]:** TimeLMs - https://arxiv.org/abs/2202.03829
 
 ## Uses
 
-<!-- The Sentiment Analysis tool is made domain specific, however since it is a protoype, the depths into domain are still to be ventured.
-Use case: We can analyse the text from any executive, employee, client of an organization and attach a sentiment to it.
-The outcomes of this will be a "Scored sentiment" upon which we can look for likeliness of an event occurring or vice versa.
-The resultant scenario to this will be to generate a rating system based on the sentiments generated by texts from an entity.
--->
+- The sentiment analysis tool is domain-specific; however, since it is a prototype, the depths of the domain are still to be explored.
+
+- **Use case:** We can analyse text from any executive, employee, or client of an organization and attach a sentiment to it.
+- The outcome is a "scored sentiment", from which we can estimate the likelihood of an event occurring, or vice versa.
+- The resulting scenario is a rating system based on the sentiments of the texts from an entity.
 
 ### Direct Use
 
@@ -59,12 +58,6 @@ nlp("The results don't match but the effort seems to be always high")
 Out[7]: [{'label': 'Positive', 'score': 0.9996090531349182}]
 
 
-## Bias, Risks, and Limitations
-
-<!-- The model is sometimes prone to misinterpret the sentiments. -->
-
-{{ bias_risks_limitations | default("[More Information Needed]", true)}}
-
 ### Recommendations
 
 
@@ -80,6 +73,9 @@ Out[7]: [{'label': 'Positive', 'score': 0.9996090531349182}]
 
 X_train, X_val, y_train, y_val = train_test_split(X,y, test_size = 0.2, stratify = y)
 
+- **Train data:** 80% of 4396 records = 3516
+- **Test data:** 20% of 4396 records = 880
+
 
 ### Training Procedure
 
@@ -94,7 +90,7 @@ X_train, X_val, y_train, y_val = train_test_split(X,y, test_size = 0.2, stratify
 
 #### Training Hyperparameters
 
-args = TrainingArguments(
+- args = TrainingArguments(
    output_dir="output",
    do_train = True,
    do_eval = True,
@@ -201,7 +197,7 @@ Google Colab - T4 GPU
 
 ### References
 ```
-@inproceedings{camacho-collados-etal-2022-tweetnlp,
+- @inproceedings{camacho-collados-etal-2022-tweetnlp,
    title = "{T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media",
    author = "Camacho-collados, Jose and
    Rezaee, Kiamehr and
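
The 80/20 record counts added for the `train_test_split(X, y, test_size = 0.2, stratify = y)` call can be sanity-checked with a small sketch; `split_sizes` is a hypothetical helper that mirrors scikit-learn's convention of rounding the test count up and giving the remainder to the training set:

```python
import math

def split_sizes(n_records: int, test_size: float) -> tuple[int, int]:
    # scikit-learn's train_test_split computes the test count as
    # ceil(n * test_size); the training set gets the remaining records.
    n_test = math.ceil(n_records * test_size)
    return n_records - n_test, n_test

train_n, test_n = split_sizes(4396, 0.2)
print(train_n, test_n)  # 3516 880
```

For 4396 records and `test_size = 0.2`, this gives 3516 training and 880 test records, which always sum back to the full dataset.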