KameronB committed
Commit a666c1b
1 Parent(s): adbd1aa

Updated the readme to reflect the new output changes

Files changed (1): README.md +12 -5
README.md CHANGED
@@ -1,8 +1,17 @@
 ---
 license: mit
+datasets:
+- KameronB/SITCC-dataset
+language:
+- en
+tags:
+- IT
+- classification
+- call center
+- grammar
 ---
 ### Synthetic IT Call Center Data Sentence Quality Predictor
-A RoBERTa-base model fine-tuned on a synthetic dataset of good and bad sentences that would be found in IT call center tickets. This model aims to predict the quality of sentences in the context of IT support communications, providing a numerical score from 0 to 10, where 0 represents a poor-quality sentence and 10 represents an ideal-quality sentence.
+A RoBERTa-base model fine-tuned on a synthetic dataset of good and bad sentences that would be found in IT call center tickets. This model aims to predict the quality of sentences in the context of IT support communications, providing a numerical score from 0.0 to 1.0, where 0.0 represents a poor-quality sentence and 1.0 represents an ideal-quality sentence.
 
 #### Model Background
 This model was created out of the necessity to objectively measure the quality of IT call center journaling and improve overall customer service. By leveraging OpenAI's GPT-4 to simulate both effective and ineffective call center agent responses, and then using GPT-4-turbo to rank these responses, we've synthesized a unique dataset that reflects a wide range of possible interactions in an IT support context. The dataset comprises 1,464 items, each scored and annotated with insights into what constitutes quality journaling vs. poor journaling.
@@ -11,7 +20,7 @@ This model was created out of the necessity to objectively measure the quality o
 The foundation of this model is the RoBERTa-base transformer, chosen for its robust performance in natural language understanding tasks. I extended and fine-tuned the last four layers of RoBERTa to specialize in our sentence quality prediction task. This fine-tuning process involved manual adjustments and iterative training sessions to refine the model's accuracy and reduce the Mean Squared Error (MSE) on the validation set.
 
 #### Performance
-After several rounds of training and manual tweaks, the model achieved a validation MSE of approximately 1.66. This metric indicates the model's ability to closely predict the quality scores assigned by the simulated call center manager, with a lower MSE reflecting higher accuracy in those predictions.
+After several rounds of training and manual tweaks, the model achieved a validation MSE of approximately 0.02. This metric indicates the model's ability to closely predict the quality scores assigned by the simulated call center manager, with a lower MSE reflecting higher accuracy in those predictions.
 
 #### Future Work
 The journey to perfecting this model is ongoing. Plans to improve its performance include:
@@ -44,7 +53,6 @@ class SITCC(torch.nn.Module):
         logits = self.regressor(sequence_output)
         return logits
 
-
 def init_model() -> SITCC:
     # Load the model from huggingface
     model_name = "KameronB/sitcc-roberta"
@@ -65,7 +73,6 @@ def init_model() -> SITCC:
 model, tokenizer = init_model()
 
 def predict(sentences):
-
     model.eval()
     inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
     input_ids = inputs['input_ids']
@@ -75,4 +82,4 @@ def predict(sentences):
     outputs = model(input_ids, attention_mask)
 
     return outputs
-
+```
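The architecture notes in the README say only the last four encoder layers of RoBERTa were fine-tuned. As a point of reference, a minimal sketch of that freezing step with the transformers library might look like this; the regression head and optimizer settings here are assumptions, not the author's exact setup:

```python
import torch
from transformers import RobertaModel

# Sketch: freeze RoBERTa-base except its last four encoder layers.
backbone = RobertaModel.from_pretrained("roberta-base")
for param in backbone.parameters():
    param.requires_grad = False
for layer in backbone.encoder.layer[-4:]:  # last four transformer blocks
    for param in layer.parameters():
        param.requires_grad = True

# Assumed regression head: one linear layer mapping the encoder output to a score.
regressor = torch.nn.Linear(backbone.config.hidden_size, 1)

# Optimize only the parameters that remain trainable (assumed hyperparameters).
trainable = [p for p in backbone.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable + list(regressor.parameters()), lr=2e-5)
```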
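The quoted validation MSE is the plain mean squared error between predicted and labeled scores; on the 0.0-1.0 scale, an MSE of 0.02 corresponds to a typical per-sentence error of about sqrt(0.02) ≈ 0.14. A toy illustration with invented numbers:

```python
import torch

# Mean squared error between predicted quality scores and labels (values invented).
predicted = torch.tensor([0.85, 0.20, 0.55])
labels = torch.tensor([0.90, 0.10, 0.60])
print(torch.nn.functional.mse_loss(predicted, labels).item())  # ~0.005
```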
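And a short usage sketch for the init_model()/predict() helpers shown in the diff, assuming the README's script has been run so that torch, model, and tokenizer are in scope; the sample sentences are invented:

```python
# Invented sample ticket journal entries.
sentences = [
    "User reported intermittent VPN drops; collected logs and escalated to the network team.",
    "idk fixed it i guess",
]

with torch.no_grad():  # inference only, no gradients needed
    scores = predict(sentences)  # one regressor output per sentence

for text, score in zip(sentences, scores):
    print(f"{score.item():.2f}  {text}")  # quality on the 0.0-1.0 scale
```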