nharrel committed · verified
Commit b10ee75 · Parent: 9d1da55

Update README.md

Files changed (1): README.md (+51 -0)
README.md CHANGED
@@ -18,6 +18,57 @@ To model this threshold effectively, we opted for the tanh activation function,
 Utilizing this approach, we demonstrated improvements in regression tasks for evaluating stances on each test scenario. While the overall MSE did not show significant improvement, we observed higher accuracy, recall, and precision for the regression tasks. It is important to note that the classification task specified in Qiu et al. (2022) solely determines the presence or absence of the value in question, without considering the specific stance presented in the text. Therefore, our regression task, which assesses the particular stance, should not be directly compared with the classification task from Qiu et al. (2022).

+ ## Usage
+
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ # Load the model and tokenizer
+ model_path = 'nharrel/Valuesnet_DeBERTa_v3'
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForSequenceClassification.from_pretrained(model_path)
+ model.eval()
+
+ # Define maximum length for padding and truncation
+ max_length = 128
+
+ def custom_round(x):
+     # Map the tanh-scaled score to a discrete stance: 1 (supports), -1 (against), 0 (not present)
+     if x >= 0.50:
+         return 1
+     elif x < -0.50:
+         return -1
+     else:
+         return 0
+
+ def predict(text):
+     inputs = tokenizer(text, padding='max_length', truncation=True, max_length=max_length, return_tensors='pt')
+     with torch.no_grad():
+         outputs = model(**inputs)
+
+     # Squash the regression logit to (-1, 1) and round it to a stance label
+     prediction = torch.tanh(outputs.logits).item()
+     return custom_round(prediction)
+
+ def test_sentence(sentence):
+     prediction = predict(sentence)
+     label_map = {-1: 'Against', 0: 'Not Present', 1: 'Supports'}
+     predicted_label = label_map.get(prediction, 'unknown')
+     print(f"Sentence: {sentence}")
+     print(f"Predicted Label: {predicted_label}")
+
+ # Define Schwartz's 10 values
+ schwartz_values = [
+     "BENEVOLENCE", "UNIVERSALISM", "SELF-DIRECTION", "STIMULATION", "HEDONISM",
+     "ACHIEVEMENT", "POWER", "SECURITY", "CONFORMITY", "TRADITION"
+ ]
+
+ # Each test sentence is prefixed with the value being probed, e.g. "[BENEVOLENCE] ..."
+ for value in schwartz_values:
+     print("Values stance is: " + value)
+     test_sentence(f"[{value}] You are a very pleasant person to be around.")

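Regarding the metrics mentioned in the paragraph above: one way to obtain accuracy, recall, and precision from a regression head is to round the tanh-scaled scores to discrete stances with the same ±0.5 threshold used in `custom_round` and compare them against gold labels, while MSE is computed on the raw scores. The sketch below is an illustration only and is not part of this commit; the gold labels and scores are hypothetical, and the evaluation code behind the reported numbers may differ.

```python
# Illustrative only: hypothetical labels/scores, assuming the +/-0.50 rounding threshold above.
import numpy as np
from sklearn.metrics import accuracy_score, mean_squared_error, precision_score, recall_score

def round_stance(score: float) -> int:
    # Same thresholding as custom_round() in the Usage example
    if score >= 0.50:
        return 1
    elif score < -0.50:
        return -1
    return 0

# Hypothetical gold stances in {-1, 0, 1} and raw tanh-scaled model scores
y_true = np.array([1, 0, -1, 1, 0, -1])
y_score = np.array([0.83, 0.12, -0.71, 0.41, -0.05, -0.92])

# Round the continuous scores before computing classification-style metrics
y_pred = np.array([round_stance(s) for s in y_score])

print("MSE      :", mean_squared_error(y_true, y_score))    # on the raw scores
print("Accuracy :", accuracy_score(y_true, y_pred))          # on the rounded stances
print("Precision:", precision_score(y_true, y_pred, average='macro', zero_division=0))
print("Recall   :", recall_score(y_true, y_pred, average='macro', zero_division=0))
```
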
  ## Results from Qiu et al. (2022)
74