
AugESC performance

  • Accuracy: 0.4158
  • Accuracy (Top 3): 0.7558
  • Macro F1: 0.2453
  • Macro F1 (Top 3): 0.5343
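"Top 3" here means a prediction counts as correct when the gold strategy appears among the three highest-scoring classes. A minimal sketch of that metric (not the authors' evaluation script; the function name and toy data are illustrative):

```python
import numpy as np

def topk_accuracy(logits, labels, k=3):
    """Fraction of examples whose gold label is among the k highest-scoring classes."""
    topk = np.argsort(logits, axis=-1)[:, -k:]  # indices of the k largest scores per row
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# toy example: 3 examples, 4 classes
logits = np.array([[0.10, 0.70, 0.10, 0.10],
                   [0.40, 0.20, 0.30, 0.10],
                   [0.05, 0.05, 0.20, 0.70]])
labels = np.array([1, 2, 0])
print(topk_accuracy(logits, labels, k=1))  # ≈ 0.33: only the first example is a top-1 hit
print(topk_accuracy(logits, labels, k=3))  # ≈ 0.67: the first two examples are top-3 hits
```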
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda:0"
model_name = "heegyu/TinyLlama-augesc-context"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval().to(device)
```

Top 1 strategy prediction

example = """usr: Hi
sys: Hello, how are you today?
usr: I was scolded by my parents yesterday"""

inputs = tokenizer(example, return_tensors="pt").to(device)
logits = model(**inputs).logits.softmax(-1)
print(logits)

label = logits.argmax(-1).item()


ESCONV_STRATEGY = [
    "Question",
    "Restatement or Paraphrasing",
    "Reflection of feelings",
    "Self-disclosure",
    "Affirmation and Reassurance",
    "Providing Suggestions",
    "Information",
    "Others"
]
id2label = {i:k for i, k in enumerate(ESCONV_STRATEGY)}

print(id2label[label])

Top 3 strategy prediction

example = """usr: Hi
sys: Hello, how are you today?
usr: I was scolded by my parents yesterday"""

inputs = tokenizer(example, return_tensors="pt").to(device)
logits = model(**inputs).logits.softmax(-1)
print(logits)

labels = logits.topk(3)[1][0].tolist()


ESCONV_STRATEGY = [
    "Question",
    "Restatement or Paraphrasing",
    "Reflection of feelings",
    "Self-disclosure",
    "Affirmation and Reassurance",
    "Providing Suggestions",
    "Information",
    "Others"
]
id2label = {i:k for i, k in enumerate(ESCONV_STRATEGY)}

for id in labels:
  print(id2label[id])
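Both snippets feed the model a dialogue flattened into `usr:`/`sys:` prefixed lines. A small helper for building that input (the helper name is hypothetical, and the turn-tag format is inferred from the examples above rather than documented separately):

```python
def format_dialogue(turns):
    """Join (speaker, utterance) pairs into the 'usr:'/'sys:' line format
    used in the example prompts. Assumes speaker tags are already 'usr' or 'sys'."""
    return "\n".join(f"{speaker}: {utterance}" for speaker, utterance in turns)

turns = [
    ("usr", "Hi"),
    ("sys", "Hello, how are you today?"),
    ("usr", "I was scolded by my parents yesterday"),
]
print(format_dialogue(turns))
```

The resulting string can be passed directly to `tokenizer(...)` as in the snippets above.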
Model size: 1.03B parameters (tensor type F32, Safetensors format)

Dataset used to train heegyu/TinyLlama-augesc-context