---
library_name: transformers
datasets:
- thu-coai/augesc
---

Test set performance:

- Top 1 Accuracy: 0.4346
- Top 3 Accuracy: 0.7677
- Top 1 Macro F1: 0.2668
- Top 3 Macro F1: 0.5669

### Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda:0"
model_name = "heegyu/TinyLlama-augesc-context-strategy"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval().to(device)

# Dialogue context: user turns are prefixed with "usr:", system turns with
# "sys[<strategy>]:".
example = """usr: Hi
sys[Question]: Hello, how are you today?
usr: I was scolded by my parents yesterday"""

inputs = tokenizer(example, return_tensors="pt").to(device)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)

label = probs.argmax(-1).item()

ESCONV_STRATEGY = [
    "Question",
    "Restatement or Paraphrasing",
    "Reflection of feelings",
    "Self-disclosure",
    "Affirmation and Reassurance",
    "Providing Suggestions",
    "Information",
    "Others",
]
id2label = {i: k for i, k in enumerate(ESCONV_STRATEGY)}
print(id2label[label])
```
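Since the card also reports Top-3 metrics, you can read off the three highest-probability strategies with `torch.topk`. Below is a minimal sketch; the toy `probs` tensor stands in for the model's softmax output (shape `[1, 8]`, one score per ESConv strategy) so the snippet runs without downloading the model.

```python
import torch

ESCONV_STRATEGY = [
    "Question", "Restatement or Paraphrasing", "Reflection of feelings",
    "Self-disclosure", "Affirmation and Reassurance", "Providing Suggestions",
    "Information", "Others",
]

# Toy probability row standing in for model(**inputs).logits.softmax(-1).
probs = torch.tensor([[0.05, 0.10, 0.02, 0.03, 0.40, 0.27, 0.08, 0.05]])

# Indices of the 3 largest probabilities, highest first.
top = torch.topk(probs, k=3, dim=-1)
top3 = [ESCONV_STRATEGY[i] for i in top.indices[0].tolist()]
print(top3)
```

With the real model, replace the toy tensor with the `probs` computed in the usage snippet above.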