---
library_name: transformers
datasets:
- thu-coai/augesc
---
### Test set performance
- Top 1 Accuracy: 0.4346
- Top 3 Accuracy: 0.7677
- Top 1 Macro F1: 0.2668
- Top 3 Macro F1: 0.5669
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda:0"
model_name = "heegyu/TinyLlama-augesc-context-strategy"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval().to(device)

# Dialogue context: "usr" turns are the help-seeker, "sys[...]" turns are the
# supporter, annotated with the strategy used in that turn.
example = """usr: Hi
sys[Question]: Hello, how are you today?
usr: I was scolded by my parents yesterday"""

inputs = tokenizer(example, return_tensors="pt").to(device)
probs = model(**inputs).logits.softmax(-1)
print(probs)
label = probs.argmax(-1).item()

# ESConv support strategies, in label-id order
ESCONV_STRATEGY = [
    "Question",
    "Restatement or Paraphrasing",
    "Reflection of feelings",
    "Self-disclosure",
    "Affirmation and Reassurance",
    "Providing Suggestions",
    "Information",
    "Others"
]
id2label = {i: k for i, k in enumerate(ESCONV_STRATEGY)}
print(id2label[label])
```
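Since the model is also evaluated with top-3 accuracy, it can be useful to read off the three highest-probability strategies rather than just the argmax. A minimal sketch using `torch.topk`, with a dummy probability row standing in for the real `probs` tensor produced by the snippet above:

```python
import torch

ESCONV_STRATEGY = [
    "Question",
    "Restatement or Paraphrasing",
    "Reflection of feelings",
    "Self-disclosure",
    "Affirmation and Reassurance",
    "Providing Suggestions",
    "Information",
    "Others"
]

# Dummy probabilities standing in for `probs` = model(**inputs).logits.softmax(-1)
probs = torch.tensor([[0.05, 0.40, 0.05, 0.05, 0.25, 0.10, 0.05, 0.05]])

# Indices of the 3 largest probabilities, in descending order
top3 = torch.topk(probs, k=3, dim=-1)
predictions = [ESCONV_STRATEGY[i] for i in top3.indices[0].tolist()]
print(predictions)
```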