---
license: mit
datasets:
- pietrolesci/dialogue_nli
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---

This model is DeBERTa-v3 fine-tuned on the [Dialogue-NLI](https://arxiv.org/abs/1811.00671) dataset. Results on the Dialogue-NLI evaluation splits:

| Split         | Accuracy (%) |
| ------------- | ------------ |
| dev           | 89.44        |
| test          | 91.22        |
| verified_test | 95.36        |
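
The numbers above can be reproduced approximately with a short evaluation loop. This is a minimal sketch, not the original evaluation script: the split name (`validation`) and the column names (`premise`, `hypothesis`, integer `label` in the same order as the model's classes) are assumptions about the `pietrolesci/dialogue_nli` dataset and may need adjusting against its dataset card.

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "zayn1111/deberta-v3-dnli"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, model_max_length=512)
model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)
model.eval()

# Assumed split and column names -- check the dataset card and adjust if needed.
dataset = load_dataset("pietrolesci/dialogue_nli", split="validation")

correct, total = 0, 0
for batch in dataset.iter(batch_size=32):
    inputs = tokenizer(batch["premise"], batch["hypothesis"],
                       truncation=True, padding=True, return_tensors="pt").to(device)
    with torch.no_grad():
        preds = model(**inputs).logits.argmax(-1).cpu()
    # Assumes integer labels stored in the same order as the model's classes
    # (entailment=0, neutral=1, contradiction=2); remap if the dataset differs.
    correct += (preds == torch.tensor(batch["label"])).sum().item()
    total += len(batch["label"])

print(f"dev accuracy: {correct / total:.4f}")
```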


To run the model on a single premise-hypothesis pair:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "zayn1111/deberta-v3-dnli"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, model_max_length=512)
model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)
model.eval()

premise = "i work with a lot of kids in the healthcare industry ."
hypothesis = "i work in the healthcare industry ."

# Encode the pair and pass the full encoding (input ids and attention mask)
# to the model.
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)

with torch.no_grad():
    output = model(**inputs)

# Convert logits to class probabilities (in %).
prediction = torch.softmax(output.logits[0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
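
Several pairs can also be scored in one padded batch. The sketch below reuses `tokenizer`, `model`, `device`, and `label_names` from the snippet above; the second sentence pair is just illustrative input.

```python
# Score several premise-hypothesis pairs in one padded batch.
pairs = [
    ("i work with a lot of kids in the healthcare industry .",
     "i work in the healthcare industry ."),
    ("i have two dogs .",
     "i do not have any pets ."),
]
premises, hypotheses = zip(*pairs)

inputs = tokenizer(list(premises), list(hypotheses),
                   truncation=True, padding=True, return_tensors="pt").to(device)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, -1)

for (_, hypothesis), row in zip(pairs, probs.tolist()):
    scores = {name: round(p * 100, 1) for name, p in zip(label_names, row)}
    print(hypothesis, scores)
```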