---
language:
- en
license: openrail
pipeline_tag: text-classification
tags:
- Factual Consistency
- Natural Language Inference
- Factual Consistency Evaluation
---
# WeCheck: Factual Consistency Evaluator/Metric (ACL 2023)

*[WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning](https://arxiv.org/abs/2212.10057)*

Open-sourced code: https://github.com/nightdessert/WeCheck
## Model description
WeCheck is a factual consistency metric trained on weakly annotated samples.

This WeCheck checkpoint can be used to check factual consistency for outputs of the following three generation tasks:

**Text Summarization / Knowledge-grounded Dialogue Generation / Paraphrase**

This WeCheck checkpoint was trained using the following three weak labelers:

*[QAFactEval](https://github.com/salesforce/QAFactEval)* / *[SummaC](https://github.com/tingofurro/summac)* / *[NLI warmup](https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli)*
---

## How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "nightdessert/WeCheck"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
model.eval()

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."  # Input for Summarization / Dialogue / Paraphrase
hypothesis = "The movie was not good."  # Output for Summarization / Dialogue / Paraphrase

inputs = tokenizer(premise, hypothesis, truncation="only_first", max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs["input_ids"].to(device)).logits[:, 0]
prediction = torch.sigmoid(logits).tolist()  # probability that the hypothesis is consistent with the premise
print(prediction)  # e.g. [0.884]
```
or apply it to a batch of samples:
```python
# Score a batch of (premise, hypothesis) pairs at once
premise = ["I first thought that I liked the movie, but upon second thought it was actually disappointing."] * 3  # Input list for Summarization / Dialogue / Paraphrase
hypothesis = ["The movie was not good."] * 3  # Output list for Summarization / Dialogue / Paraphrase

batch_tokens = tokenizer(premise, hypothesis, padding=True, truncation="only_first",
                         max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(batch_tokens["input_ids"].to(device),
                   attention_mask=batch_tokens["attention_mask"].to(device)).logits[:, 0]
prediction = torch.sigmoid(logits).tolist()
print(prediction)  # e.g. [0.884, 0.884, 0.884]
```
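
If you need a binary consistent / inconsistent decision rather than a raw probability, you can threshold the sigmoid score. The helper below is a minimal sketch that reuses the `tokenizer`, `model`, and `device` loaded above; the `check_consistency` name and the 0.5 decision threshold are illustrative assumptions, not values prescribed by the paper.

```python
import torch

def check_consistency(premises, hypotheses, threshold=0.5, max_length=512):
    """Return (probability, is_consistent) for each (premise, hypothesis) pair.

    NOTE: the 0.5 threshold is an illustrative assumption, not an official value.
    """
    batch = tokenizer(premises, hypotheses, padding=True, truncation="only_first",
                      max_length=max_length, return_tensors="pt")
    with torch.no_grad():
        logits = model(batch["input_ids"].to(device),
                       attention_mask=batch["attention_mask"].to(device)).logits[:, 0]
    probs = torch.sigmoid(logits).tolist()
    return [(p, p >= threshold) for p in probs]

# Example: a faithful and an unfaithful summary of the same source text
source = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
print(check_consistency([source, source],
                        ["The movie was disappointing.", "The movie was great."]))
```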

