# Factual Consistency Evaluator/Metric from the ACL 2023 paper
*[WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning](https://arxiv.org/abs/2212.10057)*
## Model description
WeCheck is a factual consistency metric trained on weakly annotated samples.
This WeCheck checkpoint can be used to check outputs from the following three generation tasks:
**Text Summarization / Knowledge-Grounded Dialogue Generation / Paraphrase**
This WeCheck checkpoint is trained with the following three weak labelers:
*[QAFactEval](https://github.com/salesforce/QAFactEval)* / *[SummaC](https://github.com/tingofurro/summac)* / *[NLI warmup](https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli)*
---
### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "nightdessert/WeCheck"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."  # source text (input for Summarization / Dialogue / Paraphrase)
hypothesis = "The movie was not good."  # generated text to check (output for Summarization / Dialogue / Paraphrase)

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
logits = model(**inputs).logits[:, 0]  # first logit is the consistency score
prediction = torch.sigmoid(logits).tolist()  # probability that the hypothesis is consistent with the premise
print(prediction)
```
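The sigmoid output above is a probability-like consistency score in [0, 1]. A minimal sketch for turning a batch of such scores into labels; the `label_scores` helper and the 0.5 threshold are illustrative choices, not prescribed by the paper:

```python
def label_scores(scores, threshold=0.5):
    # scores: list of sigmoid probabilities produced by WeCheck
    # threshold: illustrative cutoff; tune it on your own validation data
    return ["consistent" if s >= threshold else "inconsistent" for s in scores]

print(label_scores([0.91, 0.12]))  # -> ['consistent', 'inconsistent']
```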
---
license: openrail
language:
- en
pipeline_tag: text-classification
tags:
- Factual Consistency
- Factual Consistency Evaluation
- Natural Language Inference
---