
Factual Consistency Evaluator/Metric from the ACL 2023 paper

WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning

## Model description

WeCheck is a factual consistency metric trained from weakly annotated samples.

This WeCheck checkpoint can be used to check the following three generation tasks:

Text Summarization / Knowledge-Grounded Dialogue Generation / Paraphrase

This WeCheck checkpoint is trained with the following three weak labelers:

QAFactEval / SummaC / NLI warmup
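
For all three tasks, the inputs are framed the same way as in the example further below: the grounding text is passed as the premise and the model-generated text as the hypothesis. The pairs in this sketch are invented illustrations of that framing, not data from the paper.

```python
# Hypothetical (premise, hypothesis) pairs showing how each task maps onto the checker's inputs
task_examples = {
    # text summarization: source document vs. generated summary
    "summarization": ("The report says profits doubled in 2022.", "Profits doubled in 2022."),
    # knowledge-grounded dialogue: grounding knowledge vs. generated response
    "dialogue": ("The Eiffel Tower is 330 metres tall.", "It stands about 330 metres high."),
    # paraphrase generation: original sentence vs. generated paraphrase
    "paraphrase": ("She postponed the meeting until Friday.", "The meeting was moved to Friday."),
}
```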

## How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "nightdessert/WeCheck"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was not good."

# Tokenize the (premise, hypothesis) pair and move the tensors to the same device as the model
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)

# Convert logits to probabilities over the three NLI-style labels
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
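
For the summarization setting, the checker is applied with the source document as the premise and the generated summary as the hypothesis. The sketch below reuses the tokenizer and model loaded above to score a small batch; the example texts and the reading of the "entailment" probability as the consistency score are illustrative assumptions, not part of the original card.

```python
# Hypothetical batch: source documents as premises, generated summaries as hypotheses
documents = [
    "The company reported a 12% increase in revenue for the third quarter.",
    "The city council voted on Tuesday to approve the new bike-lane plan.",
]
summaries = [
    "Revenue grew by 12% in Q3.",
    "The council rejected the bike-lane plan.",  # deliberately inconsistent
]

batch = tokenizer(documents, summaries, truncation=True, padding=True, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**batch).logits

# Same three-way label mapping as above; higher "entailment" mass suggests a more consistent summary
probs = torch.softmax(logits, dim=-1).tolist()
for summary, (ent, neu, con) in zip(summaries, probs):
    print(f"{summary!r}: entailment={ent:.3f}, neutral={neu:.3f}, contradiction={con:.3f}")
```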

---
license: openrail
pipeline_tag: text-classification
language:
- en
tags:
- Factual Consistency
- Factual Consistency Evaluation
- Natural Language Inference
---
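
Given the `pipeline_tag: text-classification` metadata above, the checkpoint can presumably also be loaded through the high-level `pipeline` API. The snippet below is a minimal sketch of that route; it assumes the hosted config exposes the same label names as in the example above.

```python
from transformers import pipeline

# Load WeCheck through the generic text-classification pipeline (CPU by default)
checker = pipeline("text-classification", model="nightdessert/WeCheck")

# Score a (premise, hypothesis) pair and return the scores for all labels
result = checker(
    {"text": "I first thought that I liked the movie, but upon second thought it was actually disappointing.",
     "text_pair": "The movie was not good."},
    top_k=None,
)
print(result)
```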