Overview

Model Description: roberta-large-faithcritic is a RoBERTa-large model fine-tuned on FaithCritic, a derivative of the FaithDial dataset. Given a piece of source knowledge and an utterance, the model predicts whether the utterance is faithful to that knowledge.

The training hyperparameters are provided in hparams.yml. For more details on how to train a critic model, see our repo.
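
If you want to inspect those settings programmatically, here is a minimal sketch using PyYAML, assuming hparams.yml has been downloaded from the repo into your working directory (the actual keys are defined by the file itself):

import yaml

# Load the training hyperparameters shipped with the repo.
# (The key names depend on hparams.yml; nothing here is assumed about them.)
with open("hparams.yml") as f:
    hparams = yaml.safe_load(f)
print(hparams)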

Usage

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("McGill-NLP/roberta-large-faithcritic")
model = AutoModelForSequenceClassification.from_pretrained("McGill-NLP/roberta-large-faithcritic")

knowledge = "A cardigan is a type of knitted garment (sweater) that has an open front."
response = "The old version is the regular one, knitted garment that has open front and buttons!"

# Encode the (knowledge, response) pair as a single sequence-pair input.
inputs = tokenizer(knowledge, response, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)
# Predicted class index (see the repo for the label mapping).
prediction = output.logits.argmax(dim=-1)
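
To score several candidate responses against the same knowledge in one pass, the tokenizer accepts parallel lists. The snippet below is a minimal sketch built on the objects above; the example responses and the softmax-based probabilities are illustrative additions, and the meaning of each class index should be checked against the repo's label mapping.

responses = [
    "A cardigan is a knitted sweater with an open front.",
    "Cardigans must be worn buttoned at all times.",
]
# Pair the same knowledge string with every candidate response.
batch = tokenizer(
    [knowledge] * len(responses),
    responses,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**batch).logits
# Per-class probabilities for each (knowledge, response) pair.
probs = torch.softmax(logits, dim=-1)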

Citation Information

@article{dziri2022faithdial,
  title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue},
  author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva},
  journal={arXiv preprint arXiv:2204.10757},
  year={2022},
  url={https://arxiv.org/abs/2204.10757}
}