This is the repository for AlignScore and its checkpoints, a metric for automatic factual consistency evaluation of text pairs. The metric is introduced in the ACL 2023 paper AlignScore: Evaluating Factual Consistency with a Unified Alignment Function by Yuheng Zha, Yichi Yang, Ruichen Li and Zhiting Hu.
Code is at https://github.com/yuh-zha/AlignScore
What is factual consistency and its evaluation?
- Factual Consistency: given a text pair (a, b), b is considered factually consistent with a if 1) all the information in b is also present in a, and 2) b does not contradict a.
- Evaluation: measure the degree of factual consistency between the context (text a) and the claim (text b).
Where is factual consistency evaluation applicable?
- Summarization: document and summary
- Paraphrase: sentence A and sentence B
- Dialog: context and response
We list the performance of AlignScore as well as other metrics here.
Our models are trained and evaluated using PyTorch 1.12.1. We recommend using this version to reproduce the results.
- Please first install the right version of PyTorch before installing `alignscore`.
- You can install `alignscore` by cloning this repository and running `pip install .`.
- After installing `alignscore`, run `python -m spacy download en_core_web_sm` to install the required spaCy model (we use spaCy for sentence splitting). The full sequence of commands is summarized below.
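Putting these steps together, a typical installation might look like the following sketch (adjust the PyTorch install command for your CUDA setup; the repository URL is the one given above):

```bash
# Install the PyTorch version used for training/evaluation (adjust for your CUDA setup)
pip install torch==1.12.1

# Install alignscore from this repository
git clone https://github.com/yuh-zha/AlignScore.git
cd AlignScore
pip install .

# Download the spaCy model required by alignscore
python -m spacy download en_core_web_sm
```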
To evaluate the factual consistency of a claim w.r.t. a context, simply use the `score` method of `AlignScore`:

```python
from alignscore import AlignScore

scorer = AlignScore(model='roberta-base', batch_size=32, device='cuda:0', ckpt_path='/path/to/checkpoint', evaluation_mode='nli_sp')
score = scorer.score(contexts=['hello world'], claims=['hello world'])
```
`model`: the backbone model of the metric. Currently, we only provide checkpoints trained on RoBERTa.
`batch_size`: the batch size for inference.
`device`: the device to run the metric on.
`ckpt_path`: the path to the checkpoint.
`evaluation_mode`: choose from `'nli_sp'`, `'nli'`, `'bin_sp'`, `'bin'`. `nli` and `bin` refer to the 3-way and binary classification heads, respectively; `sp` indicates that the chunk-sentence splitting method is used. `'nli_sp'` is the default setting of AlignScore (see the example below).
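For example, to score with the binary classification head and without chunk-sentence splitting, you could instantiate the scorer with `evaluation_mode='bin'`. This is a minimal sketch; the checkpoint path and the example texts are placeholders:

```python
from alignscore import AlignScore

# Use the large backbone with the binary classification head and no splitting ('bin').
# The checkpoint path is a placeholder; point it at the AlignScore-large checkpoint.
scorer = AlignScore(
    model='roberta-large',
    batch_size=16,
    device='cuda:0',
    ckpt_path='/path/to/AlignScore-large.ckpt',
    evaluation_mode='bin',
)

# score() takes parallel lists of contexts and claims.
scores = scorer.score(
    contexts=['The cat sat on the mat. It was a sunny day.'],
    claims=['A cat was sitting on a mat.'],
)
print(scores)
```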
We provide two versions of the AlignScore checkpoints. The AlignScore-base model is based on RoBERTa-base and has 125M parameters. The AlignScore-large model is based on RoBERTa-large and has 355M parameters.
You can use the above checkpoints directly for factual consistency evaluation. However, if you wish to train an alignment model from scratch or on your own data, use:

```bash
python train.py --seed 2022 --batch-size 32 \
    --num-epoch 3 --devices 0 1 2 3 \
    --model-name roberta-large --ckpt-save-path ./ckpt/ \
    --data-path ./data/training_sets/ \
    --max-samples-per-dataset 500000
```
`--seed`: the random seed for initialization
`--batch-size`: the batch size for training
`--num-epoch`: the number of training epochs
`--devices`: the devices to train the metric on (a list of GPU ids)
`--model-name`: the name of the backbone model, defaults to RoBERTa-large
`--ckpt-save-path`: the path to save the checkpoint
`--training-datasets`: the names of the training datasets
`--data-path`: the path to the training datasets
`--max-samples-per-dataset`: the maximum number of samples drawn from each dataset
Our benchmark includes the TRUE and SummaC benchmarks as well as several other popular factual consistency evaluation datasets.
To run the benchmark, a few additional dependencies are required and can be installed with `pip install -r requirements.txt`. Additionally, some dependencies are not available as packages and need to be downloaded manually (please see `python benchmark.py --help` for instructions).
`summac` may cause dependency conflicts with `alignscore`. Please reinstall `alignscore` to force the correct dependency versions.
The relevant arguments for evaluating AlignScore are listed below (an example command follows the list):
`--alignscore`: evaluate the AlignScore metric
`--alignscore-model`: the name of the backbone model (either 'roberta-base' or 'roberta-large')
`--alignscore-ckpt`: the path to the saved checkpoint
`--alignscore-eval-mode`: the evaluation mode, defaults to `nli_sp`
`--device`: which device to run the metric on, defaults to `cuda:0`
`--tasks`: which tasks to benchmark, e.g., SummEval, QAGS-CNNDM, ...
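As an illustration, a benchmark run for AlignScore-large on SummEval might look like the following sketch (the checkpoint path is a placeholder, and the available task names are listed by `python benchmark.py --help`):

```bash
python benchmark.py --alignscore \
    --alignscore-model roberta-large \
    --alignscore-ckpt /path/to/AlignScore-large.ckpt \
    --alignscore-eval-mode nli_sp \
    --device cuda:0 \
    --tasks SummEval
```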
For the baselines, please see `python benchmark.py --help` for details.