This is the LACIE version of Mistral-7B, finetuned according to our paper LACIE: Listener-Aware Finetuning for Confidence Calibration in Large Language Models.

This model is a fine-tuned version of the Mistral-7B base model, trained on data from TriviaQA. LACIE is a pragmatic, preference-based finetuning method that optimizes models to be better calibrated with respect to both implicit and explicit statements of confidence. The preferences in the dataset are based on answer correctness and on whether a listener accepted or rejected the answer. For more details, please see our paper.

Model Architecture

The architecture is the same as Mistral-7B; the weights in this repo are adapter weights for Mistral.
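Since this repo contains adapter weights rather than full model weights, they need to be loaded on top of the Mistral-7B base model. Below is a minimal sketch using the `peft` library; the adapter repo id (`ADAPTER_REPO`) is a placeholder, not the actual path of this repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "mistralai/Mistral-7B-v0.1"
ADAPTER_REPO = "path/to/lacie-adapter"  # placeholder: replace with this repo's id

# Load the Mistral-7B base model, then apply the LACIE adapter weights on top.
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, ADAPTER_REPO)

# Optionally merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()

prompt = "Question: What is the capital of Australia?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that loading the base model in bfloat16 and calling `merge_and_unload()` are optional conveniences; the adapter can also be kept separate if you plan to swap adapters at runtime.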
