Update README.md
README.md CHANGED
@@ -26,22 +26,15 @@ This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggin
 
 More information needed
 
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
 ### Training hyperparameters
 
-The following hyperparameters were used during training:
-- learning_rate: …
-- train_batch_size: …
-- eval_batch_size: …
-- seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 3.0
+This model was trained with the code available on the [parameterlab/apricot GitHub repository](https://github.com/parameterlab/apricot) using the following command:
+```shell
+python3 run_regression_experiment.py --model-identifier gpt-3.5-turbo-0125 --dataset-name coqa --device cuda:0 --num-training-steps 600 --num-in-context-samples 0 --data-dir $data_dir --model-save-dir $model_save_dir --use-binary-targets --result-dir $result_dir --lr 0.00005124 --weight-decay 0.03327 --push-to-hub
+```
 
 ### Framework versions
 
@@ -49,3 +42,28 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.0+cu117
 - Datasets 2.14.6
 - Tokenizers 0.13.3
+
+## Citation
+If you find 🍑 Apricot models useful for your work, please cite our paper:
+```bibtex
+@inproceedings{ulmer-etal-2024-calibrating,
+    title = "Calibrating Large Language Models Using Their Generations Only",
+    author = "Ulmer, Dennis and
+      Gubri, Martin and
+      Lee, Hwaran and
+      Yun, Sangdoo and
+      Oh, Seong Joon",
+    editor = "Ku, Lun-Wei and
+      Martins, Andre and
+      Srikumar, Vivek",
+    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+    month = aug,
+    year = "2024",
+    address = "Bangkok, Thailand",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.acl-long.824",
+    doi = "10.18653/v1/2024.acl-long.824",
+    pages = "15440--15459",
+    abstract = "As large language models (LLMs) are increasingly deployed in user-facing applications, building trust and maintaining safety by accurately quantifying a model{'}s confidence in its prediction becomes even more important. However, finding effective ways to calibrate LLMs{---}especially when the only interface to the models is their generated text{---}remains a challenge. We propose APRICOT (Auxiliary prediction of confidence targets): A method to set confidence targets and train an additional model that predicts an LLM{'}s confidence based on its textual input and output alone. This approach has several advantages: It is conceptually simple, does not require access to the target model beyond its output, does not interfere with the language generation, and has a multitude of potential usages, for instance by verbalizing the predicted confidence or using it to re-prompt the LLM to accurately reflect its uncertainty. We show how our approach performs competitively in terms of calibration error for white-box and black-box LLMs on closed-book question-answering to detect incorrect LLM answers.",
+}
+```
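
For context on this change, the sketch below shows how a calibrator trained by the `run_regression_experiment.py` command above might be loaded and queried once `--push-to-hub` has uploaded it. This is a minimal sketch, not the repository's documented interface: the repo id is a placeholder, the concatenated question-and-answer input format is an assumption, and reading the positive-class probability as the confidence score is inferred from `--use-binary-targets`.

```python
# Minimal sketch of querying an APRICOT-style auxiliary calibrator.
# Assumptions (not stated in this card): the repo id is a placeholder for
# whatever --push-to-hub created, and the calibrator takes the target LLM's
# question and generated answer concatenated into one text sequence.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "parameterlab/apricot-calibrator-example"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# The calibrator conditions only on text; no access to gpt-3.5-turbo-0125
# itself is needed at inference time.
question = "When was the Eiffel Tower completed?"
answer = "The Eiffel Tower was completed in 1889."
inputs = tokenizer(f"{question} {answer}", return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# With binary correctness targets (--use-binary-targets), the positive-class
# probability acts as the predicted confidence that the answer is correct.
confidence = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"Predicted confidence: {confidence:.3f}")
```

Because the calibrator only consumes the target model's textual input and output, the same call works for black-box and white-box LLMs alike, which is the property the paper highlights.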