psiyum committed
Commit ae65758
1 Parent(s): 4aa5993

Update README.md

Files changed (1)
README.md +2 -1
README.md CHANGED

```diff
@@ -3,6 +3,7 @@ library_name: transformers
 tags:
 - transformers
 - peft
+- arxiv:2406.08391
 license: apache-2.0
 base_model: mistralai/Mistral-7B-v0.1
 datasets:
@@ -15,7 +16,7 @@ datasets:
 
 The model is fine-tuned (calibration-tuned) using a [dataset](https://huggingface.co/datasets/calibration-tuning/Mistral-7B-v0.1-20k-oe) of *open-ended* generations from `mistralai/Mistral-7B-v0.1`, labeled for correctness.
 At test/inference time, the probability of correctness defines the confidence of the model in its answer.
-For full details, please see our [paper](https://arxiv.org/) and supporting [code](https://github.com/activatedgeek/calibration-tuning).
+For full details, please see our [paper](https://arxiv.org/abs/2406.08391) and supporting [code](https://github.com/activatedgeek/calibration-tuning).
 
 **Other Models**: We also release a broader collection of [Open-Ended CT Models](https://huggingface.co/collections/calibration-tuning/open-ended-ct-models-66043b12c7902115c826a20e).
 
```
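For context on the README sentence above ("the probability of correctness defines the confidence of the model in its answer"), here is a minimal sketch of that idea. This is not the repository's actual API: it assumes we already have the logits the model assigns to hypothetical "yes"/"no" correctness tokens when queried about its own answer, and simply turns them into a probability.

```python
import math

def confidence_from_logits(yes_logit: float, no_logit: float) -> float:
    """Softmax probability of 'yes' (answer is correct) over {yes, no}.

    Sketch only: the token pair and the way the logits are obtained are
    assumptions, not the calibration-tuning repo's interface.
    """
    m = max(yes_logit, no_logit)  # subtract max for numerical stability
    yes = math.exp(yes_logit - m)
    no = math.exp(no_logit - m)
    return yes / (yes + no)

# Equal logits: the model is maximally uncertain about its answer.
print(confidence_from_logits(2.0, 2.0))  # 0.5
```

The returned value is the confidence score: near 1.0 the model believes its answer is correct, near 0.5 it is guessing.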