mgubri committed on
Commit 9b5e894
1 Parent(s): 35ac9df

Update README.md

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -6,6 +6,9 @@ tags:
 model-index:
 - name: apricot_binary_coqa_deberta-v3-base_for_gpt-3.5-turbo-0125
   results: []
+datasets:
+- stanfordnlp/coqa
+library_name: transformers
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,11 +16,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # apricot_binary_coqa_deberta-v3-base_for_gpt-3.5-turbo-0125
 
-This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the stanfordnlp/coqa dataset.
+This model is fine-tuned for black-box LLM calibration as part of the 🍑 Apricot paper ["Calibrating Large Language Models Using Their Generations Only"](https://arxiv.org/abs/2403.05973) (ACL 2024).
 
 ## Model description
 
-More information needed
+This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) to predict the calibration score for the gpt-3.5-turbo-0125 model on the questions from the stanfordnlp/coqa dataset. It uses the binary type of calibration target score.
 
 ## Intended uses & limitations
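The updated description says the classifier takes a CoQA question and predicts a calibration score for gpt-3.5-turbo-0125's answer. Below is a minimal usage sketch, not part of this commit: the Hub repo id, the sequence-classification head, the label mapping, and whether the input is the question alone or the question plus the LLM's answer are all assumptions.

```python
# Minimal sketch of querying the calibrator with the transformers library.
# Assumptions (not confirmed by this commit): the Hub repo id below, a standard
# sequence-classification head, and that the positive class corresponds to
# "the target LLM's answer is likely correct".
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id, inferred from the committer and model name; verify before use.
repo_id = "mgubri/apricot_binary_coqa_deberta-v3-base_for_gpt-3.5-turbo-0125"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# A CoQA-style question whose gpt-3.5-turbo-0125 answer we want a confidence for.
question = "What color was Cotton?"
inputs = tokenizer(question, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Treating the last-class probability as the predicted calibration score follows
# the "binary calibration target" wording above; the exact label order is assumed.
score = torch.softmax(logits, dim=-1)[0, -1].item()
print(f"Predicted calibration score: {score:.3f}")
```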