lomahony committed
Commit 5eef764
1 parent: 4f5a74c

Update README.md

Files changed (1):
  1. README.md +13 -0

README.md CHANGED
@@ -20,6 +20,19 @@ Fully reproducible finetuning code is available on [GitHub](https://github.com/l
 
  See [Pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) for model details [(paper)](https://arxiv.org/abs/2101.00027).
 
+ See further details of these models in the paper [Attributing Mode Collapse in the Fine-Tuning of Large Language Models](https://openreview.net/pdf?id=3pDMYjpOxk).
+
+ If you find these models helpful, you can cite them as follows:
+
+ <pre>
+ @inproceedings{o2024attributing,
+   title={Attributing Mode Collapse in the Fine-Tuning of Large Language Models},
+   author={O’Mahony, Laura and Grinsztajn, Leo and Schoelkopf, Hailey and Biderman, Stella},
+   booktitle={ICLR 2024, Mathematical and Empirical Understanding of Foundation Models (ME-FoMo) workshop},
+   year={2024}
+ }
+ </pre>
+
  hf (pretrained=lomahony/pythia-70m-helpful-dpo), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: 16
  | Tasks |Version|Filter|n-shot| Metric | Value | | Stderr |
  |--------------|------:|------|-----:|---------------|--------:|---|--------|
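
The `hf (pretrained=…)` line above is the results banner printed by EleutherAI's lm-evaluation-harness. As a rough sketch (assuming the standard `lm_eval` CLI; the evaluated task names are not shown in this hunk, so `<task_list>` below is a placeholder), the run would look something like:

```shell
# Sketch: reproduce the zero-shot evaluation recorded in the banner above.
# <task_list> is a placeholder — the actual tasks are not listed in this diff hunk.
lm_eval \
  --model hf \
  --model_args pretrained=lomahony/pythia-70m-helpful-dpo \
  --tasks <task_list> \
  --num_fewshot 0 \
  --batch_size 16
```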