lomahony committed on
Commit cb4c003
1 Parent(s): f2d2d0f

Update README.md

Files changed (1)
  1. README.md +10 -0
README.md CHANGED
@@ -20,6 +20,16 @@ Fully reproducible finetuning code is available on [GitHub](https://github.com/l
 
 See [Pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) for model details [(paper)](https://arxiv.org/abs/2101.00027).
 
+See further details of these models in the paper [Attributing Mode Collapse in the Fine-Tuning of Large Language Models](https://openreview.net/pdf?id=3pDMYjpOxk).
+You can cite these models if they are helpful in your work as follows:
+
+@inproceedings{o2024attributing,
+  title={Attributing Mode Collapse in the Fine-Tuning of Large Language Models},
+  author={O’Mahony, Laura and Grinsztajn, Leo and Schoelkopf, Hailey and Biderman, Stella},
+  booktitle={ICLR 2024, Mathematical and Empirical Understanding of Foundation Models (ME-FoMo) workshop},
+  year={2024}
+}
+
 hf (pretrained=lomahony/pythia-2.8b-helpful-sft), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: 16
 | Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
 |--------------|------:|------|-----:|---------------|------:|---|------|
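
The `hf (pretrained=...)` header line in the diff above is output from EleutherAI's lm-evaluation-harness. As a minimal sketch, an equivalent run via the harness's Python API might look as follows; the task list is an assumption, since the diff shows only the results-table header, not which benchmarks were evaluated.

```python
# Minimal sketch of reproducing the evaluation header shown above,
# using EleutherAI's lm-evaluation-harness (lm_eval, v0.4+ Python API).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # HuggingFace-backed model, as in "hf (pretrained=...)"
    model_args="pretrained=lomahony/pythia-2.8b-helpful-sft",
    tasks=["lambada_openai"],  # assumption: the actual task list is not shown in this diff
    num_fewshot=0,             # matches "num_fewshot: 0"
    batch_size=16,             # matches "batch_size: 16"
)
print(results["results"])  # per-task metrics with stderr, as in the README table
```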