datasets:
- Anthropic/hh-rlhf
---

[Pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) supervised fine-tuned for 1 epoch with the TRLx library on the helpful subset of the [Anthropic hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).
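Each hh-rlhf example is stored as a single transcript string of alternating `Human:` / `Assistant:` turns. A minimal sketch of splitting such a transcript into a prompt and a target response for supervised finetuning (the `split_transcript` helper and its split rule are illustrative assumptions, not the exact TRLx preprocessing used here):

```python
def split_transcript(transcript: str):
    """Split an hh-rlhf transcript into (prompt, final assistant response).

    Assumes the hh-rlhf convention: turns separated by blank lines and
    prefixed with 'Human:' / 'Assistant:'. The text after the last
    'Assistant:' marker is treated as the finetuning target.
    """
    marker = "\n\nAssistant:"
    idx = transcript.rfind(marker)
    prompt = transcript[: idx + len(marker)]
    response = transcript[idx + len(marker):]
    return prompt, response

example = "\n\nHuman: What is Pythia?\n\nAssistant: A suite of open language models."
prompt, response = split_transcript(example)
print(response)  # prints " A suite of open language models."
```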

Checkpoints are also uploaded.

Fully reproducible finetuning code is available on [GitHub](https://github.com/l

[wandb log](https://wandb.ai/lauraomahony999/sft-pythia/runs/azscanfe)

See [Pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) for model details [(paper)](https://arxiv.org/abs/2101.00027).

Zero-shot and 5-shot results from the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness):

hf (pretrained=lomahony/pythia-1b-helpful-sft), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: 16

| Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
|--------------|------:|------|-----:|---------------|------:|---|------|
|arc_challenge | 1|none | 0|acc | 0.2543|± |0.0127|
| | |none | 0|acc_norm | 0.2739|± |0.0130|
|arc_easy | 1|none | 0|acc | 0.5724|± |0.0102|
| | |none | 0|acc_norm | 0.4941|± |0.0103|
|boolq | 2|none | 0|acc | 0.6199|± |0.0085|
|hellaswag | 1|none | 0|acc | 0.3819|± |0.0048|
| | |none | 0|acc_norm | 0.4736|± |0.0050|
|lambada_openai| 1|none | 0|perplexity | 7.1374|± |0.2014|
| | |none | 0|acc | 0.5626|± |0.0069|
|openbookqa | 1|none | 0|acc | 0.2040|± |0.0180|
| | |none | 0|acc_norm | 0.3140|± |0.0208|
|piqa | 1|none | 0|acc | 0.7138|± |0.0105|
| | |none | 0|acc_norm | 0.6997|± |0.0107|
|sciq | 1|none | 0|acc | 0.8400|± |0.0116|
| | |none | 0|acc_norm | 0.7620|± |0.0135|
|wikitext | 2|none | 0|word_perplexity|16.9719|± |N/A |
| | |none | 0|byte_perplexity| 1.6981|± |N/A |
| | |none | 0|bits_per_byte | 0.7639|± |N/A |
|winogrande | 1|none | 0|acc | 0.5343|± |0.0140|
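As a sanity check on the wikitext rows, two of the reported perplexity metrics encode the same measurement: `bits_per_byte` is the base-2 logarithm of `byte_perplexity`. A quick check with the values from the table above:

```python
import math

byte_perplexity = 1.6981  # wikitext byte_perplexity from the table above
bits_per_byte = math.log2(byte_perplexity)
print(round(bits_per_byte, 4))  # prints 0.7639, matching the reported bits_per_byte
```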

hf (pretrained=lomahony/pythia-1b-helpful-sft), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 16

| Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
|--------------|------:|------|-----:|---------------|------:|---|------|
|arc_challenge | 1|none | 5|acc | 0.2628|± |0.0129|
| | |none | 5|acc_norm | 0.2918|± |0.0133|
|arc_easy | 1|none | 5|acc | 0.6040|± |0.0100|
| | |none | 5|acc_norm | 0.5816|± |0.0101|
|boolq | 2|none | 5|acc | 0.5963|± |0.0086|
|hellaswag | 1|none | 5|acc | 0.3780|± |0.0048|
| | |none | 5|acc_norm | 0.4719|± |0.0050|
|lambada_openai| 1|none | 5|perplexity |10.2584|± |0.2936|
| | |none | 5|acc | 0.4832|± |0.0070|
|openbookqa | 1|none | 5|acc | 0.1980|± |0.0178|
| | |none | 5|acc_norm | 0.3220|± |0.0209|
|piqa | 1|none | 5|acc | 0.7057|± |0.0106|
| | |none | 5|acc_norm | 0.7095|± |0.0106|
|sciq | 1|none | 5|acc | 0.8980|± |0.0096|
| | |none | 5|acc_norm | 0.9000|± |0.0095|
|wikitext | 2|none | 5|word_perplexity|16.9719|± |N/A |
| | |none | 5|byte_perplexity| 1.6981|± |N/A |
| | |none | 5|bits_per_byte | 0.7639|± |N/A |
|winogrande | 1|none | 5|acc | 0.5446|± |0.0140|