---
language:
  - en
tags:
  - pytorch
  - causal-lm
  - pythia
license: apache-2.0
datasets:
  - Anthropic/hh-rlhf
---

## Info

Pythia-1.4b, supervised fine-tuned on the Anthropic/hh-rlhf dataset for 1 epoch (sft-model), then trained with DPO (paper) on the same dataset for 1 epoch.

See the wandb log for training details.

See Pythia-1.4b for model details (paper).
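A minimal usage sketch with Hugging Face `transformers`: the repository id below is a placeholder for this model's Hub id, and the `Human:`/`Assistant:` prompt format is an assumption borrowed from the Anthropic/hh-rlhf conversation convention rather than something specified by this card.

```python
# Usage sketch: repo id is a placeholder; the Human:/Assistant: prompt format
# follows the Anthropic/hh-rlhf convention (an assumption, not a card requirement).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder: replace with this model's Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "Human: How do I bake a loaf of bread?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```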

## Benchmark raw results

Results for the base model are taken from the Pythia paper.

### Zero shot

| Task | 1.4B_base | 1.4B_sft | 1.4B_dpo |
|---|---|---|---|
| Lambada (OpenAI) | 0.616 ± 0.007 | 0.5977 ± 0.0068 | 0.5948 ± 0.0068 |
| PIQA | 0.711 ± 0.011 | 0.7133 ± 0.0106 | 0.7165 ± 0.0105 |
| WinoGrande | 0.573 ± 0.014 | 0.5793 ± 0.0139 | 0.5746 ± 0.0139 |
| WSC | 0.365 ± 0.047 | 0.3654 ± 0.0474 | 0.3654 ± 0.0474 |
| ARC - Easy | 0.606 ± 0.010 | 0.6098 ± 0.0100 | 0.6199 ± 0.0100 |
| ARC - Challenge | 0.260 ± 0.013 | 0.2696 ± 0.0130 | 0.2884 ± 0.0132 |
| SciQ | 0.865 ± 0.011 | 0.8540 ± 0.0112 | 0.8550 ± 0.0111 |
| LogiQA | 0.210 ± 0.016 | N/A | N/A |

### Five shot

| Task | 1.4B_base | 1.4B_sft | 1.4B_dpo |
|---|---|---|---|
| Lambada (OpenAI) | 0.578 ± 0.007 | 0.5201 ± 0.007 | 0.5247 ± 0.007 |
| PIQA | 0.705 ± 0.011 | 0.7176 ± 0.0105 | 0.7209 ± 0.0105 |
| WinoGrande | 0.580 ± 0.014 | 0.5793 ± 0.0139 | 0.5746 ± 0.0139 |
| WSC | 0.365 ± 0.047 | 0.5288 ± 0.0492 | 0.5769 ± 0.0487 |
| ARC - Easy | 0.643 ± 0.010 | 0.6376 ± 0.0099 | 0.6561 ± 0.0097 |
| ARC - Challenge | 0.290 ± 0.013 | 0.2935 ± 0.0133 | 0.3166 ± 0.0136 |
| SciQ | 0.92 ± 0.009 | 0.9180 ± 0.0087 | 0.9150 ± 0.0088 |
| LogiQA | 0.240 ± 0.017 | N/A | N/A |
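The tasks above are standard lm-evaluation-harness tasks, so the numbers can in principle be re-run with EleutherAI's harness. A minimal sketch, assuming a recent harness version that exposes `lm_eval.simple_evaluate`; the exact task names and the placeholder repository id may need adjusting for your setup.

```python
# Sketch of re-running the zero-shot evaluations with lm-evaluation-harness.
# Assumptions: recent lm-eval API, placeholder repo id, harness task names.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<this-repo-id>,dtype=float16",  # placeholder repo id
    tasks=[
        "lambada_openai", "piqa", "winogrande", "wsc",
        "arc_easy", "arc_challenge", "sciq", "logiqa",
    ],
    num_fewshot=0,  # set to 5 to reproduce the five-shot table
    batch_size=8,
)
print(results["results"])
```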