| Benchmark | Measure | 160M MiniPile | 160M MiniPile Loss-sampled |
|---|---|---|---|
| ARC-Challenge | acc | 0.2125 ± 0.0120 | 0.1980 ± 0.0116 |
| MMLU | acc | 0.2699 ± 0.0037 | 0.2295 ± 0.0035 |
| HellaSwag | acc | 0.2560 ± 0.0044 | 0.2599 ± 0.0044 |
| WinoGrande | acc | 0.4720 ± 0.0140 | 0.5107 ± 0.0140 |
| Lambada (OpenAI) | acc | 0.0000 ± 0.0000 | 0.0000 ± 0.0000 |
| Lambada (OpenAI) | perplexity | 3033175.2693 ± 288926.5827 | 2116445.1732 ± 175403.0579 |
| Lambada (Std) | acc | 0.0000 ± 0.0000 | 0.0000 ± 0.0000 |
| Lambada (Std) | perplexity | 27067951.3460 ± 2710040.191 | 14896599.9251 ± 1366937.5470 |
| BLiMP | acc | 0.5194 ± 0.0018 | 0.5492 ± 0.0017 |
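The accuracy/perplexity values with standard errors above follow the output format of EleutherAI's lm-evaluation-harness. Assuming that harness was used (the exact task configuration, shot count, and batch size are assumptions, not taken from this card), a run along these lines should produce comparable numbers:

```python
from lm_eval import simple_evaluate

# Reproduction sketch only: task list, zero-shot setting, and batch size
# are assumptions, not the confirmed configuration behind the table above.
results = simple_evaluate(
    model="hf",
    model_args="pretrained=Marcus2112/pythia-160m-minipile_loss-sampled,dtype=float32",
    tasks=[
        "arc_challenge",
        "mmlu",
        "hellaswag",
        "winogrande",
        "lambada_openai",
        "lambada_standard",
        "blimp",
    ],
    batch_size=16,
)

# Each task entry contains the metric value and its standard error (the "±" column).
for task, metrics in results["results"].items():
    print(task, metrics)
```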
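The checkpoint is distributed as F32 safetensors weights (~162M parameters). A minimal loading sketch with 🤗 Transformers, assuming the repository follows the standard Pythia (GPT-NeoX) layout:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Marcus2112/pythia-160m-minipile_loss-sampled"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)

# Quick smoke test: generate a short continuation from a prompt.
inputs = tokenizer("The MiniPile dataset is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```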