palmer

a better base model

This is a small improvement over palmer-002, a fine-tuned (and now prompt-free) tinyllama model.

evaluation 🧪

note that these are zero-shot evaluations, as opposed to the open llm leaderboard's few-shot evals

| model                        | ARC-C  | OBQA   | HellaSwag | PIQA   | Winogrande | Average |
|------------------------------|--------|--------|-----------|--------|------------|---------|
| tinyllama                    | 0.3029 | 0.3600 | 0.5935    | 0.7329 | 0.5959     | 0.5170  |
| palmer-002                   | 0.3242 | 0.3700 | 0.5956    | 0.7345 | 0.5888     | 0.5226  |
| palmer-002-2401 (this model) | 0.3294 | 0.3700 | 0.5950    | 0.7399 | 0.5896     | 0.5247  |
| babbage-002                  | 0.3285 | 0.3620 | 0.6380    | 0.7606 | 0.6085     | 0.5395  |
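
As a reference, the sketch below shows how comparable zero-shot numbers can be gathered with the EleutherAI lm-evaluation-harness; the harness version, task names, and settings are assumptions, not necessarily the exact setup behind this table.

```python
# a minimal sketch using the EleutherAI lm-evaluation-harness (pip install lm-eval);
# the exact harness version/config used for the numbers above is an assumption
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=appvoid/palmer-002-2401",
    tasks=["arc_challenge", "openbookqa", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,  # zero-shot, unlike the open llm leaderboard's few-shot evals
)

for task, metrics in results["results"].items():
    print(task, metrics)
```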

training 🦾

Training took ~1 A100 GPU-hour on 50,000 shuffled GPT-4-generated samples. palmer was fine-tuned using lower learning rates to ensure it retains as much general knowledge as possible.
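
For illustration only, here is a hypothetical fine-tuning sketch with Hugging Face transformers; apart from the low learning rate and the 50,000-sample GPT-4 dataset mentioned above, every name and hyperparameter (base checkpoint, data file, epochs, batch size) is an assumption.

```python
# hypothetical sketch: the card only states ~1 A100 GPU-hour, 50,000 shuffled
# GPT-4 samples, and a lowered learning rate; everything else here is assumed
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# hypothetical data file holding the GPT-4 generated samples
data = load_dataset("json", data_files="gpt4_samples.jsonl")["train"].shuffle(seed=42)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

data = data.map(tokenize, batched=True, remove_columns=data.column_names)

args = TrainingArguments(
    output_dir="palmer-002-2401",
    learning_rate=1e-5,             # "lower learning rate" per the card; exact value assumed
    num_train_epochs=1,             # assumed
    per_device_train_batch_size=4,  # assumed
    bf16=True,
)

# causal-LM collator: labels are the input ids shifted inside the model
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
Trainer(model=model, args=args, train_dataset=data, data_collator=collator).train()
```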

prompt 📝

no prompt 🚀
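
Since no prompt template is needed, raw text completion works out of the box; a minimal usage sketch with Hugging Face transformers (the example text and sampling settings are arbitrary):

```python
# minimal usage sketch with transformers; sampling settings are arbitrary choices
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("appvoid/palmer-002-2401")
model = AutoModelForCausalLM.from_pretrained("appvoid/palmer-002-2401")

# no chat/prompt template: just feed raw text and let the model complete it
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```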

