---
license: apache-2.0
language:
  - en
datasets:
  - appvoid/no-prompt-15k
pipeline_tag: text-generation
---

# palmer

*a better base model*

palmer is a series of ~1b parameter language models fine-tuned to be used as base models instead of relying on custom prompts for each task. This means it can be further fine-tuned on more data with custom prompts as usual, or used for downstream tasks like any other base model. The model gets the best of both worlds: some "bias" toward acting as an assistant, plus the ability to predict the next word from its internet knowledge base. It's a 1.3b llama 2 model, so you can use it with your favorite tools and frameworks.
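Since it is a standard llama 2 checkpoint, it should load with the usual transformers API. A minimal sketch, assuming the hub id `appvoid/no-prompt-1.3b` (taken from this card's title; adjust if the repo was renamed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# hub id assumed from this card's title
model_id = "appvoid/no-prompt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```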

### evaluation

| Model | ARC_C | HellaSwag | PIQA | Winogrande |
|-------|-------|-----------|------|------------|
| tinyllama-2t | 0.2807 | 0.5463 | 0.7067 | 0.5683 |
| palmer-001 | 0.2807 | 0.5524 | 0.7106 | 0.5896 |
| sheared-1.3b | 0.2910 | 0.5935 | 0.7339 | 0.5809 |
| no-prompt-1.3b | 0.3157 | 0.6022 | 0.7334 | 0.5864 |
| falcon-rw-1b-instruct-openorca (sota) | 0.3362 | 0.5997 | 0.7394 | 0.6148 |

This model was trained on less than 25% of the dataset, yet achieves performance competitive with the current sota on the open llm leaderboard. Stay tuned for what's coming!
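The four columns correspond to standard lm-evaluation-harness tasks (arc_challenge, hellaswag, piqa, winogrande). If you want to reproduce the row for this model, a sketch along these lines should work, assuming the harness's Python API (`pip install lm-eval`) and the hub id used above; exact argument names can vary between harness versions:

```python
import lm_eval

# hub id assumed from this card's title; tasks mirror the table's columns
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=appvoid/no-prompt-1.3b",
    tasks=["arc_challenge", "hellaswag", "piqa", "winogrande"],
)
print(results["results"])
```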

### training

Training took ~5 P100 GPU-hours on 15,000 shuffled gpt-4-generated samples. palmer was fine-tuned with lower learning rates to ensure it keeps as much general knowledge as possible.

### prompt

no prompt
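Since no template is needed, you can pass plain text and let the model continue it. A small self-contained sketch using the transformers pipeline (the example prompt is arbitrary, and the hub id is again assumed from this card's title):

```python
from transformers import pipeline

# plain text in, continuation out: no system prompt, no chat template
generator = pipeline("text-generation", model="appvoid/no-prompt-1.3b")
print(generator("The first step in fine-tuning a language model is",
                max_new_tokens=32)[0]["generated_text"])
```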

### limitations

Hallucinations are frequent, as with any transformer model of this size.

Buy Me A Coffee