appvoid committed on
Commit 9507dff
1 Parent(s): b1a6f60

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -8,10 +8,11 @@ datasets:
  ---
  ![palmer](https://huggingface.co/appvoid/palmer-001/resolve/main/palmer.jpeg)
  # palmer
- ### a better base model
+ ### a better base model
  palmer is a series of ~1b-parameter language models fine-tuned to be used as base models instead of relying on custom prompts for tasks. This means it can be further fine-tuned on more data with custom prompts as usual, or used for downstream tasks like any other base model. The model has the best of both worlds: some "bias" to act as an assistant, but also the ability to predict the next word from its internet knowledge base. It's a 1.1b llama 2 model, so you can use it with your favorite tools and frameworks.
 
- ### evaluation
+ ### evaluation 🧪
+ note that this is a zero-shot setting, as opposed to the open llm leaderboard's few-shot evals
  ```
  Model         | ARC_C  | HellaSwag | PIQA   | Winogrande | Average |
  tinyllama-2   | 0.2807 | 0.5463    | 0.7067 | 0.5683     | 0.5255  |
@@ -22,16 +23,15 @@ tinyllama-3 | 0.3029 | 0.5935 | 0.7329 | 0.5959 | 0.5563 |
  tinyllama-2.5 | 0.3191 | 0.5896    | 0.7307 | 0.5872     | 0.5566  |
  palmer-002    | 0.3242 | 0.5956    | 0.7345 | 0.5888     | 0.5607  |
  babbage-002   | 0.3285 | 0.6380    | 0.7606 | 0.6085     | 0.5839  |
- # note that this is a zero-shot setting, as opposed to the open llm leaderboard's few-shot evals.
  ```
 
  This model shows exceptional performance and is, as of now, the best tinyllama-size base model. Furthermore, it supports the LIMA paper's point and serves as a good open-source alternative to openai's `babbage-002`.
 
- ### training
+ ### training 🦾
  Training took ~3.5 P100 GPU hours. The model was trained on 15,000 shuffled gpt-4 samples. palmer was fine-tuned with lower learning rates to ensure it retains as much general knowledge as possible.
 
- ### prompt
+ ### prompt 📝
  ```
- no prompt
+ no prompt 🚀
  ```
  <a href="https://ko-fi.com/appvoid" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 48px !important;width: 180px !important; filter: invert(70%);" ></a>
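Since the card describes palmer as a plain 1.1b llama 2 base model with no prompt format, it drops straight into standard tooling. Below is a minimal usage sketch with 🤗 transformers; the repo id `appvoid/palmer-002` is an assumption (this diff doesn't state which palmer release the card belongs to), so adjust it to the model you actually pull.

```python
# Minimal usage sketch with 🤗 transformers; the repo id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "appvoid/palmer-002"  # assumed -- point this at the repo the card belongs to

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "no prompt": feed raw text and let the base model continue it.
text = "The three most important things to know about quantum computing are"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because there is no prompt template, the input is just raw text for the model to continue, exactly as with any other base model.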
 
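The evaluation table lists zero-shot ARC_C, HellaSwag, PIQA, and Winogrande scores but doesn't say which harness produced them. A hedged reproduction sketch with EleutherAI's lm-evaluation-harness (v0.4+, where `simple_evaluate` is exposed; task and metric names can differ between harness versions), again assuming the `appvoid/palmer-002` repo id:

```python
# Sketch only: the card doesn't name the harness behind these numbers. This assumes
# EleutherAI's lm-evaluation-harness v0.4+ (`pip install lm-eval`); exact task and
# metric keys can vary between harness versions.
import json

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=appvoid/palmer-002",  # assumed repo id
    tasks=["arc_challenge", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,  # zero-shot, matching the card's table rather than the leaderboard's few-shot setup
)
print(json.dumps(results["results"], indent=2, default=str))
```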
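The training section only states the broad strokes: roughly 3.5 P100 GPU hours, 15,000 shuffled gpt-4 samples, and a lower-than-usual learning rate to preserve general knowledge. The sketch below is purely illustrative of that last point, not the author's actual script; the TinyLlama starting checkpoint, the `gpt4_samples.jsonl` file, its `text` column, and every hyperparameter besides the low learning rate are assumptions.

```python
# Illustrative only: not the author's training script. The one detail taken from
# the card is the deliberately low learning rate; everything else is assumed.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"  # assumed 1.1b llama 2 starting point
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Hypothetical file standing in for the "15,000 shuffled gpt-4 samples".
data = load_dataset("json", data_files="gpt4_samples.jsonl")["train"].shuffle(seed=42)
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=data.column_names,
)

args = TrainingArguments(
    output_dir="palmer-finetune",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,  # lower than typical fine-tuning LRs, per the card's note
    lr_scheduler_type="cosine",
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```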