appvoid committed
Commit 6235368
1 Parent(s): adb46a6

Update README.md

Files changed (1): README.md +11 -7
README.md CHANGED
@@ -12,13 +12,17 @@ datasets:
 palmer is a series of ~1b parameter language models fine-tuned to be used as base models instead of using custom prompts for tasks. This means it can be further fine-tuned on more data with custom prompts as usual, or used for downstream tasks like any other base model. The model has the best of both worlds: some "bias" to act as an assistant, but also the ability to predict the next word from its internet knowledge base. It's a 1.1b llama 2 model, so you can use it with your favorite tools/frameworks.

 ### evaluation
-|Model| ARC_C| HellaSwag| PIQA| Winogrande|
-|------|-----|-----------|------|-------------|
-|tinyllama-2 | 0.2807 |0.5463| 0.7067 | 0.5683|
-|palmer-001 | 0.2807 |0.5524| 0.7106 | 0.5896|
-|tinyllama-2.5| 0.3191 |0.5896| 0.7307 | 0.5872|
-|tinyllama-3 | 0.3029 |0.5935| 0.7329 | **0.5959**|
-|palmer-002|**0.3242**|**0.5956**|**0.7345**| 0.5888|
+```
+Model         | ARC_C  | HellaSwag | PIQA   | Winogrande | Average
+tinyllama-2   | 0.2807 | 0.5463    | 0.7067 | 0.5683     | 0.5255
+palmer-001    | 0.2807 | 0.5524    | 0.7106 | 0.5896     | 0.5333
+babbage-001   | 0.2944 | 0.5448    | 0.7410 | 0.5935     | 0.5434
+deacon-1b     | 0.2944 | 0.5727    | 0.7040 | 0.5801     | 0.5434
+tinyllama-3   | 0.3029 | 0.5935    | 0.7329 | 0.5959     | 0.5563
+tinyllama-2.5 | 0.3191 | 0.5896    | 0.7307 | 0.5872     | 0.5566
+palmer-002    | 0.3242 | 0.5956    | 0.7345 | 0.5888     | 0.5607
+babbage-002   | 0.3285 | 0.6380    | 0.7606 | 0.6085     | 0.5839
+```

 This model shows exceptional performance and, as of now, is the best tinyllama-size base model. Furthermore, this proves the LIMA paper's point and serves as a good open-source alternative to openai's `babbage-002`.
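
Scores like those in the table above are typically produced with EleutherAI's lm-evaluation-harness, though the README does not name the harness. A minimal sketch of reproducing them under that assumption, with the hub id `appvoid/palmer-002` also assumed from the model name:

```python
# Minimal sketch of reproducing the benchmark scores above, assuming
# EleutherAI's lm-evaluation-harness (the README does not name the harness)
# and the hub id "appvoid/palmer-002" (inferred from the model name).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=appvoid/palmer-002",
    tasks=["arc_challenge", "hellaswag", "piqa", "winogrande"],
)

# Print the per-task metric dictionaries (acc, acc_norm, etc.).
for task, metrics in results["results"].items():
    print(task, metrics)
```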
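
And since palmer is a plain 1.1b llama 2 checkpoint, it loads like any other causal LM. A minimal sketch with Hugging Face `transformers`, again assuming the `appvoid/palmer-002` hub id:

```python
# Minimal sketch of using palmer as a base model with Hugging Face transformers.
# The hub id "appvoid/palmer-002" is an assumption inferred from the model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "appvoid/palmer-002"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# As a base model, palmer completes plain text; no chat template is required.
prompt = "The three primary colors are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model keeps base-model next-word behavior, the same checkpoint works for raw completion and as a starting point for further fine-tuning.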