# palmer
|
palmer-003 focuses on reaching SOTA performance via merging of experts plus fine-tuning: the expert models are consolidated into a single model, which is then fine-tuned on useful textual data.
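
The merging step can be pictured as plain weight-space averaging. The sketch below is an illustration only, not the exact palmer recipe: the checkpoint names are placeholders, and it assumes all experts share the same architecture.

```
# minimal sketch of weight-space expert merging, assuming all experts
# share one architecture; the checkpoint names below are placeholders
import torch
from transformers import AutoModelForCausalLM

expert_ids = ["expert-a", "expert-b", "expert-c"]  # hypothetical checkpoints
experts = [AutoModelForCausalLM.from_pretrained(i) for i in expert_ids]

# uniformly average floating-point parameters across the experts
merged_state = {}
for key, ref in experts[0].state_dict().items():
    if ref.is_floating_point():
        stacked = torch.stack([e.state_dict()[key].float() for e in experts])
        merged_state[key] = stacked.mean(dim=0)
    else:
        merged_state[key] = ref  # keep integer buffers untouched

# load the averaged weights into one model and save it;
# this merged model is what gets fine-tuned afterwards
merged = AutoModelForCausalLM.from_pretrained(expert_ids[0])
merged.load_state_dict(merged_state)
merged.save_pretrained("merged-model")
```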
|
|
|
### Evaluation

| Model           | ARC-C  | OBQA   | HellaSwag | PIQA   | Winogrande | Average |
|-----------------|--------|--------|-----------|--------|------------|---------|
| tinyllama       | 0.3029 | 0.3600 | 0.5935    | 0.7329 | 0.5959     | 0.5170  |
| palmer-002-2401 | 0.3311 | 0.3600 | 0.5981    | 0.7416 | 0.6006     | 0.5266  |
| babbage-002     | 0.3285 | 0.3620 | 0.6380    | 0.7606 | 0.6085     | 0.5395  |
| palmer-003      | 0.3370 | 0.3740 | 0.6128    | 0.7486 | 0.6535     | 0.5451  |
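
A common route for reproducing numbers like these is EleutherAI's lm-evaluation-harness; the source does not state the exact evaluation setup, so the task list and the repository id below are assumptions.

```
# hedged sketch: scoring a checkpoint on the benchmarks listed above with
# EleutherAI's lm-evaluation-harness (pip install lm-eval); the actual
# evaluation setup is not stated here, so treat this as an approximation
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    # repository id assumed; replace with the actual checkpoint path
    model_args="pretrained=appvoid/palmer-003",
    tasks=["arc_challenge", "openbookqa", "hellaswag", "piqa", "winogrande"],
)
print(results["results"])
```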