---
license: apache-2.0
---

![cubby](https://huggingface.co/appvoid/cubby/resolve/main/cubby.webp)

arco consistently outperforms every state-of-the-art model below 600m parameters on average, outperforms some 1b base models, and is competitive with the best of them. arco is a merge of multiple internal models fine-tuned on a diverse set of styles, then merged with several other models (including palmer-004-turbo), and finally merged with the base model to preserve its knowledge.

#### benchmarks

Zero-shot evaluations performed against current state-of-the-art ~0.5b models.

| Parameters | Model        | MMLU      | ARC-C     | HellaSwag | PIQA      | Winogrande | Average   |
|------------|--------------|-----------|-----------|-----------|-----------|------------|-----------|
| 0.5b       | qwen2        | **44.13** | 28.92     | 49.05     | 69.31     | 56.99      | 49.68     |
| 0.5b       | danube3-base | 24.81     | 36.18     | 60.46     | 73.78     | 61.01      | 51.25     |
| 0.5b       | palmer-turbo | 27.36     | 35.58     | 61.79     | 73.67     | 61.17      | 51.91     |
| 0.5b       | arco         | 26.17     | **37.29** | **62.88** | **74.37** | **62.27**  | **52.60** |

#### supporters

Buy Me A Coffee
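The Average column appears to be the plain unweighted mean of the five benchmark scores; a quick sketch to verify that reading (the mean assumption is mine, not stated in the card):

```python
# Zero-shot scores from the benchmark table above, in column order:
# MMLU, ARC-C, HellaSwag, PIQA, Winogrande.
scores = {
    "qwen2":        [44.13, 28.92, 49.05, 69.31, 56.99],
    "danube3-base": [24.81, 36.18, 60.46, 73.78, 61.01],
    "palmer-turbo": [27.36, 35.58, 61.79, 73.67, 61.17],
    "arco":         [26.17, 37.29, 62.88, 74.37, 62.27],
}

# Assumption: the Average column is the unweighted mean, rounded to 2 decimals.
averages = {model: round(sum(v) / len(v), 2) for model, v in scores.items()}
print(averages)
```

The recomputed means match the table's Average column for all four rows, with arco highest overall despite qwen2's much stronger MMLU score.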
![cubby](https://huggingface.co/appvoid/cubby/resolve/main/cubby.webp) arco consistently outperforms every sota model below 600m parameters on average, outperforms some base 1b models and is competitive with the best ones. arco is a merge of multiple internal models fine-tuned on a diverse set of styles and finally merged with the several models (including palmer-004-turbo), followed by a merge with base model to preserve knowledge. #### benchmarks zero-shot evaluations performed on current sota ~0.5b models. | Parameters | Model | MMLU | ARC-C | HellaSwag | PIQA | Winogrande | Average | | -----------|--------------------------------|-------|-------|-----------|--------|------------|---------| | 0.5b | qwen2 |**44.13**| 28.92| 49.05 | 69.31 | 56.99 | 49.68 | | 0.5b | danube3-base | 24.81| 36.18| 60.46| 73.78 | 61.01 | 51.25 | | 0.5b | palmer-turbo | 27.36|35.58|61.79|73.67 | 61.17 |51.91| | 0.5b | arco |26.17|**37.29**|**62.88**|**74.37**|**62.27**|**52.60**| #### supporters Buy Me A Coffee