# Bee Models 🐝
Models fine-tuned to be knowledgeable about apiary practice.
As we feverishly hit the refresh button on hf.co's homepage, on the hunt for the newest waifu chatbot to grace the AI stage, an epiphany struck us like a bee sting. What could we offer to the hive-mind of the community? The answer was as clear as honeyβbeekeeping, naturally. And thus, this un-bee-lievable model was born.
This model is a fine-tuned version of PY007/TinyLlama-1.1B-intermediate-step-240k-503b on the BEE-spoke-data/bees-internal dataset.
It achieves the following results on the evaluation set:

```
***** eval metrics *****
  eval_accuracy           =     0.4972
  eval_loss               =     2.4283
  eval_runtime            = 0:00:53.12
  eval_samples            =        239
  eval_samples_per_second =      4.499
  eval_steps_per_second   =      1.129
  perplexity              =    11.3391
```
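The reported perplexity and throughput follow directly from the other metrics (perplexity is e raised to the eval loss, and throughput is samples divided by runtime); a quick sanity check in Python using the figures above:

```python
import math

eval_loss = 2.4283
eval_samples = 239
eval_runtime_s = 53.12  # 0:00:53.12 expressed in seconds

# Perplexity for a causal LM is exp of the mean cross-entropy loss.
perplexity = math.exp(eval_loss)
print(f"{perplexity:.2f}")  # ~11.34, matching the reported 11.3391

# Throughput is samples divided by wall-clock eval runtime.
samples_per_second = eval_samples / eval_runtime_s
print(f"{samples_per_second:.3f}")  # ~4.499, matching the reported value
```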
While the full dataset is not yet complete and therefore not yet released for "safety reasons", you can check out a preliminary sample at: bees-v0
The following hyperparameters were used during training:
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 29.15 |
| ARC (25-shot) | 30.55 |
| HellaSwag (10-shot) | 51.8 |
| MMLU (5-shot) | 24.25 |
| TruthfulQA (0-shot) | 39.01 |
| Winogrande (5-shot) | 54.46 |
| GSM8K (5-shot) | 0.23 |
| DROP (3-shot) | 3.74 |
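The Avg. row appears to be the unweighted mean of the seven benchmark scores; a quick check:

```python
# Benchmark scores from the table above.
scores = {
    "ARC (25-shot)": 30.55,
    "HellaSwag (10-shot)": 51.8,
    "MMLU (5-shot)": 24.25,
    "TruthfulQA (0-shot)": 39.01,
    "Winogrande (5-shot)": 54.46,
    "GSM8K (5-shot)": 0.23,
    "DROP (3-shot)": 3.74,
}

# Unweighted mean across all seven benchmarks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 29.15, matching the Avg. row
```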