---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
base_model:
- Hastagaras/Halu-8B-Llama3-v0.3
- Blackroot/Llama-3-LongStory-LORA
- Hastagaras/Halu-8B-Llama3-v0.3
- Blackroot/Llama-3-8B-Abomination-LORA
- Hastagaras/Halu-8B-Llama3-v0.3
model-index:
- name: Halu-8B-Llama3-Blackroot
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.82
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Halu-8B-Llama3-Blackroot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.55
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Halu-8B-Llama3-Blackroot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.04
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Halu-8B-Llama3-Blackroot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 53.28
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Halu-8B-Llama3-Blackroot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.48
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Halu-8B-Llama3-Blackroot
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 70.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Hastagaras/Halu-8B-Llama3-Blackroot
      name: Open LLM Leaderboard
---

## EXPERIMENTAL MODEL

**VERY IMPORTANT:** This model has not been extensively tested or evaluated, and its performance characteristics are currently unknown. It may generate harmful, biased, or inappropriate content. Please exercise caution and use it at your own risk and discretion.

I just tried [saishf's](https://huggingface.co/saishf) merged model, and it's great. So I decided to try a similar merge method with [Blackroot's](https://huggingface.co/Blackroot) LoRAs that I had found earlier.

I don't know what to say about this model... it is very strange. Maybe because Blackroot's amazing LoRAs were trained on human data rather than synthetic data, the model turned out to be very human-like, even in its actions and narration.

**WARNING:** This model is very unsafe in certain parts, especially in RP.
[IMATRIX GGUF IS HERE](https://huggingface.co/Lewdiculous/Halu-8B-Llama3-Blackroot-GGUF-IQ-Imatrix) made available by [Lewdiculous](https://huggingface.co/Lewdiculous)

[STATIC GGUF IS HERE](https://huggingface.co/mradermacher/Halu-8B-Llama3-Blackroot-GGUF/tree/main) made available by [mradermacher](https://huggingface.co/mradermacher)
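If you want the full-precision weights instead of the GGUF quants, here is a minimal loading sketch using the standard `transformers` text-generation API; the prompt and sampling settings are only illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hastagaras/Halu-8B-Llama3-Blackroot"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

# Llama 3 models use a chat template; apply_chat_template builds the prompt.
messages = [{"role": "user", "content": "Write the opening of a short story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,  # illustrative sampling settings, not a recommendation
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```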
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Hastagaras/Halu-8B-Llama3-v0.3](https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.3) as the base.

### Models Merged

The following models were included in the merge:

* [Hastagaras/Halu-8B-Llama3-v0.3](https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.3) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [Hastagaras/Halu-8B-Llama3-v0.3](https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.3) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA)

### Configuration

The following YAML configuration was used to produce this model (a sketch of running it with mergekit follows the results table below):

```yaml
models:
  - model: Hastagaras/Halu-8B-Llama3-v0.3+Blackroot/Llama-3-LongStory-LORA
  - model: Hastagaras/Halu-8B-Llama3-v0.3+Blackroot/Llama-3-8B-Abomination-LORA
merge_method: model_stock
base_model: Hastagaras/Halu-8B-Llama3-v0.3
dtype: bfloat16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Hastagaras__Halu-8B-Llama3-Blackroot).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 69.78 |
| AI2 Reasoning Challenge (25-Shot) | 63.82 |
| HellaSwag (10-Shot)               | 84.55 |
| MMLU (5-Shot)                     | 67.04 |
| TruthfulQA (0-shot)               | 53.28 |
| Winogrande (5-shot)               | 79.48 |
| GSM8k (5-shot)                    | 70.51 |
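To reproduce a merge like the one configured above, here is a minimal sketch using mergekit's documented Python entry point (the `mergekit-yaml` CLI is the equivalent command-line route); the `config.yaml` filename and output path are assumptions:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML from the Configuration section above, saved as config.yaml.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Halu-8B-Llama3-Blackroot",  # hypothetical output directory
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```

The `model: base+LoRA` entries in the config tell mergekit to merge each LoRA into the base weights before the Model Stock merge, which is why the two LoRA-augmented variants and the plain base all appear under `base_model` in the frontmatter.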