Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Grafted-Hermetic-Platypus-C-2x7B - bnb 4bits
- Model creator: https://huggingface.co/lodrick-the-lafted/
- Original model: https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B/

Original model description:
---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-217K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Grafted-Hermetic-Platypus-C-2x7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 58.96
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 82.77
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 60.87
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 43.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
      name: Open LLM Leaderboard
---

# Grafted-Hermetic-Platypus-C-2x7B

MoE merge of
- [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B)
- [Hermes-Instruct-7B-217K](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-217K)
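This repository carries bitsandbytes 4-bit weights of the merge above. For orientation, a minimal sketch of an equivalent on-the-fly 4-bit load of the original checkpoint with `transformers` and `bitsandbytes`; the `nf4` quant type and `bfloat16` compute dtype are assumptions, not necessarily the exact settings used for this export:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B"

# Assumed 4-bit settings; adjust as needed.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```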

# Prompt Format

Both the default Mistral-Instruct tags and Alpaca are fine, so either:

```
[INST] {sys_prompt} {instruction} [/INST]
```

or

```
{sys_prompt}

### Instruction:
{instruction}

### Response:
```

The tokenizer default is Alpaca this time around.
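To make the templates concrete, a small hand-filled sketch (the `sys_prompt` and `instruction` strings below are placeholders; `apply_chat_template`, shown in the Usage section, applies the Alpaca default for you):

```python
sys_prompt = "You are a helpful assistant."
instruction = "Summarize the plot of Hamlet in two sentences."

# Mistral-Instruct style
mistral_prompt = f"[INST] {sys_prompt} {instruction} [/INST]"

# Alpaca style (the tokenizer's default chat template for this model)
alpaca_prompt = f"{sys_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"
```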

# Usage

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)

# bfloat16 text-generation pipeline; it loads the model's own tokenizer
# (with its Alpaca chat template) internally.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

messages = [{"role": "user", "content": "Give me a cooking recipe for a peach pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-C-2x7B)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |64.39|
|AI2 Reasoning Challenge (25-Shot)|58.96|
|HellaSwag (10-Shot)              |82.77|
|MMLU (5-Shot)                    |62.08|
|TruthfulQA (0-shot)              |60.87|
|Winogrande (5-shot)              |77.74|
|GSM8k (5-shot)                   |43.90|