---
tags:
- merge
- mergekit
- cognitivecomputations/dolphin-2.9-llama3-8b
- abacusai/Llama-3-Smaug-8B
- meta-llama/Meta-Llama-3-8B
base_model:
- cognitivecomputations/dolphin-2.9-llama3-8b
- abacusai/Llama-3-Smaug-8B
- meta-llama/Meta-Llama-3-8B
license: apache-2.0
---

![](https://raw.githubusercontent.com/saucam/models/main/aqua-smaug.png)

# 💦 aqua-smaug-0.3-8B 🐉

aqua-smaug-0.3-8B is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit):

* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)
* [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

## 🧩 Configuration

The merge uses the `model_stock` method with abacusai/Llama-3-Smaug-8B as the base model. To reproduce it locally, see the sketch at the end of this card.

```yaml
name: aqua-smaug-0.3-8B
models:
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
  - model: abacusai/Llama-3-Smaug-8B
  - model: meta-llama/Meta-Llama-3-8B
merge_method: model_stock
base_model: abacusai/Llama-3-Smaug-8B
dtype: bfloat16
```

## Eval Results

|Benchmark|Model|winogrande|arc|gsm8k|mmlu|truthfulqa|hellaswag|Average|
|---------|-----|---------:|--:|----:|---:|---------:|--------:|------:|
|openllm|[aqua-smaug-0.3-8B](https://huggingface.co/saucam/aqua-smaug-0.3-8B)|77.11|62.37|76.19|66.00|53.70|83.02|69.73|

Detailed results: https://github.com/saucam/model_evals/tree/main/saucam/aqua-smaug-0.3-8B

## 💻 Usage

```python
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "saucam/aqua-smaug-0.3-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt using the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
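
## Reproducing the merge

To reproduce the merge, save the YAML configuration above to a file (e.g. `config.yaml`) and run it through Mergekit. The sketch below uses Mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) as they existed around the time of this merge; the output path and option values are illustrative, and the API may have shifted in newer Mergekit releases.

```python
# Sketch: run the model_stock merge from Python (requires `pip install mergekit`).
# File names, the output directory, and option values here are illustrative assumptions.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration shown in the Configuration section above
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./aqua-smaug-0.3-8B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The command-line equivalent is `mergekit-yaml config.yaml ./aqua-smaug-0.3-8B`.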