
Quantization by Richard Erkhov.

Github | Discord | Request more models

Llama-3-Smaug-8B - bnb 4bits
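
A minimal loading sketch (not part of the original card), assuming the weights in this repo are saved with their bitsandbytes 4-bit quantization config and that transformers, accelerate, and bitsandbytes are installed; the repo id below is a placeholder for this card's repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this repo's id on the Hugging Face Hub.
model_id = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# The checkpoint is already quantized to 4-bit with bitsandbytes, so the
# quantization config stored in the repo is picked up automatically;
# bitsandbytes and a CUDA GPU are required.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```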

Original model description:

library_name: transformers
license: llama2
datasets:
- aqua_rat
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
- anon8231489123/ShareGPT_Vicuna_unfiltered

Llama-3-Smaug-8B

Built with Meta Llama 3


This model was built by applying the Smaug recipe for improving performance on real-world multi-turn conversations to meta-llama/Meta-Llama-3-8B-Instruct.
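
As an illustration of the intended multi-turn chat usage (not part of the original card), a hedged sketch that relies on the Llama 3 instruct chat template inherited from Meta-Llama-3-8B-Instruct; `model` and `tokenizer` are assumed to be loaded as in the snippet above, and the conversation content is hypothetical.

```python
import torch

# Hypothetical conversation; the model expects the Llama 3 instruct chat format,
# which apply_chat_template produces from these role/content messages.
messages = [
    {"role": "user", "content": "Give me a one-sentence summary of the Pythagorean theorem."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens (the assistant's reply).
reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)

# For a second turn, append the reply and the follow-up user message,
# then repeat the apply_chat_template / generate steps.
messages += [
    {"role": "assistant", "content": reply},
    {"role": "user", "content": "Now state it as a formula."},
]
```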

Model Description

Evaluation

MT-Bench

########## First turn ##########
                               score
model                    turn
Llama-3-Smaug-8B         1     8.77500
Meta-Llama-3-8B-Instruct 1     8.31250
########## Second turn ##########
                               score
model                    turn
Meta-Llama-3-8B-Instruct 2     7.8875
Llama-3-Smaug-8B         2     7.8875
########## Average ##########
                               score
model
Llama-3-Smaug-8B               8.331250
Meta-Llama-3-8B-Instruct       8.10
| Model | First turn | Second turn | Average |
|---|---|---|---|
| Llama-3-Smaug-8B | 8.78 | 7.89 | 8.33 |
| Llama-3-8B-Instruct | 8.31 | 7.89 | 8.10 |
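
As a quick sanity check (not from the original card), the reported averages are simply the mean of the two per-turn scores shown above:

```python
# Mean of first- and second-turn MT-Bench scores reported above.
smaug_avg = (8.7750 + 7.8875) / 2  # 8.33125 -> shown as 8.33 in the table
base_avg = (8.3125 + 7.8875) / 2   # 8.10
print(f"Llama-3-Smaug-8B: {smaug_avg:.2f}, Meta-Llama-3-8B-Instruct: {base_avg:.2f}")
```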

This version of Smaug uses new techniques and new data compared to Smaug-72B, and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.

Format: Safetensors
Model size: 4.65B params
Tensor types: FP16, F32, U8