
Llama-3-Smaug-8B-GGUF

Model Description

Built with Meta Llama 3

This model was built by applying the Smaug recipe for improving performance on real-world multi-turn conversations to meta-llama/Meta-Llama-3-8B.

Evaluation

MT-Bench

########## First turn ##########
                                  score
model                    turn
Llama-3-Smaug-8B         1      8.77500
Meta-Llama-3-8B-Instruct 1      8.10000
########## Second turn ##########
                                  score
model                    turn
Meta-Llama-3-8B-Instruct 2      8.21250
Llama-3-Smaug-8B         2      7.88750
########## Average ##########
                                  score
model
Llama-3-Smaug-8B               8.331250
Meta-Llama-3-8B-Instruct       8.156250
| Model | First turn | Second turn | Average |
|---|---|---|---|
| Llama-3-Smaug-8B | 8.78 | 7.89 | 8.33 |
| Meta-Llama-3-8B-Instruct | 8.10 | 8.21 | 8.16 |
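The "Average" column is simply the mean of the two per-turn scores. A quick sanity check, using the turn scores reported above:

```python
# Recompute the MT-Bench averages from the per-turn scores reported above.
scores = {
    "Llama-3-Smaug-8B": (8.7750, 7.8875),
    "Meta-Llama-3-8B-Instruct": (8.1000, 8.2125),
}
for model, (first, second) in scores.items():
    avg = (first + second) / 2
    print(f"{model}: {avg:.6f}")
# Llama-3-Smaug-8B: 8.331250
# Meta-Llama-3-8B-Instruct: 8.156250
```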

This version of Smaug uses new techniques and new data compared to Smaug-72B; more information will be released later. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228.

GGUF
Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
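Any of the quantization levels above can be downloaded and run locally with llama.cpp. A minimal sketch, assuming a built llama.cpp checkout; the quantized filename shown is illustrative, so pick an actual .gguf file from the repository's file list:

```shell
# Download one quantized file from the repo (filename is an assumption --
# substitute a real .gguf name from the repository's file listing).
huggingface-cli download QuantFactory/Llama-3-Smaug-8B-GGUF \
    Llama-3-Smaug-8B.Q4_K_M.gguf --local-dir .

# Run a short generation with llama.cpp's CLI.
./llama-cli -m Llama-3-Smaug-8B.Q4_K_M.gguf -p "Hello" -n 128
```

Lower-bit quantizations trade quality for smaller files and less memory; 4-bit and 5-bit variants are a common middle ground.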

