---
library_name: transformers
license: llama3
datasets:
- aqua_rat
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
---

# Smaug-Llama-3-70B-Instruct-32K

### Built with Meta Llama 3

This is a 32K version of Smaug-Llama-3-70B-Instruct. It uses PoSE (https://arxiv.org/abs/2309.10400) and LoRA (https://arxiv.org/abs/2106.09685) adapter transfer. More details are coming soon.
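As a rough illustration of the LoRA adapter-transfer step, the sketch below applies a long-context LoRA adapter to the base instruct model and merges it with the PEFT library. The adapter path, output path, and the 32K `max_position_embeddings` setting are assumptions made for illustration, not artifacts published with this release.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical adapter path; this release does not ship a standalone adapter.
base_id = "meta-llama/Meta-Llama-3-70B-Instruct"
adapter_id = "path/to/pose-long-context-lora-adapter"

# Load the base instruct model, attach the LoRA adapter, and fold the
# adapter weights back into the base weights.
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()

# Assumption: the merged config also raises max_position_embeddings to 32768
# to reflect the PoSE-extended context window.
merged.config.max_position_embeddings = 32768
merged.save_pretrained("Smaug-Llama-3-70B-Instruct-32K-merged")
```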

Needle-In-A-Haystack (https://github.com/jzhang38/EasyContext) heatmap:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/8Z5XgqrZXKcb2hmeTKTT6.png)

### Model Description

- **Developed by:** [Abacus.AI](https://abacus.ai)
- **License:** https://llama.meta.com/llama3/license/
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).

## How to use

The prompt format is unchanged from Llama 3 70B Instruct.
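For reference, a rendered conversation looks roughly like the string below; this is what `apply_chat_template` in the snippet that follows produces, so you normally do not need to build it by hand.

```python
# The Llama 3 Instruct chat format, shown here for reference only.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a pirate chatbot who always responds in pirate speak!<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Who are you?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
```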

### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "abacusai/Smaug-Llama-3-70B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
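Because the main change in this release is the 32K context window, it can help to check how many tokens a long prompt occupies before generating. The short sketch below is only an example; the summarization prompt and the 32,768-token budget are illustrative assumptions.

```python
from transformers import AutoTokenizer

model_id = "abacusai/Smaug-Llama-3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

long_document = "..."  # your long input text
messages = [
    {"role": "user", "content": f"Summarize the following document:\n\n{long_document}"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
# Leave headroom for max_new_tokens within the extended 32K window.
print(f"Prompt length: {input_ids.shape[-1]} tokens (window: 32768)")
```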


## Evaluation

### Arena-Hard

Score vs. selected others (sourced from https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge). GPT-4o and Gemini-1.5-pro-latest were missing from the original blog post, so we produced those numbers from a local run using the same methodology.

| Model | Score | 95% Confidence Interval | Average Tokens |
| :---- | ---------: | ----------: | ------: |
| GPT-4-Turbo-2024-04-09 | 82.6  | (-1.8, 1.6)  | 662 |
| GPT-4o | 78.3  | (-2.4, 2.1)  | 685 |
| Gemini-1.5-pro-latest | 72.1  | (-2.3, 2.2)  | 630 |
| Claude-3-Opus-20240229 | 60.4  | (-3.3, 2.4)  | 541 |
| **Smaug-Llama-3-70B-Instruct-32K** | 60.0  | (-2.6, 2.1)  | 844 |
| Smaug-Llama-3-70B-Instruct | 56.7  | (-2.2, 2.6)  | 661 |
| GPT-4-0314 | 50.0  | (-0.0, 0.0)  | 423 |
| Claude-3-Sonnet-20240229 | 46.8  | (-2.1, 2.2)  | 552 |
| Llama-3-70B-Instruct | 41.1  | (-2.5, 2.4)  | 583 |
| GPT-4-0613 | 37.9  | (-2.2, 2.0)  | 354 |
| Mistral-Large-2402 | 37.7 | (-1.9, 2.6)  | 400 |
| Mixtral-8x22B-Instruct-v0.1 | 36.4  | (-2.7, 2.9)  | 430 |
| Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2)  | 474 |
| Command-R-Plus | 33.1 | (-2.1, 2.2)  | 541 |
| Mistral-Medium | 31.9  | (-2.3, 2.4)  | 485 |
| GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0)  | 401 |

Note that we believe the verbosity of the model (average number of tokens per response) strongly influences the GPT-4 judge in this case, and at least partially explains the improvement in Arena-Hard score for the 32K model.

### OpenLLM Leaderboard Manual Evaluation

| Model | ARC  | Hellaswag | MMLU | TruthfulQA | Winogrande | GSM8K* | Average |
| :---- | ---: | ------:   | ---: | ---:       | ---:       | ---:   | ---:   |
| Smaug-Llama-3-70B-Instruct-32K | 70.1 | TBA | TBA | 61.9 | 82.2 | TBA | TBA |
| Llama-3-70B-Instruct | 71.4 | 85.7 | 80.0 | 61.8 | 82.9 | 91.1 | 78.8 |

**GSM8K:** The GSM8K numbers quoted here are computed using a recent release
of the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/).
The commit used by the leaderboard has a significant issue that impacts models that
tend to use `:` in their responses due to a bug in the stop word configuration for
GSM8K. The issue is covered in more detail in this
[GSM8K evaluation discussion](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard/discussions/770).
The scores for both Llama-3 and this model are significantly different when evaluated
with the updated harness, as the issue with stop words has been addressed.
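
As a rough guide to reproducing the corrected GSM8K numbers, the sketch below runs the task through the harness's Python API on a recent release. The exact arguments (batch size, dtype, few-shot count) are illustrative assumptions rather than the precise settings used for the table above; substitute the model you want to evaluate.

```python
import lm_eval

# Evaluate GSM8K (5-shot) with a recent lm-evaluation-harness release,
# which includes the corrected stop-word configuration.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=abacusai/Smaug-Llama-3-70B-Instruct,dtype=bfloat16",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size=4,
)
print(results["results"]["gsm8k"])
```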