# Llama 3.1 8B Instruct GGUF
**Updated as of 2024-07-27**
Original model: Meta-Llama-3.1-8B-Instruct
Model creator: Meta
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.
This repo contains GGUF format model files for Meta's Llama 3.1 8B Instruct, updated as of 2024-07-27 to incorporate long-context improvements as well as changes to the Hugging Face model itself.
Learn more on Meta’s Llama 3.1 page.
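If you want to fetch one of the GGUF files programmatically rather than through the website, a minimal sketch using `huggingface_hub` is below. The quantization filename is an assumption for illustration; check this repo's file list for the variants that are actually published.

```python
# Minimal sketch: download a single quantized GGUF file from this repo.
# The filename below is an assumed example; see the repo's file list for
# the quantizations that actually exist.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="brittlewis12/Meta-Llama-3.1-8B-Instruct-GGUF",
    filename="meta-llama-3.1-8b-instruct.Q4_K_M.gguf",  # assumed filename
)
print(model_path)
```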
## What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp. These files were converted with llama.cpp build 3472 (revision b5e9546), using autogguf.
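Because GGUF is llama.cpp's native format, the files in this repo can be loaded with any llama.cpp-based runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the local path and parameters are illustrative assumptions, not part of this repo.

```python
# Minimal sketch: run a GGUF model with llama-cpp-python (a llama.cpp binding).
# The path and parameters below are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="meta-llama-3.1-8b-instruct.Q4_K_M.gguf",  # assumed local file
    n_ctx=8192,  # context window to allocate; adjust to your hardware
)
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```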
## Prompt template
<|start_header_id|>system<|end_header_id|>
{{system_prompt}}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{prompt}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
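Most llama.cpp-based runtimes apply this chat template automatically from the GGUF metadata, but if you are assembling the prompt yourself (e.g. for a raw completion endpoint), a minimal sketch is below. The helper name is made up for illustration, and the blank line after each `<|end_header_id|>` follows Meta's published Llama 3.1 chat template.

```python
# Minimal sketch of filling in the Llama 3.1 prompt template by hand.
# format_llama31_prompt is an illustrative helper, not part of this repo.
def format_llama31_prompt(system_prompt: str, prompt: str) -> str:
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


print(format_llama31_prompt("You are a helpful assistant.", "Hello!"))
```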
## Download & run with cnvrs on iPhone, iPad, and Mac!
cnvrs is the best app for private, local AI on your device:
- create & save Characters with custom system prompts & temperature settings
- download and experiment with any GGUF model you can find on Hugging Face!
- make it your own with custom Theme colors
- powered by Metal ⚡️ & llama.cpp, with haptics during response streaming!
- try it out yourself today, on TestFlight!
- follow cnvrs on Twitter to stay up to date
## Original Model Evaluation
| Category | Benchmark | # Shots | Metric | Llama 3 8B Instruct | Llama 3.1 8B Instruct | Llama 3 70B Instruct | Llama 3.1 70B Instruct | Llama 3.1 405B Instruct |
|---|---|---|---|---|---|---|---|---|
| General | MMLU | 5 | macro_avg/acc | 68.5 | 69.4 | 82.0 | 83.6 | 87.3 |
| | MMLU (CoT) | 0 | macro_avg/acc | 65.3 | 73.0 | 80.9 | 86.0 | 88.6 |
| | MMLU-Pro (CoT) | 5 | micro_avg/acc_char | 45.5 | 48.3 | 63.4 | 66.4 | 73.3 |
| | IFEval | | | 76.8 | 80.4 | 82.9 | 87.5 | 88.6 |
| Reasoning | ARC-C | 0 | acc | 82.4 | 83.4 | 94.4 | 94.8 | 96.9 |
| | GPQA | 0 | em | 34.6 | 30.4 | 39.5 | 41.7 | 50.7 |
| Code | HumanEval | 0 | pass@1 | 60.4 | 72.6 | 81.7 | 80.5 | 89.0 |
| | MBPP ++ base version | 0 | pass@1 | 70.6 | 72.8 | 82.5 | 86.0 | 88.6 |
| | Multipl-E HumanEval | 0 | pass@1 | - | 50.8 | - | 65.5 | 75.2 |
| | Multipl-E MBPP | 0 | pass@1 | - | 52.4 | - | 62.0 | 65.7 |
| Math | GSM-8K (CoT) | 8 | em_maj1@1 | 80.6 | 84.5 | 93.0 | 95.1 | 96.8 |
| | MATH (CoT) | 0 | final_em | 29.1 | 51.9 | 51.0 | 68.0 | 73.8 |
| Tool Use | API-Bank | 0 | acc | 48.3 | 82.6 | 85.1 | 90.0 | 92.0 |
| | BFCL | 0 | acc | 60.3 | 76.1 | 83.0 | 84.8 | 88.5 |
| | Gorilla Benchmark API Bench | 0 | acc | 1.7 | 8.2 | 14.7 | 29.7 | 35.3 |
| | Nexus (0-shot) | 0 | macro_avg/acc | 18.1 | 38.5 | 47.8 | 56.7 | 58.7 |
| Multilingual | Multilingual MGSM (CoT) | 0 | em | - | 68.9 | - | 86.9 | 91.6 |
### Multilingual benchmarks
| Category | Benchmark | Language | Llama 3.1 8B | Llama 3.1 70B | Llama 3.1 405B |
|---|---|---|---|---|---|
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 62.12 | 80.13 | 84.95 |
| | | Spanish | 62.45 | 80.05 | 85.08 |
| | | Italian | 61.63 | 80.4 | 85.04 |
| | | German | 60.59 | 79.27 | 84.36 |
| | | French | 62.34 | 79.82 | 84.66 |
| | | Hindi | 50.88 | 74.52 | 80.31 |
| | | Thai | 50.32 | 72.95 | 78.21 |