
Llama-2-7B-Chat-GGUF

For use with llama.cpp. Original model: meta-llama/Llama-2-7b-chat-hf

GGUF is the model file format used by llama.cpp, introduced as the successor to the older GGML format.

This repo contains quantized GGUF versions of the Llama-2-7B-Chat model in the q4_0, q4_1, q5_0, q5_1, and q8_0 formats.
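A typical way to fetch and run one of these files is with `huggingface-cli` and the llama.cpp CLI. The repo id and file name below are placeholders (check this repo's file list for the exact names), and the binary is `llama-cli` in recent llama.cpp builds (`./main` in older ones):

```shell
# Download one quantized file from the repo (placeholder repo id / file name).
huggingface-cli download <repo-id> llama-2-7b-chat.q4_0.gguf --local-dir .

# Run a short completion with llama.cpp (llama-cli in recent builds,
# ./main in older ones).
./llama-cli -m llama-2-7b-chat.q4_0.gguf -p "Hello" -n 128
```

Lower-bit quantizations (q4_0, q4_1) trade some accuracy for smaller files and lower RAM use; q8_0 is close to the original fp16 quality at roughly twice the size of q4_0.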

Format: GGUF
Model size: 6.74B params
Architecture: llama
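The parameter count above gives a rough way to predict each quantized file's size. The sketch below assumes the effective bits-per-weight of llama.cpp's block quantization formats (e.g. q4_0 stores 32 4-bit weights plus one fp16 scale per block, i.e. (32·4 + 16) / 32 = 4.5 bits per weight) and ignores metadata overhead, so real files will be slightly larger:

```python
# Effective bits per weight for llama.cpp block-quantization formats
# (assumed from the block layouts: payload bits + fp16 scale per 32-weight block).
BITS_PER_WEIGHT = {
    "q4_0": 4.5, "q4_1": 5.0,
    "q5_0": 5.5, "q5_1": 6.0,
    "q8_0": 8.5,
}

def estimated_size_gb(n_params: float, quant: str) -> float:
    """Approximate GGUF file size in GB, ignoring metadata overhead."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimated_size_gb(6.74e9, quant):.2f} GB")  # q4_0: ~3.79 GB
```

This puts the q4_0 file at roughly 3.8 GB and the q8_0 file at roughly 7.2 GB for the 6.74B-parameter model.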

