
# Llama-3-8B-Japanese-Instruct-GGUF

## Original Model

[haqishen/Llama-3-8B-Japanese-Instruct](https://huggingface.co/haqishen/Llama-3-8B-Japanese-Instruct)

## Run with GaiaNet

**Prompt template:**

prompt template: `llama-3-chat`
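
For reference, the `llama-3-chat` template wraps each turn in Llama 3's special header tokens. A representative rendering (the system and user messages are placeholders):

```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_message}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```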

**Context size:**

chat_ctx_size: `4096`
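
A minimal sketch of how these two settings might sit in a GaiaNet node's `config.json`; the field names mirror the keys shown above, and the truncated download URL is a placeholder to fill in with this repository's actual file path:

```json
{
  "chat": "https://huggingface.co/.../Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf",
  "chat_ctx_size": "4096",
  "prompt_template": "llama-3-chat"
}
```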

**Run with GaiaNet:**
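
GaiaNet nodes serve GGUF models through LlamaEdge, so one way to sanity-check the model locally is to point LlamaEdge's API server at a quant from the table below. A sketch, assuming `wasmedge` (with the GGML plugin) and `llama-api-server.wasm` are already installed:

```bash
# Load the GGUF into WasmEdge's GGML backend and start an
# OpenAI-compatible chat API, using the settings from this card.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template llama-3-chat \
  --ctx-size 4096 \
  --model-name Llama-3-8B-Japanese-Instruct
```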

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Llama-3-8B-Japanese-Instruct-Q2_K.gguf | Q2_K | 2 | 3.18 GB | smallest, significant quality loss - not recommended for most purposes |
| Llama-3-8B-Japanese-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 4.32 GB | small, substantial quality loss |
| Llama-3-8B-Japanese-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 4.02 GB | very small, high quality loss |
| Llama-3-8B-Japanese-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 3.66 GB | very small, high quality loss |
| Llama-3-8B-Japanese-Instruct-Q4_0.gguf | Q4_0 | 4 | 4.66 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Llama-3-8B-Japanese-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 4.92 GB | medium, balanced quality - recommended |
| Llama-3-8B-Japanese-Instruct-Q4_K_S.gguf | Q4_K_S | 4 | 4.69 GB | small, greater quality loss |
| Llama-3-8B-Japanese-Instruct-Q5_0.gguf | Q5_0 | 5 | 5.6 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 5.73 GB | large, very low quality loss - recommended |
| Llama-3-8B-Japanese-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 5.6 GB | large, low quality loss - recommended |
| Llama-3-8B-Japanese-Instruct-Q6_K.gguf | Q6_K | 6 | 6.6 GB | very large, extremely low quality loss |
| Llama-3-8B-Japanese-Instruct-Q8_0.gguf | Q8_0 | 8 | 8.54 GB | very large, extremely low quality loss - not recommended |
| Llama-3-8B-Japanese-Instruct-f16.gguf | f16 | 16 | 16.1 GB | |
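
To fetch a single quant file rather than cloning the whole repository, something like the following works; the repository id shown is an assumption, so substitute this card's actual namespace/name:

```bash
# Hypothetical repo id; replace with this card's actual one.
huggingface-cli download gaianet/Llama-3-8B-Japanese-Instruct-GGUF \
  Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf \
  --local-dir .
```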

Quantized with llama.cpp b2824.
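For reproduction, the quants above follow llama.cpp's standard quantize workflow. A sketch, assuming a llama.cpp build around b2824 (where the tool was still named `quantize`) and using the f16 file from the table as input:

```bash
# Re-quantize the f16 GGUF into a Q5_K_M quant with llama.cpp.
./quantize Llama-3-8B-Japanese-Instruct-f16.gguf \
  Llama-3-8B-Japanese-Instruct-Q5_K_M.gguf Q5_K_M
```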
