mlx-community/Meta-Llama-3-8B-Instruct

This model was converted to MLX format from meta-llama/Meta-Llama-3-8B-Instruct using mlx-lm version 0.12.0. Refer to the original model card for more details on the model.
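The conversion step can be reproduced with mlx-lm's convert command. The invocation below is a sketch, not taken from this card: the --mlx-path output directory is arbitrary, and the flags assume mlx-lm 0.12.0 or later. No quantization flag is used, since these weights are stored in BF16.

# convert the original BF16 weights to MLX format (output directory is arbitrary)
python -m mlx_lm.convert --hf-path meta-llama/Meta-Llama-3-8B-Instruct --mlx-path Meta-Llama-3-8B-Instruct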

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct")
response = generate(model, tokenizer, prompt="hello", verbose=True)
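Since this is an instruct-tuned model, prompts generally work better when wrapped in the Llama 3 chat format before generation. A minimal sketch, assuming the tokenizer returned by load exposes chat_template and apply_chat_template (as mlx-lm's tokenizer wrapper does); the example prompt is illustrative only:

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct")

prompt = "Write a haiku about the ocean."

# Apply the chat template if the tokenizer provides one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)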
Model size: 8.03B params (BF16, safetensors)