---
license: llama2
library_name: transformers
tags:
- mlx
datasets:
- aqua_rat
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
---

# voxmenthe/Smaug-Llama-3-70B-Instruct-mlx-8bit

The model [voxmenthe/Smaug-Llama-3-70B-Instruct-mlx-8bit](https://huggingface.co/voxmenthe/Smaug-Llama-3-70B-Instruct-mlx-8bit) was converted to MLX format from [abacusai/Smaug-Llama-3-70B-Instruct](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct) using mlx-lm version **0.13.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("voxmenthe/Smaug-Llama-3-70B-Instruct-mlx-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
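
Since this is an Instruct model, you will generally get better results by formatting the prompt with the tokenizer's chat template rather than passing raw text. The snippet below is a minimal sketch assuming the converted repo ships the Llama-3 chat template with its tokenizer; the message content and `max_tokens` value are illustrative.

```python
from mlx_lm import load, generate

model, tokenizer = load("voxmenthe/Smaug-Llama-3-70B-Instruct-mlx-8bit")

# Build a Llama-3 style chat prompt using the Hugging Face tokenizer's
# chat template bundled with the converted weights.
messages = [{"role": "user", "content": "Write a haiku about the ocean."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```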