---
language:
- en
license: llama3
tags:
- meta
- llama-3
- mlx
pipeline_tag: text-generation
---

# EmoLlama-3-8B-Instruct-1048k-8bit

This model was converted to MLX format from [`mlx-community/Llama-3-8B-Instruct-1048k-8bit`](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-8bit). Refer to the [original model card](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-8bit) for more details on the model.

## Use with mlx

```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/EmoLlama-3-8B-Instruct-1048k-8bit --prompt "My name is"
```
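
Alternatively, if this 8-bit conversion can be loaded with the `mlx-lm` package (an assumption, not confirmed by this card), a minimal Python sketch might look like the following:

```python
# Sketch only: assumes the repo above loads directly with mlx-lm.
# pip install mlx-lm
from mlx_lm import load, generate

# Download the quantized weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/EmoLlama-3-8B-Instruct-1048k-8bit")

# Generate a short completion for the same prompt used in the CLI example.
response = generate(model, tokenizer, prompt="My name is", max_tokens=64)
print(response)
```

For instruction-style prompts, the tokenizer's chat template (if the converted repo ships one) can be applied to format the prompt before calling `generate`.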