---
language:
- en
license: apache-2.0
tags:
- chat
- mlx
pipeline_tag: text-generation
---

# mlx-community/Qwen2-57B-A14B-Instruct-4bit

The model [mlx-community/Qwen2-57B-A14B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen2-57B-A14B-Instruct-4bit) was converted to MLX format from [Qwen/Qwen2-57B-A14B-Instruct](https://huggingface.co/Qwen/Qwen2-57B-A14B-Instruct) using mlx-lm version **0.14.2**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2-57B-A14B-Instruct-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
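
Because this is an instruct-tuned chat model, raw prompts like `"hello"` usually work better when wrapped in the model's chat template. Below is a minimal sketch, assuming the tokenizer returned by `load` delegates to the underlying Hugging Face tokenizer and exposes the standard `apply_chat_template` method:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2-57B-A14B-Instruct-4bit")

# Wrap the user message in the model's chat template so the
# instruct-tuned model sees the format it was trained on.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The package also ships a command-line entry point; depending on your mlx-lm version, an invocation along the lines of `python -m mlx_lm.generate --model mlx-community/Qwen2-57B-A14B-Instruct-4bit --prompt "hello"` should produce similar output without writing any Python.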