---
language:
- en
license: other
tags:
- pretrained
- moe
- mlx
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B/blob/main/LICENSE
pipeline_tag: text-generation
---

# mlx-community/Qwen1.5-MoE-A2.7B-Chat-4bit

This model was converted to MLX format from [`Qwen/Qwen1.5-MoE-A2.7B-Chat`](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat) using mlx-lm version [d661440](https://github.com/ml-explore/mlx-examples/commit/d661440dbb8e1970fadad79c5061e786fe1c54ca).

Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).

Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen1.5-MoE-A2.7B-Chat-4bit")
response = generate(model, tokenizer, prompt="Write a story about Einstein", verbose=True)
```
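
Because this conversion is of the chat-tuned variant, prompts formatted with the model's chat template generally produce better responses than raw text. The sketch below assumes the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template` method; the message content is only illustrative.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen1.5-MoE-A2.7B-Chat-4bit")

# Wrap the request in a chat message and apply the model's chat template
# (illustrative prompt; adjust the content to your use case).
messages = [{"role": "user", "content": "Write a story about Einstein"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```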