prince-canuma committed
Commit 11aaad5
1 Parent(s): d75a4f1

Update README.md

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -12,7 +12,10 @@ pipeline_tag: text-generation
 ---
 
 # mlx-community/Qwen1.5-MoE-A2.7B-4bit
-This model was converted to MLX format from [`Qwen/Qwen1.5-MoE-A2.7B`]() using mlx-lm version **0.4.0**.
+This model was converted to MLX format from [`Qwen/Qwen1.5-MoE-A2.7B`](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) using mlx-lm version [d661440](https://github.com/ml-explore/mlx-examples/commit/d661440dbb8e1970fadad79c5061e786fe1c54ca).
+
+Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).
+
 Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) for more details on the model.
 ## Use with mlx
 
@@ -24,5 +27,5 @@ pip install mlx-lm
 from mlx_lm import load, generate
 
 model, tokenizer = load("mlx-community/Qwen1.5-MoE-A2.7B-4bit")
-response = generate(model, tokenizer, prompt="hello", verbose=True)
+response = generate(model, tokenizer, prompt="Write a story about Einstein", verbose=True)
 ```
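
For context (not part of the commit): a 4-bit conversion like the one described in the updated README is typically produced with mlx-lm's `convert` API. The sketch below is an assumption based on mlx-lm's documented usage, not the exact command used for this model; argument names such as `quantize` and `upload_repo` may differ at the pinned commit d661440.

```python
# Minimal sketch (assumed workflow, not taken from this commit): convert the upstream
# Qwen weights to a 4-bit MLX checkpoint and optionally push them to the target repo.
from mlx_lm import convert

convert(
    "Qwen/Qwen1.5-MoE-A2.7B",                            # upstream Hugging Face model
    quantize=True,                                       # quantize weights (4-bit by default)
    upload_repo="mlx-community/Qwen1.5-MoE-A2.7B-4bit",  # assumed upload target
)
```

Once the quantized checkpoint is on the Hub, the `load` / `generate` snippet shown in the README diff pulls it directly by repo name.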