pcuenq (HF staff) committed
Commit 77a00f7
1 Parent(s): 4320ed1

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +10 -4
README.md CHANGED
@@ -14,10 +14,16 @@ inference:
  # pcuenq/My-Mistral-7B-v0.1-4bit
  This model was converted to MLX format from [`mistralai/Mistral-7B-v0.1`]().
  Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
+
  ## Use with mlx
+
  ```bash
- pip install mlx
- git clone https://github.com/ml-explore/mlx-examples.git
- cd mlx-examples/llms/hf_llm
- python generate.py --model pcuenq/My-Mistral-7B-v0.1-4bit --prompt "My name is"
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("pcuenq/My-Mistral-7B-v0.1-4bit")
+ response = generate(model, tokenizer, prompt="hello", verbose=True)
  ```
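
For reference, the snippet added in this commit can be exercised end to end roughly as below. This is a minimal sketch, not part of the commit itself: it assumes `mlx-lm` has been installed on an Apple Silicon machine, the prompt text is reused from the old README for illustration, and the `max_tokens` value is an assumed, adjustable generation cap.

```python
# Minimal usage sketch for the converted 4-bit model
# (assumes `pip install mlx-lm` has already been run on Apple Silicon).
from mlx_lm import load, generate

# Download (or reuse a cached copy of) the MLX weights and tokenizer.
model, tokenizer = load("pcuenq/My-Mistral-7B-v0.1-4bit")

# Generate a short completion; the prompt reuses the example from the
# old README, and max_tokens is an assumed cap on generated tokens.
response = generate(
    model,
    tokenizer,
    prompt="My name is",
    max_tokens=100,
    verbose=True,  # print output as it is generated
)
print(response)
```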