Goekdeniz-Guelmez m-i committed on
Commit f159c31
Parent(s): 243f226

Update README.md (#2)


- Update README.md (b4d8f761b182922879d7c9d17cf3aeb429a9bc52)


Co-authored-by: Marc Igeleke <m-i@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -9,7 +9,7 @@ extra_gated_description: If you want to learn more about how we process your per
 
 # mlx-community/Mamba-Codestral-7B-v0.1-8bits
 
-The Model [mlx-community/Mamba-Codestral-7B-v0.1-8bits](https://huggingface.co/mlx-community/Mamba-Codestral-7B-v0.1-8bits) was converted to MLX format from [mistralai/Mamba-Codestral-7B-v0.1](https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1) using mlx-lm version **0.18.2**.
+The Model [mlx-community/Mamba-Codestral-7B-v0.1-8bit](https://huggingface.co/mlx-community/Mamba-Codestral-7B-v0.1-8bit) was converted to MLX format from [mistralai/Mamba-Codestral-7B-v0.1](https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1) using mlx-lm version **0.18.2**.
 
 ## Use with mlx
 
@@ -20,7 +20,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("mlx-community/Mamba-Codestral-7B-v0.1-8bits")
+model, tokenizer = load("mlx-community/Mamba-Codestral-7B-v0.1-8bit")
 
 prompt="hello"
 
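
For reference, a minimal end-to-end usage sketch built from the README snippet above. The diff cuts off after `prompt="hello"`; the chat-template handling and the `generate(..., verbose=True)` call below follow the usual mlx-lm model-card pattern and are assumptions here, not part of this commit.

```python
# Minimal sketch, assuming the standard mlx-lm workflow (pip install mlx-lm).
from mlx_lm import load, generate

# Repository name uses the corrected "-8bit" suffix from this commit.
model, tokenizer = load("mlx-community/Mamba-Codestral-7B-v0.1-8bit")

prompt = "hello"

# If the tokenizer defines a chat template, wrap the prompt in it
# (assumed step, mirroring the common mlx-lm README boilerplate).
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion and print it as it streams.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```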