The 4bpw fit seems to work well on my 16gig M1 MBP; 8bpw needs more RAM.
[Documentation on MLX](https://github.com/ml-explore/mlx/)

### Other Quants:

MLX: [8bit](https://huggingface.co/Kooten/FlatOrcamaid-13b-v0.2-8bit-mlx), [4bit](https://huggingface.co/Kooten/FlatOrcamaid-13b-v0.2-4bit-mlx)
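As a hedged sketch of how these quants are typically run (assuming the `mlx-lm` helper package, `pip install mlx-lm`, which is not part of this card — MLX itself requires Apple Silicon), the 4-bit repo could be loaded like this:

```python
# Sketch: load the 4-bit MLX quant and generate a short completion.
# mlx-lm is an assumed dependency; the guard keeps this importable elsewhere.
try:
    from mlx_lm import load, generate  # needs Apple Silicon + `pip install mlx-lm`
    HAVE_MLX = True
except ImportError:
    HAVE_MLX = False

# Repo id taken from the quant list above.
REPO_4BIT = "Kooten/FlatOrcamaid-13b-v0.2-4bit-mlx"

if HAVE_MLX:
    # Downloads the weights from the Hub on first use, then runs locally.
    model, tokenizer = load(REPO_4BIT)
    text = generate(model, tokenizer, prompt="Hello,", max_tokens=32)
    print(text)
```

The `ImportError` guard is just so the snippet degrades gracefully on machines without MLX; on a 16GB M1 the 4bpw quant should fit, per the note above.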