Update README.md
README.md CHANGED

@@ -13,7 +13,7 @@ This model [mlx-community/Ling-1T-mlx-DQ3_K_M](https://huggingface.co/mlx-community/Ling-1T-mlx-DQ3_K_M)
 converted to MLX format from [inclusionAI/Ling-1T](https://huggingface.co/inclusionAI/Ling-1T)
 using mlx-lm version **0.28.1**.
 
-This is created for people using a single Apple Mac Studio M3 Ultra with 512 GB. The 4-bit version of
+This is created for people using a single Apple Mac Studio M3 Ultra with 512 GB. The 4-bit version of Ling 1T does not fit. Using research results, we aim to get 4-bit performance from a slightly smaller and smarter quantization. It should also not be so large that it leaves no memory for a useful context window.
 
 ```bash
 pip install mlx-lm
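The hunk above is cut off inside the install block. For context, a minimal way to try the converted model from the shell, assuming the rest of the README follows the usual mlx-lm usage pattern (the prompt text and token count below are placeholders, not from the diff):

```bash
# Install mlx-lm, then generate with the quantized model straight from the CLI.
pip install mlx-lm
mlx_lm.generate --model mlx-community/Ling-1T-mlx-DQ3_K_M \
  --prompt "Explain the trade-offs between 3-bit and 4-bit quantization." \
  --max-tokens 256
```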