| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3_1-Nemotron-51B-Instruct.Q4_K_M.gguf](https://huggingface.co/ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct.Q4_K_M.gguf) | Q4_K_M | 31GB | Good for A100 40GB or dual 3090 |
| [Llama-3_1-Nemotron-51B-Instruct.Q4_0.gguf](https://huggingface.co/ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct.Q4_0.gguf) | Q4_0 | 29.3GB | For 32GB cards, e.g. 5090. |
| [Llama-3_1-Nemotron-51B-Instruct.Q4_0_4_8.gguf](https://huggingface.co/ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct.Q4_0_4_8.gguf) | Q4_0_4_8 | 29.3GB | For Apple Silicon |
| [Llama-3_1-Nemotron-51B-Instruct.Q3_K_S.gguf](https://huggingface.co/ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct.Q3_K_S.gguf) | Q3_K_S | 22.7GB | Largest model that can fit a single 3090 |
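
As a rough guide, the VRAM recommendations in the Description column above can be expressed as a small helper. This is only a sketch (`suggest_quant` is a hypothetical function, not part of this repo); the thresholds are taken directly from the table, and the Q4_0_4_8 file is left out since it targets Apple Silicon unified memory rather than a discrete GPU.

```python
def suggest_quant(vram_gb: float) -> str:
    """Return the largest quant from the table above that fits the given VRAM (GB)."""
    # (approximate minimum VRAM in GB, quant name), ascending, per the table:
    #   Q3_K_S (22.7GB file) - largest model that fits a single 3090 (24GB)
    #   Q4_0   (29.3GB file) - for 32GB cards, e.g. 5090
    #   Q4_K_M (31GB file)   - good for A100 40GB or dual 3090
    table = [
        (24, "Q3_K_S"),
        (32, "Q4_0"),
        (40, "Q4_K_M"),
    ]
    best = None
    for min_vram, quant in table:
        if vram_gb >= min_vram:
            best = quant
    if best is None:
        raise ValueError("Less than 24GB VRAM: none of these quants fits fully on the GPU")
    return best

print(suggest_quant(24))  # Q3_K_S
print(suggest_quant(48))  # Q4_K_M
```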
## How to check i8mm support for Apple devices