https://aitorrent.zerroug.de/bartowski-minitron-4b-base-gguf-torrent/
#1 by zerroug - opened
README.md CHANGED

@@ -9,6 +9,9 @@ quantized_by: bartowski
 
 ## Llamacpp imatrix Quantizations of Minitron-4B-Base
 
+## Torrent Files
+https://aitorrent.zerroug.de/bartowski-minitron-4b-base-gguf-torrent/
+
 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3600">b3600</a> for quantization.
 
 Original model: https://huggingface.co/nvidia/Minitron-4B-Base
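The README above notes that the GGUF files were produced with llama.cpp release b3600. As a minimal sketch of how a downloaded quant could be loaded, here is an example using the llama-cpp-python bindings; the bindings and the file name `Minitron-4B-Base-Q4_K_M.gguf` are assumptions for illustration and are not part of the original post.

```python
# Sketch: load a locally downloaded Minitron-4B-Base GGUF quant with llama-cpp-python.
# The file name is an assumed example; use whichever quant you fetched from the torrent.
from llama_cpp import Llama

llm = Llama(
    model_path="Minitron-4B-Base-Q4_K_M.gguf",  # assumed local path to the quantized file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers if built with GPU support; ignored otherwise
)

# Minitron-4B-Base is a base (non-instruct) model, so plain text completion is used.
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```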