Doctor-Shotgun committed: Update README.md
[Here's the rsLoRA adapter](https://huggingface.co/Doctor-Shotgun/Magnum-v4-SE-70B-LoRA) for those merge-makers out there to play with.

## Quantized models

Thank you to [bartowski](https://huggingface.co/bartowski) for the [imatrix GGUF quants](https://huggingface.co/bartowski/L3.3-70B-Magnum-v4-SE-GGUF) and [mradermacher](https://huggingface.co/mradermacher) for the [static GGUF quants](https://huggingface.co/mradermacher/L3.3-70B-Magnum-v4-SE-GGUF).

Thank you to [alpindale](https://huggingface.co/alpindale) for the [fp8 dynamic quant](https://huggingface.co/alpindale/L3.3-70B-Magnum-v4-SE-FP8).

Exl2 quants courtesy of [MikeRoz](https://huggingface.co/MikeRoz), including `measurement.json` if you need to make one with a different bitrate:
- [2.25bpw](https://huggingface.co/MikeRoz/Doctor-Shotgun_L3.3-70B-Magnum-v4-SE-2.25bpw-h6-exl2)
- [3.5bpw](https://huggingface.co/MikeRoz/Doctor-Shotgun_L3.3-70B-Magnum-v4-SE-3.5bpw-h6-exl2)
- [4.25bpw](https://huggingface.co/MikeRoz/Doctor-Shotgun_L3.3-70B-Magnum-v4-SE-4.25bpw-h6-exl2)
- [6.0bpw](https://huggingface.co/MikeRoz/Doctor-Shotgun_L3.3-70B-Magnum-v4-SE-6.0bpw-h6-exl2)
- [8.0bpw](https://huggingface.co/MikeRoz/Doctor-Shotgun_L3.3-70B-Magnum-v4-SE-8.0bpw-h8-exl2)
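If you want a bitrate not listed above, the bundled `measurement.json` can be reused to skip the measurement pass. A rough sketch using exllamav2's conversion script (all paths and the 5.0 bpw target are placeholders, and the exact flags may differ between exllamav2 versions):

```shell
# Sketch: re-quantize with exllamav2, reusing the provided measurement.json
# (assumes an exllamav2 checkout; run from its repo root).
# -i: fp16 source weights, -o: scratch dir, -cf: finished quant output dir,
# -b: target bits per weight, -m: existing measurement file to reuse.
python convert.py \
    -i /path/to/L3.3-70B-Magnum-v4-SE \
    -o /path/to/workdir \
    -cf /path/to/L3.3-70B-Magnum-v4-SE-5.0bpw-h6-exl2 \
    -b 5.0 \
    -m /path/to/measurement.json
```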

## Usage

This model follows the Llama 3 prompt format.
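For reference, a single-turn prompt in this format can be assembled by hand; a minimal sketch (the helper function name is ours, but the special tokens follow the standard Llama 3 template):

```python
def format_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 3 chat prompt string.

    Each message is wrapped in header tokens and terminated with
    <|eot_id|>; the trailing assistant header cues the model to respond.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

In practice, most backends apply this template for you via the tokenizer's chat template, so hand-building the string is mainly useful for raw completion endpoints.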