Update README.md
README.md CHANGED
@@ -4,7 +4,7 @@ base_model: [mattshumer/Reflection-Llama-3.1-70B]
---

# High Precision quant of [Reflection-Llama-3.1-70B](https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B)

-# This gets 99.96% perplexity at 50gb filesize whereas fp8 (not tested on this model) is
+# This gets 99.96% perplexity at 50gb filesize whereas fp8 (not tested on this model) is known to be 97-98.8%
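The percentages above read as perplexity-retention scores relative to the unquantized weights. The commit does not state the exact formula or the measured perplexities, so the sketch below is only an illustration of how such a figure can be computed; the `ppl_fp16` and `ppl_quant` values are hypothetical placeholders, not measurements from this model.

```python
# Minimal sketch: perplexity retention of a quantized model vs. an
# unquantized reference. Assumption (not stated in the commit): retention
# is reference_ppl / quant_ppl * 100, so a lossless quant scores 100%.

def perplexity_retention(ppl_reference: float, ppl_quant: float) -> float:
    """Perplexity is lower-is-better, so a quant whose perplexity equals
    the reference retains 100%; higher quant perplexity lowers the score."""
    return ppl_reference / ppl_quant * 100.0


if __name__ == "__main__":
    ppl_fp16 = 3.5000   # hypothetical reference perplexity
    ppl_quant = 3.5014  # hypothetical high-precision quant perplexity
    print(f"retention: {perplexity_retention(ppl_fp16, ppl_quant):.2f}%")
```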