mradermacher committed on
Commit
b9b2b63
1 Parent(s): ad39d96

auto-patch README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -179,7 +179,6 @@ tags:
 <!-- ### vocab_type: -->
 static quants of https://huggingface.co/NurtureAI/Meta-Llama-3-70B-Instruct-64k
 
-
 <!-- provided-files -->
 weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage
@@ -196,6 +195,7 @@ more details, including on how to concatenate multi-part files.
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
 | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
+| [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-64k-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-64k.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
 
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
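The added Q8_0 row points at a two-part download, and the patched README mentions concatenating multi-part files. A minimal sketch of that step, assuming the parts are plain byte-splits of one GGUF file (filenames taken from the table above; download both parts first):

```shell
# Restore the single Q8_0 GGUF by concatenating the parts in order.
# The .partNofM files are assumed to be simple byte-splits, so plain
# `cat` in part order rebuilds the original file.
cat Meta-Llama-3-70B-Instruct-64k.Q8_0.gguf.part1of2 \
    Meta-Llama-3-70B-Instruct-64k.Q8_0.gguf.part2of2 \
    > Meta-Llama-3-70B-Instruct-64k.Q8_0.gguf
```

After concatenation the `.partNofM` files can be deleted; the joined `.gguf` is what llama.cpp-style loaders expect.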