Update README.md
README.md
CHANGED
@@ -213,20 +213,6 @@ quantized_by: bartowski
 
 ## Exllama v2 Quantizations of Meta-Llama-3-8B-Instruct
 
-If generation refuses to stop, you can edit tokenizer_config.json.
-
-Replace line 2055:
-
-```
-"eos_token": "<|end_of_text|>",
-```
-
-with:
-
-```
-"eos_token": "<|eot_id|>",
-```
-
 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.
 
 <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
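For anyone who still needs the eos_token fix that this commit removes from the README, here is a minimal sketch of the same swap done programmatically rather than by editing line 2055 by hand. The file path and the JSON round-trip are assumptions, not part of the original instructions; the only change the snippet makes is the `"eos_token"` value.

```python
import json

# Sketch of the eos_token swap described in the removed README lines,
# applied programmatically instead of editing the file by hand.
# CONFIG_PATH is an assumption: point it at the tokenizer_config.json
# that ships with whichever quant branch you downloaded.
CONFIG_PATH = "tokenizer_config.json"

with open(CONFIG_PATH, "r", encoding="utf-8") as f:
    config = json.load(f)

# Replace the end-of-text terminator with <|eot_id|> so generation
# stops at the end of each assistant turn.
config["eos_token"] = "<|eot_id|>"

with open(CONFIG_PATH, "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2, ensure_ascii=False)
```

Note that re-serializing with json.dump rewrites the file's formatting, so the line number cited in the original instructions will no longer line up afterwards.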