JohannesGaessler committed on
Commit 52de903
Parent: 9f7a7b7

Update README.md

Files changed (1)
README.md +3 -1
README.md CHANGED
@@ -3,4 +3,6 @@ license: apache-2.0
 ---
 
 This repository contains FP16 logits produced via the llama.cpp `perplexity` with `wikitext-2-raw/wiki.test.raw`.
-By using the logits as input the KL divergence for a quantized model can be calculated without the need to run the model at FP16.
+By using the logits as input, the KL divergence for a quantized model can be calculated without the need to run the model at FP16.
+
+**Important: The logits I previously uploaded for LLaMA 3 Instruct 70b FP16 may have been affected by hardware instability issues, and any conclusions drawn from them may be incorrect. I have therefore deleted the file.**
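
As a rough sketch of the workflow this enables (the invocation below assumes the `--kl-divergence-base` and `--kl-divergence` options of the llama.cpp `perplexity` example; the model file names are placeholders):

```sh
# Step 1 (already done for this repository): run the FP16 model once over
# the test set and store its logits as the KL divergence base file.
./perplexity -m llama-3-70b-instruct-f16.gguf -f wikitext-2-raw/wiki.test.raw \
    --kl-divergence-base logits-f16.kld

# Step 2: evaluate a quantized model against the stored base logits;
# the FP16 model no longer needs to be run.
./perplexity -m llama-3-70b-instruct-q4_k_m.gguf \
    --kl-divergence-base logits-f16.kld --kl-divergence
```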