Languages: Chinese
Tags: Not-For-All-Audiences
License:

https://www.kaggle.com/code/reginliu/perplexity
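All of the models below are GGUF files, and the PPL column follows the "value +/- error" format reported by llama.cpp's perplexity tool, so the numbers were presumably produced that way via the notebook linked above. A minimal sketch of such a run from Python — the binary path, model file, and evaluation text file are illustrative assumptions, not details taken from this card:

```python
# Hedged sketch: invoking llama.cpp's perplexity tool on a GGUF model.
# Binary name/path, model file, and evaluation corpus are assumptions.
import subprocess

result = subprocess.run(
    [
        "./llama-perplexity",              # older llama.cpp builds name this binary "perplexity"
        "-m", "causallm_14b.IQ3_XS.gguf",  # GGUF model under test
        "-f", "eval.txt",                  # plain-text evaluation corpus
    ],
    capture_output=True,
    text=True,
)

# The tool reports perplexity as "PPL = value +/- error",
# matching the PPL column in the table below.
print(result.stdout)
print(result.stderr)
```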

| Model | Size (GB) | PPL | n_vocab | PPL_adjust |
|---|---|---|---|---|
| qwen1_5-14b-chat-IQ3_XS.gguf | 6.48 | 11.8084 +/- 0.121615 | 152064 | 11.8084 |
| causallm_14b.IQ3_XS.gguf | 6.48 | 13.3798 +/- 0.13641 | 152064 | 13.3798 |
| causallm_14b.IQ4_XS.gguf | 7.85 | 13.4127 +/- 0.13762 | 152064 | 13.4127 |
| causallm_14b.Q4_0.gguf | 8.18 | 13.6714 +/- 0.13964 | 152064 | 13.6714 |
| causallm_14b.IQ2_XXS.gguf | 4.98 | 15.0160 +/- 0.15004 | 152064 | 15.0160 |
| Yi-9B-200K_iQ3xxs.gguf | 3.47 | 6.8157 +/- 0.05453 | 64000 | 16.1941 |
| Yi-9B-200K-Q8_0.gguf | 9.38 | 6.8402 +/- 0.05741 | 64000 | 16.2523 |
| causallm_7b.Q5_K_M.gguf | 5.53 | 16.5278 +/- 0.18005 | 152064 | 16.5278 |
| Qwen1.5-22B-Chat-Merge-Q4_0.gguf | 12.6 | 21.9669 +/- 0.28980 | 152064 | 21.9669 |
| Kunoichi-DPO-v2-7B-Q4_K_M-imatrix.gguf | 4.37 | 6.7096 +/- 0.04519 | 32000 | 31.8840 |
| WizardLM-2-7B-IQ4_XS-imat.gguf | 3.91 | 9.8891 +/- 0.08106 | 32000 | 46.9930 |
For a model that returns tokens completely at random, we have
$$P(\text{token} \mid \text{context}) = \frac{1}{n_{vocab}}, \quad PPL = \sqrt[N]{\left(\frac{1}{P}\right)^{N}} = n_{vocab}$$
therefore
$$PPL_{adjust} = \frac{PPL}{n_{vocab}} \times 152064$$

where 152064 is the n_vocab of the Qwen/CausalLM models in the table, used here as the common reference vocabulary.
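A minimal Python sketch of this adjustment, checked against the Yi-9B-200K row of the table above; the function and variable names are illustrative and not taken from the linked notebook:

```python
# Minimal sketch of the vocabulary-size adjustment described above.
# Only the formula and the checked values come from the card.
import math

REFERENCE_VOCAB = 152064  # n_vocab of the Qwen/CausalLM models in the table


def uniform_random_ppl(n_vocab: int) -> float:
    """PPL of a model that predicts every token uniformly at random.

    With P(token | context) = 1 / n_vocab at every position,
    PPL = exp(-mean(log P)) = n_vocab.
    """
    log_p = math.log(1.0 / n_vocab)
    return math.exp(-log_p)  # equals n_vocab


def ppl_adjust(ppl: float, n_vocab: int, reference: int = REFERENCE_VOCAB) -> float:
    """Rescale a PPL measured under an n_vocab tokenizer to the reference vocabulary."""
    return ppl / n_vocab * reference


assert round(uniform_random_ppl(64000)) == 64000

# Yi-9B-200K_iQ3xxs.gguf: PPL 6.8157 with a 64000-token vocabulary
print(ppl_adjust(6.8157, 64000))  # ~16.1941, matching the PPL_adjust column
```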