Works with the llama.cpp version current as of 05/06/23 (build 622).

Perplexity comparison:

| Variant | Perplexity |
| ------- | ---------- |
| F16     | 5.9066     |
| q6_K    | 5.9110     |
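These figures can be reproduced with llama.cpp's `perplexity` tool. The sketch below shows a typical invocation from that era; the model filename and the wikitext-2 test file path are assumptions, not taken from the original.

```sh
# Hypothetical paths: adjust the model file and test corpus to your setup.
# -m selects the GGML model file, -f the raw text file to score.
./perplexity -m ./models/model.q6_K.bin -f ./wikitext-2-raw/wiki.test.raw
```

Lower perplexity is better; the small gap between F16 and q6_K suggests the 6-bit k-quant loses very little quality relative to the full-precision model.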