---
license: llama3
base_model: shisa-ai/shisa-v1-llama3-8b
datasets:
  - augmxnt/ultra-orca-boros-en-ja-v1
language:
  - ja
  - en
---

There are some stray llama2 `</s>` tokens in the output for some reason, but the GGUFs were tested to work correctly with multiturn chat using the llama3 chat template:

```shell
./server -ngl 99 -m shisa-v1-llama3-8b.Q5_K_M.gguf --chat-template llama3 -fa -v
```
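For reference, the `--chat-template llama3` flag makes the server format turns with the Llama 3 special tokens. A minimal sketch of that rendering (the `render_llama3` helper is hypothetical, for illustration only; the real formatting is done inside llama.cpp):

```shell
# Sketch of the Llama 3 chat format applied by --chat-template llama3.
# Each (role, content) pair becomes a header block ending in <|eot_id|>,
# and the prompt ends with an assistant header to cue the reply.
render_llama3() {
  printf '<|begin_of_text|>'
  while [ "$#" -ge 2 ]; do
    printf '<|start_header_id|>%s<|end_header_id|>\n\n%s<|eot_id|>' "$1" "$2"
    shift 2
  done
  printf '<|start_header_id|>assistant<|end_header_id|>\n\n'
}

render_llama3 user "こんにちは" assistant "こんにちは!" user "調子はどう?"
```

The stray `</s>` tokens mentioned above do not belong to this template, which is why they stand out in generations.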

Note: BF16 GGUFs have no CUDA implementation at the moment: https://github.com/ggerganov/llama.cpp/issues/7211

Conversion was done from llama.cpp HEAD as of 2024-03-27, `version: 3005 (d6ef0e77)` (closest tagged release is 3006), with the following script:

```bash
#!/bin/bash

cd llama.cpp

echo 'Converting HF to GGUF'
python convert-hf-to-gguf.py --outtype bf16 --outfile /models/gguf/shisa-v1-llama3-8b.bf16.gguf /models/hf/shisa-v1-llama3-8b

echo 'Quanting...'
time ./quantize /models/gguf/shisa-v1-llama3-8b.bf16.gguf /models/gguf/shisa-v1-llama3-8b.Q8_0.gguf Q8_0
time ./quantize /models/gguf/shisa-v1-llama3-8b.bf16.gguf /models/gguf/shisa-v1-llama3-8b.Q6_K.gguf Q6_K
time ./quantize /models/gguf/shisa-v1-llama3-8b.bf16.gguf /models/gguf/shisa-v1-llama3-8b.Q5_K_M.gguf Q5_K_M
time ./quantize /models/gguf/shisa-v1-llama3-8b.bf16.gguf /models/gguf/shisa-v1-llama3-8b.Q4_K_M.gguf Q4_K_M
time ./quantize /models/gguf/shisa-v1-llama3-8b.bf16.gguf /models/gguf/shisa-v1-llama3-8b.Q4_0.gguf Q4_0
```
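The repeated quantize invocations above can be collapsed into a loop over the target quant types. A dry-run sketch using the same paths and naming scheme as the script (the `quant_path` helper is made up for illustration; drop the `echo` to actually run it):

```shell
# Derive the output path for a given quant type, mirroring the
# naming convention used in the conversion script above.
quant_path() {
  printf '/models/gguf/shisa-v1-llama3-8b.%s.gguf' "$1"
}

SRC=/models/gguf/shisa-v1-llama3-8b.bf16.gguf
for q in Q8_0 Q6_K Q5_K_M Q4_K_M Q4_0; do
  # echo makes this a dry run; remove it to perform the quantization
  echo time ./quantize "$SRC" "$(quant_path "$q")" "$q"
done
```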