---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
language:
  - ja
  - en
tags:
  - qwen
inference: false
---

rinna/nekomata-7b-gguf


Overview

The model is the GGUF version of rinna/nekomata-7b. It can be used with llama.cpp for lightweight inference.

Quantization of this model may cause stability issues with GPTQ, AWQ, and GGUF q4_0. We recommend GGUF q4_K_M for 4-bit quantization.
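
If you want to produce the recommended q4_K_M file yourself rather than downloading the pre-quantized one from this repository, llama.cpp ships a quantize tool that converts a higher-precision GGUF (after building llama.cpp as shown in the next section). The following is a minimal sketch; the f16 input filename is hypothetical and should be adjusted to your local file.

# Hypothetical input: a full-precision (f16) GGUF conversion of rinna/nekomata-7b
./quantize /path/to/nekomata-7b.f16.gguf /path/to/nekomata-7b.Q4_K_M.gguf Q4_K_M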

See rinna/nekomata-7b for details about model architecture and data.


How to use the model

See llama.cpp for more usage details.

# Build llama.cpp from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Path to the downloaded GGUF file, the number of tokens to generate, and the prompt
MODEL_PATH=/path/to/nekomata-7b-gguf/nekomata-7b.Q4_K_M.gguf
MAX_N_TOKENS=128
PROMPT="西田幾多郎は、"

# Run text generation with the quantized model
./main -m ${MODEL_PATH} -n ${MAX_N_TOKENS} -p "${PROMPT}"
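
The same build can also serve the model over HTTP through llama.cpp's server example. The following is a minimal sketch, assuming the server binary was produced by the make step above; the context size and port are arbitrary example values, not settings required by this model.

# Serve the model over HTTP (context size and port are arbitrary examples)
./server -m ${MODEL_PATH} -c 2048 --port 8080

Once running, the server accepts completion requests over HTTP; see the llama.cpp server README for the exact request format.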

Tokenization

Please refer to rinna/nekomata-7b for tokenization details.


How to cite

@misc{RinnaNekomata7bGGUF, 
    url={https://huggingface.co/rinna/nekomata-7b-gguf}, 
    title={rinna/nekomata-7b-gguf}, 
    author={Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei}
}

License

Tongyi Qianwen LICENSE AGREEMENT