ggml-vicuna-7b-4bit / README.md
---
license: apache-2.0
---

This repository converts ggml-vicuna-7b-f16 to ggml-vicuna-7b-q4_0 (4-bit quantization).

Source: https://huggingface.co/chharlesonfire/ggml-vicuna-7b-f16
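The f16 → q4_0 conversion described above can be reproduced with the `quantize` tool that ships with llama.cpp. The exact argument form has changed across llama.cpp versions, and the paths below are hypothetical placeholders, so treat this as a sketch rather than the exact command used for this upload:

```shell
# Assumes llama.cpp has already been built (see Usage below) and that the
# f16 model from the Source link above was placed under ./models/.
# Newer llama.cpp versions of this era accepted the type name directly:
./quantize ./models/ggml-vicuna-7b-f16.bin ./models/ggml-vicuna-7b-q4_0.bin q4_0

# Older versions instead took a numeric type id (2 = q4_0):
# ./quantize ./models/ggml-vicuna-7b-f16.bin ./models/ggml-vicuna-7b-q4_0.bin 2
```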

No other changes were made to the source model.

Usage:

  1. Download llama.cpp from https://github.com/ggerganov/llama.cpp

  2. Build llama.cpp with `make`, then run it with ggml-vicuna-7b-q4_0.bin as the model.
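The steps above can be sketched as the following commands. Flag names and the `main` binary match llama.cpp releases from the period this model was published; newer versions have renamed the binary and dropped support for the old ggml .bin format, so this is an assumption tied to a contemporaneous checkout:

```shell
# Step 1: download and build llama.cpp.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Step 2: run inference, pointing -m at the downloaded quantized model.
# The model path here is a hypothetical location; adjust to where you saved it.
./main -m ./models/ggml-vicuna-7b-q4_0.bin -p "Hello, how are you?" -n 128
```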