---
license: apache-2.0
---

## Convert ggml-vicuna-7b-f16 to ggml-vicuna-7b-q4_0

Source: https://huggingface.co/chharlesonfire/ggml-vicuna-7b-f16

No changes were made beyond the quantization.

## Usage:

1. Download llama.cpp from https://github.com/ggerganov/llama.cpp
2. Build llama.cpp with `make`, then run it with `ggml-vicuna-7b-q4_0.bin` as the model.
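The usage steps above can be sketched as shell commands. This is a sketch, not a definitive recipe: the `models/` directory and prompt are assumptions, and the `main` binary name reflects llama.cpp builds from around the time this model was published and may differ in newer versions.

```shell
# Clone and build llama.cpp (assumes make and a C/C++ toolchain are installed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run inference with the quantized model
# (assumes ggml-vicuna-7b-q4_0.bin was downloaded into ./models/ — adjust the path as needed)
./main -m models/ggml-vicuna-7b-q4_0.bin -p "Hello, how are you?" -n 128
```

The q4_0 quantization trades some accuracy for a much smaller file that fits comfortably in RAM on consumer machines, which is the point of converting from the f16 original.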