Convert ggml-vicuna-7b-f16 to ggml-vicuna-7b-q4_0

Source: https://huggingface.co/chharlesonfire/ggml-vicuna-7b-f16

No changes other than quantization: the f16 weights were converted to 4-bit (q4_0) as-is.
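
For reference, a minimal sketch of how such a conversion is typically done with llama.cpp's quantize tool; the paths are illustrative, and the exact quantize arguments vary by llama.cpp version (older builds take a numeric type code such as 2 instead of the q4_0 type name):

```sh
# Build llama.cpp and quantize the f16 model down to 4-bit (q4_0).
# Paths are illustrative; adjust to where the f16 model was downloaded.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
./quantize ./models/ggml-vicuna-7b-f16.bin ./models/ggml-vicuna-7b-q4_0.bin q4_0
```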

Usage:

  1. Download llama.cpp from https://github.com/ggerganov/llama.cpp

  2. Build llama.cpp with make, then run the resulting binary with ggml-vicuna-7b-q4_0.bin as the model, for example as shown below.
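
A minimal sketch of step 2, assuming the main binary produced by make and its standard -m (model path), -p (prompt), and -n (tokens to generate) flags; the prompt and model path are illustrative:

```sh
# Run inference against the quantized model.
./main -m ./models/ggml-vicuna-7b-q4_0.bin -p "Hello! How are you?" -n 128
```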
