GGML?

#2
by creative420 - opened

is it possible to quantize it for llama.cpp?

Not yet - there's no GGML support yet.

As soon as there is I will upload GGMLs.
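For context on what a GGML quantization would involve: GGML's integer formats (such as Q4_0) store weights in fixed-size blocks, each holding one floating-point scale plus low-bit integers. The sketch below is a simplified, illustrative version of that idea (symmetric 4-bit rounding with a per-block scale), not ggml's actual code, and the function names are made up for the example:

```python
import numpy as np

def quantize_blocks(weights, block_size=32):
    """Simplified Q4_0-style sketch: split weights into blocks of 32,
    keep one float scale per block, round values to 4-bit integers."""
    w = np.asarray(weights, dtype=np.float32)
    assert w.size % block_size == 0, "pad weights to a multiple of the block size"
    blocks = w.reshape(-1, block_size)
    # Per-block scale so the largest magnitude maps near the 4-bit range edge
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    scale = np.where(amax == 0, 1.0, amax / 7.0).astype(np.float32)
    # Round to signed 4-bit integers in [-8, 7]
    q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_blocks(q, scale):
    """Reconstruct approximate float weights from quantized blocks."""
    return (q.astype(np.float32) * scale).reshape(-1)
```

The per-block scale is why file size shrinks roughly 4x versus fp16 while keeping reconstruction error bounded by about half a quantization step per weight.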

Any update on ggml versions?

No, not yet I'm afraid. No-one has started work on it, to my knowledge.

You can track the discussions here: https://github.com/ggerganov/llama.cpp/issues/1602

Can't wait for GGML version of wizard-falcon 40b. This is gonna be big.


> Can't wait for GGML version of wizard-falcon 40b. This is gonna be big.

Literally, as the file size will be huge :)

Looks like they have got quantised GGML working for Falcon in a branch: https://github.com/ggerganov/llama.cpp/issues/1602#issuecomment-1580330824

Someone posted quantized versions of the 7B Falcon model: https://huggingface.co/RachidAR/falcon-7B-ggml/tree/main
