dolly-v2-ggml / dolly-v2-12b-f16.meta
Uploaded f16 models (commit 01fc2f7), 275 Bytes
{
    "model": "GptNeoX",
    "quantization": "F16",
    "quantization_version": "Not_Quantized",
    "container": "GGML",
    "converter": "llm-rs",
    "hash": "f52682dadd1005c15e6a084e50bd23ff24f049340e7650cb16556291cd1225b9",
    "base_model": "databricks/dolly-v2-12b"
}
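
The metadata above records the architecture (GPT-NeoX), container format (GGML), converter (llm-rs), and a content hash for the converted weights. As a minimal sketch, assuming the "hash" field is a SHA-256 digest of a companion dolly-v2-12b-f16.bin file (both the file name and the hash algorithm are assumptions, not stated in the metadata), the download could be checked like this:

```python
# Sketch only: read the .meta JSON shown above and verify the companion
# model file against the recorded hash. The .bin file name and the use of
# SHA-256 are assumptions, not guaranteed by the metadata format.
import hashlib
import json
from pathlib import Path

meta_path = Path("dolly-v2-12b-f16.meta")    # metadata file shown above
model_path = meta_path.with_suffix(".bin")   # assumed companion weights file

meta = json.loads(meta_path.read_text())
print(f"architecture: {meta['model']}, container: {meta['container']}, "
      f"quantization: {meta['quantization']}")

# Stream the file through SHA-256 so large models are not loaded into memory.
digest = hashlib.sha256()
with model_path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest() == meta["hash"]:
    print("hash matches the metadata")
else:
    print("hash mismatch: file may be corrupt or from a different build")
```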