pythia-ggml / pythia-1.4b-f16.meta
Commit 4646dd8: Upload new model file: 'pythia-1.4b-f16.bin'
{
  "model": "GptNeoX",
  "quantization": "F16",
  "quantization_version": "Not_Quantized",
  "container": "GGML",
  "converter": "llm-rs",
  "hash": "9aabe7b3ade926cebef20662554467f6522b98bfdc7f540066bcb58f8976e90f",
  "base_model": "EleutherAI/pythia-1.4b"
}
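
The "hash" field can be used to verify that the companion model file was downloaded without corruption. Below is a minimal sketch in Python, assuming the digest is SHA-256 (suggested by the 64-character hexadecimal value, not stated in the metadata itself) and that the .meta and .bin files sit side by side; the local paths are placeholders to adjust.

```python
import hashlib
import json

# Assumed local paths; adjust to wherever the files were downloaded.
META_PATH = "pythia-1.4b-f16.meta"
MODEL_PATH = "pythia-1.4b-f16.bin"

with open(META_PATH, "r", encoding="utf-8") as f:
    meta = json.load(f)

# Hash the model file in 1 MiB chunks so the large .bin is never held in memory.
# SHA-256 is an assumption based on the digest length in the metadata.
digest = hashlib.sha256()
with open(MODEL_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest() == meta["hash"]:
    print("Checksum OK:", meta["base_model"], "->", meta["container"], meta["quantization"])
else:
    print("Checksum mismatch; the download may be incomplete or corrupted.")
```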