pythia-ggml / pythia-70m-f16.meta
Upload new model file: 'pythia-70m-f16.bin' (commit 1dd25b9)
{
  "model": "GptNeoX",
  "quantization": "F16",
  "quantization_version": "Not_Quantized",
  "container": "GGML",
  "converter": "llm-rs",
  "hash": "93b2e5425f7b962c4838282d947aa933a92ab79eb05e0893189dc033420aebad",
  "base_model": "EleutherAI/pythia-70m"
}
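The `hash` field is a 64-character hex string, which suggests a SHA-256 digest of the model binary — this file does not say which algorithm the `llm-rs` converter used, so treat that as an assumption. A minimal integrity check under that assumption might look like:

```python
import hashlib
import json


def verify_model(meta_path: str, model_path: str) -> bool:
    """Compare the 'hash' field of a .meta JSON file against the
    SHA-256 digest of the model binary.

    Assumption (not confirmed by the metadata itself): the converter
    stores a lowercase hex SHA-256 of the full model file.
    """
    with open(meta_path) as f:
        meta = json.load(f)

    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        # Hash in 1 MiB chunks so large model files don't need to
        # fit in memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)

    return digest.hexdigest() == meta["hash"]
```

For example, `verify_model("pythia-70m-f16.meta", "pythia-70m-f16.bin")` would return `True` only if the downloaded binary matches the recorded digest.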