
gguf/ggml variants

#2
by scrawnyether - opened

Can you please give me the URL for the GGUF and GGML variants?

I tried to use llama.cpp to convert stablelm-3b-4e1t to GGML/GGUF, but I get an error when executing the convert.py script:

% python3 convert.py models/stablelm-3b-4e1t
Traceback (most recent call last):
  File "/Users/xxx/code/llama.cpp/convert.py", line 1208, in <module>
    main()
  File "/Users/xxx/code/llama.cpp/convert.py", line 1149, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xxx/code/llama.cpp/convert.py", line 1060, in load_some_model
    raise Exception(f"Can't find model in directory {path}")
Exception: Can't find model in directory models/stablelm-3b-4e1t
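For context, here is a rough sketch of why this error can occur: the converter scans the model directory for weight files matching a set of known names and raises "Can't find model in directory" when nothing matches, and stablelm-3b-4e1t ships its weights as model.safetensors, which older convert.py versions did not search for. The file-name patterns and helper below are illustrative assumptions, not llama.cpp's actual code.

```python
import tempfile
from pathlib import Path

# Assumed, illustrative set of weight-file patterns a converter might
# search for; older scripts may omit the safetensors entry entirely.
CANDIDATE_PATTERNS = [
    "pytorch_model.bin",            # single-file PyTorch checkpoint
    "pytorch_model-*-of-*.bin",     # sharded PyTorch checkpoint
    "consolidated.00.pth",          # Meta/LLaMA-style checkpoint
    "model.safetensors",            # safetensors (newer converters only)
]

def find_weights(model_dir: Path) -> list[Path]:
    """Return any weight files in model_dir matching the candidate patterns."""
    found: list[Path] = []
    for pattern in CANDIDATE_PATTERNS:
        found.extend(model_dir.glob(pattern))
    return found

# Simulate a stablelm-3b-4e1t download: only a safetensors file is present.
with tempfile.TemporaryDirectory() as d:
    model_dir = Path(d)
    (model_dir / "model.safetensors").touch()
    hits = find_weights(model_dir)
    print([p.name for p in hits])
```

If your copy of convert.py predates safetensors support, updating llama.cpp (or exporting the weights as pytorch_model.bin) is the usual way past this step.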

After switching to the GPT-NeoX-based conversion script, I now get a new error message:

% python3 convert-gptneox-hf-to-gguf.py ./models/stablelm-3b-4e1t 1
gguf: loading model stablelm-3b-4e1t
Model architecture not supported: StableLMEpochForCausalLM
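For context, the HF-to-GGUF converters dispatch on the "architectures" field of the model's config.json, and they reject anything not in their supported list, which is what this error reports for StableLMEpochForCausalLM. The sketch below mimics that check; the SUPPORTED set and helper name are illustrative assumptions, not the converter's real table.

```python
import json
import tempfile
from pathlib import Path

# Assumed, illustrative supported-architecture set for a GPT-NeoX converter.
SUPPORTED = {"GPTNeoXForCausalLM"}

def check_architecture(model_dir: Path) -> str:
    """Read config.json and reject architectures outside SUPPORTED."""
    config = json.loads((model_dir / "config.json").read_text())
    arch = config["architectures"][0]
    if arch not in SUPPORTED:
        raise ValueError(f"Model architecture not supported: {arch}")
    return arch

# Simulate stablelm-3b-4e1t's config.json and reproduce the rejection.
with tempfile.TemporaryDirectory() as d:
    model_dir = Path(d)
    (model_dir / "config.json").write_text(
        json.dumps({"architectures": ["StableLMEpochForCausalLM"]})
    )
    try:
        check_architecture(model_dir)
        error = None
    except ValueError as e:
        error = str(e)
    print(error)
```

In other words, the GPT-NeoX script is not a drop-in fix here: a converter version that explicitly recognizes the StableLM architecture is needed.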
