No GGUF?

#2
by Pumba2 - opened

also waiting!

When trying to convert with llama.cpp I get:

```
(sati) rob@Robins-MBP-2 llama.cpp % python3 ./convert.py ../phi-2
Loading model file ../phi-2/model-00001-of-00002.safetensors
Loading model file ../phi-2/model-00001-of-00002.safetensors
Loading model file ../phi-2/model-00002-of-00002.safetensors
Traceback (most recent call last):
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 1228, in <module>
    main()
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 1161, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 1078, in load_some_model
    model_plus = merge_multifile_models(models_plus)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 593, in merge_multifile_models
    model = merge_sharded([mp.model for mp in models_plus])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 572, in merge_sharded
    return {name: convert(name) for name in names}
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 572, in <dictcomp>
    return {name: convert(name) for name in names}
                  ^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 547, in convert
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rob/Downloads/llama.cpp/./convert.py", line 547, in <listcomp>
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
                                      ~~~~~^^^^^^
KeyError: 'transformer.embd.wte.weight'
(sati) rob@Robins-MBP-2 llama.cpp %
```
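For context on why this `KeyError` appears: `merge_sharded` in convert.py assumes Llama-style sharding, where every shard file contains a slice of every tensor, so it looks up each tensor name in every shard. Hugging Face multi-file checkpoints like phi-2's instead split the *set* of tensors across files, so a name present in shard 1 is simply absent from shard 2 and the lookup fails. A simplified sketch of the failure mode (this is not convert.py's actual code; the tensor names and values are illustrative placeholders):

```python
# Hugging Face-style shards: each file holds a disjoint subset of tensors.
shard1 = {"transformer.embd.wte.weight": [0.1, 0.2]}
shard2 = {"transformer.h.0.mlp.fc1.weight": [0.3]}

def merge_sharded(models):
    """Mimics convert.py's assumption that every shard has every tensor."""
    names = {name for model in models for name in model}
    # Each name is looked up in *every* shard -> KeyError on HF-style shards.
    return {name: [model[name] for model in models] for name in names}

err = None
try:
    merge_sharded([shard1, shard2])
except KeyError as e:
    err = e
print("KeyError:", err)
```

So the traceback indicates the script's sharding assumptions don't match this checkpoint layout, rather than a corrupted download.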
