Unable to convert `llama-2-70b-chat.ggmlv3.q4_K_M.bin` to GGUF

#12
by barha - opened

Using convert-llama-ggmlv3-to-gguf.py from llama.cpp, I am unable to convert the model llama-2-70b-chat.ggmlv3.q4_K_M.bin to GGUF format for use with llama-cpp.

Error:
"""

  • Preparing to save GGUF file
  • Adding model parameters and KV items
  • Adding 32000 vocab item(s)
  • Adding 723 tensor(s)
    Traceback (most recent call last):
    File "/llm/barha/llama.cpp/convert-llama-ggmlv3-to-gguf.py", line 345, in
    main()
    File "/llm/barha/llama.cpp/convert-llama-ggmlv3-to-gguf.py", line 341, in main
    converter.save()
    File "/llm/barha/llama.cpp/convert-llama-ggmlv3-to-gguf.py", line 165, in save
    self.add_tensors(gguf_writer)
    File "/llm/barha/llama.cpp/convert-llama-ggmlv3-to-gguf.py", line 273, in add_tensors
    mapped_name = nm.get(name)
    AttributeError: 'TensorNameMap' object has no attribute 'get'
    """

I also can’t open it in other programs, and I hit the same error when converting from .bin to GGUF.

@Ruby777 Why do you want to convert it? Just use the GGUF variant of Llama 2 70B Chat.
