Unable to download gguf file
Error:

```
Error converting to fp16: b'INFO:hf-to-gguf:Loading model: finance-chat-model-investopedia
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 3673, in <module>
    main()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 3641, in main
    hparams = Model.load_hparams(dir_model)
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 425, in load_hparams
    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'finance-chat-model-investopedia/config.json'
'
```
@janamapatel -- Your conversion is failing because `convert_hf_to_gguf.py` is trying to load `config.json`, but what we have provided are the fine-tuned LoRA adapters, not a full model. You first have to merge the adapters into the FP16 base model and then run the conversion. Alternatively, you can use the FP16 & GGUF models below, which are also trained on the Finlang datasets:
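As a minimal sketch of the merge step (assuming the `peft` and `transformers` libraries; the base-model ID shown in the usage comment is an assumption -- use whichever base model the adapters were actually trained on):

```python
def merge_lora_adapters(base_model_id, adapter_dir, out_dir):
    """Merge LoRA adapters into the FP16 base model and save a full checkpoint.

    Afterwards, out_dir contains config.json plus the merged weights, so
    llama.cpp's convert_hf_to_gguf.py can run on it.
    """
    # Imports kept inside the function so this sketch loads even where
    # torch/transformers/peft are not installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Load the base model in FP16, attach the adapters, and fold them in.
    base = AutoModelForCausalLM.from_pretrained(base_model_id,
                                                torch_dtype=torch.float16)
    merged = PeftModel.from_pretrained(base, adapter_dir).merge_and_unload()

    # Save the merged model and tokenizer; this writes config.json.
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(base_model_id).save_pretrained(out_dir)

# Example (base-model ID is an assumption for illustration):
# merge_lora_adapters("meta-llama/Meta-Llama-3-8B",
#                     "finance-chat-model-investopedia",
#                     "merged-fp16")
```

After merging, `python convert_hf_to_gguf.py merged-fp16` should find `config.json` and proceed.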
Llama-3-8b:
https://huggingface.co/anamikac2708/Llama3-8b-finetuned-investopedia-Merged-FP16
https://huggingface.co/anamikac2708/Llama3-8b-finetuned-investopedia-q4_k_m_gguf
https://huggingface.co/anamikac2708/Llama3-8b-finetuned-investopedia-q8_0_gguf
https://huggingface.co/anamikac2708/Llama3-8b-finetuned-investopedia-q5_k_m_gguf
Gemma-7b:
https://huggingface.co/anamikac2708/Gemma-7b-finetuned-investopedia-Merged-FP16
https://huggingface.co/anamikac2708/Gemma-7b-finetuned-investopedia-q8_0_gguf
https://huggingface.co/anamikac2708/Gemma-7b-finetuned-investopedia-q4_k_m_gguf
https://huggingface.co/anamikac2708/Gemma-7b-finetuned-investopedia-q5_k_m_gguf
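If the goal is just to pull one of the GGUF files above, a small sketch using `huggingface_hub` (the exact `.gguf` filename inside each repo is an assumption -- check the repo's Files tab):

```python
def download_gguf(repo_id, filename, local_dir="."):
    """Download a single GGUF file from a Hugging Face repo.

    Returns the local path of the downloaded file.
    """
    # Import kept inside the function so this sketch loads even where
    # huggingface_hub is not installed.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=repo_id, filename=filename,
                           local_dir=local_dir)

# Example (filename is hypothetical -- verify it on the repo page):
# download_gguf("anamikac2708/Llama3-8b-finetuned-investopedia-q4_k_m_gguf",
#               "llama3-8b-investopedia-q4_k_m.gguf")
```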
Hope this helps.