Requesting we move the .gguf file into /main
#4 opened by FutureProofHome
Nvidia's official llama.cpp implementation uses the huggingface-downloader to pull down models and place all the files in the correct folder structure. The problem is that Nvidia's huggingface-downloader implementation doesn't currently support branches (at least as far as I can see).
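For reference, the underlying huggingface_hub Python library can already fetch a file from a non-default branch via its `revision` parameter, which is what the Nvidia wrapper seems to be missing. A minimal sketch; the `repo_id` here is inferred from the filename and is an assumption:

```python
from huggingface_hub import hf_hub_download

# Download the GGUF from the "gguf" branch.
# repo_id is an assumption inferred from the filename -- substitute the actual repo.
path = hf_hub_download(
    repo_id="Trelis/Llama-2-7b-chat-hf-function-calling-v3",
    filename="Llama-2-7b-chat-hf-function-calling-v3.Q4_K.gguf",
    revision="gguf",  # branch name; defaults to "main" when omitted
)
print(path)  # local path of the cached file
```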
This discussion requests moving Llama-2-7b-chat-hf-function-calling-v3.Q4_K.gguf from the gguf branch to the main branch for easier integration.
I'll open a PR soon if you'd like, but I wanted to discuss it here first.
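If it helps, the move itself could also be scripted with huggingface_hub rather than prepared by hand; a rough sketch under the same assumed `repo_id`, copying the file from the gguf branch and opening a PR against main:

```python
from huggingface_hub import hf_hub_download, upload_file

repo_id = "Trelis/Llama-2-7b-chat-hf-function-calling-v3"  # assumed repo id
filename = "Llama-2-7b-chat-hf-function-calling-v3.Q4_K.gguf"

# Fetch the file from the gguf branch, then re-upload it to main as a PR.
local_path = hf_hub_download(repo_id=repo_id, filename=filename, revision="gguf")
upload_file(
    path_or_fileobj=local_path,
    path_in_repo=filename,
    repo_id=repo_id,
    revision="main",
    create_pr=True,  # open a pull request instead of committing directly
    commit_message="Copy Q4_K GGUF to main",
)
```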
Thanks, please open a PR.