Ollama upload please.

#2
by AlgorithmicKing - opened

Hey, thanks so much, I really appreciate it! Would you mind uploading it to Ollama? I can do it too, but downloading the model and then uploading it again would take me quite a while.

I was going to say that you can run it directly in Ollama from HF, but I actually don't know how that works with vision models... I'll look into it!

Well, I imported the GGUF into OpenWebUI and...
Screenshot 2024-12-27 121545.png
This happens every time. Here is the translation:

Hello little fairies~
Your editor is online again
Recently I'm following "The Next Stop Is Happiness"
It's really too sweet
Song Weilong and Song Qian have a full sense of CP

Wait... this is different. A while ago it was yapping about a cook and giving me recipes.

Hi, it seems we have a problem parsing the GGUF file on the Hugging Face backend. I'll report it to the team.

So, this is a real bug? I thought this was a problem on my side, thanks a lot! Now I don't have to debug my Ollama or OWUI.

Yes it's a bug on HF side. I don't know yet when we can deploy a fix, given that many of us are on vacation.

In the meantime, you can try adding the chat template yourself via a Modelfile (see the Ollama Modelfile docs):

FROM hf.co/bartowski/QVQ-72B-Preview-GGUF
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
PARAMETER stop "<|im_end|>"
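To see what that TEMPLATE actually produces, here is a rough Python sketch of the rendering logic (this mimics the ChatML layout by hand; it is not Ollama's Go templating engine, and the function name is just for illustration):

```python
from typing import Optional

def render_chatml(system: Optional[str], prompt: str) -> str:
    """Mimic what the Modelfile TEMPLATE above produces for one turn."""
    out = ""
    if system:  # the {{ if .System }} branch
        out += f"<|im_start|>system\n{system}<|im_end|>\n"
    if prompt:  # the {{ if .Prompt }} branch
        out += f"<|im_start|>user\n{prompt}<|im_end|>\n"
    # generation always resumes from the assistant header
    out += "<|im_start|>assistant\n"
    return out

print(render_chatml("You are a helpful assistant.", "What is in this image?"))
```

After saving the Modelfile, something like `ollama create qvq -f Modelfile` followed by `ollama run qvq` should pick the template up (the model name `qvq` is just an example). The `PARAMETER stop "<|im_end|>"` line is what keeps generation from running past the assistant turn.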

Thank you! Also, is the problem with all the GGUFs or just the Q4_K_M ones?

It's for all quants.

I'll try to take a look tomorrow at the process

❤️ thanks a lot! you are a legend!
