Issues with running
#4 opened by matimats
I'm trying to run it on my M1 Max 32GB Mac with Ollama, but I'm getting "Error: llama runner process has terminated". Am I doing anything wrong or is my machine simply not powerful enough?
Ollama is GGUF only, is it not?
You would need close to 100GB of RAM to load it full fat.
But quantized (i.e. "dolphin-2.6-mixtral-8x7b.Q4_K_M.gguf") might be able to fit into 32GB.
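For a rough sense of why full precision blows past 32GB while Q4_K_M is borderline, here's a back-of-envelope sketch. The parameter count and effective bits-per-weight figures are approximations, not exact numbers for any particular GGUF file:

```python
# Rough memory estimate for Mixtral 8x7B at different weight precisions.
# All figures are approximate assumptions for a back-of-envelope check.

PARAMS = 46.7e9  # approximate total parameter count of Mixtral 8x7B (all experts)

# Approximate effective bits per weight for common formats/quantizations.
bits_per_weight = {
    "fp16 (full fat)": 16.0,
    "Q8_0": 8.5,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.85,
}

GiB = 1024 ** 3

for name, bits in bits_per_weight.items():
    weight_bytes = PARAMS * bits / 8
    # Add a rough ~10% allowance for KV cache, buffers, and runtime overhead.
    total_bytes = weight_bytes * 1.10
    print(f"{name:16s} ~{weight_bytes / GiB:5.1f} GiB weights, "
          f"~{total_bytes / GiB:5.1f} GiB with overhead")
```

By this estimate, fp16 weights alone are on the order of 90GB, while Q4_K_M lands somewhere around 26-29GB including overhead, which is why it may just squeeze into a 32GB machine (with little room left for anything else).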