Will it run on a 4090?

#3
by brifl - opened

I ran this successfully on a home PC with a GeForce RTX 4090. It ran under oobabooga (text-generation-webui). I couldn't build auto-gptq from source, but updating transformers worked instead. I also had to select the AutoGPTQ loader in oobabooga.
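For anyone reproducing this, the setup was roughly the following (a sketch, assuming the standard pip package names; the exact versions that worked are not stated in the post):

```shell
# Building auto-gptq from source failed here, so skip that and instead
# update transformers, whose recent releases bundle GPTQ loading support.
pip install --upgrade transformers optimum

# Then, in text-generation-webui (oobabooga), load the model and
# pick "AutoGPTQ" from the model-loader dropdown before loading.
```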

It thinks for a long time before answering. At a temperature of 0.5, it gives coherent and thoughtful answers to complex questions. It feels like I am running something between GPT-3.5 Turbo and GPT-4 Turbo (like Claude) on my PC. I am shocked at the progress being made with these. This is the best model yet. I wonder how much impact Synthia made. These Mixtral models are clearly the next gen.

brifl changed discussion status to closed
