Lol

#1
by ddh0 - opened

lol how many GB of VRAM do you need to make this work? will it run on an 8GB GPU?

Lovely read on the model card. Thank you for your masochistic pioneering! 😄🙏👍

> lol how many GB of VRAM do you need to make this work? will it run on an 8GB GPU?

Lol

Guys, I posted the q4 version on CivitAI and it runs on my 8GB GPU just fine. Check out the showcase post for all the images generated with it:

https://civitai.com/models/964045/flux-heavy-17b

Doubling up on layers and then quantizing it to q_4 seems redundant somehow. It doesn't produce impressive results.

> lol how many GB of VRAM do you need to make this work? will it run on an 8GB GPU?

Even the full model runs on a 10GB card, since it'll just swap to system RAM (and then to disk once you run out of that too). And yeah, you could crunch it all the way down to Q3_K_M and run it on a toaster, but the actual inference speed would probably still be pretty bad since you'd be heavily compute-bound. Question is, why would you torture yourself with that kek.
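For anyone who actually wants to try that, here's a rough sketch of what loading a GGUF quant with CPU offloading could look like in diffusers (assuming a recent diffusers build with GGUF support; the local filename is a placeholder for whatever quant you grab, and the base FLUX.1-dev repo is assumed only for the text encoders/VAE):

```python
# Rough sketch: load a GGUF-quantized Flux-style transformer and let diffusers
# swap weights between the GPU and system RAM. The GGUF filename is a placeholder.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    "flux-heavy-17b-Q4_0.gguf",  # hypothetical local path to a Q4_0 quant
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # text encoders, VAE and scheduler from the base repo
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)

# Keeps only the active submodule on the GPU and parks the rest in system RAM:
# this is the "swap to system RAM" behaviour, slow but it fits on a small card.
pipe.enable_model_cpu_offload()
```

A doubled-layer merge like this probably won't have its config auto-detected from the checkpoint, so treat this as a starting point rather than something guaranteed to work out of the box.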

> Lovely read on the model card. Thank you for your masochistic pioneering! 😄🙏👍

It was mostly a weekend project to see if it was possible, though testing, merging and training it was indeed extremely slow and painful lmao. The model card reflects that.

> Doubling up on layers and then quantizing it to q_4 seems redundant somehow. It doesn't produce impressive results.

I haven't tested it at 512x512 like your example images; I'd suggest trying 1024x1024 or somewhere around there. At the end of the day it's a proof-of-concept model with minimal training (without any training, text was almost always broken and small details just had horrid texture). Also, the fixup dataset may or may not have been largely anime trash.
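For reference, a generation call at that resolution might look roughly like this, continuing from the diffusers sketch above (the prompt and sampler settings are arbitrary, not recommendations from the author):

```python
# Assuming `pipe` is the FluxPipeline built in the sketch above, generate at
# 1024x1024 rather than the 512x512 used in the example images.
image = pipe(
    "a red fox in a snowy forest, 35mm film photo",  # arbitrary prompt
    height=1024,
    width=1024,
    num_inference_steps=20,
    guidance_scale=3.5,
).images[0]
image.save("flux_heavy_1024.png")
```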

EDIT: also, small nitpick @dasilva333 - quant names should ideally follow the llama.cpp naming, so Q4_0, Q8_0, Q4_K_S, etc.
