Model Performance Curiosity

#7
by sumuks - opened

Hey Tom!

Thank you so much for the work that you're doing.

As the title says: with a reasonably powerful card (for context, 24 GB of VRAM), would it make more sense to run an unquantized 13B-parameter model or a 4-bit quantized 30B model? I think this question really boils down to: how much does quantization affect performance?
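For concreteness, a rough back-of-envelope for the weights alone (ignoring KV cache, activations, and loader overhead) already suggests the fp16 13B is a squeeze on a 24 GiB card, while the 4-bit 30B leaves headroom:

```python
# Back-of-envelope VRAM needed for the weights alone.
# Real usage adds KV cache, activations, and loader overhead on top.
def weight_vram_gib(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

print(f"13B @ fp16 : {weight_vram_gib(13, 16):.1f} GiB")  # ~24.2 GiB -> over budget once overhead is added
print(f"30B @ 4-bit: {weight_vram_gib(30, 4):.1f} GiB")   # ~14.0 GiB -> plenty of room left for context
```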

I've seen some of the synthetic benchmarks and perplexity scores, but I know those don't usually translate 1:1 to real-world tasks. I also don't see many evaluations done from an open-ended content generation perspective, so I'd love any anecdotal evidence you may have come across.
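For reference, the perplexity numbers I'm referring to usually come from something like this minimal sketch (the model ID and corpus file here are placeholders, not real checkpoints):

```python
# Perplexity = exp of the mean token-level negative log-likelihood
# on held-out text. transformers shifts the labels internally.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-13b-model"  # hypothetical checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

text = open("heldout.txt").read()  # any evaluation text
ids = tok(text, return_tensors="pt", truncation=True, max_length=2048).input_ids
ids = ids.to(model.device)

with torch.no_grad():
    loss = model(ids, labels=ids).loss  # mean cross-entropy per token
print(f"perplexity: {torch.exp(loss).item():.2f}")
```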

Thank you once again!

Just try both and see what you like. Why would you have to ask this? If you have the hardware, just try them with your own benchmarks.
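A minimal sketch of what those homegrown benchmarks could look like: run the same prompts through both candidates and eyeball the outputs. The model IDs below are placeholders, and I'm assuming a bitsandbytes 4-bit load; a GPTQ checkpoint would need its own loader (e.g. AutoGPTQ) instead:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

prompts = [
    "Write the opening paragraph of a mystery novel.",
    "Explain quantization to a newcomer in three sentences.",
]

# Hypothetical model IDs; swap in whatever you actually want to compare.
candidates = {
    "13b-fp16": dict(pretrained_model_name_or_path="org/13b-model",
                     torch_dtype=torch.float16),
    "30b-4bit": dict(pretrained_model_name_or_path="org/30b-model",
                     quantization_config=BitsAndBytesConfig(load_in_4bit=True)),
}

for name, kwargs in candidates.items():
    tok = AutoTokenizer.from_pretrained(kwargs["pretrained_model_name_or_path"])
    model = AutoModelForCausalLM.from_pretrained(device_map="auto", **kwargs)
    for p in prompts:
        inputs = tok(p, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=200,
                             do_sample=True, temperature=0.8)
        print(f"--- {name} ---\n{tok.decode(out[0], skip_special_tokens=True)}\n")
    del model
    torch.cuda.empty_cache()  # free VRAM before loading the next candidate
```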

In my experience, benchmark performance hasn't been perfectly correlated with real-world performance. I remember @TheBloke mentioning in another discussion (perhaps on a different release) that he had some experience with this, so I was curious about his thought process.

That being said, I'm also planning to conduct some research on metrics for evaluating open-ended content generation (perhaps something like https://aclanthology.org/2021.acl-long.500.pdf).
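As a concrete example of a cheap automatic metric in that space, here's distinct-n, a standard lexical-diversity measure (just an illustration of the genre, not the method from the linked paper):

```python
# Distinct-n: fraction of unique n-grams among all n-grams in a set of
# generations. Low values often indicate repetitive, degenerate output.
def distinct_n(texts: list[str], n: int = 2) -> float:
    ngrams = []
    for t in texts:
        tokens = t.split()  # crude whitespace tokenization, for illustration
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

samples = ["the cat sat on the mat", "the dog sat on the log"]
print(f"distinct-2: {distinct_n(samples, 2):.2f}")  # 0.80 for these two lines
```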

sumuks, could you tell us which option turned out to be more productive in the end?
