An MoE built from your 7b param models would be more effective

#1
by rombodawg - opened

I get that this is impressive because a model with the inference speed of a 2.8b param model can match the quality of a 7b param model. But it's not really practical: if you can run a 7b parameter model, you don't really need the extra inference speed, what you need is better quality.

This is just my opinion, but I would have much preferred a bigger MoE from you guys, built from your 7b parameter models:
something like a mixtral-48b param model but made up of deepseeklm-7b, or even deepseekcoder-6.7b-instruct.
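
For what it's worth, here is a rough back-of-the-envelope sketch (my own illustrative numbers, not anything from the DeepSeek repo) of why a Mixtral-style MoE built from ~7b experts ends up in the 40-47b total range while only touching a couple of experts' worth of params per token:

```python
# Back-of-the-envelope MoE parameter math (illustrative numbers only).
# Assumes a Mixtral-style layout: the FFN (taken here as ~2/3 of a dense
# model's params) is replicated per expert, attention/embeddings stay
# shared, and the router activates the top-k experts per token.

def moe_params(dense_params_b: float, num_experts: int, top_k: int,
               ffn_fraction: float = 2 / 3) -> tuple[float, float]:
    """Return (total, active-per-token) parameter counts in billions."""
    shared = dense_params_b * (1 - ffn_fraction)   # attention, embeddings, norms
    expert = dense_params_b * ffn_fraction         # one expert's FFN
    total = shared + num_experts * expert
    active = shared + top_k * expert               # params touched per token
    return total, active

# Hypothetical "deepseek 8x7b" in the spirit of mixtral-8x7b:
total, active = moe_params(dense_params_b=7.0, num_experts=8, top_k=2)
print(f"total ≈ {total:.1f}b, active per token ≈ {active:.1f}b")
# -> about 39.7b total with ~11.7b active using these toy fractions;
#    Mixtral-8x7b's real figures are ~46.7b total / ~12.9b active, so the
#    point stands: 7b-class inference cost, far more total capacity.
```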

I was recently informed that the 40% figure in your GitHub repo refers to the training compute requirement, not inference speed. In that case I kind of take back what I said. This model does a lot more than I thought it did, considering you could probably fully train it on 24GB of VRAM in 4-bit using QLoRA with no issue.
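
For reference, a minimal sketch of what that 4-bit QLoRA setup might look like. The repo id and the LoRA target module names are assumptions on my part, not taken from the model card:

```python
# Minimal QLoRA setup sketch: load the model in 4-bit (NF4) and attach LoRA
# adapters so only a small set of low-rank weights are trained.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "deepseek-ai/deepseek-moe-16b-chat"  # assumed repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # the MoE modeling code ships with the repo
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # guessed attention proj names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```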

But still, I want to see that deepseek-48b-MoE, please 😁

If I'm mistaken about anything I said here, please correct me, so that I and the community can better understand your model.
Cheers

I think this title could be changed to deepseek-8*2.8b-chat.
