Any release plans for the 7B Jamba model without MoE?

#30
by danielpark - opened

Congratulations on the amazing work, and thank you for sharing it.

Currently, because of the MoE layers, Jamba can only be run on Google Colab (on a single A100) with 4-bit weight-only quantization, but this comes with significant performance limitations, so it's quite likely the MoE layers won't function as intended.
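For reference, here is a rough sketch of the 4-bit weight-only setup I'm describing, assuming a transformers version with Jamba support plus accelerate and bitsandbytes installed; the exact flags are illustrative, not an official AI21 recipe:

```python
# Minimal sketch: loading Jamba-v0.1 with 4-bit weight-only quantization on a single A100.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    # Assumption: keep the Mamba mixer out of quantization to limit quality loss.
    llm_int8_skip_modules=["mamba"],
)

tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/Jamba-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # places the quantized weights on the available GPU
    quantization_config=quantization_config,
)

inputs = tokenizer("A short test prompt", return_tensors="pt").to(model.device)
outputs = model.generate(inputs["input_ids"], max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```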

I'm curious whether there are any plans to release a 7B model without MoE layers.

AI21 org

Thank you @danielpark !

The model fits on a single A100 with 80GB of memory.

We intend to release smaller variants of Jamba that were used as indicative experiments (not fully trained).

Thank you for your prompt and kind reply. I was impressed by AI21's fast, impressive architecture. I wanted to test it on Colab, but since getting an A100 allocation with nearly 80GB of memory is like winning the lottery, I had to give up. Other instances are either too expensive or too cumbersome to use, so I couldn't consider them.

I was planning to use a specialized model once I secured the 80GB, but it would be great if AI21 could release the Jamba 7B weights, even just initialized, as soon as possible.

I'm preparing research based on the Jamba architecture.

Thank you.
