How was this created?

#3 opened by gagan3012

Hello,
I am trying to understand model merging, and I was wondering how this model was created and why it's not 140B, considering it's two 70B Llamas. Also, can you share the model creation scripts? If not, can you explain the process?

Goliath wasn't created by simply stacking two models on top of each other. The merge process was essentially taking slices from various layer ranges from each model, then interleaving those slices into a final model. It didn't turn out to be 140B because not all layers were included in the slices. For example, the input and output layers were not stacked, as they need to be unique. The final size is 118B, but I named it 120B because it sounds better.
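
For anyone following along, the kind of config described here looks roughly like the sketch below. It is a minimal, hypothetical mergekit passthrough merge that interleaves overlapping layer ranges from two 70B models; the model names and layer boundaries are placeholders for illustration, not the actual Goliath recipe.

```yaml
# Hypothetical passthrough merge: slices from two 70B source models are
# interleaved by layer range. The boundaries below are placeholders,
# not the actual Goliath-120B layer split.
slices:
  - sources:
      - model: first-70b-model         # placeholder name
        layer_range: [0, 16]
  - sources:
      - model: second-70b-model        # placeholder name
        layer_range: [8, 24]
  - sources:
      - model: first-70b-model
        layer_range: [17, 32]
  - sources:
      - model: second-70b-model
        layer_range: [25, 40]
  # ...further alternating slices up to the last layer of each source model...
  - sources:
      - model: first-70b-model
        layer_range: [64, 80]
merge_method: passthrough
dtype: float16
```

In a layout like this, the embedding layer comes only from the model owning the first slice and the output head only from the model owning the last one, which is consistent with the point above that the input and output layers are not stacked.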

Thanks for the explanation (and for the great model, it works so nicely)!
But mergekit provides three types of merges: linear, SLERP, and passthrough. Are you using the passthrough merge as shown here: https://github.com/cg123/mergekit/blob/main/examples/orcamini-platy-44layer.yml?
Another question I have is: what's your strategy for deciding how to stack the models? Do you always separate them into 16-layer groups and interleave them?

Best
HomoSapien
