70B?

#3 by FiditeNemini - opened

Hello, could I trouble you for a bit of info on how you construct the projector file for a larger model? I happened to stumble across the GGUF used for Llama 3 and tried it with Llama 3.1 with fairly promising results, as 3.1 seems to have been trained with multimodal content. I'd like to apply the projector to the 70B model but haven't done this before. Would you be able to point me in the right direction? Any help would be appreciated.

qresearch org

You'd have to train the projection module from scratch for Llama 70B, as the embedding spaces are not the same. For that, you would use something like LLaVA.
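For anyone landing here later, here is roughly what training a LLaVA-style projector looks like: freeze the vision tower and the LLM, and train only a small MLP that maps vision-encoder patch features into the LLM's embedding space using a language-modeling loss on image-caption pairs. This is a minimal PyTorch sketch, not the recipe used for this repo's projector; the model names, the two-layer MLP shape (similar to LLaVA-1.5), and the hyperparameters are assumptions, and real pipelines also handle image preprocessing, chat templates, batching, and a later fine-tuning stage.

```python
# Minimal sketch of LLaVA-style projector training. Model names and
# hyperparameters are illustrative assumptions, not this repo's recipe.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPVisionModel

device = "cuda"

# Frozen vision tower and frozen language model; only the projector trains.
vision_tower = CLIPVisionModel.from_pretrained(
    "openai/clip-vit-large-patch14-336"
).to(device).eval()
llm = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-70B-Instruct",  # assumed target model
    torch_dtype=torch.bfloat16,
    device_map="auto",  # a 70B model needs sharding/quantization in practice
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-70B-Instruct")
for p in vision_tower.parameters():
    p.requires_grad_(False)
for p in llm.parameters():
    p.requires_grad_(False)

# Two-layer MLP projector mapping CLIP hidden size -> Llama hidden size.
projector = nn.Sequential(
    nn.Linear(vision_tower.config.hidden_size, llm.config.hidden_size),
    nn.GELU(),
    nn.Linear(llm.config.hidden_size, llm.config.hidden_size),
).to(device, dtype=torch.bfloat16)

optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-3)
embed_tokens = llm.get_input_embeddings()

def training_step(pixel_values, caption_ids):
    """One step on a single image-caption pair (batching omitted for brevity)."""
    with torch.no_grad():
        # Patch features from the vision tower; drop the CLS token.
        patches = vision_tower(pixel_values).last_hidden_state[:, 1:, :]
    image_embeds = projector(patches.to(torch.bfloat16))  # (1, n_patches, d_llm)
    text_embeds = embed_tokens(caption_ids)               # (1, n_text, d_llm)
    inputs_embeds = torch.cat([image_embeds, text_embeds], dim=1)

    # Only caption tokens contribute to the loss; image positions are masked out.
    ignore = torch.full(image_embeds.shape[:2], -100, device=caption_ids.device)
    labels = torch.cat([ignore, caption_ids], dim=1)

    loss = llm(inputs_embeds=inputs_embeds, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The gradient flows through the frozen LLM back into the projector, which is what aligns the vision features with the 70B model's own embedding space; swapping in the 8B projector won't work because that alignment was learned against a different embedding space.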

Thank you.

qtnx changed discussion status to closed
