Llama Compatibility

#1
by brucethemoose - opened

I'm not sure how different the architecture actually is, but if possible, could you change the config to be llama compatible instead of requiring custom runtime/tokenizer code?

Custom code is a huge blocker: it cuts the model off from the highly optimized llama-architecture infrastructure the community already uses. To be blunt, for the moment I can run Yi 34B 200K with a fraction of the resources it takes to run this 20B model, and finetune it about as efficiently with llama-focused frameworks.

Yi itself already went through this ordeal, and "llamafied" their release to the benefit of everyone: https://huggingface.co/01-ai/Yi-34B/discussions/11

InternLM org

Thank you for your suggestion. The biggest difference is that Wq, Wk, and Wv are combined into a single fused projection; we did this for training efficiency. We are planning to offer a script that converts between the InternLM2 and LLaMA formats.
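The fused projection described above is mathematically equivalent to three separate matrices, so "llamafying" amounts to slicing the fused weight apart. A minimal NumPy sketch, assuming a simple concatenated [q; k; v] row layout without grouped-query-attention head interleaving (the real InternLM2 layout is more involved, which is what a dedicated conversion script handles):

```python
import numpy as np

hidden = 8  # toy hidden size; real models use e.g. 4096
rng = np.random.default_rng(0)

# Fused projection used for training efficiency: one matmul
# produces q, k, and v instead of three separate matmuls.
wqkv = rng.standard_normal((3 * hidden, hidden))
x = rng.standard_normal((hidden,))
q_fused, k_fused, v_fused = np.split(wqkv @ x, 3)

# "Llamafied" equivalent: slice the fused weight into Wq, Wk, Wv.
wq, wk, wv = np.split(wqkv, 3, axis=0)

# The separate projections reproduce the fused outputs exactly.
assert np.allclose(wq @ x, q_fused)
assert np.allclose(wk @ x, k_fused)
assert np.allclose(wv @ x, v_fused)
```

Because the split is a pure re-layout of the same parameters, the converted checkpoint produces bit-identical outputs (up to floating-point associativity) while becoming loadable by llama-format tooling.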

InternLM org

Please try the conversion script in https://github.com/InternLM/InternLM/tree/main/tools to convert the weights to the LLaMA format.

x54-729 changed discussion status to closed
