Inquiries about customized trainer in open script

#9
by MLee1006 - opened

Hi, I read your source code on GitHub. If I understand the customized trainer.py correctly, the expand-layer option only modifies specified layers, whereas your paper describes adding new layers rather than fine-tuning specified ones. I want to clarify whether I have misunderstood your approach. I'd appreciate your insights. Interesting work!

Thanks,
Ming

ARC Lab, Tencent PCG org
edited Jan 22

Thanks for your interest. We interleave identity blocks into the initial model and then tune the added blocks while freezing the rest of the model. I hope this helps.
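To illustrate why the interleaved blocks can be inserted without changing the pretrained model's behavior: each added block is a copy whose output projection is zero-initialized, so through its residual connection it computes the identity at insertion time. A minimal NumPy sketch (the names `w_in`/`w_out` and the simplified block are illustrative assumptions, not the repo's actual code):

```python
import numpy as np

def residual_block(x, w_in, w_out):
    # Simplified transformer sub-block: residual stream plus a
    # projected MLP output, y = x + w_out @ relu(w_in @ x).
    return x + w_out @ np.maximum(w_in @ x, 0.0)

rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=d)
w_in = rng.normal(size=(d, d))

# Zero-initialized output projection: the new block is an exact
# identity, so inserting it leaves the model's function unchanged.
w_out_new = np.zeros((d, d))
assert np.allclose(residual_block(x, w_in, w_out_new), x)
```

Because the zeroed projection makes the block a no-op, training the added blocks starts from exactly the pretrained function and cannot degrade it at step zero.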

I see, thank you so much! If we want to use your GitHub code, do we just need to add the expand_layers argument? Once we specify it, will the identity blocks be created and tuned automatically?

Thanks,
Ming

ARC Lab, Tencent PCG org

I think you should first use https://github.com/TencentARC/LLaMA-Pro/blob/main/scripts/block_expansion.py to create a checkpoint with the added blocks. After that, you can load that checkpoint and specify the added blocks (the ones to be tuned) in the training script.
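As a rough sketch of the second step: after interleaved expansion, the new blocks sit at predictable indices in the expanded model, and those are the ones left trainable. The indexing below is an assumption based on the paper's one-new-block-per-group scheme, not the script's actual arguments:

```python
def expanded_block_indices(n_layers, n_added):
    """Indices of the newly added blocks in the expanded model,
    assuming one identity block is appended after each group of
    n_layers // n_added original layers (interleaved expansion)."""
    group = n_layers // n_added
    # After i blocks have already been inserted, the i-th new block
    # lands at position (i + 1) * group + i in the expanded stack.
    return [(i + 1) * group + i for i in range(n_added)]

# Example: a 32-layer model expanded with 8 blocks -> 40 layers.
new_ids = expanded_block_indices(32, 8)
print(new_ids)  # [4, 9, 14, 19, 24, 29, 34, 39]

# Freezing everything except the added blocks would then look roughly
# like this (torch-style, shown as a comment since `model` is assumed):
# for name, p in model.named_parameters():
#     p.requires_grad = any(f"layers.{i}." in name for i in new_ids)
```

The training script's list of blocks to tune would then correspond to these indices in the expanded checkpoint.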
