Gemma2-2b training uses much more memory!

#23
by bubbleseller - opened

I have been training gemma2-2b as a VLM on 8x 80GB H800 GPUs with a max batch size of 4 using PyTorch FSDP. I find this batch size strange, because training llama2 as a VLM with the same FSDP settings allows a batch size of 32. So I wonder whether there are kernels or computations in the transformers Gemma2 model code that are especially memory-consuming.
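For reference, a minimal sketch of the kind of FSDP wrapping described above (assumptions: torchrun launch, the google/gemma-2-2b checkpoint, sharding at decoder-layer granularity; the actual training loop and VLM projector are not shown in this thread):

```python
import functools

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import AutoModelForCausalLM
from transformers.models.gemma2.modeling_gemma2 import Gemma2DecoderLayer

# One process per GPU, e.g. launched with: torchrun --nproc_per_node=8 train.py
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b", torch_dtype=torch.bfloat16
)

# Shard at the decoder-layer level, the usual auto-wrap policy for LLM backbones.
wrap_policy = functools.partial(
    transformer_auto_wrap_policy, transformer_layer_cls={Gemma2DecoderLayer}
)
model = FSDP(
    model,
    auto_wrap_policy=wrap_policy,
    mixed_precision=MixedPrecision(param_dtype=torch.bfloat16),
    device_id=torch.cuda.current_device(),
)
```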

I have been training gemma2-2b as a VLM

It is very interesting. I wonder how it works.

I find this batch size strange, because training llama2 as a VLM with the same FSDP settings allows a batch size of 32.

IIRC, FSDP doesn't support soft capping in Gemma2. You may need to use alternative settings.
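One thing worth checking is how the soft capping interacts with the attention implementation: Gemma 2's logit soft capping is typically run with eager attention rather than flash/SDPA kernels, and eager attention materializes the full attention matrix, which may explain part of the memory gap versus Llama 2. A minimal sketch for inspecting this, assuming the stock google/gemma-2-2b checkpoint:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("google/gemma-2-2b")
# Gemma 2 applies soft capping to the attention logits and the final logits;
# these config fields hold the cap values (None would mean disabled).
print("attn_logit_softcapping:", config.attn_logit_softcapping)
print("final_logit_softcapping:", config.final_logit_softcapping)

# With soft capping enabled, the model is usually loaded with eager attention,
# which keeps the full [batch, heads, seq, seq] attention scores in memory.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b",
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",
)
print("attention implementation:", model.config._attn_implementation)
```

If the memory overhead does come from the eager attention path, reducing the sequence length or disabling soft capping (at the cost of diverging from the released training recipe) would be the settings to experiment with.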
