pretraining Gemma for domain dataset

#41
by Iamexperimenting - opened

Hi team,

I would like to pretrain the Gemma model on my domain dataset, i.e. continue training Gemma on my domain data. I want to train all of the parameters rather than use LoRA.

1.a: Does the tokenizer learn/add any new tokens (domain-specific words that are not present in its original vocabulary) during continued pre-training?

Can you please provide an example article on fine-tuning all parameters?

@ybelkada @suryabhupa

@suryabhupa @ybelkada can you please provide an example?

Google org

Hello! Sorry for the delay.

  1. I'm not sure what you mean by new tokens; you shouldn't need to add any new tokens when finetuning, and you are welcome to use any formatting template you'd like. See the formatting we use ourselves if you'd like, especially as those control tokens are natively supported by our tokenizer.

  2. The Zephyr 7B Gemma team published their finetuning setup here: https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1, and other guides exist as well, such as https://ai.google.dev/gemma/docs/jax_finetune, https://lightning.ai/lightning-ai/studios/understanding-using-and-finetuning-gemma, and https://www.kaggle.com/code/lucamassaron/fine-tune-gemma-7b-it-for-sentiment-analysis (see also the sketch below this list).
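
For a concrete starting point, here is a minimal sketch of full-parameter continued pretraining (no LoRA) with the Hugging Face Trainer. This is not the official Gemma training recipe; the model id, corpus file name, context length, and hyperparameters are placeholder assumptions to adapt to your own domain data and hardware.

```python
# Minimal sketch: full-parameter continued pretraining of Gemma on a domain
# corpus with the Hugging Face Trainer. All values below are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "google/gemma-7b"  # or "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # all parameters trainable

# Hypothetical domain corpus: a plain-text file with one document per line.
raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    # Gemma's tokenizer prepends <bos> by default; append <eos> so each
    # document is delimited before batching.
    texts = [t + tokenizer.eos_token for t in batch["text"]]
    return tokenizer(texts, truncation=True, max_length=2048)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives standard causal-LM (next-token prediction) labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gemma-domain-cpt",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,  # assumes bf16-capable hardware
    logging_steps=10,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

In practice you would also shard the model across GPUs (e.g. with FSDP or DeepSpeed via accelerate), since full-parameter training of a 7B model does not fit on a single consumer GPU.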

Iamexperimenting changed discussion title from domain specific fine-tuning to pretraining Gemma for domain dataset

@suryabhupa @ybelkada
The token I was referring to is a domain-specific word (technical term) that is not present in the tokenizer's training vocabulary.
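
The reply above says new tokens aren't needed, but if you did want to experiment with adding domain-specific terms to the vocabulary before continued pretraining, a common pattern with the transformers API is sketched below. The example terms are made up, and any newly added embedding rows start out randomly initialized, so they only become useful after further training on text that contains them.

```python
# Optional sketch (not required per the reply above): add domain-specific
# tokens to the tokenizer and resize the model's embedding matrix to match.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

new_tokens = ["acetylsalicylic", "NVMe-oF"]  # hypothetical domain terms
num_added = tokenizer.add_tokens(new_tokens)

if num_added > 0:
    # Newly added rows in the embedding (and tied LM head) are randomly
    # initialized; continued pretraining is what makes them meaningful.
    model.resize_token_embeddings(len(tokenizer))
```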

I think I used the wrong term in the title and the description above. I have now changed it.

Basically, I want to continue training the Gemma model on my domain data, and I want to train all of the parameters in the model.

Just as the Google team trained the base Gemma model, I would like to take the base Gemma model and continue training it on my domain data.

Did you use the <bos> and <eos> delimiters when training the base Gemma model?

@suryabhupa @ybelkada can you please guide here?

Google org

Hi @Iamexperimenting
Thanks! I will let @suryabhupa reply here whenever he can, as I am not familiar with the Gemma training procedure.

Google org

Hello! Yes, when constructing batches, I'd recommend having sequences in your pretraining set-up that have <bos> and <eos> tokens in the right places to delimit sequences, but also to properly construct the attention masks. We also use <bos> and <eos> tokens when doing so. You should experiment with how exactly you pack your examples into a single batch; I'd recommend checking out the T5, GPT, or PaLM papers for some details on how they did it. You do not need to add any extra tokens here.
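
To make the packing idea concrete, here is a rough sketch of one way to pack tokenized documents into fixed-length blocks delimited by <bos> and <eos>. It is only an illustration, not the Gemma pretraining pipeline: the block size is a placeholder, and the simple all-ones attention mask does not stop tokens from attending across document boundaries, which the packing schemes discussed in the T5/GPT/PaLM papers handle more carefully.

```python
# Rough sketch: concatenate <bos>...<eos>-delimited documents and slice the
# token stream into fixed-length training blocks. Values are placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
BLOCK_SIZE = 2048  # placeholder context length

def pack_documents(docs):
    """Tokenize docs, concatenate them, and split into BLOCK_SIZE chunks."""
    stream = []
    for doc in docs:
        # Gemma's tokenizer prepends <bos> by default; append <eos> to mark
        # where each document ends in the concatenated stream.
        stream.extend(tokenizer(doc)["input_ids"] + [tokenizer.eos_token_id])
    n_blocks = len(stream) // BLOCK_SIZE  # drop the ragged tail
    return [
        {
            "input_ids": stream[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE],
            "attention_mask": [1] * BLOCK_SIZE,
            "labels": stream[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE],
        }
        for i in range(n_blocks)
    ]

# Example usage with made-up documents:
blocks = pack_documents(["first domain document ...", "second domain document ..."])
```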
