Discrepancy between num_samples when fine tuning and the number of samples in my training file

#27 opened by codev

Hello,

I am trying to fine-tune ProtGPT2 using a training dataset of about 4000 sequences. However, when I run run_clm.py it reports that the number of training samples is 605 (see screenshot below), so it doesn't seem to be using all of my training data. I have tried adjusting the batch size and number of epochs, and setting the --max_train_samples argument to 10000, none of which has any effect. Has anyone else run into this?

[Screenshot of the run_clm.py output reporting 605 training samples]

Thanks in advance!

Kathryn

Hi codev!

The run_clm.py script concatenates the sequences and splits them into blocks that fill the 512-token context window. The window size is expressed in tokens, and each token corresponds on average to roughly four amino acids, so every block can contain around 2-10 sequences depending on their lengths. Hence, 605 training samples can perfectly well comprise 4000 sequences.
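To make the counting concrete, here is a simplified sketch of the grouping step (the group_texts function) in Hugging Face's run_clm.py. The toy token ids and sequence lengths at the bottom are made up purely for illustration; the real script applies this to your tokenized FASTA-style sequences.

```python
# Simplified sketch of run_clm.py's grouping step: tokenized sequences are
# concatenated into one stream and re-split into fixed-length blocks, so the
# reported "num samples" counts blocks, not original sequences.
from itertools import chain

block_size = 512  # the --block_size / context window used for ProtGPT2


def group_texts(examples):
    # Concatenate the token ids of all sequences into one long stream.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the small remainder that does not fill a whole block.
    total_length = (total_length // block_size) * block_size
    # Re-split the stream into chunks of block_size tokens.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # For causal LM training the labels are the inputs themselves.
    result["labels"] = result["input_ids"].copy()
    return result


# Toy illustration (hypothetical token ids): 4000 sequences of ~80 tokens each
# collapse into 625 blocks of 512 tokens, i.e. ~625 training samples.
toy = {"input_ids": [[0] * 80 for _ in range(4000)]}
blocks = group_texts(toy)
print(len(blocks["input_ids"]))  # 625, not 4000
```

So with your data, the 605 reported samples are simply the number of 512-token blocks your 4000 sequences pack into.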
Hope this helps, let me know if it doesn't!

Got it, thank you so much @nferruz! I was coming to a similar conclusion yesterday as I was reading through the run_clm script, but it is helpful to know that this is the intended behavior.

Best,

Kathryn
