Lengths of designed proteins

#6
by SenyorDrew - opened

Thanks for making such an awesome resource available. I had a question about the lengths of designed proteins - it seems this question was asked previously, but I don't think I understand the answer. I've followed the instructions for generating protein sequences:
sequences = protgpt2("M", min_length=50, max_length=70, do_sample=True, top_k=950, repetition_penalty=1.2, num_return_sequences=10, eos_token_id=0)

I'm trying to request that the protein sequences be no longer than 70 amino acids, but most of the sequences are much longer (most are ~180 AA, with the shortest being 98 AA). They also contain multiple "\n" tokens.
Questions:

  1. How should the "\n" tokens be interpreted? Should I just ignore them?
  2. How can I force the output designed sequences to actually be between 50 and 70 amino acids in length?

Thanks in advance for the help.

Hi!

Thanks a lot for reaching out and using ProtGPT2!

The min_length and max_length arguments are expressed in tokens, not amino acids. The dataset was tokenized before the pre-training stage, and each token contains, on average, 3-4 amino acids (though this can vary a lot); hence you get around 180 AA. If you want sequences shorter than 70 AA, I would try min_length=20 and max_length=25. This will require some trial and error, because some sequences will be enriched in tokens that are longer or shorter than average. If you need a specific range, what I'd do is generate with those approximate values (min_length=20, max_length=25) and discard the sequences that do not fulfill your criteria. Inference should generally be very fast if you have a GPU.
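As a rough sketch of that generate-and-filter loop (assuming the protgpt2 pipeline was created as in the model card; the exact bounds and batch size are just starting points):

```python
from transformers import pipeline

protgpt2 = pipeline("text-generation", model="nferruz/ProtGPT2")

# min_length/max_length count tokens (~3-4 amino acids each), so 20-25 tokens
# should land roughly in the 50-70 AA range.
outputs = protgpt2(
    "M",
    min_length=20,
    max_length=25,
    do_sample=True,
    top_k=950,
    repetition_penalty=1.2,
    num_return_sequences=100,
    eos_token_id=0,
)

# Keep only sequences that actually fall in the desired amino-acid range.
kept = []
for out in outputs:
    seq = out["generated_text"].replace("\n", "").replace("<|endoftext|>", "")
    if 50 <= len(seq) <= 70:
        kept.append(seq)
```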

When generating de novo sequences, instead of 'M', I'd start with the end-of-sequence token '<|endoftext|>'. This avoids most of those '\n' tokens. Starting with '<|endoftext|>', you should not see more than one '\n' token per ~60 amino acids. If you do, I'd discard those sequences and keep sampling until you get good ones.
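In code, that might look something like the following (again just a sketch with the same pipeline setup; the one-newline-per-60-AA threshold is only the rule of thumb above):

```python
from transformers import pipeline

protgpt2 = pipeline("text-generation", model="nferruz/ProtGPT2")

outputs = protgpt2(
    "<|endoftext|>",  # start from the end-of-sequence token instead of "M"
    min_length=20,
    max_length=25,
    do_sample=True,
    top_k=950,
    repetition_penalty=1.2,
    num_return_sequences=100,
    eos_token_id=0,
)

clean = []
for out in outputs:
    text = out["generated_text"]
    seq = text.replace("\n", "")
    # Rule of thumb: at most one '\n' per ~60 amino acids; otherwise resample.
    if text.count("\n") <= max(1, len(seq) // 60):
        clean.append(seq)
```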

Another thought: when the model is passed the max_length argument, it generates up to that length, but perhaps those sequences aren't naturally that short, and the model is truncating them (left to itself, it would generate longer sequences, 300-400 amino acids long). To avoid this, I'd generate many sequences and only keep those whose final token is '<|endoftext|>'. This way you avoid unnaturally truncated sequences.
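A sketch of that last check, using model.generate directly so the final token ids can be inspected (setting pad_token_id=0 is my own assumption, so that sequences which finish early are padded with the '<|endoftext|>' id):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nferruz/ProtGPT2")
model = AutoModelForCausalLM.from_pretrained("nferruz/ProtGPT2")

input_ids = tokenizer("<|endoftext|>", return_tensors="pt").input_ids
generated = model.generate(
    input_ids,
    min_length=20,
    max_length=25,
    do_sample=True,
    top_k=950,
    repetition_penalty=1.2,
    num_return_sequences=100,
    eos_token_id=0,
    pad_token_id=0,  # pad finished sequences with the eos id
)

natural = []
for ids in generated:
    # Keep only sequences that ended with '<|endoftext|>' (id 0), i.e. the model
    # finished the protein rather than being cut off at max_length.
    if ids[-1].item() == 0:
        seq = tokenizer.decode(ids, skip_special_tokens=True).replace("\n", "")
        natural.append(seq)
```

The amino-acid length filter from the first sketch can then be applied to these sequences as well.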

I hope this helps for now; if you have any more questions, please keep me posted!
Noelia
