Fine-tuning memory and custom tokenizer

#2
by ipark - opened

Thanks for sharing this great work! I have two questions:

Q1. For fine-tuning (Example 2), is there a minimum GPU memory requirement?
In the ZymCTRL paper, NVIDIA A100 GPUs with 40 GB of memory were used. My GPU has 12 GB of memory, and I'm wondering whether that matters,
since I got the same error as in https://discuss.huggingface.co/t/cuda-out-of-memory-error/17959/4.
I reduced the batch size to 1 and the block size down to 32, but still got the same error.
If I use the CPU instead with --no_cuda, I can fine-tune, albeit very slowly.
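For reference, a rough sketch like the one below should show the raw weight footprint of the checkpoint (weights only; AdamW optimizer states and activations add several times more during training). The model id is whichever ZymCTRL checkpoint you load:

```python
# Rough sketch: gauge the raw fp32 weight footprint of the model.
# Optimizer states and activations add several times more memory during training.
from transformers import AutoModelForCausalLM

model_id = "AI4PD/ZymCTRL"  # adjust to the checkpoint id you are actually loading
model = AutoModelForCausalLM.from_pretrained(model_id)

n_params = sum(p.numel() for p in model.parameters())
weight_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters, ~{weight_bytes / 1024**3:.1f} GiB of weights")
```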

Q2. Does it make sense to fine-tune the pretrained ZymCTRL with a custom tokenizer (one that tokenizes SMILES strings instead of EC numbers)? In general, is there any restriction on the length of prompts in the training set?

Thank you very much.

AI for protein design org

Hi ipark,

Thanks a lot for posting! It sounds like 12 GB may not be enough to fit the model for training. As you say, I've only tried A100s and A40s, but from your error it sounds like you will need more than 12 GB or will have to use the CPU. If I remember correctly, there is a documentation page on Hugging Face with tricks for training large models (I can't seem to find it right now), and it has tips for fitting the model onto 'smaller' GPUs.
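The usual first things to try are a small micro-batch with gradient accumulation, mixed precision, and gradient checkpointing. A minimal sketch with the Trainer API (the script from Example 2 may expose these as command-line flags instead; `model` and `train_dataset` are assumed to be your loaded ZymCTRL model and tokenized dataset, and the values are illustrative):

```python
# Sketch: memory-saving settings for fine-tuning on a small GPU.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="zymctrl-finetuned",
    per_device_train_batch_size=1,   # smallest possible micro-batch
    gradient_accumulation_steps=8,   # keep an effective batch size of 8
    gradient_checkpointing=True,     # trade compute for activation memory
    fp16=True,                       # half-precision training on CUDA GPUs
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,                  # the loaded ZymCTRL model
    args=args,
    train_dataset=train_dataset,  # your tokenized training set
)
trainer.train()
```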

Q2: Yes, you can fine-tune with a different tokenizer as well. I expect, however, that you would need to fine-tune for quite a long time, because as it stands ZymCTRL doesn't have any knowledge of chemistry. The input limit is 1024 tokens.
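Roughly, the workflow would be to train a new tokenizer on your SMILES corpus and resize the embedding matrix before fine-tuning. A sketch, assuming the fast tokenizer shipped with the checkpoint; the corpus and vocabulary size are placeholders:

```python
# Sketch: swap in a SMILES tokenizer (corpus and vocab size are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AI4PD/ZymCTRL"  # adjust to the checkpoint id you are loading
base_tokenizer = AutoTokenizer.from_pretrained(model_id)  # must be a "fast" tokenizer
model = AutoModelForCausalLM.from_pretrained(model_id)

smiles_corpus = ["CCO", "c1ccccc1", "CC(=O)O"]  # replace with your real SMILES dataset
new_tokenizer = base_tokenizer.train_new_from_iterator(smiles_corpus, vocab_size=1000)

# Resize the (tied) embeddings so the model accepts the new vocabulary,
# then fine-tune as usual; sequences must stay within the 1024-token context.
model.resize_token_embeddings(len(new_tokenizer))
new_tokenizer.model_max_length = 1024
```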

Thank you, Noelia!

This might be the Hugging Face documentation page with the tricks you were referring to:
https://huggingface.co/docs/transformers/v4.18.0/en/performance
I'll look into it.

Thanks again!
