Awesome. Request for training script.

#2 opened by gsaivinay

This is awesome. Thanks for providing this model. I was looking for a model trained on instructions, and this one works very well. Would it be possible to provide the original training script, or at least the data preprocessing scripts (splitting into chunks, converting to tokens, etc.), so that I can fine-tune this model the same way for my own use cases?

NLP Cloud org

Thanks, great to see it's useful!
Here is the dataset I used: https://huggingface.co/datasets/nlpcloud/instructions-dataset-adapted-from-stanford-alpaca-for-gpt-j
Here's the repo I used for the fine-tuning: https://github.com/kingoflolz/mesh-transformer-jax. I used 5 epochs, and apart from that I kept the default parameters in the repo.
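In case it helps with the preprocessing part of the question, here is a minimal sketch of how the linked dataset could be tokenized and packed into fixed-length chunks for causal-LM fine-tuning. The split name, the `text` column, the block size, and the tokenizer choice are assumptions for illustration, not details taken from the original training setup:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Dataset linked above; the "train" split and "text" column are assumptions --
# check the dataset card for the actual schema.
dataset = load_dataset(
    "nlpcloud/instructions-dataset-adapted-from-stanford-alpaca-for-gpt-j",
    split="train",
)

# GPT-J tokenizer (assumed); any compatible tokenizer works the same way.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

BLOCK_SIZE = 2048  # GPT-J context length; adjust as needed


def tokenize(batch):
    return tokenizer(batch["text"])


def group_into_chunks(batch):
    # Concatenate all token ids, then split into fixed-size blocks.
    concatenated = sum(batch["input_ids"], [])
    total = (len(concatenated) // BLOCK_SIZE) * BLOCK_SIZE
    chunks = [concatenated[i : i + BLOCK_SIZE] for i in range(0, total, BLOCK_SIZE)]
    return {"input_ids": chunks, "labels": [c[:] for c in chunks]}


tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
chunked = tokenized.map(group_into_chunks, batched=True, remove_columns=tokenized.column_names)
print(chunked)
```

Note that mesh-transformer-jax expects its own tfrecord input format (see the fine-tuning guide linked below), so this only illustrates the general chunk-and-tokenize idea.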

Thanks for your reply, much appreciated. Could you please answer a few more questions for me? While training, did you freeze any layers? And what kind of hardware/software configuration did you use (GPU VRAM, CPU memory, etc.)?

I initially trained the base GPT-J model in fp16 by freezing all layers and unfreezing the top n layers, which gave decent results for my use case and data, but I was unable to train the entire model on a single GPU with 24 GB of VRAM. I'm looking into AWS SageMaker SMP.
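For reference, a minimal sketch of that partial-freezing setup with PyTorch/transformers; the layer count, precision, and model variant are assumptions for illustration, not the configuration used to train the published model:

```python
import torch
from transformers import GPTJForCausalLM

# Load GPT-J in fp16 so it fits on a 24 GB GPU (assumed setup).
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
).to("cuda")

N_TRAINABLE = 4  # number of top transformer blocks to unfreeze (assumed value)

# Freeze everything first...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze the last N transformer blocks plus the LM head.
for block in model.transformer.h[-N_TRAINABLE:]:
    for param in block.parameters():
        param.requires_grad = True
for param in model.lm_head.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```

Training directly in fp16 usually needs gradient/loss scaling (or bf16 where supported) to stay numerically stable.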

NLP Cloud org

You're welcome.
No, I did not freeze any layers. I fine-tuned the model on a TPU v3-8, following this guide: https://github.com/kingoflolz/mesh-transformer-jax/blob/master/howto_finetune.md
The fine-tuning took around one hour.
