
What is the fine-tuning process for GPT-JT-6B-v1? Are any docs available?

#15
by MukeshSharma - opened

Is there any Python notebook available for fine-tuning GPT-JT-6B-v1 on my personal dataset?

Is it good for code generation? Can it perform better than the original GPT-J?

Want to know as well

I tried fine-tuning an 8-bit quantized version of GPT-JT and failed to get any output. I've fine-tuned 8-bit quantized regular GPT-J without issue, so I'm wondering if there are differences in fine-tuning the two models.

Looks like this model cannot be fine-tuned

Together org

Sorry for the late reply @yahma @kobalsky
I finally realized that to achieve bidirectional attention for inference, we zero out the causal mask with `layer.bias[:] = 0`. This is fine because during inference the model naturally cannot see future tokens, so removing the causal mask doesn't cause any problems.
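
For context (this sketch is not from the original reply), here is one way you could inspect those per-layer causal-mask buffers yourself. It assumes the Hugging Face `GPTJForCausalLM` implementation, where each block's attention module registers a boolean `bias` buffer as its causal mask, and the hub id `togethercomputer/GPT-JT-6B-v1`; details may differ across transformers versions.

```python
from transformers import GPTJForCausalLM

# Assumption: GPT-JT-6B-v1 is published under this hub id and uses the
# standard GPT-J architecture from transformers.
model = GPTJForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")

for i, block in enumerate(model.transformer.h):
    # `bias` is the boolean causal-mask buffer, shape (1, 1, n_positions, n_positions).
    bias = block.attn.bias
    frac = bias.float().mean().item()
    # ~0.5 corresponds to the usual lower-triangular causal mask;
    # 0.0 means the mask has been zeroed out as described above.
    print(f"layer {i}: fraction of attendable positions = {frac:.3f}")
```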

In order to do training / fine-tuning, we should revert this change and manually control the causal mask for each sequence: the prompt part of the mask should be all zeros (bidirectional), and the generation part should keep the causal mask. Otherwise there will be information leakage during training (each token can see the entire sequence), and the model won't learn anything meaningful.
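
A minimal sketch of reverting the buffers to the standard causal mask before fine-tuning, under the same assumptions as the snippet above. Note that this only restores plain causal masking; the prefix-style masking described above (bidirectional over the prompt, causal over the completion) would additionally require a custom per-sequence attention mask.

```python
import torch
from transformers import GPTJForCausalLM

model = GPTJForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")
n_positions = model.config.n_positions  # 2048 for GPT-J

# Standard lower-triangular (causal) mask, matching what GPTJAttention builds at init.
causal = torch.tril(
    torch.ones((n_positions, n_positions), dtype=torch.bool)
).view(1, 1, n_positions, n_positions)

for block in model.transformer.h:
    # Overwrite the (possibly zeroed) buffer in place, keeping its dtype and device.
    block.attn.bias.copy_(
        causal.to(device=block.attn.bias.device, dtype=block.attn.bias.dtype)
    )
```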

Hello @juewang, does that mean we cannot fine-tune it on a new (specific) dataset?
For example, I want to tune the model on just a medical dataset for question answering, and I want the model to "generate" the answers from its own knowledge.

In this case, what procedure should I follow to tune the model on just the medical dataset, so that I can then ask a question like "What are the top 5 causes of diarrhea?" and have it return a "generated" answer?

Please help, thanks.
