Few-shot learning

#1
by iRanadheer - opened

Hey, fine-tuning GPT-J on the Stanford Alpaca instructions is a great idea. I was looking for something exactly like this, and it's great to see someone has already done such a good job.

I'm trying to do few-shot learning (classification) with the GPT-J model, but it doesn't do a good job. I tried the Alpaca model (LoRA), which improved accuracy, but it's still not enough. So I came across your instruct GPT-J model. When I tried few-shot prompts with it, it didn't seem to understand them at all; I got very bad results across the board. Maybe I'm doing something completely wrong. The documentation for this model says that I don't need few-shot learning because the model can understand instructions directly.

What if I want to do few-shot anyway and provide a few examples for each class? Will that work? Do you have any sample prompts, or have you not tested this? I'm very curious to know.
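
To make this concrete, here is a minimal sketch of the kind of few-shot classification run I mean, using the base GPT-J checkpoint via transformers. The model name, labels, and example texts are just illustrative placeholders, not my actual task:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; swap in the instruct model being discussed here.
model_name = "EleutherAI/gpt-j-6B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Classic few-shot classification prompt: a couple of labeled examples
# per class, then the text to classify with the label left blank.
prompt = """Text: I love this phone, the battery lasts forever.
Sentiment: positive

Text: The screen cracked after two days.
Sentiment: negative

Text: Shipping was fast and the product works as described.
Sentiment:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=3,                      # only the label is needed
    do_sample=False,                       # greedy decoding for stable labels
    pad_token_id=tokenizer.eos_token_id,   # GPT-J has no pad token
)
# Decode only the newly generated tokens, i.e. the predicted label.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion.strip())  # expected: "positive"
```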

NLP Cloud org

Thanks, and cool to see that you had the same idea :)
In our tests this model still performs correctly for few-shot learning, even though it was fine-tuned on instructions. But to be honest, I don't really see why you would use this model if you are trying to perform text classification with few-shot learning.
Maybe you can have a look at our guide to see how to perform few-shot learning for classification: https://nlpcloud.com/effectively-using-gpt-j-gpt-neo-gpt-3-alternatives-few-shot-learning.html#zero-shot-text-classification
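
For everyone reading along, a typical few-shot classification prompt looks something like the sketch below. The labels, separators, and examples are only illustrative; see the guide above for the details:

```
Text: The food was cold and the waiter was rude.
Category: negative
###
Text: Amazing service, will definitely come back!
Category: positive
###
Text: It was okay, nothing special.
Category:
```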

I have tried GPT-J and GPT-Neo for few-shot learning; the quality is not good enough. In fact, I tested a few examples on your playground, and I think it's more or less the same. Did you retrain or fine-tune the GPT-J model before you created the API?

NLP Cloud org

"I think it's more or less the same" --> I am not exactly sure what you mean by that.
Maybe you can copy-paste a few-shot example here so I can advise, and so it benefits everyone who is reading this?

"Did you retrain or fine-tune the gptj model" --> Yes this instruct GPT-J model is a fine-tuned version of GPT-J

If you want to see something insane, try NLP Cloud's fine-tuned GPT-Neo model.
