Issue with Model Deployment in Google Colab - Need Help with Hugging Face

#2
by Samis922 - opened

Hey everyone,

I've been working on deploying a question generation model from Hugging Face, and I'm encountering a bit of an issue when using it in Google Colab. In a live environment, the model performs well, generating questions related to my text with the hints I provide. However, in Google Colab it seems to generate only one type of question, always in the format of "Is it true that...?"

I'd really appreciate some guidance on how to make the "Multi-Task Model(s) Sensitive to Hints" work smoothly in Google Colab. I suspect there might be some differences in the environment or how hints are being interpreted.

If anyone has experience with this or can offer suggestions on hint placement, prompt structure, or any other relevant tips, I'd be grateful for your insights. I'm eager to make this model work effectively in Colab for my project.

Thanks in advance for your help!

Best regards,
Samane

@Samis922

I assume your question is about the context model? If so, have you had a chance to look at the live demo yet? It has a lot of specifics w.r.t. hyperparameters and the code used behind the scenes. Hopefully this helps resolve the issue for you.

Demo Link: https://huggingface.co/spaces/consciousAI/question_generation

This is the model I want to use: consciousAI/question-generation-auto-hints-t5-v1-base-s-q-c. In the live demo everything works really well when I use a hint, but I don't know how to deploy it in Python with that exact setup. I am a beginner at coding and working with models, sorry for these types of questions.
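A minimal sketch of what this could look like in a Colab cell, assuming the model works as a standard `transformers` text2text pipeline. Note that the `question_hint:`/`question_context:` prompt prefixes below are only a guess at the input format; the demo Space's source code (linked above) shows the exact prompt the authors use, and that should be followed instead.

```python
def build_input(context: str, hint: str = "") -> str:
    """Build the model input string.

    The prefix format here is an assumption (not confirmed by the model
    card); check the demo Space's code for the real one.
    """
    if hint:
        return f"question_hint: {hint} question_context: {context}"
    return f"question_context: {context}"


def generate_question(context: str, hint: str = "") -> str:
    # Imported inside the function so build_input() stays usable
    # without transformers installed (pip install transformers).
    from transformers import pipeline

    qg = pipeline(
        "text2text-generation",
        model="consciousAI/question-generation-auto-hints-t5-v1-base-s-q-c",
    )
    return qg(build_input(context, hint))[0]["generated_text"]


# Example (downloads the model weights on first run):
# print(generate_question("Paris is the capital of France.", hint="capital"))
```

If the Colab output still collapses to "Is it true that...?" questions with the correct prompt format, it is worth comparing the generation hyperparameters (beam size, max length, etc.) against the ones used in the Space, since the demo may pass non-default values.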
