Can I pass the prompt via a POST request to the HF Inference Endpoint / API?

#5
by jamesdhope - opened

Hi there, can I pass the prompt in the POST request to the HF Inference Endpoint, and if so, what is the correct payload format, please? Thanks.

I am not sure whether this model is already supported in Inference Endpoints (I am not on that team). I know it's not yet supported in transformers' pipeline.
I will check the Inference Endpoints part.

Is there any update on this issue?

Hi~

For this model, at the moment, a custom inference handler is needed. You can check the following documentation:

https://huggingface.co/docs/inference-endpoints/guides/custom_handler
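As a minimal sketch of what that guide describes: you add a `handler.py` to the model repository containing an `EndpointHandler` class, and the endpoint passes the JSON body of your POST request into its `__call__` method. The class name and method signatures below follow the guide; the model-loading and generation lines are assumptions (standard `AutoTokenizer`/`AutoModelForCausalLM` calls), since this model may need its own loading code.

```python
# handler.py -- minimal custom handler sketch, assuming a standard causal-LM loading path.
from typing import Any, Dict, List

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the repository files available inside the endpoint.
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForCausalLM.from_pretrained(
            path, torch_dtype=torch.float16, device_map="auto"
        )

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # The endpoint hands the JSON body of the POST request to this method.
        prompt = data.pop("inputs", data)
        parameters = data.pop("parameters", {}) or {}

        input_ids = self.tokenizer(prompt, return_tensors="pt").input_ids.to(
            self.model.device
        )
        output_ids = self.model.generate(input_ids, **parameters)
        text = self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
        return [{"generated_text": text}]
```

With a handler like the one above, the prompt can then be sent in the POST body. The exact payload format is whatever your handler parses; the sketch above expects the common `{"inputs": ..., "parameters": ...}` shape, e.g. (endpoint URL and token are placeholders):

```python
import requests

API_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
headers = {
    "Authorization": "Bearer <HF_TOKEN>",  # placeholder token
    "Content-Type": "application/json",
}
payload = {
    "inputs": "Write a short poem about the sea.",
    "parameters": {"max_new_tokens": 64},
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```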
