Serverless Inference API is currently completely broken

#28
by simplexx1 - opened

Serverless Inference API is completely broken...
Input for testing:
Tell me cool little story about the world with a twist.
Output:
[{'generated_text': "Tell me cool little story about the world with a twist. You are exactly 6 sentences.' InStyle It started in a castle made out of rain. Then it was just one game check more..\nBooks Cr, d 24 cheap adidas shoes uk Since its founding, the Cr adidas outlet shop coupon d brand has become recognizable to book lovers everywhere'even to those who don't speak or read English. Here's why Cr d has bee cheap adidas shoes wholesale n the Part Face is deep asleep is an indispensable book for succ cheap shoes"}]

same here

same, the responses don't make any sense.

anyway, I get "Model requires a Pro subscription; check out hf.co/pricing to learn more. Make sure to include your HF token in your query." Lol
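
For anyone hitting that error, here's a minimal sketch of a raw request with the token attached, assuming the usual serverless endpoint (the model ID and token below are placeholders):

```python
import requests

# Placeholders: swap in the actual model ID and your own HF token.
API_URL = "https://api-inference.huggingface.co/models/<model-id>"
HEADERS = {"Authorization": "Bearer hf_xxx"}

def query(payload):
    # POST the JSON payload to the serverless endpoint and return the parsed response.
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

print(query({"inputs": "Tell me a cool little story about the world with a twist."}))
```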

Can somebody with a pro sub confirm that it works?

I've got Pro, but as I said, the responses don't make sense; there doesn't seem to be a clear guide on how to structure the input.

Same here. Are we not meant to be using "inputs" as the input, and should we be sending the role/content payload instead?
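
If this is a chat/instruct model, a raw "inputs" string skips the chat template, which may explain the nonsense continuations. A sketch of sending the role/content payload instead, via huggingface_hub's InferenceClient (model ID and token are placeholders):

```python
from huggingface_hub import InferenceClient

# Placeholders: swap in the actual model ID and your own HF token.
client = InferenceClient(model="<model-id>", token="hf_xxx")

# Send the role/content messages so the chat template is applied server-side.
response = client.chat_completion(
    messages=[{"role": "user", "content": "Tell me a cool little story about the world with a twist."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```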

Has anyone solved this problem?
