How to send parameters in API?

#113
by Keklikobeko - opened

I am using the API since I don't have enough hardware to run the model locally. The documentation (https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) says that I can pass parameters, and there is an example of how to do it with gpt-2, but I can't send any parameter except the text itself. Am I doing something wrong, or is it not possible to customize parameters?

Edit: I realized it was my mistake, sorry.
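For anyone landing here with the same question, here is a minimal sketch of how generation parameters are typically sent to the Inference API: they go inside a nested "parameters" object in the JSON payload, not at the top level. The model URL (bigscience/bloom) and the token hf_xxx below are placeholders, and the chosen parameter values are just examples.

```python
import requests

# Placeholder endpoint and token -- replace with your own model id and HF access token.
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
headers = {"Authorization": "Bearer hf_xxx"}

payload = {
    "inputs": "Once upon a time,",
    # Generation parameters must be nested under "parameters",
    # not placed at the top level of the payload.
    "parameters": {
        "max_new_tokens": 50,
        "temperature": 0.7,
        "top_p": 0.9,
        "do_sample": True,
    },
    # Optional: wait for the model to load instead of getting a 503.
    "options": {"wait_for_model": True},
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```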

Keklikobeko changed discussion status to closed
Keklikobeko changed discussion status to open
TimeRobber changed discussion status to closed
