Passing parameters to the model deployed on HF Inference Endpoints

#59
by dkincaid - opened

I've got the model deployed to HF Inference Endpoints, but I can't figure out how to pass parameters (like language, return_timestamps, etc.) to the model. Since the audio bytes seem to need to be sent as the raw request body, I don't see how parameters can be passed, especially since the language parameter has to be passed as a nested dict like {"generate_kwargs": {"language": "en"}}.

import requests

# API_URL is the endpoint URL and hf_api_token the access token,
# both defined elsewhere.
headers = {"Authorization": f"Bearer {hf_api_token}",
           "Content-Type": "audio/x-mpeg-3"}

def query(filename):
    """Send the raw audio bytes as the request body."""
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()

# Usage example
whisper_transcription = query(audio_file_path)
whisper_transcription
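One workaround that is often suggested: if the handler behind your endpoint accepts a JSON body (custom Inference Endpoint handlers can; the raw-bytes route above does not), you can base64-encode the audio and send parameters alongside it. This is only a sketch under that assumption — whether the `parameters` / `generate_kwargs` keys are honored depends entirely on the handler deployed on your endpoint:

```python
import base64

def build_json_payload(filename, language="en"):
    """Base64-encode the audio and attach generation parameters.

    Mirrors the nested dict from the question above; whether the
    server accepts this shape depends on the endpoint's handler.
    """
    with open(filename, "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "inputs": audio_b64,
        "parameters": {"generate_kwargs": {"language": language}},
    }

# The payload would then be sent as JSON instead of raw bytes, e.g.:
# requests.post(API_URL,
#               headers={"Authorization": f"Bearer {hf_api_token}"},
#               json=build_json_payload(audio_file_path))
```

Note that the Content-Type header must then be application/json rather than an audio MIME type.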

Hi @dkincaid,
I have been struggling with the same problem, but the solution is simple.
Usually the payload is JSON, where you can enter arguments. Depending on the model's task (audio transcription, predicting the next word, etc.), different keywords can be passed.
The keywords allowed for each task are listed here: https://huggingface.co/docs/api-inference/detailed_parameters
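For a task that does accept parameters (text generation, for example), the JSON payload generally follows the "inputs" plus "parameters" shape. A hedged sketch — the exact keys for each task are the ones listed on the detailed-parameters page linked above:

```python
import json

# Hypothetical text-generation payload: "inputs" carries the prompt,
# "parameters" carries task-specific keyword arguments.
payload = {
    "inputs": "The tower is 324 metres tall,",
    "parameters": {"max_new_tokens": 20, "temperature": 0.7},
}

# It would be sent with a JSON content type, e.g.:
# requests.post(API_URL,
#               headers={"Authorization": f"Bearer {hf_api_token}"},
#               json=payload)
print(json.dumps(payload, indent=2))
```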

In our case the task is automatic speech recognition, and the allowed parameters for it are ... well, none.
[Screenshot: the detailed-parameters page lists no parameters for automatic speech recognition.]
Currently, no other parameters can be passed.

"But the solution is so simple"
What is the solution, @joshwe?

Is there any update on this? I still can't pass language = 'en' when using the Inference API.
