How to pass "history_prompt" in the API inference?
Hi there,
How can I pass the "v2/en_speaker_6" history prompt to the model?
Hey @vivasvan100 , you can do it like this:
```python
from transformers import AutoProcessor, BarkModel

processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark")

voice_preset = "v2/en_speaker_6"

inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)

audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
```
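As a side note, Bark outputs a float waveform at a 24 kHz sample rate, which you can write straight to a WAV file. Here is a minimal, stdlib-only sketch; it uses a synthetic sine wave as a stand-in for `audio_array`, since the real values come from `model.generate` above:

```python
import math
import struct
import wave

SAMPLE_RATE = 24_000  # Bark's output sample rate (model.generation_config.sample_rate)

# Stand-in for `audio_array`: one second of a 440 Hz sine wave in [-1.0, 1.0].
audio_array = [math.sin(2 * math.pi * 440 * i / SAMPLE_RATE) for i in range(SAMPLE_RATE)]

with wave.open("bark_out.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit PCM
    f.setframerate(SAMPLE_RATE)
    # scale floats to signed 16-bit integers and pack them little-endian
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in audio_array))
```

In practice `scipy.io.wavfile.write` does the same in one call if SciPy is available.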
Thank you for your response; however, I would like to do it through the 🤗 Inference API or a 🤗 Inference Endpoint.
The code above runs on my local system, I believe. Is that correct?
My question is:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/suno/bark-small"
headers = {"Authorization": "Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "The answer to the universe is 42",
    # If possible... can I add a parameter here for the history prompt?
})
```
Sorry if this is a noob question; I am new to Hugging Face.
I believe you can first define some forward parameters:
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("suno/bark")

# run the processor once just to extract the voice preset
blank_inp = processor("blank", voice_preset="v2/en_speaker_1")
history_prompt = blank_inp["history_prompt"]

forward_params = {
    "history_prompt": history_prompt,
}
```
And then pass this to your request:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/suno/bark-small"
headers = {"Authorization": "Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "The answer to the universe is 42",
    "forward_params": forward_params,
})
```
I'm actually not sure it works; could you tell me what you get?
It does not work because the output of the processor contains tensors that are not JSON serialisable.
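To see why, here is a minimal reproduction, using a NumPy array as a stand-in for the torch tensors the processor returns:

```python
import json

import numpy as np

arr = np.arange(3)  # stand-in for one of the history_prompt tensors

# json.dumps raises TypeError on an ndarray (same for a torch tensor)
try:
    json.dumps({"history_prompt": arr})
    serialisable = True
except TypeError:
    serialisable = False
print(serialisable)  # → False

# converting to plain Python lists makes the payload serialisable
print(json.dumps({"history_prompt": arr.tolist()}))  # → {"history_prompt": [0, 1, 2]}
```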
I tried this fix:
```python
from transformers import AutoProcessor

blank_inp = AutoProcessor.from_pretrained("suno/bark")(
    "blank", voice_preset="v2/en_speaker_1"
)
history_prompt = blank_inp["history_prompt"]

# convert the tensors to plain Python lists so they are JSON serialisable
new_history_prompt = {}
for key, value in history_prompt.items():
    new_history_prompt[key] = value.cpu().numpy().tolist()

forward_params = {
    "history_prompt": new_history_prompt,
}
```
This runs through, but the speaker is not actually consistent... Any ideas?
FYI I managed to make it work using a custom handler. Feel free to use https://huggingface.co/skroed/bark-small
Wow, thanks for figuring that out! Leaving the discussion open for future reference!