Hi, how can I get a streaming response for a local deployment?

#8
by thirtyfl - opened

Thanks! My question is the same as the title.

I was attempting to deploy it on my local machine, but since it requires a GPU, I tried running it on a CPU instead. However, I encountered an error. If anyone has successfully achieved this, please share your experience here.

Could you show the error message? I ran the model locally successfully.

You can use a streamer to get a streaming response.

Hi, thanks for your reply. How do I use a streamer to get a streaming response? I can't find "streamer" anywhere; could you show me a demo?
I would be really grateful for it, thanks!

You can check this link:
https://huggingface.co/docs/transformers/internal/generation_utils#transformers.TextIteratorStreamer
It contains a demo; you just need to change the model path.
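For reference, here is a minimal sketch of that pattern: `TextIteratorStreamer` is consumed on the main thread while `generate()` runs in a background thread. The model path is a placeholder you would replace with your local checkpoint directory, and the prompt and generation settings are just example values.

```python
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

# Placeholder path; point this at your local checkpoint directory.
model_path = "path/to/your/local/model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")

# skip_prompt=True drops the input prompt from the streamed output;
# skip_special_tokens is forwarded to the tokenizer's decode call.
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# generate() blocks until it finishes, so run it in a background thread
# and read from the streamer on the main thread as tokens arrive.
generation_kwargs = dict(**inputs, streamer=streamer, max_new_tokens=128)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

for new_text in streamer:
    print(new_text, end="", flush=True)
thread.join()
```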

thirtyfl changed discussion status to closed
