The model's responses are gibberish?!

#2 opened by taesiri

Hello! First of all, thank you for the demo.

Is it me, or is the model generating gibberish with every message or system prompt?

Tonic AI org

πŸ‘‹πŸ»hey there !

Thanks for raising this issue!

Based on the screenshots below, I've now set some workable parameter choices as defaults; that should help.

(Screenshots attached: Screenshot 2023-11-22 023338.png, Screenshot 2023-11-22 023333.png)

Basically, the way I handle memory here means that if several folks use the Space at the same time they can easily overload the machine, but it should be better now (fewer new tokens).

Normally when the response comes back, it's two full responses (with high new-token queries), and perhaps I should look into a good way to parse and present this information.
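For reference, here's a rough sketch of the kind of generation-length cap I mean, using plain `transformers`; the model id and parameter values below are placeholders, not the Space's actual defaults (see the screenshots above for those):

```python
# Rough sketch of capping generation length (values are placeholders,
# not the Space's actual defaults).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Orca-2-13b"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what Orca 2 is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# A lower max_new_tokens means each request finishes sooner, so several
# concurrent users are less likely to exhaust the GPU.
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,  # placeholder cap
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```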

Please be patient as Orca2 rebuilds 🙏🏻 Thanks a lot for testing! 🚀

Tonic AI org

@taesiri In the end I gave up; there was no quick and painless way for me to add session memory in 5 minutes or less ^^ Hope you'll forgive me for departing from the Microsoft example code ^^ 🚀
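For anyone curious, the usual pattern for per-session memory in Gradio is a `gr.State` holding the chat history; this is just a generic sketch, not what the Space actually runs:

```python
import gradio as gr

def chat(message, history):
    # history is a per-session list of (user, assistant) pairs kept in gr.State
    reply = f"You said: {message}"  # placeholder for the real model call
    history = history + [(message, reply)]
    return history, history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    state = gr.State([])  # each browser session gets its own copy
    box = gr.Textbox()
    box.submit(chat, inputs=[box, state], outputs=[chatbot, state])

demo.launch()
```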

Hey @Tonic

I haven't checked your code, but if multiple simultaneous users are the issue, the solution should be enabling the queue in Gradio.
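Something along these lines (just a generic sketch, not your Space's actual code):

```python
import gradio as gr

def respond(message):
    # placeholder for the actual Orca-2 generation call
    return f"Echo: {message}"

demo = gr.Interface(fn=respond, inputs="text", outputs="text")

# The queue serializes incoming requests so simultaneous users wait
# their turn instead of all hitting the GPU at once.
demo.queue(max_size=20)
demo.launch()
```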
