This model is amazing, only one problem

#2, opened by spike4379

Its replies are fantastic! The only problem I have is that by the third reply it gets stuck generating infinitely and ooba has to be shut down; the same happens with TavernAI. (Running on a 4090.) Are there any suggestions for a way around this? It happens even with generations set to 1 in the Parameters tab.

Thank you for the feedback 😄
Unfortunately I cannot replicate this issue on KoboldAI, so it might be some odd setting in ooba? Sorry, I am not familiar enough with that inference backend to offer much help.
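
One general way to rule out the front end entirely is to load the checkpoint in a bare transformers call with a hard token cap and an explicit EOS token; if the model stops cleanly there, the runaway generation is likely an ooba setting rather than the model. This is a minimal sketch, not an ooba fix, and the model id is a placeholder, not this repo's actual path:

```python
# Minimal sketch: test whether the model stops on its own, outside any UI.
# "your/model-id" is a placeholder; substitute the real checkpoint path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your/model-id"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=200,                   # hard cap so generation cannot run forever
    eos_token_id=tokenizer.eos_token_id,  # stop as soon as the model emits EOS
    do_sample=True,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If the output here ends normally but still runs away in ooba, it is worth checking whether the UI is banning the EOS token or missing a stopping string for the chat format.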
