
Cannot run with LangChain

#13
by alifatmi - opened

I fine-tuned the model on my dataset and it gives very good results. But when I integrate it with LangChain and use its ConversationBufferWindowMemory, it works with k=1; when I increase the window size, it no longer answers the second question.
I think it's a memory issue. How can I solve it?
When I run the 7B model, it works fine.
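For context, ConversationBufferWindowMemory(k=N) keeps only the last N human/AI exchanges and prepends them to each new prompt, so a larger k means a longer prompt sent to the model. The sketch below illustrates that sliding-window behavior in plain Python; the `WindowMemory` class is a hypothetical stand-in for illustration, not the LangChain API itself:

```python
from collections import deque


class WindowMemory:
    """Sliding-window chat memory: keeps only the last k exchanges,
    mirroring the idea behind LangChain's ConversationBufferWindowMemory(k=k).
    Hypothetical illustration class, not the real LangChain implementation."""

    def __init__(self, k: int):
        # deque with maxlen automatically drops the oldest exchange
        # once more than k exchanges have been stored
        self.turns = deque(maxlen=k)

    def save_context(self, human: str, ai: str) -> None:
        self.turns.append((human, ai))

    def history(self) -> str:
        # Rendered history that would be prepended to the next prompt;
        # a larger k produces a longer prompt for the model to process.
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)


mem = WindowMemory(k=1)
mem.save_context("What is GPT-NeoX?", "A transformer architecture.")
mem.save_context("Who trained it?", "EleutherAI.")
# With k=1, only the most recent exchange survives in the prompt context
print(mem.history())
```

Because every remembered exchange is fed back into the next prompt, increasing k grows the prompt length and the model's memory footprint, which is one reason a smaller model may cope where a larger window fails.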
