
Is there integration with LangChain?

by DesmondChoy - opened

I've checked https://python.langchain.com/docs/integrations/llms/ but I can't find any option for integrating this model with LangChain.
I've installed the 4-bit model and was hoping to use it for a Q&A use case.
Would appreciate any help, thank you!

This isn't really a question about the model itself. If you have the know-how, you can integrate anything that generates text from input with LangChain.

You can run any llama-architecture model with a variety of frameworks, many of which have LangChain integrations. The simplest is probably transformers or CTransformers. Also fairly simple is Oobabooga's text-generation web UI, or any similar application you can run inference with. That tool offers several options for LangChain integration, including blocking and streaming APIs, or simply a local URL that you can plug straight into a LangChain function. The LangChain documentation has a large section on which inference frameworks have supported packages.
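As a concrete illustration of the CTransformers route, here's a minimal sketch of wiring a local 4-bit llama model into LangChain. It assumes you've done `pip install langchain-community ctransformers`; the model path, generation settings, and the prompt-building helper are all placeholders for whatever your Q&A setup actually needs.

```python
def build_qa_prompt(context: str, question: str) -> str:
    """Format a simple retrieval-style Q&A prompt (illustrative template)."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


def make_llm(model_path: str):
    """Create a LangChain LLM backed by a local quantised llama model.

    Requires `pip install langchain-community ctransformers`; the import
    is done lazily so the prompt helper above works without them installed.
    """
    from langchain_community.llms import CTransformers  # lazy import

    return CTransformers(
        model=model_path,  # path to your downloaded 4-bit model file
        model_type="llama",
        config={"max_new_tokens": 256, "temperature": 0.1},
    )


if __name__ == "__main__":
    # "path/to/your-4bit-model.bin" is a placeholder, not a real file.
    llm = make_llm("path/to/your-4bit-model.bin")
    prompt = build_qa_prompt(
        "LangChain can call local models through CTransformers.",
        "How can LangChain call a local model?",
    )
    print(llm.invoke(prompt))
```

Once you have the LLM object, it drops into any LangChain chain the same way a hosted model would.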
