This is a LLaMA LoRA fine-tuned on top of WizardLM-7B with this dataset: https://huggingface.co/datasets/paolorechia/medium-size-generated-tasks It's meant mostly as a proof of concept to see how fine-tuning may improve the performance of coding agents that rely on the Langchain framework.

To use this LoRA, you can use my repo as starting point: https://github.com/paolorechia/learn-langchain
