Fine Tuning for Natural Language to SQL Conversion

#11
by anshukpal - opened

We have a requirement to generate SQL from natural language. We used the codellama/CodeLlama-13b-Instruct-hf model and are getting answers, but we understand the model needs a good amount of further fine-tuning on our specific use case, questions, and related schema. We followed https://github.com/ragntune/code-llama-finetune/blob/main/fine-tune-code-llama.ipynb with our own custom dataset.
However, we don't see much improvement at all: queries that the pre-trained model could not answer are still not answered by the fine-tuned model, even though those exact queries were included in the fine-tuning data.
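
For context, each row of the custom dataset is flattened into a single training prompt roughly along these lines (a simplified sketch; the template wording, schema, and question below are placeholders rather than our real data or the notebook's exact template):

```python
# Illustrative sketch only: template wording, schema, and question are placeholders.
def build_example(question: str, schema: str, sql: str) -> str:
    return (
        "You are a text-to-SQL model. Using the schema below, "
        "write a SQL query that answers the question.\n\n"
        f"### Schema:\n{schema}\n\n"
        f"### Question:\n{question}\n\n"
        f"### SQL:\n{sql}"
    )

print(build_example(
    question="How many orders were placed in 2023?",
    schema="CREATE TABLE orders (id INT, order_date DATE, amount DECIMAL(10, 2))",
    sql="SELECT COUNT(*) FROM orders WHERE YEAR(order_date) = 2023;",
))
```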

What I am wondering is: was the fine-tuned model actually trained properly, and is the training dataset formatted correctly?
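
One thing I want to rule out is that the inference script is silently re-running the plain base model instead of the fine-tuned one, so I am double-checking that the LoRA adapter is actually attached at inference time, roughly like this (assuming the standard PEFT workflow from the notebook; the adapter path is a placeholder):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-13b-Instruct-hf"
adapter_dir = "./sql-code-llama"  # placeholder path to the fine-tuned LoRA checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the LoRA adapter; without this step the evaluation just re-runs the base model.
model = PeftModel.from_pretrained(model, adapter_dir)
model.eval()
```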

Any thoughts here?

Anshuk
