Apply for community grant: Academic project (gpu)

#1
by richardr1126 - opened

GPUs

I need more GPU access. I currently run the model server on my GTX 1080 with 32 GB of system RAM, splitting the model between CPU and GPU. The quality of the model output is nowhere near as good as it is with the Transformers library, which is why I still sometimes run the model on an A10G in this Space for evaluation. The cost is adding up quickly, though, and it is coming out of my own pocket.

Introduction

This project aims to use off-the-shelf large language models for text-to-SQL program synthesis. After experimenting with various models, fine-tuning hyperparameters, and training datasets, an optimal configuration was identified: fine-tuning the WizardLM/WizardCoder-15B-V1.0 base model with QLoRA on a customized Spider training dataset. The resulting model, richardr1126/spider-skeleton-wizard-coder-merged, achieves 63.7% execution accuracy on evaluation. The project uses a custom validation dataset that incorporates database context into each question. A live demonstration of the model is available as a Hugging Face Space, with a user-friendly GUI built on the Gradio library.
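The idea of incorporating database context into the question can be sketched as follows. This is illustrative only: the `build_prompt` helper and the prompt wording are assumptions for the sketch, not the exact template used by spider-skeleton-wizard-coder-merged.

```python
# Sketch: prepend the database schema (the "database context") to the
# natural-language question before sending the text to the model.
# The template wording here is hypothetical.

def build_prompt(schema: str, question: str) -> str:
    """Combine a database schema and a question into one text-to-SQL prompt."""
    return (
        "Convert the question into a SQL query using the database context.\n"
        f"Database context:\n{schema}\n"
        f"Question: {question}\n"
        "SQL:"
    )

schema = "CREATE TABLE singer (singer_id INT, name TEXT, age INT);"
question = "How many singers are there?"
print(build_prompt(schema, question))
```

Including the schema this way is what lets a single model generalize across the cross-domain databases in Spider, since each prompt carries the tables it must query.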

Spider Skeleton WizardCoder - test-suite-sql-eval Results

With temperature set to 0.0, top_p set to 0.9, and top_k set to 0, the model achieves 63.7% execution accuracy on the Spider dev set with database context.
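As a sketch, those settings correspond to a Hugging Face `generation_config.json` fragment along these lines. The `do_sample` value is an assumption: with temperature 0.0 decoding is effectively greedy, so sampling is disabled here.

```json
{
  "temperature": 0.0,
  "top_p": 0.9,
  "top_k": 0,
  "do_sample": false
}
```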

Note:

  • ChatGPT was evaluated with the default hyperparameters and with the system message: "You are a sophisticated AI assistant capable of converting text into SQL queries. You can only output SQL, don't add any other text."
  • Both models were evaluated with --plug_value in evaluation.py using the Spider dev set with database context.
    • --plug_value: if set, the gold value is plugged into the predicted query. This is suitable if your model does not predict values. It is off by default.
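For reference, an invocation of evaluation.py from the test-suite-sql-eval repository with that flag looks roughly like the following; the file paths are placeholders, not the actual files used in this project:

```shell
# Sketch of an execution-accuracy evaluation run with gold values plugged in.
python evaluation.py \
  --gold gold.sql \
  --pred predictions.sql \
  --db database/ \
  --table tables.json \
  --etype exec \
  --plug_value
```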

Spider Dataset

Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.

This dataset was used to fine-tune the model.
