Model Card for Llama 2 Fine-Tuned for SQL Programming

This model card presents the enhanced Llama 2 model, fine-tuned for SQL programming and deployed using the Yale High Performance Computing (HPC) platform. The project focuses on leveraging the computational power of Yale HPC to push the boundaries of what Large Language Models (LLMs) can achieve, specifically in the context of SQL programming.

Model Details

Model Description

This model is aimed at advancing the capabilities of LLMs in programming languages, with a focus on SQL. The Llama 2 model, developed by Meta AI, serves as its foundation. It has been fine-tuned using a Parameter-Efficient Fine-Tuning (PEFT) approach, integrating techniques such as Low-Rank Adaptation (LoRA) and Retrieval-Augmented Generation (RAG) to enhance its SQL programming assistance capabilities.

  • Developed by: Kaifeng Gao, Jiayi Chen, Yuntian Liu, Yixiao Chen
  • Model type: Large Language Model (Llama 2)
  • Language(s) (NLP): English
  • License: TBD
  • Finetuned from model: Llama 2
  • Model size: 6.74B parameters (FP16)

Model Sources

[More Information Needed]

Uses

Direct Use

The model is designed to assist developers in writing efficient and accurate SQL queries by providing contextually relevant suggestions and explanations. It can be directly used by SQL programmers of all skill levels to improve their query writing process.

Downstream Use

The model can serve as a backend for educational tools, IDE plugins, or other applications that require SQL query generation or optimization.

Out-of-Scope Use

The model is not intended for tasks far removed from SQL programming, nor for applications that require real-time interaction with live databases.

Bias, Risks, and Limitations

The model's performance and output quality are directly tied to the training dataset. As such, any biases or inaccuracies in the dataset could be reflected in the model's suggestions.

Recommendations

Users should verify the model's suggestions against best practices and the latest SQL standards. Ongoing evaluation and refinement with updated datasets are recommended to mitigate biases and improve performance.

How to Get Started with the Model

Refer to the project's GitHub repository for detailed instructions on deploying and interacting with the model through the Streamlit web application.
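
If the fine-tuned weights are distributed as a LoRA adapter, a typical loading pattern with the transformers and peft libraries looks like the sketch below. This is a minimal sketch, not the project's documented setup: the adapter repository id is a placeholder, and the base checkpoint is assumed to be meta-llama/Llama-2-7b-hf.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"         # assumed base checkpoint (gated repo)
ADAPTER_REPO = "your-username/llama2-sql-lora"  # placeholder: substitute the real adapter id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)

prompt = "Write a SQL query that returns the five customers with the highest total order value."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```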

Training Details

Training Data

The model was fine-tuned on the "b-mc2/sql-create-context" dataset, which pairs SQL questions with their corresponding answers and table-schema context, covering a broad range of SQL concepts.
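
The dataset is publicly available on the Hugging Face Hub; below is a minimal sketch for inspecting it with the datasets library (the field names are taken from the public dataset card):

```python
from datasets import load_dataset

# Each record pairs a natural-language question with the CREATE TABLE
# statement that defines its schema context, plus the target SQL answer.
ds = load_dataset("b-mc2/sql-create-context", split="train")

example = ds[0]
print(example["question"])  # natural-language question
print(example["context"])   # CREATE TABLE schema context
print(example["answer"])    # reference SQL query
```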

Training Procedure

Preprocessing

The dataset was restructured into a format conducive to efficient learning, using templates that transform each raw record into a series of instructions and answers (refer to Tommy0303000/preprocessed-sql-create-context).
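
The exact template is documented in the preprocessed dataset linked above; the snippet below is a hypothetical illustration of this kind of instruction formatting, not the project's actual template.

```python
def to_instruction(example: dict) -> str:
    # Hypothetical template: the real format used for
    # Tommy0303000/preprocessed-sql-create-context may differ.
    return (
        "### Instruction:\n"
        "Given the schema below, answer the question with a SQL query.\n\n"
        f"### Schema:\n{example['context']}\n\n"
        f"### Question:\n{example['question']}\n\n"
        f"### Answer:\n{example['answer']}"
    )
```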

Training Hyperparameters

  • Training regime: The model leveraged LoRA for efficient adaptation, alongside model quantization to reduce the memory footprint.
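
Exact hyperparameter values are not reported here; the sketch below shows a representative way to combine 4-bit quantization (via bitsandbytes) with a LoRA configuration in peft. All values are illustrative assumptions, not the settings actually used.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the frozen base weights to 4 bits to cut the memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters to the attention projections.
lora_config = LoraConfig(
    r=16,                                # illustrative rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only the adapters are trainable
```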

Evaluation

After fine-tuning and RAG integration, the model showed significant improvement in generating correct SQL syntax and in providing comprehensive information drawn from SQL tutorial websites. Some responses may still contain minor syntax errors, attributable to the quality of the initial training dataset.
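
This card does not specify how retrieval was implemented; the sketch below illustrates the general RAG pattern, assuming sentence-transformers for embedding tutorial snippets. The embedding model, corpus, and prompt format are all placeholder assumptions.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical corpus; in practice, snippets curated from SQL tutorial sites.
corpus = [
    "A LEFT JOIN returns every row from the left table, with NULLs where no match exists.",
    "GROUP BY groups rows sharing a value so aggregate functions apply per group.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

query = "How do I total order values per customer?"
query_emb = embedder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)[0]

# Prepend the retrieved snippet to the generation prompt.
retrieved = corpus[hits[0]["corpus_id"]]
prompt = f"Context:\n{retrieved}\n\nQuestion: {query}\nSQL answer:"
```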

Environmental Impact

Training on Yale HPC was intended to make efficient use of compute, though specific metrics on carbon emissions and electricity usage are pending further analysis.

Citation

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary

[More Information Needed]

More Information

[More Information Needed]

Model Card Authors

[More Information Needed]

Model Card Contact

[More Information Needed]
