AI2sql

AI2sql is a state-of-the-art LLM for converting natural language questions to SQL queries.

Model Card: Fine-tuning Llama 2 for AI2SQL Query Generation

This model card outlines the fine-tuning of the Llama 2 model to generate SQL queries for AI2SQL tasks.

Model Details

  • Original Model: NousResearch/Llama-2-7b-chat-hf
  • Model Type: Large Language Model
  • Fine-tuning Task: AI2SQL (SQL Query Generation)
  • Fine-tuned Model Name: llama-2-7b-miniguanaco

Implementation

  • Environment Requirement: GPU-enabled platform with at least 20 GB of RAM.
  • Dependencies: accelerate==0.21.0, peft==0.4.0, bitsandbytes==0.40.2, transformers==4.31.0, trl==0.4.7 (an installation and environment-check sketch follows this list)
  • GPU Specification: T4 or equivalent (as of 24 Aug 2023)
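
The pinned versions above can be installed with pip, for example `pip install accelerate==0.21.0 peft==0.4.0 bitsandbytes==0.40.2 transformers==4.31.0 trl==0.4.7`. The snippet below is a small, hypothetical environment check against those pins; `check_environment` is not part of the original training code.

```python
# Hypothetical environment check: confirms a CUDA GPU is visible and that the
# dependency versions pinned in this card are installed.
from importlib.metadata import PackageNotFoundError, version

import torch

PINNED = {
    "accelerate": "0.21.0",
    "peft": "0.4.0",
    "bitsandbytes": "0.40.2",
    "transformers": "4.31.0",
    "trl": "0.4.7",
}

def check_environment() -> None:
    assert torch.cuda.is_available(), "A CUDA-capable GPU (e.g. T4) is required."
    print("GPU:", torch.cuda.get_device_name(0))
    for pkg, want in PINNED.items():
        try:
            have = version(pkg)
        except PackageNotFoundError:
            print(f"{pkg}: not installed (expected {want})")
            continue
        status = "OK" if have == want else f"expected {want}"
        print(f"{pkg}=={have} ({status})")

if __name__ == "__main__":
    check_environment()
```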

Training Details

  • Dataset: WikiSQL
  • Method: Supervised Fine-Tuning (SFT)
  • Epochs: 1
  • Batch Size: 4 per GPU
  • Optimization: AdamW with cosine learning rate schedule
  • Learning Rate: 2e-4
  • Special Features (a configuration sketch follows this list):
    • LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning.
    • 4-bit model loading with BitsAndBytes quantization.
    • Gradient checkpointing and gradient clipping.
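
The sketch below shows one way the setup above could be wired together with transformers, peft, bitsandbytes, and trl at the pinned versions. It is illustrative rather than a reproduction of the actual training script: the LoRA rank/alpha/dropout, the NF4 quantization type, the paged AdamW variant, max_grad_norm, max_seq_length, and the WikiSQL prompt format are assumptions not stated in this card.

```python
# Illustrative SFT sketch for the configuration described above
# (transformers 4.31 / peft 0.4 / bitsandbytes 0.40 / trl 0.4.7).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

base_model = "NousResearch/Llama-2-7b-chat-hf"

# 4-bit model loading with BitsAndBytes (NF4 quantization type is an assumption).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model.config.use_cache = False  # required when gradient checkpointing is enabled

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# LoRA for parameter-efficient fine-tuning (r/alpha/dropout values are assumptions).
peft_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM"
)

# Hyperparameters stated in this card: 1 epoch, batch size 4 per GPU, AdamW with a
# cosine schedule, learning rate 2e-4, gradient checkpointing and clipping.
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    optim="paged_adamw_32bit",  # AdamW variant; the paged 32-bit flavor is an assumption
    max_grad_norm=0.3,          # clipping threshold is an assumption
    gradient_checkpointing=True,
    logging_steps=25,
)

def to_text(example):
    # Assumed prompt format pairing each question with its human-readable SQL.
    return {
        "text": f"Question: {example['question']}\nSQL: {example['sql']['human_readable']}"
    }

dataset = load_dataset("wikisql", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,  # assumed
    tokenizer=tokenizer,
)
trainer.train()
trainer.model.save_pretrained("llama-2-7b-miniguanaco")
```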

Performance Metrics

  • Accuracy: 85% (on a held-out test set from WikiSQL)
  • Query Generation Time: Average of 0.5 seconds per query
  • Resource Efficiency: Approximately 30% lower memory usage than the base model

Usage and Applications

TBD
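
Until this section is completed, the following minimal inference sketch illustrates one plausible way to query the model. It assumes the LoRA adapter is published at ai2sql/ai2sql_llama-2-7b and attaches to the NousResearch/Llama-2-7b-chat-hf base checkpoint; the prompt wording is illustrative, not an official prompt format.

```python
# Minimal inference sketch. Assumptions: the adapter lives at
# "ai2sql/ai2sql_llama-2-7b" and applies to the base chat checkpoint;
# the prompt format below is illustrative only.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "NousResearch/Llama-2-7b-chat-hf"
adapter = "ai2sql/ai2sql_llama-2-7b"  # assumed adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter)
model.eval()
device = next(model.parameters()).device

question = "How many employees joined after 2020?"
prompt = f"Translate the question into a SQL query.\nQuestion: {question}\nSQL:"

inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Print only the newly generated tokens (the SQL), not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Decoding here is greedy (do_sample=False), so the generated SQL is deterministic for a given prompt; sampling parameters can be tuned for the target application.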

Note: The performance metrics provided here are hypothetical and for illustrative purposes only. Actual performance would depend on various factors, including the specifics of the dataset and training regimen.
