Instructions to use thanhdath/FINER-SQL-0.5B-Spider with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers

How to use thanhdath/FINER-SQL-0.5B-Spider with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="thanhdath/FINER-SQL-0.5B-Spider")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("thanhdath/FINER-SQL-0.5B-Spider")
model = AutoModelForCausalLM.from_pretrained("thanhdath/FINER-SQL-0.5B-Spider")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use thanhdath/FINER-SQL-0.5B-Spider with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "thanhdath/FINER-SQL-0.5B-Spider"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "thanhdath/FINER-SQL-0.5B-Spider",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/thanhdath/FINER-SQL-0.5B-Spider
```
- SGLang

How to use thanhdath/FINER-SQL-0.5B-Spider with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "thanhdath/FINER-SQL-0.5B-Spider" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "thanhdath/FINER-SQL-0.5B-Spider",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "thanhdath/FINER-SQL-0.5B-Spider" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "thanhdath/FINER-SQL-0.5B-Spider",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use thanhdath/FINER-SQL-0.5B-Spider with Docker Model Runner:

```shell
docker model run hf.co/thanhdath/FINER-SQL-0.5B-Spider
```
---
base_model:
- griffith-bigdata/Qwen-2.5-Coder-0.5B-SQL-Writer
license: apache-2.0
language:
- en
tags:
- text-to-sql
- spider
- grpo
- finer-sql
- code
library_name: transformers
pipeline_tag: text-generation
---
FINER-SQL-0.5B-Spider

A small but capable 0.5B-parameter Text-to-SQL model fine-tuned from
griffith-bigdata/Qwen-2.5-Coder-0.5B-SQL-Writer
with GRPO and the FINER-SQL dense rewards (Memory + Atomic).

75.0% Execution Accuracy on Spider Dev (n=30, value-aware voting). Runs on a 4-8 GB GPU.

See other models: https://huggingface.co/collections/griffith-bigdata/finer-sql
GitHub: https://github.com/thanhdath/finer-sql/tree/main
FINER-SQL Model Family: Comparison Across All Sizes
| Model | Params | BIRD Dev (n=30, vav) | Spider Dev (n=30, vav, +agg_hint) |
|---|---|---|---|
| FINER-SQL-3B-BIRD | 3 B | **67.54%** | 83.8% |
| FINER-SQL-3B-Spider | 3 B | 63.04% | **85.10%** |
| FINER-SQL-0.5B-BIRD | 0.5 B | **50.85%** | 68.6% |
| FINER-SQL-0.5B-Spider (this model) | 0.5 B | TBD | **75.0%** |
The 0.5B Spider model is 6.4 pp better than the 0.5B BIRD model on Spider Dev, confirming that dataset-specific specialisation matters even at small scales. Bold marks the best score for each benchmark within a size class.
Inference
Quick start (vLLM)
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="griffith-bigdata/FINER-SQL-0.5B-Spider",
    dtype="bfloat16",
    max_model_len=4096,
    gpu_memory_utilization=0.7,
)

system_prompt = """You are a meticulous SQL expert. Generate a single, correct SQL query for the user question and the provided database schema.
Follow this exact response format:
Rules:
- Output exactly one SQL statement.
- The SQL must be executable on SQLite.
- Do not include any explanatory text.
- Output one SQL statement only. Do not include any extra text, tags, or code fences."""

sampling = SamplingParams(n=30, temperature=1.0, max_tokens=2048)

# `schema` holds the database DDL and `question` the natural-language question.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Database Schema:\n{schema}\n\nQuestion: {question}"},
]
output = llm.chat(messages, sampling)
candidate_sqls = [c.text.split("</think>")[-1].strip() for c in output[0].outputs]
# Apply majority voting (vav); see the GitHub repo
```
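The `schema` string above is assumed to exist already. For Spider-style SQLite databases, one common choice is to dump the `CREATE TABLE` statements straight from the database file. This is a minimal sketch; `dump_schema` is a hypothetical helper, not the repo's exact serialisation:

```python
import sqlite3

def dump_schema(db_path):
    """Concatenate the CREATE statements of all user tables in a SQLite file."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT sql FROM sqlite_master WHERE type='table' AND sql IS NOT NULL"
    ).fetchall()
    con.close()
    return "\n\n".join(sql for (sql,) in rows)
```

The resulting string is passed as `schema` in the user message above.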
Recommended evaluation pipeline
- Generate n=30 candidates with temperature=1.0
- Execute each candidate; group results
- Pick from the largest non-empty success group (value-aware voting, "vav")
- Score with the official Spider evaluator (`test_suite_sql_eval`)
This pipeline gives 75.0% Spider Dev EX (75.44% MV).
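Step 3 can be sketched as grouping candidates by their execution result and keeping the largest non-empty group. This is only an illustration of the idea; the function name, fallback, and tie-breaking are assumptions, not the repo's implementation:

```python
import sqlite3
from collections import defaultdict

def value_aware_vote(candidate_sqls, db_path):
    """Return one SQL from the largest group of candidates that execute
    to the same non-empty result set (a sketch of value-aware voting)."""
    groups = defaultdict(list)
    for sql in candidate_sqls:
        try:
            con = sqlite3.connect(db_path)
            rows = con.execute(sql).fetchall()
            con.close()
        except sqlite3.Error:
            continue  # candidates that fail to execute never vote
        if rows:  # only non-empty results form voting groups
            groups[frozenset(rows)].append(sql)  # order-insensitive result key
    if not groups:
        return candidate_sqls[0]  # fall back to the first candidate
    return max(groups.values(), key=len)[0]
```

Grouping on the executed values (rather than the SQL text) is what makes the vote "value-aware": syntactically different queries that return the same rows reinforce each other.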
Detailed Spider Dev results (n=30, vav)
| Hardness | Count | Execution Accuracy |
|---|---|---|
| Easy | 248 | 91.9% |
| Medium | 446 | 82.5% |
| Hard | 174 | 62.6% |
| Extra Hard | 166 | 42.8% |
| All | 1034 | 75.0% |
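The "All" row is consistent with the per-hardness breakdown: it is the count-weighted average of the four buckets, as a quick check shows.

```python
# Count-weighted average of per-hardness Execution Accuracy (numbers from the table).
counts = {"easy": 248, "medium": 446, "hard": 174, "extra hard": 166}
ex     = {"easy": 91.9, "medium": 82.5, "hard": 62.6, "extra hard": 42.8}
overall = sum(counts[h] * ex[h] for h in counts) / sum(counts.values())
print(round(overall, 1))  # -> 75.0
```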
Recall@30: 85.11% (any-correct rate among 30 candidates).
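Recall@30 and voted accuracy answer different questions: recall counts questions where any of the 30 candidates is correct, so it upper-bounds what any selection scheme (vav included) can achieve. A toy computation, with names assumed for illustration:

```python
def recall_at_k(correct):
    """correct[i][j] is True if candidate j for question i matches the gold
    execution result; Recall@k = fraction of questions with any correct candidate."""
    return sum(any(row) for row in correct) / len(correct)

# Toy matrix: 4 questions, 3 candidates each.
matrix = [
    [False, True, False],   # recoverable by a good selector
    [False, False, False],  # no correct candidate: lost for any selector
    [True, True, True],
    [True, False, False],
]
print(recall_at_k(matrix))  # -> 0.75
```

The gap between Recall@30 (85.11%) and vav accuracy (75.0%) is the headroom a better selection scheme could still capture.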
Training
| Parameter | Value |
|---|---|
| Base model | griffith-bigdata/Qwen-2.5-Coder-0.5B-SQL-Writer |
| Algorithm | GRPO |
| Train data | Spider train (8,659 samples) |
| Total steps | 2000 (this checkpoint = 2000) |
| Learning rate | 8e-6 |
| Num generations per prompt | 32 |
| Gradient accumulation | 32 |
| Max completion length | 2048 |
| Max prompt length | 1500 |
| Temperature (rollout) | 1.0 |
| Selection during eval | vav (value-aware voting) |
| Rewards | Execution + Atomic + Memory + Format |
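GRPO scores each prompt's 32 rollouts against their own group statistics rather than a learned critic. The normalisation step can be sketched as follows; this is a generic group-relative baseline, not this repo's trainer code:

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: centre each rollout's reward on the group
    mean and scale by the group standard deviation."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # a zero-variance group gives no signal
    return [(r - mean) / std for r in rewards]

# Two of four rollouts earn the execution reward:
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # -> [1.0, -1.0, 1.0, -1.0]
```

Dense rewards (Atomic, Memory, Format) matter here precisely because they break ties inside a group where the binary execution reward alone would be all-zero or all-one.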
License
Inherits the base model's license (Apache 2.0).
Citation
```bibtex
@article{finer-sql-2026,
  title  = {FINER-SQL: Fine-grained reasoning rewards for small Text-to-SQL models},
  author = {Thanh Dat and others},
  year   = {2026},
}
```