Improve model card: add GitHub link and usage example
This PR improves the model card for SLM-SQL by adding a prominent link to the GitHub repository and a Python code example for model usage with the `transformers` library. This makes it easier for users to find the code and get started with the model.
A minor formatting fix for percentage signs in the introduction has also been included.
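The new usage example in the diff below sends a bare question to the model; as its inline comment notes, Text-to-SQL models often need schema context as well. Here is a minimal sketch of assembling such a schema-aware prompt. The `build_text_to_sql_prompt` helper, the `CREATE TABLE` statements, and the prompt wording are illustrative assumptions, not the model's documented training format.

```python
# Sketch: combine a database schema with a natural-language question into a
# single prompt string, which can then be passed to the chat template in the
# usage example below. The prompt layout here is an assumption for illustration.

def build_text_to_sql_prompt(question: str, schema_ddl: str) -> str:
    """Wrap a question with schema DDL so the model sees the table structure."""
    return (
        "Given the following database schema:\n\n"
        f"{schema_ddl}\n\n"
        f"Write a SQL query to answer: {question}"
    )

# Hypothetical schema for the example question used in the diff.
schema = """CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER REFERENCES customers(id));"""

prompt = build_text_to_sql_prompt(
    "Which customers placed orders in New York?", schema
)
print(prompt)
```

The resulting string would replace the plain `prompt` variable in the example below, so the model receives both the tables and the question in one user turn.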
README.md CHANGED

@@ -1,20 +1,20 @@
 ---
-pipeline_tag: text-generation
 library_name: transformers
 license: cc-by-nc-4.0
+pipeline_tag: text-generation
 tags:
 - text-to-sql
 - reinforcement-learning
 ---
 
-
 # SLM-SQL: An Exploration of Small Language Models for Text-to-SQL
 
 ### Important Links
 
 📄[Arxiv Paper](https://arxiv.org/abs/2507.22478) |
-🤗[HuggingFace](https://huggingface.co/collections/cycloneboy/slm-sql-688b02f99f958d7a417658dc) |
-🤗[ModelScope](https://modelscope.cn/collections/SLM-SQL-624bb6a60e9643) |
+🤗[HuggingFace Collection](https://huggingface.co/collections/cycloneboy/slm-sql-688b02f99f958d7a417658dc) |
+🤗[ModelScope Collection](https://modelscope.cn/collections/SLM-SQL-624bb6a60e9643) |
+📄[GitHub Repository](https://github.com/CycloneBoy/slm_sql)
 
 ## News
 

@@ -55,6 +55,42 @@ Performance Comparison of different Text-to-SQL methods on BIRD dev and test dat
 
 <img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_ablation_study.png" height="300" alt="slmsql_ablation_study">
 
+## Usage
+
+This model can be loaded and used directly with the Hugging Face `transformers` library. Below is a basic example for Text-to-SQL generation.
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+
+# Load the tokenizer and model
+model_name = "cycloneboy/SLM-SQL-0.5B"  # You can replace this with other models from the table below
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16)
+
+# Example text-to-SQL query
+# For Text-to-SQL, you may also need to provide schema information, depending on the model's training.
+prompt = "Give me the SQL query for customers who placed orders in New York."
+
+# For chat models such as Qwen2.5-Coder-0.5B-Instruct, it is often best to use the chat template:
+messages = [
+    {"role": "user", "content": prompt}
+]
+chat_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+# Tokenize the input
+input_ids = tokenizer(chat_input, return_tensors="pt").input_ids.to(model.device)
+
+# Generate the SQL query
+# Adjust generation parameters as needed, e.g. max_new_tokens, do_sample, temperature, top_p, num_beams.
+generated_ids = model.generate(input_ids, max_new_tokens=100, num_beams=1, do_sample=False)
+
+# Decode only the newly generated tokens (the prompt is sliced off), with
+# skip_special_tokens=True to remove special tokens from the output.
+generated_text = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
+print(generated_text)
+```
+
 ## Model
 
 | **Model** | Base Model | Train Method | Modelscope | HuggingFace |