---
license: apache-2.0
language:
- en
tags:
- text-to-sql
- text to sql
pretty_name: Presto/Athena Text to SQL Dataset
size_categories:
- 1K<n<10K
---
I created this dataset with [sqlglot](https://github.com/tobymao/sqlglot), auto-converting the Spider and WikiSQL datasets to Presto syntax and running some regexes for additional cleanup.
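
Roughly, the conversion parses each source query and re-emits it in the Presto dialect, followed by regex cleanup. The snippet below is a minimal sketch of that idea; the query and the cleanup regex are illustrative, not the exact rules used to build the dataset.

```python
import re

import sqlglot

# Illustrative Spider-style query (Spider gold SQL is close to SQLite syntax)
query = (
    "SELECT T1.name, COUNT(*) FROM singer AS T1 "
    "JOIN concert AS T2 ON T1.id = T2.singer_id "
    "GROUP BY T1.name LIMIT 5"
)

# Re-emit the query in the Presto dialect understood by AWS Athena
presto_query = sqlglot.transpile(query, read="sqlite", write="presto")[0]

# Example post-processing step; the dataset's actual cleanup regexes are not shown here
presto_query = re.sub(r"\s+", " ", presto_query).strip()
print(presto_query)
```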

An example use case is fine-tuning an existing model to respond with Presto/Athena SQL, provided it already performs well on the standard SQL syntax used by the major text-to-SQL training datasets.

Example of fine-tuning with this dataset (in this case Mistral 7B Instruct):
```python
import json
import pandas as pd
from datasets import Dataset

def read_jsonl(file_path):
    data = []
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            json_data = json.loads(line.strip())
            data.append(json_data)
    return data

# Read the train and validation files
train_data = read_jsonl('training_data/train.jsonl')
valid_data = read_jsonl('training_data/valid.jsonl')

# Convert to pandas DataFrames
train_df = pd.DataFrame(train_data)
valid_df = pd.DataFrame(valid_data)

# Convert the DataFrames to Hugging Face Datasets
train_dataset = Dataset.from_pandas(train_df)
valid_dataset = Dataset.from_pandas(valid_df)

# Example of processing
# train_texts = [example['text'] for example in train_dataset]
# valid_texts = [example['text'] for example in valid_dataset]

instruct_tune_dataset = {
    "train": train_dataset,
    "test": valid_dataset
}
...

def create_prompt(sample):
    """
    Update the prompt template:
    combine both the prompt and input into a single column.
    """
    bos_token = "<s>"
    original_system_message = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
    system_message = "Write a SQL query or use a function to answer the following question. Use the SQL dialect Presto for AWS Athena."
    question = sample["question"].replace(original_system_message, "").strip()
    response = sample["answer"].strip()
    eos_token = "</s>"

    full_prompt = ""
    full_prompt += bos_token
    full_prompt += "[INST] <<SYS>>" + system_message + "<</SYS>>\n\n"
    full_prompt += question + " [/INST] "
    full_prompt += response
    full_prompt += eos_token

    return full_prompt
...

from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    packing=True,
    formatting_func=create_prompt,  # applies the create_prompt template to both the train and eval datasets
    args=args,
    train_dataset=instruct_tune_dataset["train"],
    eval_dataset=instruct_tune_dataset["test"]
)
```
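
At inference time, the fine-tuned model would typically be given the same template minus the response, so it generates the SQL after the `[/INST]` tag. A minimal sketch, assuming a Transformers `model` and `tokenizer` are already loaded (the question, table schema, and generation settings below are illustrative):

```python
def create_inference_prompt(question):
    """Build a prompt in the same format used during fine-tuning, ending at [/INST]."""
    system_message = "Write a SQL query or use a function to answer the following question. Use the SQL dialect Presto for AWS Athena."
    # The tokenizer adds the <s> BOS token itself, so it is omitted here
    return "[INST] <<SYS>>" + system_message + "<</SYS>>\n\n" + question + " [/INST] "

prompt = create_inference_prompt("How many singers are older than 30? Table: singer(id, name, age)")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens (the SQL), skipping the prompt
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```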