A text2sql T5 model, fine-tuned from Flan-T5-base. Code: Link. Further fine-tuning significantly improves the performance of the Flan-T5 model on Text-to-SQL tasks.
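The actual training code is in the linked repository; as a rough illustration of the approach, a minimal fine-tuning sketch with `Seq2SeqTrainer` on hypothetical (prompt, SQL) pairs might look like the following. The dataset contents and hyperparameters below are placeholders, not the ones used for this model.

```python
from datasets import Dataset
from transformers import (
    T5Tokenizer,
    T5ForConditionalGeneration,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")

# Hypothetical text-to-SQL pairs; replace with a real training set.
raw = [
    {
        "prompt": "Given a SQL table named 'my_data' with the following columns: ... Q: How many rows are there in the table?",
        "sql": "SELECT COUNT(*) FROM my_data",
    },
]

def preprocess(example):
    # Tokenize the prompt as input and the SQL query as the target labels.
    model_inputs = tokenizer(example["prompt"], truncation=True, max_length=512)
    labels = tokenizer(text_target=example["sql"], truncation=True, max_length=512)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = Dataset.from_list(raw).map(preprocess, remove_columns=["prompt", "sql"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="flan-t5-text2sql", num_train_epochs=3),
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```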
Inference Example:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration, pipeline

# Schema of the table the model should query against.
table_columns = "Transaction_ID, Platform, Product_ID, User_ID, Transaction_Amount, Region, Transaction_Time, Transaction_Unit, User_Comments"
table_name = "my_data"

# Prompt template; {question} is filled in at inference time.
PROMPT_INPUT = f"""
Given a SQL table named '{table_name}' with the following columns:
{table_columns}
Construct a SQL query to answer the following question:
Q: {{question}}.
"""

model_id = "kevinng77/chat-table-flan-t5"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

input_text = PROMPT_INPUT.format_map({"question": "How many rows are there in the table?"})

pipe = pipeline(
    "text2text-generation",
    model=model, tokenizer=tokenizer, max_length=512
)

# Generate the SQL query for the question.
print(pipe(input_text)[0]["generated_text"])
```
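The same generation can also be done without the `pipeline` helper. A minimal sketch calling `model.generate` directly, reusing the `tokenizer`, `model`, and `input_text` defined above:

```python
# Alternative to the pipeline: run generation on the model directly.
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```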