
Introducing Hrida-T2SQL-3B-128k-V0.1, our latest small language model (SLM) tailored for data scientists and industry professionals. This release is a significant upgrade over our previous version, now equipped with an expanded 128k-token context window for handling large schemas and intricate data queries. Powered by the Phi-3 architecture, it converts natural language queries into precise SQL commands, improving data analysis efficiency and decision-making.

Prompt Template

### Instruction: 
Provide the system prompt.

### Dialect:
Specify the SQL dialect (e.g., MySQL, PostgreSQL, SQL Server, etc.).

### Context: 
Provide the database schema including table names, column names, and data types.

### Input: 
User's query.

### Response:
Expected SQL query output based on the input and context.
  • Instruction (System Prompt): This guides the model on processing input to generate the SQL query response effectively.
  • Dialect (Optional): Specify the SQL variant the model should use to ensure the generated query conforms to the correct syntax.
  • Context: Provide the database schema to the model for generating accurate SQL queries.
  • Input: Provide the user query for the model to comprehend and transform into an SQL query.
  • Response: Expected output from the model.
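
For reference, a filled-in prompt might look like the following (the dialect, schema, and question here are illustrative, not taken from the model's training data):

### Instruction:
You are a text-to-SQL assistant. Answer the user's question with a single SQL query that uses only the tables provided in the context.
### Dialect:
PostgreSQL
### Context:
CREATE TABLE Orders (OrderID INT PRIMARY KEY, CustomerID INT, Total DECIMAL(10, 2), OrderDate DATE);
### Input:
What is the total order value per customer?
### Response:
SELECT CustomerID, SUM(Total) AS TotalValue FROM Orders GROUP BY CustomerID;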

Chat Prompt Template

<s>
<|system|>
{ Instruction / System Prompt }
<|user|>
{ Context / User Query } <|end|>
<|assistant|>
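
If you use the Transformers tokenizer, you do not need to assemble these special tokens by hand: apply_chat_template produces the same layout from a list of messages. The snippet below is a minimal sketch, assuming the tokenizer ships a chat template matching the format above (the message contents are placeholders):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HridaAI/Hrida-T2SQL-3B-128k-V0.1", trust_remote_code=True)

messages = [
    # Instruction / System Prompt
    {"role": "system", "content": "Answer to the query will be in the form of an SQL query."},
    # Context / User Query (placeholder text)
    {"role": "user", "content": "### Context: <schema> ### Input: <question> ### Response:"},
]

# Should render the <|system|> / <|user|> / <|assistant|> layout shown above
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)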

Run the Model

Using Transformers

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Define the model and tokenizer
model_id = "HridaAI/Hrida-T2SQL-3B-128k-V0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, trust_remote_code=True)

# Define the context and prompt
prompt = """
Answer to the query will be in the form of an SQL query.
### Context: CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    Age INT,
    DepartmentID INT,
    Salary DECIMAL(10, 2),
    DateHired DATE,
    Active BOOLEAN,
    FOREIGN KEY (DepartmentID) REFERENCES Departments(DepartmentID)
); 

CREATE TABLE Departments (
    DepartmentID INT PRIMARY KEY,
    DepartmentName VARCHAR(100),
    Location VARCHAR(100)
); 
### Input: Write a SQL query to select all the employees who are active.
### Response:
"""
# Prepare the input
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)

# Generate the output
outputs = model.generate(inputs, max_length=300)
print(tokenizer.decode(outputs[0]))
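
Note that the call above decodes the entire sequence, prompt included. If you only want the generated SQL, one option (a sketch, not part of the original example) is to bound generation with max_new_tokens and decode only the tokens produced after the prompt:

# Generate, then decode only the newly produced tokens
outputs = model.generate(inputs, max_new_tokens=256)
generated_tokens = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(generated_tokens, skip_special_tokens=True))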

Using MLX

from mlx_lm import generate, load

# Load the model and tokenizer
model, tokenizer = load("HridaAI/Hrida-T2SQL-3B-128k-V0.1")

prompt = """
Answer to the query will be in the form of an SQL query.
### Context: CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    Age INT,
    DepartmentID INT,
    Salary DECIMAL(10, 2),
    DateHired DATE,
    Active BOOLEAN,
    FOREIGN KEY (DepartmentID) REFERENCES Departments(DepartmentID)
); 

CREATE TABLE Departments (
    DepartmentID INT PRIMARY KEY,
    DepartmentName VARCHAR(100),
    Location VARCHAR(100)
); ### Input: Write a SQL query to select all the employees who are active. ### Response:"""

# Generate the SQL query
response = generate(model=model, tokenizer=tokenizer, prompt=prompt, verbose=True)
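
The tokenizer returned by mlx_lm's load wraps the Hugging Face tokenizer, so the chat layout can also be rendered with apply_chat_template instead of passing the raw sectioned prompt. A minimal sketch, assuming the same prompt string as above:

# Wrap the sectioned prompt in the model's chat template before generating
messages = [{"role": "user", "content": prompt}]
chat_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model=model, tokenizer=tokenizer, prompt=chat_prompt, verbose=True)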