---
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B-Instruct
license: llama3
language:
  - pt
tags:
  - code
  - sql
  - finetuned
  - portugues-BR
co2_eq_emissions:
  emissions: 1450
  source: >-
    Lacoste, Alexandre, et al. “Quantifying the Carbon Emissions of Machine
    Learning.” ArXiv (Cornell University), 21 Oct. 2019,
    https://doi.org/10.48550/arxiv.1910.09700.
  training_type: fine-tuning
  geographical_location: Council Bluffs, Iowa, USA.
  hardware_used: 1 A100 40GB GPU
---

Lloro SQL


Lloro SQL, developed by Semantix Research Labs, is a language model trained to transform Portuguese questions into SQL code. It is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct, trained on GretelAI public datasets. The fine-tuning was performed with the QLoRA methodology on a single A100 GPU with 40 GB of memory.

Model description

Model type: An 8B parameter model fine-tuned on GretelAI public datasets.

Language(s) (NLP): Primarily Portuguese, but the model can also understand English.

Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct

What are Lloro's intended uses?

Lloro is built for Text2SQL in Portuguese contexts.

Input: Text

Output: Text (SQL code)

Usage

Using an OpenAI-compatible inference server (like vLLM)

from openai import OpenAI

# Point the client at the OpenAI-compatible endpoint exposed by the inference server.
client = OpenAI(
    api_key="EMPTY",
    base_url="http://localhost:8000/v1",
)

def generate_responses(instruction, client=client):
    # The system prompt constrains the model to answer with a single SQL statement and nothing else.
    chat_response = client.chat.completions.create(
        model="<model>",  # replace with the name the server is serving the model under
        messages=[
            {"role": "system", "content": "Você escreve a instrução SQL que responde às perguntas feitas. Você NÃO FORNECE NENHUM COMENTÁRIO OU EXPLICAÇÃO sobre o que o código faz, apenas a instrução SQL terminando em ponto e vírgula. Você utiliza todos os comandos disponíveis na especificação SQL, como: [SELECT, WHERE, ORDER, LIMIT, CAST, AS, JOIN]."},
            {"role": "user", "content": instruction},
        ],
    )
    return chat_response.choices[0].message.content

user_prompt = "Liste o nome e o e-mail de todos os clientes cadastrados em 2023."  # example question (illustrative)
output = generate_responses(user_prompt)
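
For local inference without a serving layer, here is a minimal sketch using the transformers library. The checkpoint id is a placeholder, and the generation settings are illustrative, not the values used in the evaluations below.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<model>"  # placeholder: replace with the Lloro SQL checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

system_prompt = "..."  # use the same system prompt shown in the example above
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Liste o nome e o e-mail de todos os clientes cadastrados em 2023."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens (the SQL statement).
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))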

Params

Training Parameters

| Params | Training Data | Examples | Tokens | LR |
|---|---|---|---|---|
| 8B | GretelAI public datasets + Synthetic Data | 102,970 | 18,654,222 | 2e-4 |

Model Sources

GretelAI: https://huggingface.co/datasets/gretelai/synthetic_text_to_sql
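
To inspect the source data, a quick look at the GretelAI dataset with the datasets library:

from datasets import load_dataset

# Load the public GretelAI text-to-SQL dataset referenced above.
ds = load_dataset("gretelai/synthetic_text_to_sql")
print(ds)
print(ds["train"][0])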

Performance

Test Dataset

| Model | LLM as Judge | CodeBLEU Score | ROUGE-L | CodeBERT Precision | CodeBERT Recall | CodeBERT F1 | CodeBERT F3 |
|---|---|---|---|---|---|---|---|
| Llama 3 8B | 65.48% | 0.4583 | 0.6361 | 0.8815 | 0.8871 | 0.8835 | 0.8862 |
| Lloro - SQL | 71.33% | 0.6512 | 0.7965 | 0.9458 | 0.9469 | 0.9459 | 0.9466 |
| GPT - 3.5 Turbo | 67.52% | 0.6232 | 0.9967 | 0.9151 | 0.9152 | 0.9142 | 0.9175 |
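
As an illustration of how one of the columns above can be reproduced, a ROUGE-L computation with the evaluate library (the prediction/reference pair is made up):

import evaluate

rouge = evaluate.load("rouge")
predictions = ["SELECT nome, email FROM clientes WHERE ano_cadastro = 2023;"]
references = ["SELECT nome, email FROM clientes WHERE ano_cadastro = 2023;"]
# "rougeL" is the longest-common-subsequence variant reported in the table above.
print(rouge.compute(predictions=predictions, references=references)["rougeL"])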

Database Benchmark

| Model | Score |
|---|---|
| Llama 3 - Base | 35.55% |
| Lloro - SQL | 49.48% |
| GPT - 3.5 Turbo | 46.15% |

Translated BIRD Benchmark - https://bird-bench.github.io/

| Model | Score |
|---|---|
| Llama 3 - Base | 33.87% |
| Lloro - SQL | 47.14% |
| GPT - 3.5 Turbo | 42.14% |

Training Details

The following hyperparameters were used during training:

| Parameter | Value |
|---|---|
| learning_rate | 2e-4 |
| weight_decay | 0.001 |
| train_batch_size | 16 |
| eval_batch_size | 8 |
| seed | 42 |
| optimizer | Adam - adamw_8bit |
| lr_scheduler_type | cosine |
| num_epochs | 4.0 |
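
A sketch of how these values map onto transformers TrainingArguments. The output directory is a placeholder, and the card's adamw_8bit optimizer is assumed to correspond to the bitsandbytes 8-bit AdamW, exposed in transformers as adamw_bnb_8bit.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lloro-sql-finetune",  # placeholder output directory
    learning_rate=2e-4,
    weight_decay=0.001,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_bnb_8bit",  # bitsandbytes 8-bit AdamW (listed as adamw_8bit above)
    lr_scheduler_type="cosine",
    num_train_epochs=4.0,
)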

QLoRA hyperparameters

The following parameters related to Quantized Low-Rank Adaptation (QLoRA) and quantization were used during training:

| Parameter | Value |
|---|---|
| lora_r | 64 |
| lora_alpha | 128 |
| lora_dropout | 0 |
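
A sketch of the corresponding peft / bitsandbytes configuration. The 4-bit quantization settings and target_modules are typical QLoRA choices, not values stated on this card.

import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization for QLoRA (NF4 + bfloat16 compute are assumptions, not card values).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings taken from the table above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
)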

Experiments

| Model | Epochs | Overfitting | Final Epochs | Training Hours | CO2 Emission (kg) |
|---|---|---|---|---|---|
| Llama 3 8B Instruct | 5 | Yes | 4 | 10.16 | 1.45 |

Framework versions

| Library | Version |
|---|---|
| accelerate | 0.21.0 |
| bitsandbytes | 0.42.0 |
| Datasets | 2.14.3 |
| peft | 0.4.0 |
| PyTorch | 2.0.1 |
| safetensors | 0.4.1 |
| scikit-image | 0.22.0 |
| scikit-learn | 1.3.2 |
| Tokenizers | 0.14.1 |
| Transformers | 4.37.2 |
| trl | 0.4.7 |