create model card lloro sql #1
by Ivyna - opened

README.md ADDED
@@ -0,0 +1,130 @@
---
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B-Instruct
license: apache-2.0
language:
- pt
tags:
- code
- sql
- finetuned
- portugues-BR
---
**Lloro SQL**

<img src="https://cdn-uploads.huggingface.co/production/uploads/653176dc69fffcfe1543860a/h0kNd9OTEu1QdGNjHKXoq.png" width="300" alt="Lloro-7b Logo"/>

Lloro SQL, developed by Semantix Research Labs, is a language model trained to transform Portuguese queries into SQL code. It is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct, trained on the GretelAI public dataset listed under Model Sources below. The fine-tuning was performed using the QLoRA methodology on an A100 GPU with 40 GB of VRAM.
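
For context, QLoRA fine-tuning keeps the base model frozen in 4-bit precision and trains low-rank adapters on top. A minimal sketch of the loading step, assuming typical QLoRA quantization defaults (the card does not report these settings):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the frozen base model in 4-bit NF4, the usual first step of QLoRA.
# These quantization settings are common defaults, assumed rather than
# taken from this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```
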
**Model description**

Model type: An 8B-parameter model fine-tuned on GretelAI public datasets.

Language(s) (NLP): Primarily Portuguese, but the model is able to understand English as well.

Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct

**What is Lloro's intended use(s)?**

Lloro is built for Text2SQL in Portuguese contexts.

Input: Text

Output: Text (Code)

**Usage**

Using an OpenAI-compatible inference server (like [vLLM](https://docs.vllm.ai/en/latest/index.html)):

```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible server.
client = OpenAI(
    api_key="EMPTY",
    base_url="http://localhost:8000/v1",
)

def generate_responses(instruction, client=client):
    # The system prompt (in Portuguese) tells the model: "You write the SQL
    # statement that answers the questions asked. You DO NOT PROVIDE ANY
    # COMMENT OR EXPLANATION about what the code does, only the SQL statement
    # ending in a semicolon. You use all commands available in the SQL
    # specification, such as: [SELECT, WHERE, ORDER, LIMIT, CAST, AS, JOIN]."
    chat_response = client.chat.completions.create(
        model="<model>",  # replace with the model name served by your server
        messages=[
            {"role": "system", "content": "Você escreve a instrução SQL que responde às perguntas feitas. Você NÃO FORNECE NENHUM COMENTÁRIO OU EXPLICAÇÃO sobre o que o código faz, apenas a instrução SQL terminando em ponto e vírgula. Você utiliza todos os comandos disponíveis na especificação SQL, como: [SELECT, WHERE, ORDER, LIMIT, CAST, AS, JOIN]."},
            {"role": "user", "content": instruction},
        ],
    )
    return chat_response.choices[0].message.content

output = generate_responses(user_prompt)  # user_prompt: your Portuguese question
```
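
A hypothetical end-to-end call is sketched below; the table schema, question, and expected SQL are illustrative assumptions, not outputs reported by this card:

```python
# Hypothetical example: include the schema in the user message, then ask
# the question in Portuguese ("How many clients are there in each city?").
user_prompt = (
    "Tabela: clientes(id, nome, cidade). "
    "Quantos clientes existem em cada cidade?"
)

print(generate_responses(user_prompt))
# A well-formed answer would have this shape:
# SELECT cidade, COUNT(*) AS total FROM clientes GROUP BY cidade;
```
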

**Params**
Training Parameters

| Params | Training Data            | Examples | Tokens     | LR   |
|--------|--------------------------|----------|------------|------|
| 8B     | GretelAI public datasets | 65,000   | 18,000,000 | 9e-5 |

**Model Sources**

GretelAI: https://huggingface.co/datasets/gretelai/synthetic_text_to_sql
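
For reference, the dataset can be loaded with the `datasets` library; the column names below follow the dataset card and should be verified before use:

```python
from datasets import load_dataset

# Pull the GretelAI synthetic text-to-SQL dataset from the Hugging Face Hub.
ds = load_dataset("gretelai/synthetic_text_to_sql")

# Each record pairs a natural-language prompt with its SQL answer; the field
# names ("sql_prompt", "sql") are taken from the dataset card, not this one.
example = ds["train"][0]
print(example["sql_prompt"])
print(example["sql"])
```
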

**Performance**

| Model          | LLM as Judge | CodeBLEU Score | ROUGE-L | CodeBERT-Precision | CodeBERT-Recall | CodeBERT-F1 | CodeBERT-F3 |
|----------------|--------------|----------------|---------|--------------------|-----------------|-------------|-------------|
| Llama 3 - Base | 65.48%       | 0.4583         | 0.6361  | 0.8815             | 0.8871          | 0.8835      | 0.8862      |
| Llama 3 - FT   | 62.57%       | 0.6512         | 0.7965  | 0.9458             | 0.9469          | 0.9459      | 0.9466      |

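The card does not include the evaluation script, but the ROUGE-L column, for example, could be reproduced along these lines with the `evaluate` library (a sketch assuming lists of generated and reference SQL strings):

```python
import evaluate

# Sketch: score generated SQL against reference SQL with ROUGE-L.
rouge = evaluate.load("rouge")

predictions = ["SELECT cidade, COUNT(*) FROM clientes GROUP BY cidade;"]   # model output
references = ["SELECT cidade, COUNT(*) AS total FROM clientes GROUP BY cidade;"]  # gold SQL

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rougeL"])  # a fraction in [0, 1], as in the table above
```
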
**Training Information**
The following hyperparameters were used during training:

| Parameter         | Value              |
|-------------------|--------------------|
| learning_rate     | 9e-5               |
| weight_decay      | 0.001              |
| train_batch_size  | 16                 |
| eval_batch_size   | 8                  |
| seed              | 42                 |
| optimizer         | AdamW (adamw_8bit) |
| lr_scheduler_type | cosine             |
| num_epochs        | 3.0                |

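These values map directly onto `transformers.TrainingArguments`; a sketch is below, where `output_dir` is a placeholder and `adamw_bnb_8bit` is the Transformers name for the 8-bit AdamW optimizer listed above:

```python
from transformers import TrainingArguments

# Sketch of the configuration in the table above; output_dir is a
# placeholder, not a path reported by the card.
training_args = TrainingArguments(
    output_dir="./lloro-sql",
    learning_rate=9e-5,
    weight_decay=0.001,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_bnb_8bit",  # 8-bit AdamW ("adamw_8bit" in the table)
    lr_scheduler_type="cosine",
    num_train_epochs=3.0,
)
```
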
**QLoRA hyperparameters**
The following Quantized Low-Rank Adaptation (QLoRA) and quantization parameters were used during training:

| Parameter    | Value |
|--------------|-------|
| lora_r       | 16    |
| lora_alpha   | 64    |
| lora_dropout | 0     |

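In `peft` terms these correspond to a `LoraConfig` like the following sketch; `target_modules` is an assumption (a common choice for Llama-style models) that the card does not specify:

```python
from peft import LoraConfig

# LoRA adapter settings from the table above; target_modules is assumed.
lora_config = LoraConfig(
    r=16,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```
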

**Framework versions**
| Library      | Version |
|--------------|---------|
| accelerate   | 0.21.0  |
| bitsandbytes | 0.42.0  |
| Datasets     | 2.14.3  |
| peft         | 0.4.0   |
| Pytorch      | 2.0.1   |
| safetensors  | 0.4.1   |
| scikit-image | 0.22.0  |
| scikit-learn | 1.3.2   |
| Tokenizers   | 0.14.1  |
| Transformers | 4.37.2  |
| trl          | 0.4.7   |