amalnuaimi
committed on
Commit • be285f6
1 Parent(s): e83ddb4
Upload folder using huggingface_hub
Browse files
- README.md +86 -0
- cal_data.safetensors +3 -0
- config.json +28 -0
- generation_config.json +6 -0
- job_new.json +0 -0
- label_mask.npy +3 -0
- labeled_matches.npy +3 -0
- labels.npy +3 -0
- measurement.json +0 -0
- output.safetensors +3 -0
- predictions.npy +3 -0
- special_tokens_map.json +29 -0
- tokenizer.json +0 -0
- tokenizer.model +3 -0
- tokenizer_config.json +83 -0
- training_args.bin +3 -0
README.md
ADDED
@@ -0,0 +1,86 @@
---
license: cc-by-sa-4.0
library_name: transformers
pipeline_tag: text-generation
---
# Update notice

The model weights were updated at 7 AM UTC on Feb 7, 2024. The new weights yield a much more performant model, particularly for joins.

If you downloaded the model before then, please redownload the weights for best performance.

# Model Card for SQLCoder-7B-2

A capable large language model for natural language to SQL generation.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/603bbad3fd770a9997b57cb6/AYUE2y14vy2XkD9MZpScu.png)

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card was automatically generated.

- **Developed by:** [Defog, Inc](https://defog.ai)
- **Model type:** Text to SQL
- **License:** CC-BY-SA-4.0
- **Finetuned from model:** CodeLlama-7B

### Model Sources

- [**HuggingFace**](https://huggingface.co/defog/sqlcoder-70b-alpha)
- [**GitHub**](https://github.com/defog-ai/sqlcoder)
- [**Demo**](https://defog.ai/sqlcoder-demo/)

## Uses

This model is intended to help non-technical users understand the data inside their SQL databases. It is meant as an analytics tool, not as a database administration tool.

The model has not been trained to reject malicious requests from users with write access to databases, so it should only be used by users with read-only access.

## How to Get Started with the Model

Use the code [here](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) to get started with the model.
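
The linked `inference.py` is the reference implementation. As a rough sketch only (not the official script), loading the model with 🤗 transformers might look like the following; the repo id `defog/sqlcoder-7b-2` and the float16/GPU setup are assumptions, not confirmed by this card:

```python
# Hedged sketch: load SQLCoder-7B-2 with transformers.
# Assumes the repo id "defog/sqlcoder-7b-2" (inferred from the model card title)
# and a GPU with enough memory for float16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "defog/sqlcoder-7b-2"  # assumption; substitute the repo you actually downloaded

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # config.json in this commit ships float16 weights
    device_map="auto",
)
```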

## Prompt

Use the following prompt for optimal results, and remember to set `do_sample=False` and `num_beams=4` when generating.

```
### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]
```
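
Purely as a hedged illustration, reusing the model and tokenizer loaded above, filling the template and decoding with the recommended settings could look like this; the schema and question below are made-up placeholders:

```python
# Hedged sketch: fill the prompt template above and decode with the
# recommended settings (do_sample=False, num_beams=4). The schema and
# question are invented placeholders, not part of this repo.
prompt = """### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]""".format(
    user_question="How many users signed up in January 2024?",
    table_metadata_string_DDL_statements="CREATE TABLE users (id INT, created_at DATE);",
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=False,
    num_beams=4,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,  # the tokenizer ships with no pad token
)
# Print only the newly generated SQL, dropping the prompt tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```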

## Evaluation

This model was evaluated on [SQL-Eval](https://github.com/defog-ai/sql-eval), a PostgreSQL-based evaluation framework developed by Defog for testing and aligning model capabilities.

You can read more about the methodology behind SQL-Eval [here](https://defog.ai/blog/open-sourcing-sqleval/).

### Results

We classified each generated question into one of 6 categories. The table below shows the percentage of questions answered correctly by each model, broken down by category.

|                | date | group_by | order_by | ratio | join | where |
| -------------- | ---- | -------- | -------- | ----- | ---- | ----- |
| sqlcoder-70b   | 96   | 91.4     | 97.1     | 85.7  | 97.1 | 91.4  |
| sqlcoder-7b-2  | 96   | 91.4     | 94.3     | 91.4  | 94.3 | 77.1  |
| sqlcoder-34b   | 80   | 94.3     | 85.7     | 77.1  | 85.7 | 80    |
| gpt-4          | 72   | 94.3     | 97.1     | 80    | 91.4 | 80    |
| gpt-4-turbo    | 76   | 91.4     | 91.4     | 62.8  | 88.6 | 77.1  |
| natural-sql-7b | 56   | 88.6     | 85.7     | 60    | 88.6 | 80    |
| sqlcoder-7b    | 64   | 82.9     | 74.3     | 54.3  | 74.3 | 74.3  |
| gpt-3.5        | 72   | 77.1     | 82.8     | 34.3  | 65.7 | 71.4  |
| claude-2       | 52   | 71.4     | 74.3     | 57.1  | 65.7 | 62.9  |

## Model Card Contact

Contact us on X at [@defogdata](https://twitter.com/defogdata) or by email at [founders@defog.ai](mailto:founders@defog.ai).
cal_data.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:411ba6a4de6d25040d10fc69346536a502748cf103eacc08cc67fb10559a4b51
size 1638488
config.json
ADDED
@@ -0,0 +1,28 @@
{
  "_name_or_path": "defog/sqlcoder-7b-instruct-ds7",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 16384,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.37.2",
  "use_cache": true,
  "vocab_size": 32016
}
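
As a hedged aside, the architecture fields above can be inspected programmatically without downloading the weights; the repo id `defog/sqlcoder-7b-2` is an assumption based on the model card title:

```python
# Hedged sketch: inspect config.json via transformers without fetching weights.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("defog/sqlcoder-7b-2")  # repo id assumed
print(config.model_type)               # "llama"
print(config.num_hidden_layers)        # 32
print(config.max_position_embeddings)  # 16384
print(config.vocab_size)               # 32016 (CodeLlama vocab incl. infill tokens)
```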
generation_config.json
ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.37.2"
}
job_new.json
ADDED
The diff for this file is too large to render.
See raw diff
label_mask.npy
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce06db10b5b506fcc10ada7fb9621e9a82a0b0560d04856e3571f432c3774fb9
size 458228
labeled_matches.npy
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0294fdf749ea2973520795db71dea9401c98147ee0ace8931dab3042f63064d7
size 458228
labels.npy
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e598810721e5682a88d42fa5c4844c2f1fce92c41a2fb0da1b28fe1994b47cc9
size 3664928
measurement.json
ADDED
The diff for this file is too large to render.
See raw diff
output.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:871249f4182c6c4017fb9367fe4dce5803d4bfeaa8e71135db97be70fd7a96b3
size 5222191200
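
`output.safetensors` is tracked with Git LFS, so only the pointer appears in this diff. Purely as an illustrative sketch (assuming the file has been pulled locally with `git lfs pull`; its contents are not documented in this commit), the tensors it contains can be listed with the `safetensors` library:

```python
# Hedged sketch: list a few tensor names/shapes in a local output.safetensors.
from safetensors import safe_open

with safe_open("output.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:                 # first few entries only
        print(name, f.get_slice(name).get_shape())  # shapes without loading data
```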
predictions.npy
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:20712c1033ce864ac8a548eee7986235e60cd95b3b9cf3a441777044daa92a3d
size 3664928
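
`predictions.npy`, `labels.npy`, and `label_mask.npy` are likewise LFS-tracked NumPy arrays whose exact semantics are not documented in this commit. A hedged sketch for peeking at them once pulled locally:

```python
# Hedged sketch: inspect shapes/dtypes of the LFS-tracked NumPy artifacts
# after `git lfs pull`. Their meaning is not documented here, so this only
# reports basic metadata.
import numpy as np

for path in ("predictions.npy", "labels.npy", "label_mask.npy"):
    arr = np.load(path, allow_pickle=False)
    print(path, arr.shape, arr.dtype)
```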
special_tokens_map.json
ADDED
@@ -0,0 +1,29 @@
{
  "additional_special_tokens": [
    "▁<PRE>",
    "▁<MID>",
    "▁<SUF>",
    "▁<EOT>"
  ],
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:45ccb9c8b6b561889acea59191d66986d314e7cbd6a78abc6e49b139ca91c1e6
size 500058
tokenizer_config.json
ADDED
@@ -0,0 +1,83 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32007": {
      "content": "▁<PRE>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32008": {
      "content": "▁<SUF>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32009": {
      "content": "▁<MID>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32010": {
      "content": "▁<EOT>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "▁<PRE>",
    "▁<MID>",
    "▁<SUF>",
    "▁<EOT>"
  ],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "eot_token": "▁<EOT>",
  "fill_token": "<FILL_ME>",
  "legacy": null,
  "middle_token": "▁<MID>",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "prefix_token": "▁<PRE>",
  "sp_model_kwargs": {},
  "suffix_token": "▁<SUF>",
  "tokenizer_class": "CodeLlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
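
The tokenizer above is the CodeLlama tokenizer with SentencePiece infill markers. As a hedged sketch (repo id assumed from the model card), the configuration can be sanity-checked like this:

```python
# Hedged sketch: confirm the tokenizer settings shipped in this commit.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")  # repo id assumed
print(type(tok).__name__)                            # CodeLlamaTokenizer(Fast)
print(tok.bos_token, tok.eos_token, tok.unk_token)   # <s> </s> <unk>
print(tok.additional_special_tokens)                 # ▁<PRE>, ▁<MID>, ▁<SUF>, ▁<EOT>
```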
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d76514892c133615428b3e93206e03f334310b47e0b9565e067a577f87a2f1b2
size 4856