---
license: apache-2.0
datasets:
- kaxap/llama2-sql-instruct-sys-prompt
pipeline_tag: text-generation
---

## 💻 Usage

```python
# pip install -q accelerate==0.21.0 transformers==4.31.0

import torch
from transformers import AutoTokenizer, pipeline

model = "ataberkd/llama-2-7b-SQL_FINETUNED_1K"
prompt = 'You are an expert in SQL and data analysis. Given the table structure described by the CREATE TABLE statement, write an SQL query that answers the question and explain the result you give. CREATE TABLE statement: CREATE TABLE "user" ( "name" text, "surname" text, "tel" text, "address" text, "performanceScore" text, "Age" text, "Language" text );. Question: "Can you return users who speak French and are older than 20?"'

tokenizer = AutoTokenizer.from_pretrained(model)
# Use a distinct name so we don't shadow the imported `pipeline` factory.
pipe = pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipe(
    f'<s>[INST] {prompt} [/INST]',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
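Note that the text-generation pipeline returns the prompt together with the completion, so the generated SQL appears after the closing `[/INST]` tag. A minimal sketch of post-processing (the `extract_answer` helper is hypothetical, not part of this model card) could look like:

```python
def extract_answer(generated_text: str) -> str:
    """Return only the model's completion, assuming the Llama-2
    [INST] ... [/INST] prompt format used above."""
    _, sep, answer = generated_text.partition("[/INST]")
    # If the marker is missing, fall back to the full text.
    return answer.strip() if sep else generated_text.strip()

example = '<s>[INST] some prompt [/INST] SELECT * FROM "user";'
print(extract_answer(example))  # SELECT * FROM "user";
```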