---
license: apache-2.0
datasets:
- aswin1906/llama2-sql-instruct-2k
language:
- en
pipeline_tag: question-answering
tags:
- code
---

# Fine-Tune the Llama 2 Model with QLoRA on a Custom SQL Dataset

Instruction fine-tuning has become extremely popular since the (accidental) release of LLaMA. The size of these models and the peculiarities of training them on instructions and answers introduce extra complexity, and often call for parameter-efficient fine-tuning techniques such as QLoRA.

The training dataset is available at **aswin1906/llama2-sql-instruct-2k**.
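The training code itself is only available on request (see the contact note below). As context for readers, here is a minimal sketch of what a QLoRA setup for this base model typically looks like: the base weights are loaded in 4-bit and frozen, and only small low-rank adapter matrices are trained on top. The quantization settings and LoRA hyperparameters below are illustrative assumptions, not the values used for this model.

```python
# Sketch of a typical QLoRA setup (illustrative; not this model's exact training code).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize the frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",             # NormalFloat4, the QLoRA default
    bnb_4bit_compute_dtype=torch.float16,  # do the matmuls in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach trainable low-rank adapters; only these weights are updated.
lora_config = LoraConfig(
    r=64,                  # assumed adapter rank
    lora_alpha=16,         # assumed scaling factor
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```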
## Model Background

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65059e484a8839a8bd5f67cb/tkHJ3Tuh7Jim4jKg6_h_m.png)
## Model Inference

Use the code below to run inference with the fine-tuned model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch, re
from rich import print


class Training:
    def __init__(self) -> None:
        self.model_name = "meta-llama/Llama-2-7b-chat-hf"
        self.dataset = "aswin1906/llama2-sql-instruct-2k"
        self.model_path = "aswin1906/llama-7b-sql-2k"
        # Fixed instruction: the table schema plus the task description.
        self.instruction = 'You are given the following SQL table structure described by CREATE TABLE statement: CREATE TABLE "l" ( "player" text, "no" text, "nationality" text, "position" text, "years_in_toronto" text, "school_club_team" text ); Write an SQL query that provides the solution to the following question: '
        self.model = AutoModelForCausalLM.from_pretrained(
            self.model_path,
            load_in_8bit=False,
            torch_dtype=torch.float16,
            device_map="auto",
        )
        self.tokenizer = AutoTokenizer.from_pretrained(self.model_path)

    def inference(self, prompt):
        """Wrap the question in a Llama 2 chat prompt and return the generated SQL."""
        # Run a text-generation pipeline with the fine-tuned model.
        pipe = pipeline(task="text-generation", model=self.model, tokenizer=self.tokenizer, max_length=200)
        result = pipe(f'<s>[INST] {self.instruction}"{prompt}". [/INST]')
        # Keep only the completion after the closing [/INST] tag.
        response = result[0]['generated_text'].split('[/INST]')[-1]
        return response


train = Training()
# Split the fixed instruction into its prose and CREATE TABLE parts for display.
instruction = re.split(';|by CREATE', train.instruction)
print("[purple4] ------------------------------Instruction--------------------------")
print(f"[medium_spring_green] {instruction[0]}")
print(f"[bold green]CREATE{instruction[1]};")
print(f"[medium_spring_green] {instruction[2]}")
print("[purple4] -------------------------------------------------------------------")
while True:
    # Example prompt: 'What position does the player who played for butler cc (ks) play?'
    print("[bold blue]#Human: [bold green]", end="")
    user = input()
    print('[bold blue]#Response: [bold green]', train.inference(user))
```
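For a one-off query instead of the interactive loop, the `inference` method can also be called directly. A usage sketch (the sample question is the commented-out prompt from the loop above; the exact SQL returned depends on the model):

```python
# Usage sketch: a single query against the schema baked into the instruction.
train = Training()
question = "What position does the player who played for butler cc (ks) play?"
print(train.inference(question))  # expect a SELECT ... FROM "l" ... query
```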

Contact **aswin1906@gmail.com** for the model training code.

## Output

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65059e484a8839a8bd5f67cb/ny_K7xBp53FILIhJkieX5.png)