toonzzzrock monshinawatra committed on
Commit 83df6ab
1 Parent(s): 7baba66

Update README.md (#5)


- Update README.md (e6dc510136b1259e0de2edd29287893ab8838f5b)


Co-authored-by: Shinawatra nach <monshinawatra@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +31 -1
README.md CHANGED
@@ -8,4 +8,34 @@ language:
  pipeline_tag: text-generation
  tags:
  - code_generation
- ---
+ ---
+
+ Example inference using Hugging Face Transformers:
+ ```python
+ from transformers import AutoModelForCausalLM, LlamaTokenizer
+
+ def get_prediction(raw_prediction):
+     # Strip everything up to and including the closing [/INST] tag,
+     # leaving only the model's generated answer.
+     if "[/INST]" in raw_prediction:
+         index = raw_prediction.index("[/INST]")
+         return raw_prediction[index + 7:]
+     return raw_prediction
+
+ tokenizer = LlamaTokenizer.from_pretrained("AIAT/Pangpuriye-openthaigpt-1.0.0-7b-chat", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("AIAT/Pangpuriye-openthaigpt-1.0.0-7b-chat", trust_remote_code=True)
+
+ schema = """your SQL schema"""
+ query = "หาจำนวนลูกค้าที่เป็นเพศชาย"  # "Find the number of male customers"
+
+ prompt = f"""
+ [INST] <<SYS>>
+ You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
+ <</SYS>>
+ {schema}### (sql extract) {query} [/INST]
+ """
+
+ tokens = tokenizer(prompt, return_tensors="pt")
+ output = model.generate(tokens["input_ids"], max_new_tokens=20, eos_token_id=tokenizer.eos_token_id)
+ print(get_prediction(tokenizer.decode(output[0], skip_special_tokens=True)))
+ ```
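For reference, a minimal, self-contained check of the `get_prediction` helper added above. The decoded string and the SQL it contains are hypothetical placeholders rather than real model output; the sketch only illustrates that the helper returns the text after the closing `[/INST]` tag.

```python
# Minimal sketch (assumption): exercise the get_prediction helper from the
# README snippet above on a hypothetical decoded output. The SQL shown is an
# illustrative placeholder, not real model output.

def get_prediction(raw_prediction):
    # Same helper as in the README: keep only the text after "[/INST]".
    if "[/INST]" in raw_prediction:
        index = raw_prediction.index("[/INST]")
        return raw_prediction[index + 7:]
    return raw_prediction

decoded = (
    "[INST] <<SYS>>\nYou are a question answering assistant.\n<</SYS>>\n"
    "your SQL schema### (sql extract) หาจำนวนลูกค้าที่เป็นเพศชาย [/INST] "
    "SELECT COUNT(*) FROM customers WHERE gender = 'M';"
)

print(get_prediction(decoded))
# Prints " SELECT COUNT(*) FROM customers WHERE gender = 'M';" (note the leading space).
```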