Xhaheen committed · commit 66f8fac (1 parent: 457334e)

Update README.md

Files changed (1): README.md (+48, -0)
README.md CHANGED
# Inference with HuggingFace transformers

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the LoRA adapter together with its base model
model = AutoPeftModelForCausalLM.from_pretrained(
    "Xhaheen/Shaheen_Gemma_Urdu_",
    load_in_4bit = False,
)
tokenizer = AutoTokenizer.from_pretrained("Xhaheen/Shaheen_Gemma_Urdu_")

# Alpaca-style prompt template
input_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

input_text = input_prompt.format(
    "دیئے گئے موضوع کے بارے میں ایک مختصر پیراگراف لکھیں۔",  # instruction: "Write a short paragraph about the given topic."
    "قابل تجدید توانائی کے استعمال کی اہمیت",  # input: "The importance of using renewable energy"
    "",  # output - leave this blank for generation!
)

inputs = tokenizer([input_text], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True)
response = tokenizer.batch_decode(outputs)[0]
```

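Note that `batch_decode` returns the full sequence, i.e. the prompt followed by the generated text. A minimal sketch of isolating just the model's answer by splitting on the `### Response:` marker (the `<eos>` token and the sample string below are illustrative assumptions; adapt to your tokenizer's actual special tokens):

```python
# Hypothetical decoded output, shaped like tokenizer.batch_decode(outputs)[0]
decoded = (
    "### Instruction:\nWrite a short paragraph.\n\n"
    "### Input:\nRenewable energy\n\n"
    "### Response:\nRenewable energy matters.<eos>"
)

# The generated answer is everything after the last "### Response:" marker
answer = decoded.split("### Response:")[-1]
answer = answer.replace("<eos>", "").strip()
print(answer)  # -> Renewable energy matters.
```

Alternatively, `tokenizer.batch_decode(outputs, skip_special_tokens=True)` drops tokens such as `<eos>` during decoding, leaving only the prompt text to strip.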
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)