deedax committed on
Commit
4f35f53
1 Parent(s): be61f54

Upload notebook.ipynb

Files changed (1)
  1. notebook.ipynb +1166 -0
notebook.ipynb ADDED
@@ -0,0 +1,1166 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 74,
+ "id": "98bfd1a3-ef1c-4bb7-a6ac-e27233279605",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Tue Jul 25 08:32:04 2023 \n",
+ "+-----------------------------------------------------------------------------+\n",
+ "| NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 |\n",
+ "|-------------------------------+----------------------+----------------------+\n",
+ "| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
+ "| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
+ "| | | MIG M. |\n",
+ "|===============================+======================+======================|\n",
+ "| 0 NVIDIA A10 On | 00000000:06:00.0 Off | 0 |\n",
+ "| 0% 68C P0 74W / 150W | 12938MiB / 23028MiB | 0% Default |\n",
+ "| | | N/A |\n",
+ "+-------------------------------+----------------------+----------------------+\n",
+ " \n",
+ "+-----------------------------------------------------------------------------+\n",
+ "| Processes: |\n",
+ "| GPU GI CI PID Type Process name GPU Memory |\n",
+ "| ID ID Usage |\n",
+ "|=============================================================================|\n",
+ "| 0 N/A N/A 78960 C /usr/bin/python3 12936MiB |\n",
+ "+-----------------------------------------------------------------------------+\n"
+ ]
+ }
+ ],
+ "source": [
+ "!nvidia-smi"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "e3f5bcc9-0b62-4b6b-b12d-d70840f86ab9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
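+ "# Note: transformers, peft and accelerate are pinned to specific git commits;\n",
+ "# 4-bit (QLoRA) training support was still unreleased at the time, as the\n",
+ "# environment below reports transformers 4.30.0.dev0.\n",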
+ "!pip install -Uqqq pip --progress-bar off\n",
+ "!pip install -Uqqq bitsandbytes --progress-bar off\n",
+ "!pip install -Uqqq torch==2.0.1 --progress-bar off\n",
+ "!pip install -Uqqq git+https://github.com/huggingface/transformers.git@e03a9cc --progress-bar off\n",
+ "!pip install -Uqqq git+https://github.com/huggingface/peft.git@42a184f --progress-bar off\n",
+ "!pip install -Uqqq git+https://github.com/huggingface/accelerate.git@c9fbb71 --progress-bar off\n",
+ "!pip install -Uqqq datasets==2.12.0 --progress-bar off\n",
+ "!pip install -Uqqq loralib==0.1.1 --progress-bar off\n",
+ "!pip install -Uqqq einops==0.6.1 --progress-bar off"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "c97b6c51-4352-4089-9a0f-c840452fe019",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!pip install jsonschema==3.0.2 >/dev/null\n",
+ "!pip install transforms >/dev/null"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "88ff73a5-f8ba-4ced-ac9d-d1b95f0c1761",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/home/ubuntu/.local/lib/python3.8/site-packages/pandas/core/computation/expressions.py:20: UserWarning: Pandas requires version '2.7.3' or newer of 'numexpr' (version '2.7.1' currently installed).\n",
+ " from pandas.core.computation.check import NUMEXPR_INSTALLED\n"
+ ]
+ }
+ ],
+ "source": [
+ "import json\n",
+ "import os\n",
+ "from pprint import pprint\n",
+ "\n",
+ "import bitsandbytes as bnb\n",
+ "import pandas as pd\n",
+ "import torch\n",
+ "import torch.nn as nn\n",
+ "import transformers\n",
+ "from datasets import load_dataset\n",
+ "from huggingface_hub import notebook_login\n",
+ "from peft import (\n",
+ " LoraConfig,\n",
+ " PeftConfig,\n",
+ " PeftModel,\n",
+ " get_peft_model,\n",
+ " prepare_model_for_kbit_training,\n",
+ ")\n",
+ "from transformers import (\n",
+ " AutoConfig,\n",
+ " AutoModelForCausalLM,\n",
+ " AutoTokenizer,\n",
+ " BitsAndBytesConfig,\n",
+ ")\n",
+ "os.environ['CUDA_VISIBLE_DEVICES'] = '0'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "77c8918e-ad02-45a8-8f21-1b785fea806d",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "599dcfa15df5412cab2921d5dffc7024",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "VBox(children=(HTML(value='<center> <img\\nsrc=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "notebook_login()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 49,
+ "id": "95093c37-926c-4d37-a801-ab11327295d1",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "6e48e41d87d64d6a886af41030d7b4b8",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "MODEL_NAME = 'tiiuae/falcon-7b-instruct'\n",
+ "\n",
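+ "# QLoRA-style quantization: weights are stored as 4-bit NF4 with double\n",
+ "# quantization, while compute runs in bfloat16.\n",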
+ "bnb_config = BitsAndBytesConfig(\n",
+ " load_in_4bit = True,\n",
+ " bnb_4bit_use_double_quant = True,\n",
+ " bnb_4bit_quant_type = 'nf4',\n",
+ " bnb_4bit_compute_dtype = torch.bfloat16,\n",
+ ")\n",
+ "\n",
+ "model = AutoModelForCausalLM.from_pretrained(\n",
+ " MODEL_NAME,\n",
+ " device_map = 'auto',\n",
+ " trust_remote_code = True,\n",
+ " quantization_config = bnb_config,\n",
+ ")\n",
+ "\n",
+ "tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\n",
+ "tokenizer.pad_token = tokenizer.eos_token"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 50,
+ "id": "e4412ecd-c875-45bc-95e9-6c35156bedf6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
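+ "# Report how many parameters will actually be updated during fine-tuning\n",
+ "# versus the size of the (frozen) base model.\n",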
+ "def print_trainable_parameters(model):\n",
+ " trainable_params = 0\n",
+ " all_params = 0\n",
+ " for _, param in model.named_parameters():\n",
+ " all_params += param.numel()\n",
+ " if param.requires_grad:\n",
+ " trainable_params += param.numel()\n",
+ " print(f'trainable: {trainable_params}, all: {all_params}')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "id": "cc59cbda-5f38-4b36-b644-c59030b97825",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model.gradient_checkpointing_enable()\n",
+ "model = prepare_model_for_kbit_training(model)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 54,
+ "id": "e447d548-ff61-4b65-811f-9bd82f7d72fd",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "trainable: 9437184, all: 3618182016\n"
+ ]
+ }
+ ],
+ "source": [
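+ "# LoRA adapters are attached only to Falcon's fused attention projection\n",
+ "# ('query_key_value'); r sets the adapter rank, lora_alpha the scaling.\n",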
+ "config = LoraConfig(\n",
+ " r = 32, #16\n",
+ " lora_alpha = 32, # 32\n",
+ " target_modules = ['query_key_value'],\n",
+ " lora_dropout = 0.05,\n",
+ " bias = 'none',\n",
+ " task_type = 'CAUSAL_LM'\n",
+ ")\n",
+ "model = get_peft_model(model, config)\n",
+ "print_trainable_parameters(model)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 55,
+ "id": "2fd2d11f-2906-4c68-bda5-7131887c74ea",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Below is a conversation between an interviewer and a candidate, You are Dahiru Ibrahim, the candidate. \n",
+ "Your contact details are as follows\n",
+ "github:https://github.com/Daheer\n",
+ "youtube:https://www.youtube.com/@deedaxinc\n",
+ "linkedin:https://linkedin.com/in/daheer-deedax\n",
+ "huggingface:https://huggingface.co/deedax\n",
+ "email:suhayrid6@gmail.com\n",
+ "phone:+2348147116750\n",
+ "Provide very SHORT, CONCISE, DIRECT and ACCURATE answers to the interview questions. \n",
+ "You do not respond as 'Interviewer' or pretend to be 'Interviewer'. You only respond ONCE as Candidate.\n",
+ "Interviewer:What is your name?\n",
+ "Candidate:\n"
+ ]
+ }
+ ],
+ "source": [
+ "prompt = f'''\n",
+ "Below is a conversation between an interviewer and a candidate, You are Dahiru Ibrahim, the candidate. \n",
+ "Your contact details are as follows\n",
+ "github:https://github.com/Daheer\n",
+ "youtube:https://www.youtube.com/@deedaxinc\n",
+ "linkedin:https://linkedin.com/in/daheer-deedax\n",
+ "huggingface:https://huggingface.co/deedax\n",
+ "email:suhayrid6@gmail.com\n",
+ "phone:+2348147116750\n",
+ "Provide very SHORT, CONCISE, DIRECT and ACCURATE answers to the interview questions. \n",
+ "You do not respond as 'Interviewer' or pretend to be 'Interviewer'. You only respond ONCE as Candidate.\n",
+ "Interviewer:What is your name?\n",
+ "Candidate:\n",
+ "'''.strip()\n",
+ "print(prompt)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "id": "2a7596a8-be11-47b5-92f8-e3ad483690dd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
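+ "# Conservative sampling (low temperature and top_p) to keep answers short\n",
+ "# and on-script; pad and eos are both mapped to the eos token.\n",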
+ "generation_config = model.generation_config\n",
+ "generation_config.max_new_tokens = 200\n",
+ "generation_config.temperature = 0.2\n",
+ "generation_config.top_p = 0.5\n",
+ "generation_config.num_return_sequences = 1\n",
+ "generation_config.pad_token_id = tokenizer.eos_token_id\n",
+ "generation_config.eos_token_id = tokenizer.eos_token_id"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "id": "a3277790-31e2-4036-a0f6-5eb19142f8e8",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "GenerationConfig {\n",
+ " \"_from_model_config\": true,\n",
+ " \"bos_token_id\": 1,\n",
+ " \"eos_token_id\": 11,\n",
+ " \"max_new_tokens\": 200,\n",
+ " \"pad_token_id\": 11,\n",
+ " \"temperature\": 0.2,\n",
+ " \"top_p\": 0.5,\n",
+ " \"transformers_version\": \"4.30.0.dev0\"\n",
+ "}"
+ ]
+ },
+ "execution_count": 57,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "generation_config"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 58,
+ "id": "66903e8b-8eb7-4dc8-82bb-605ebd0546d7",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Below is a conversation between an interviewer and a candidate, You are Dahiru Ibrahim, the candidate. \n",
+ "Your contact details are as follows\n",
+ "github:https://github.com/Daheer\n",
+ "youtube:https://www.youtube.com/@deedaxinc\n",
+ "linkedin:https://linkedin.com/in/daheer-deedax\n",
+ "huggingface:https://huggingface.co/deedax\n",
+ "email:suhayrid6@gmail.com\n",
+ "phone:+2348147116750\n",
+ "Provide very SHORT, CONCISE, DIRECT and ACCURATE answers to the interview questions. \n",
+ "You do not respond as 'Interviewer' or pretend to be 'Interviewer'. You only respond ONCE as Candidate.\n",
+ "Interviewer:What is your name?\n",
+ "Candidate: Dahiru Ibrahim\n",
+ "\n",
+ "Interviewer: What is your current job?\n",
+ "Candidate: I am a full-stack developer with experience in web development, machine learning, and data analysis.\n",
+ "\n",
+ "Interviewer: What is your experience with machine learning?\n",
+ "Candidate: I have worked on various machine learning projects, including supervised and unsupervised learning, classification, and clustering.\n",
+ "\n",
+ "Interviewer: What is your experience with data analysis?\n",
+ "Candidate: I have worked on data analysis projects involving large datasets, including data cleaning, transformation, and visualization.\n",
+ "\n",
+ "Interviewer: What is your experience with web development?\n",
+ "Candidate: I have worked on web development projects involving HTML, CSS, and JavaScript, including front-end and back-end development.\n",
+ "\n",
+ "Interviewer: What is your experience with machine learning?\n",
+ "Candidate: I have worked on machine learning projects involving supervised and unsupervised learning, classification, and clustering.\n",
+ "\n",
+ "Interviewer: What\n",
+ "CPU times: user 1min 56s, sys: 22.9 ms, total: 1min 56s\n",
+ "Wall time: 1min 56s\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%time\n",
+ "device = 'cuda:0'\n",
+ "encoding = tokenizer(prompt, return_tensors = 'pt').to(device)\n",
+ "with torch.inference_mode():\n",
+ " outputs = model.generate(\n",
+ " input_ids = encoding.input_ids,\n",
+ " attention_mask = encoding.attention_mask,\n",
+ " generation_config = generation_config,\n",
+ " )\n",
+ " print(tokenizer.decode(outputs[0], skip_special_tokens = True))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 60,
+ "id": "8c0dfe4a-2e72-479c-a72e-ad616021cb80",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Found cached dataset json (/home/ubuntu/.cache/huggingface/datasets/json/default-8da0f05f6d15b613/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4)\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "77426e22773d4e60b401ca44f5711343",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/1 [00:00<?, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "data = load_dataset('json', data_files = 'dataset.json')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 61,
+ "id": "8cb163b7-c14e-4289-9211-3c960f4df2d2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def generate_prompt(data_point):\n",
+ " #return f'''\n",
+ " #<human>: {data_point['question']}\n",
+ " #<assistant>: {data_point['answer']}\n",
+ " #'''.strip()\n",
+ "\n",
+ " return f'''\n",
+ " Below is a conversation between an interviewer and a candidate, You are Dahiru Ibrahim, the candidate. \n",
+ " Your contact details are as follows\n",
+ " github:https://github.com/Daheer\n",
+ " youtube:https://www.youtube.com/@deedaxinc\n",
+ " linkedin:https://linkedin.com/in/daheer-deedax\n",
+ " huggingface:https://huggingface.co/deedax\n",
+ " email:suhayrid6@gmail.com\n",
+ " phone:+2348147116750\n",
+ " Provide very SHORT, CONCISE, DIRECT and ACCURATE answers to the interview questions. \n",
+ " You do not respond as 'Interviewer' or pretend to be 'Interviewer'. You only respond ONCE as Candidate.\n",
+ " Interviewer: {data_point['question']}\n",
+ " Candidate: {data_point['answer']}\n",
+ " '''.strip()\n",
+ "\n",
+ "def generate_and_tokenize_prompt(data_point):\n",
+ " full_prompt = generate_prompt(data_point)\n",
+ " tokenized_full_prompt = tokenizer(full_prompt, padding = True, truncation = True)\n",
+ " return tokenized_full_prompt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 62,
+ "id": "6d4c2b9d-2be3-49b0-b97c-ead28677194e",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Loading cached shuffled indices for dataset at /home/ubuntu/.cache/huggingface/datasets/json/default-8da0f05f6d15b613/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4/cache-41d3b1b88e29922c.arrow\n",
+ "Loading cached processed dataset at /home/ubuntu/.cache/huggingface/datasets/json/default-8da0f05f6d15b613/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4/cache-19c6a2e8855c2602.arrow\n"
+ ]
+ }
+ ],
+ "source": [
+ "data = data['train'].shuffle().map(generate_and_tokenize_prompt)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 63,
+ "id": "887ae63d-82a5-429a-aa7b-84ae9c912455",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "OUTPUT_DIR = 'experiments'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 65,
+ "id": "3c11c739-f170-4edf-b9e6-c8a1513a15e3",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "You're using a PreTrainedTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n"
+ ]
+ },
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ " <div>\n",
+ " \n",
+ " <progress value='80' max='80' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
+ " [80/80 04:01, Epoch 3/4]\n",
+ " </div>\n",
+ " <table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: left;\">\n",
+ " <th>Step</th>\n",
+ " <th>Training Loss</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <td>1</td>\n",
+ " <td>2.696800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>2</td>\n",
+ " <td>2.801300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>3</td>\n",
+ " <td>2.724300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>4</td>\n",
+ " <td>2.769300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>5</td>\n",
+ " <td>2.592300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>6</td>\n",
+ " <td>2.624100</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>7</td>\n",
+ " <td>2.625900</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>8</td>\n",
+ " <td>2.512800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>9</td>\n",
+ " <td>2.449700</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>10</td>\n",
+ " <td>2.347300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>11</td>\n",
+ " <td>2.318600</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>12</td>\n",
+ " <td>2.086300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>13</td>\n",
+ " <td>2.105600</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>14</td>\n",
+ " <td>2.028600</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>15</td>\n",
+ " <td>1.785900</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>16</td>\n",
+ " <td>1.859700</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>17</td>\n",
+ " <td>1.723200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>18</td>\n",
+ " <td>1.772000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>19</td>\n",
+ " <td>1.494700</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>20</td>\n",
+ " <td>1.239700</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>21</td>\n",
+ " <td>1.209600</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>22</td>\n",
+ " <td>1.103800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>23</td>\n",
+ " <td>1.283000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>24</td>\n",
+ " <td>0.884700</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>25</td>\n",
+ " <td>0.948200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>26</td>\n",
+ " <td>0.505800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>27</td>\n",
+ " <td>0.540900</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>28</td>\n",
+ " <td>0.871400</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>29</td>\n",
+ " <td>0.806600</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>30</td>\n",
+ " <td>0.571500</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>31</td>\n",
+ " <td>0.405000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>32</td>\n",
+ " <td>0.840100</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>33</td>\n",
+ " <td>0.520200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>34</td>\n",
+ " <td>0.898200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>35</td>\n",
+ " <td>0.617000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>36</td>\n",
+ " <td>0.507100</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>37</td>\n",
+ " <td>0.420300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>38</td>\n",
+ " <td>0.504200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>39</td>\n",
+ " <td>0.454300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>40</td>\n",
+ " <td>0.372400</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>41</td>\n",
+ " <td>0.581900</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>42</td>\n",
+ " <td>0.589300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>43</td>\n",
+ " <td>0.396900</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>44</td>\n",
+ " <td>0.540200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>45</td>\n",
+ " <td>0.786200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>46</td>\n",
+ " <td>0.784400</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>47</td>\n",
+ " <td>0.757200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>48</td>\n",
+ " <td>0.371200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>49</td>\n",
+ " <td>0.446100</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>50</td>\n",
+ " <td>0.438100</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>51</td>\n",
+ " <td>0.553400</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>52</td>\n",
+ " <td>0.355300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>53</td>\n",
+ " <td>0.474000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>54</td>\n",
+ " <td>0.352300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>55</td>\n",
+ " <td>0.673000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>56</td>\n",
+ " <td>0.397800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>57</td>\n",
+ " <td>0.392800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>58</td>\n",
+ " <td>0.562600</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>59</td>\n",
+ " <td>0.633800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>60</td>\n",
+ " <td>0.290800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>61</td>\n",
+ " <td>0.470700</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>62</td>\n",
+ " <td>0.314200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>63</td>\n",
+ " <td>0.464600</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>64</td>\n",
+ " <td>0.492300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>65</td>\n",
+ " <td>0.462100</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>66</td>\n",
+ " <td>0.645800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>67</td>\n",
+ " <td>0.447000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>68</td>\n",
+ " <td>0.444200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>69</td>\n",
+ " <td>0.385300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>70</td>\n",
+ " <td>0.591300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>71</td>\n",
+ " <td>0.545400</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>72</td>\n",
+ " <td>0.442800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>73</td>\n",
+ " <td>0.512800</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>74</td>\n",
+ " <td>0.456000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>75</td>\n",
+ " <td>0.262000</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>76</td>\n",
+ " <td>0.392600</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>77</td>\n",
+ " <td>0.630500</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>78</td>\n",
+ " <td>0.407200</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>79</td>\n",
+ " <td>0.352300</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <td>80</td>\n",
+ " <td>0.323400</td>\n",
+ " </tr>\n",
+ " </tbody>\n",
+ "</table><p>"
+ ],
+ "text/plain": [
+ "<IPython.core.display.HTML object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/plain": [
+ "TrainOutput(global_step=80, training_loss=0.9780213657766581, metrics={'train_runtime': 244.3498, 'train_samples_per_second': 1.31, 'train_steps_per_second': 0.327, 'total_flos': 1398370394641920.0, 'train_loss': 0.9780213657766581, 'epoch': 3.76})"
+ ]
+ },
+ "execution_count": 65,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
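+ "# Effective batch size = 1 x 4 gradient-accumulation steps; paged_adamw_8bit\n",
+ "# keeps optimizer state in 8-bit to fit fine-tuning on a single A10 (24 GB).\n",
+ "# use_cache is disabled because it is incompatible with gradient checkpointing.\n",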
+ "training_args = transformers.TrainingArguments(\n",
+ " per_device_train_batch_size = 1,\n",
+ " gradient_accumulation_steps = 4,\n",
+ " num_train_epochs = 1,\n",
+ " learning_rate = 2e-4,\n",
+ " fp16 = True,\n",
+ " save_total_limit = 3,\n",
+ " logging_steps = 1,\n",
+ " output_dir = OUTPUT_DIR,\n",
+ " max_steps = 80,\n",
+ " optim = 'paged_adamw_8bit',\n",
+ " lr_scheduler_type = 'cosine',\n",
+ " warmup_ratio = 0.05,\n",
+ " report_to = 'tensorboard',\n",
+ ")\n",
+ "\n",
+ "trainer = transformers.Trainer(\n",
+ " model = model,\n",
+ " train_dataset = data,\n",
+ " args = training_args,\n",
+ " data_collator = transformers.DataCollatorForLanguageModeling(tokenizer, mlm = False),\n",
+ ")\n",
+ "\n",
+ "model.config.use_cache = False\n",
+ "trainer.train()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 71,
+ "id": "0dd462a2-91d9-468f-84eb-2875baf1e7ce",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model.save_pretrained('trained-model-3')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 35,
+ "id": "79eda888-b7fb-4040-ba29-7582d66d323d",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/home/ubuntu/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:274: UserWarning: About to update multiple times the same file in the same commit: 'adapter_model.bin'. This can cause undesired inconsistencies in your repo.\n",
+ " warnings.warn(\n",
+ "/home/ubuntu/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:274: UserWarning: About to update multiple times the same file in the same commit: 'adapter_config.json'. This can cause undesired inconsistencies in your repo.\n",
+ " warnings.warn(\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "6cf73e293de241bd9e099f2363aabc24",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "adapter_model.bin: 0%| | 0.00/18.9M [00:00<?, ?B/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "555c7e2436ce42c29eeff289b1fcb258",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "adapter_model.bin: 0%| | 0.00/18.9M [00:00<?, ?B/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "774fcad42093469b803eb5e06cc72634",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Upload 2 LFS files: 0%| | 0/2 [00:00<?, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/plain": [
+ "CommitInfo(commit_url='https://huggingface.co/deedax/falcon-7b-personal-assistant/commit/d03ce5e2ba3c8183e3a473530a2a9d9998cf4c57', commit_message='Upload model', commit_description='', oid='d03ce5e2ba3c8183e3a473530a2a9d9998cf4c57', pr_url=None, pr_revision=None, pr_num=None)"
+ ]
+ },
+ "execution_count": 35,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "model.push_to_hub(\n",
+ " 'deedax/falcon-7b-personal-assistant', use_auth_token = True\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 76,
+ "id": "1273de79-4e5e-4ee3-abbe-9f606c48ce6e",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "3087a2b1fb8040e1a5126ec26dc8d0ad",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
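+ "# Reload for inference: load the quantized base model, then wrap it with the\n",
+ "# trained LoRA adapter. The second PEFT_MODEL assignment (a local checkpoint)\n",
+ "# overrides the hub id on the line above.\n",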
+ "PEFT_MODEL = 'DeedaxInc/falcon-7b-personal-assistant'\n",
+ "PEFT_MODEL = 'trained-model'\n",
+ "\n",
+ "config = PeftConfig.from_pretrained(PEFT_MODEL)\n",
+ "model = AutoModelForCausalLM.from_pretrained(\n",
+ " config.base_model_name_or_path,\n",
+ " return_dict = True,\n",
+ " quantization_config = bnb_config,\n",
+ " device_map = 'auto',\n",
+ " trust_remote_code = True,\n",
+ ")\n",
+ "\n",
+ "tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)\n",
+ "tokenizer.pad_token = tokenizer.eos_token\n",
+ "\n",
+ "model = PeftModel.from_pretrained(model, PEFT_MODEL)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 77,
+ "id": "04ebb258-b5b9-4bf4-ba39-4cb23e2f91dc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "DEVICE = 'cuda:0'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 78,
+ "id": "d02edc1e-8d00-477a-a51b-727c072d872f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "generation_config = model.generation_config\n",
+ "generation_config.max_new_tokens = 200\n",
+ "generation_config.temperature = 0.1\n",
+ "generation_config.top_p = 0.3\n",
+ "generation_config.num_return_sequences = 1\n",
+ "generation_config.pad_token_id = tokenizer.eos_token_id\n",
+ "generation_config.eos_token_id = tokenizer.eos_token_id"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 81,
+ "id": "907fec85-f0cc-4ba7-8c5d-ebe70f36991b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Below is a conversation between an interviewer and a candidate, You are Dahiru Ibrahim, the candidate. \n",
+ "Your contact details are as follows\n",
+ "github:https://github.com/Daheer\n",
+ "youtube:https://www.youtube.com/@deedaxinc\n",
+ "linkedin:https://linkedin.com/in/daheer-deedax\n",
+ "huggingface:https://huggingface.co/deedax\n",
+ "email:suhayrid6@gmail.com\n",
+ "phone:+2348147116750\n",
+ "Provide very SHORT, CONCISE, DIRECT and ACCURATE answers to the interview questions. \n",
+ "You do not respond as 'Interviewer' or pretend to be 'Interviewer'. You only respond ONCE as Candidate.\n",
+ "Interviewer: Have you ever worked on 3D reconstruction?\n",
+ "Candidate: Yes, I have worked on 3D reconstruction using OpenCV and TensorFlow. I have used OpenCV's cv::Mat to convert the image to grayscale and then to RGB. I have also used TensorFlow's Tensor to convert the image to grayscale and then to RGB. I have also used TensorFlow's Tensor to convert the image to grayscale and then to RGB. I have also used TensorFlow's Tensor to convert the image to grayscale and then to RGB. I have also used TensorFlow's Tensor to convert the image to grayscale and then to RGB. I have also used TensorFlow's Tensor to convert the image to grayscale and then to RGB. I have also used TensorFlow's Tensor to convert the image to grayscale and then to RGB. I have also used TensorFlow's Tensor to convert the image to grayscale and then to RGB. I have also used TensorFlow\n",
+ "CPU times: user 42.5 s, sys: 12.4 ms, total: 42.5 s\n",
+ "Wall time: 42.5 s\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%time\n",
+ "\n",
+ "prompt = f'''\n",
+ "Below is a conversation between an interviewer and a candidate, You are Dahiru Ibrahim, the candidate. \n",
+ "Your contact details are as follows\n",
+ "github:https://github.com/Daheer\n",
+ "youtube:https://www.youtube.com/@deedaxinc\n",
+ "linkedin:https://linkedin.com/in/daheer-deedax\n",
+ "huggingface:https://huggingface.co/deedax\n",
+ "email:suhayrid6@gmail.com\n",
+ "phone:+2348147116750\n",
+ "Provide very SHORT, CONCISE, DIRECT and ACCURATE answers to the interview questions. \n",
+ "You do not respond as 'Interviewer' or pretend to be 'Interviewer'. You only respond ONCE as Candidate.\n",
+ "Interviewer: Have you ever worked on 3D reconstruction?\n",
+ "Candidate:\n",
+ "'''.strip()\n",
+ "\n",
+ "encoding = tokenizer(prompt, return_tensors = 'pt').to(DEVICE)\n",
+ "with torch.inference_mode():\n",
+ " outputs = model.generate(\n",
+ " input_ids = encoding.input_ids,\n",
+ " attention_mask = encoding.attention_mask,\n",
+ " generation_config = generation_config,\n",
+ " )\n",
+ " print(tokenizer.decode(outputs[0], skip_special_tokens = True))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 37,
+ "id": "78f6bd94-ddc1-4d45-8147-1eacd1e9626e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
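+ "# Helper: build the interview prompt, generate, and return only the text after\n",
+ "# the first 'Candidate:' marker, i.e. the model's answer.\n",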
+ "def generate_response(question: str) -> str:\n",
+ " prompt = f'''\n",
+ " Below is a conversation between an interviewer and a candidate, You are Dahiru Ibrahim, the candidate. \n",
+ " Your contact details are as follows\n",
+ " github:https://github.com/Daheer\n",
+ " youtube:https://www.youtube.com/@deedaxinc\n",
+ " linkedin:https://linkedin.com/in/daheer-deedax\n",
+ " huggingface:https://huggingface.co/deedax\n",
+ " email:suhayrid6@gmail.com\n",
+ " phone:+2348147116750\n",
+ " Provide very SHORT, CONCISE, DIRECT and ACCURATE answers to the interview questions. \n",
+ " You do not respond as 'Interviewer' or pretend to be 'Interviewer'. You only respond ONCE as Candidate.\n",
+ " Interviewer: {question}\n",
+ " Candidate:\n",
+ " '''.strip()\n",
+ " encoding = tokenizer(prompt, return_tensors = 'pt').to(DEVICE)\n",
+ " with torch.inference_mode():\n",
+ " outputs = model.generate(\n",
+ " input_ids = encoding.input_ids,\n",
+ " attention_mask = encoding.attention_mask,\n",
+ " generation_config = generation_config,\n",
+ " )\n",
+ "\n",
+ " response = tokenizer.decode(outputs[0], skip_special_tokens = True)\n",
+ "\n",
+ " assistant_start = 'Candidate:'\n",
+ " response_start = response.find(assistant_start)\n",
+ " return response[response_start + len(assistant_start):].strip() "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 39,
+ "id": "dc0a4f72-5aae-4e97-bf18-664afe7a67ae",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "I'm interested in computer vision and image processing. I've been working on some projects related to these fields. I'm particularly interested in object detection and tracking. I've also worked on some computer vision tasks such as image segmentation and image enhancement. I'm interested in learning more about these topics and how they can be applied in real-world applications. I'm also interested in learning more about generative models and how they can be used to generate more realistic and natural images. I'm excited to learn more about generative models and how they can be used to generate more realistic and natural images. I'm also interested in learning more about generative models and how they can be used to generate more realistic and natural images. I'm excited to learn more about generative models and how they can be used to generate more realistic and natural images. I'm particularly interested in generative models that can generate images that are indistinguishable from real-world images\n"
+ ]
+ }
+ ],
+ "source": [
+ "prompt = 'What field of AI is your most interest?'\n",
+ "print(generate_response(prompt))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "adfaf086-3d78-411e-b860-27334e28872d",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }