SalehAhmad committed
Commit 642a0cd · Parent(s): cfc4a4d
Update README.md

README.md CHANGED
---
library_name: peft
language:
- en
pipeline_tag: text-generation
tags:
- QA
- Objective QA
- Subjective QA
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- ...
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16

(The fields between `load_in_8bit` and `bnb_4bit_quant_type` are collapsed in the diff view and not shown here.)
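For reference, a config along these lines could be constructed with `transformers`' `BitsAndBytesConfig`. This is only a sketch: `load_in_4bit=True` is an assumption, since that field falls in the collapsed part of the list above.

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the quantization config listed above. load_in_4bit=True is an
# assumption (the NF4 fields suggest 4-bit loading); the remaining values are
# taken from the list as shown.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```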

### Framework versions
- PEFT 0.4.0
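The inference snippets below call `model` and `tokenizer` without defining them. A minimal loading sketch, assuming the adapter lives on the Hub (`adapter_id` below is a hypothetical placeholder, not the real repo id) and reusing `bnb_config` from the sketch above:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical adapter repo id; substitute the actual Hub path of this adapter.
adapter_id = "SalehAhmad/qa-generation-adapter"

# The PEFT config records which base model the adapter was trained on.
peft_config = PeftConfig.from_pretrained(adapter_id)

tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
base_model = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,  # from the BitsAndBytesConfig sketch above
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```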

## Inference Methods

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline, TextStreamer

# `model` and `tokenizer` are assumed to be loaded already.

def stream(user_prompt):
    """Generate a completion, printing tokens as they are produced."""
    device = "cuda:0"
    inputs = tokenizer([user_prompt], return_tensors="pt").to(device)
    streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=False)
    _ = model.generate(**inputs, streamer=streamer, max_new_tokens=2046)

def infer(user_prompt):
    """Generate a completion and return the decoded text."""
    device = "cuda:0"
    inputs = tokenizer([user_prompt], return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=2046)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

def infer_pipeline(user_prompt):
    """Generate a completion through a text-generation pipeline."""
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,
                    max_new_tokens=2046, return_full_text=False)
    text = pipe(user_prompt)[0]["generated_text"]
    # The original call passed an unsupported `kwargs={'stop': ["###Human:"]}`
    # argument; trim the output at the turn marker instead.
    return text.split("###Human:")[0]
```
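If stopping at `###Human:` should happen during generation rather than by trimming afterwards, a custom stopping criterion is one way to do it. A minimal sketch (the class and the placeholder prompt are illustrative, not part of the original README):

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSubstring(StoppingCriteria):
    """Stop generation once `stop_string` appears in the generated text."""

    def __init__(self, tokenizer, stop_string, prompt_length):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_length = prompt_length  # number of prompt tokens to skip

    def __call__(self, input_ids, scores, **kwargs):
        # Decode only the continuation, not the prompt itself.
        generated = self.tokenizer.decode(input_ids[0][self.prompt_length:])
        return self.stop_string in generated

user_prompt = "###Human: <instruction>\n###Assistant: "  # placeholder
inputs = tokenizer([user_prompt], return_tensors="pt").to("cuda:0")
criteria = StoppingCriteriaList(
    [StopOnSubstring(tokenizer, "###Human:", inputs["input_ids"].shape[1])]
)
outputs = model.generate(**inputs, stopping_criteria=criteria, max_new_tokens=2046)
```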

### Sample Inputs

```python
Sys_OBJECTIVE = '''You are a chatbot who is helping to curate datasets. When given an input context paragraph, you have to generate only one MCQ question,
its options and its actual answer. You have to follow the given JSON format for generating the question, options and answer.
Do not use words like "in this paragraph", "from the context" etc. The questions should be independent of any other question.
'''

Sys_SUBJECTIVE = '''You are a chatbot who is helping to curate datasets. When given an input context paragraph, you have to generate only one subjective question
and its actual answer. You have to follow the given JSON format for generating the question and answer.
Do not use words like "in this paragraph", "from the context" etc. The questions should be independent of any other question.'''

# Example context paragraph (noisy transcript text, kept as-is):
Prompt = '''And in the leadership styles it will be that is the is the there will be the changing into the leadership styles and in the leadership styles it will be that is the the approach will be for doing this type of the research which has been adopted in this paper is that is the degree of the correlation and its statistical significance between the self-assess leadership behavior and the 360 degree assessment of performance, evidence is presented showing that results vary in different context.'''

Formatted_Prompt_OBJECTIVE = f"###Human: {Sys_OBJECTIVE}\nThe context is: {Prompt}\n###Assistant: "

Formatted_Prompt_SUBJECTIVE = f"###Human: {Sys_SUBJECTIVE}\nThe context is: {Prompt}\n###Assistant: "

stream(Formatted_Prompt_OBJECTIVE)
infer_pipeline(Formatted_Prompt_OBJECTIVE)
infer_pipeline(Formatted_Prompt_SUBJECTIVE)
```
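The two formatted prompts differ only in their system instruction, so a small helper (not in the original README) can build both:

```python
def format_prompt(system_prompt: str, context: str) -> str:
    """Build a prompt in the ###Human:/###Assistant: template used above."""
    return f"###Human: {system_prompt}\nThe context is: {context}\n###Assistant: "

# Equivalent to the assignments above:
objective_prompt = format_prompt(Sys_OBJECTIVE, Prompt)
subjective_prompt = format_prompt(Sys_SUBJECTIVE, Prompt)
```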