SalehAhmad committed
Commit 19cdaf9
1 parent: 642a0cd

Upload 10 files
README.md CHANGED
@@ -1,15 +1,9 @@
  ---
  library_name: peft
- language:
- - en
- pipeline_tag: text-generation
- tags:
- - QA
- - Objective QA
- - Subjective QA
  ---
  ## Training procedure

+
  The following `bitsandbytes` quantization config was used during training:
  - quant_method: QuantizationMethod.BITS_AND_BYTES
  - load_in_8bit: False
@@ -21,51 +15,7 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_quant_type: nf4
  - bnb_4bit_use_double_quant: False
  - bnb_4bit_compute_dtype: bfloat16
-
  ### Framework versions
- - PEFT 0.4.0
-
- ## Methods to Infer
- ```
- from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline, TextStreamer
-
- def stream(user_prompt):
-     runtimeFlag = "cuda:0"
-     inputs = tokenizer([user_prompt], return_tensors="pt").to(runtimeFlag)
-     streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=False)
-     _ = model.generate(**inputs, streamer=streamer, max_new_tokens=2046)
-
- def infer(user_prompt):
-     runtimeFlag = "cuda:0"
-     inputs = tokenizer([user_prompt], return_tensors="pt").to(runtimeFlag)
-     outputs = model.generate(**inputs, max_new_tokens=2046)
-     return tokenizer.decode(outputs[0], skip_special_tokens=True)
-
- def infer_pipeline(user_prompt):
-     runtimeFlag = "cuda:0"
-     pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=2046, return_full_text=False, device_map="auto",
-                     kwargs={'stop': ["###Human:"]})
-     return pipe(user_prompt)
- ```
-
- ### Sample Inputs:
- ```
- Sys_OBJECTIVE = '''You are a chatbot, who is helping to curate datasets. When given an input context paragraph, you have to generate only one mcq question,
- it's options and it's actual answer. You have to follow the given JSON format for generating the question, options and answer.
- Donot use words like "in this paragraph", "from the context" etc. The questions should be independent of any other question.
- '''
-
- Sys_SUBJECTIVE = '''You are a chatbot, who is helping to curate datasets. When given an input context paragraph, you have to generate only one subjective quesion,
- and it's actual answer. You have to follow the given JSON format for generating the question and answer.
- Donot use words like "in this paragraph", "from the context" etc. The questions should be independent of any other question.'''
-
- Prompt = '''And in the leadership styles it will be that is the is the there will be the changing into the leadership styles and in the leadership styles it will be that is the the approach will be for doing this type of the research which has been adopted in this paper is that is the degree of the correlation and its statistical significance between the self-assess leadership behavior and the 360 degree assessment of performance, evidence is presented showing that results vary in different context.'''
-
- Formatted_Prompt_OBJECTIVE = f"###Human: {Sys_OBJECTIVE}\nThe context is: {Prompt}\n###Assistant: "
-
- Formatted_Prompt_SUBJECTIVE = f"###Human: {Sys_SUBJECTIVE}\nThe context is: {Prompt}\n###Assistant: "
-
- - stream(Formatted_Prompt_OBJECTIVE)
- - infer_pipeline(Formatted_Prompt_OBJECTIVE)
- - infer_pipeline(Formatted_Prompt_SUBJECTIVE)
- ```

+ - PEFT 0.4.0
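
For reference, here is a minimal sketch of how the quantization config kept in the README is typically reconstructed at inference time and combined with the PEFT adapter. The repo ids below are placeholders (this commit names neither the base model nor the adapter repo), and `load_in_4bit=True` is an assumption implied by the `bnb_4bit_*` fields:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the bitsandbytes values from the README; load_in_4bit=True is
# an assumption inferred from the bnb_4bit_* settings.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model_id = "base-model-id"          # placeholder: base model not named in this commit
adapter_id = "SalehAhmad/adapter-repo"   # placeholder: this adapter's repo id

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the trained adapter weights
```

With `model` and `tokenizer` defined this way, helpers like the `stream`/`infer` functions removed from the README above become self-contained.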
 
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
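
A quick way to confirm the map above once the tokenizer is loaded (a sketch; the repo id is the same placeholder as earlier). Note that `pad_token` reuses the EOS token `</s>` rather than introducing a dedicated padding token:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SalehAhmad/adapter-repo")  # placeholder id

# Expected values per special_tokens_map.json
assert tokenizer.bos_token == "<s>"
assert tokenizer.eos_token == "</s>"
assert tokenizer.unk_token == "<unk>"
assert tokenizer.pad_token == tokenizer.eos_token  # pad is mapped onto EOS
```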
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
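
The three lines above are a Git LFS pointer, not the model itself: they record the spec version, the SHA-256 of the real blob, and its size in bytes. A minimal sketch, assuming the actual file has already been fetched (e.g. with `git lfs pull`), that verifies the download against the pointer:

```python
import hashlib

# Values copied from the LFS pointer above.
EXPECTED_OID = "dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055"
EXPECTED_SIZE = 493443

with open("tokenizer.model", "rb") as f:
    blob = f.read()

assert len(blob) == EXPECTED_SIZE, f"size mismatch: {len(blob)}"
assert hashlib.sha256(blob).hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("tokenizer.model matches its LFS pointer")
```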
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
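
Two settings above determine how prompts are framed at encode time: `add_bos_token: true` prepends `<s>` to every encoded sequence, while `add_eos_token: false` leaves the end open for generation. A small sketch of the expected behavior (placeholder repo id as before):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("SalehAhmad/adapter-repo")  # placeholder id

ids = tok("Hello world").input_ids
assert ids[0] == tok.bos_token_id      # add_bos_token: true  -> leading <s>
assert ids[-1] != tok.eos_token_id     # add_eos_token: false -> no trailing </s>
print(tok.convert_ids_to_tokens(ids))  # e.g. ['<s>', '▁Hello', '▁world']
```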