crumb committed
Commit e3f2c80 (parent ff17888)

Create README.md

Files changed (1): README.md (new file, +37 lines)
# Instruct-GPT-J

Usage:

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "crumb/gpt-j-6b-lora-alpaca"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 8-bit; revision='sharded' selects a sharded version of
# the GPT-J checkpoint (smaller shard files, easier to load with limited RAM)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map='auto',
    revision='sharded',
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)

# This example instruction is in the Alpaca training set
batch = tokenizer("Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: How can we reduce air pollution? ### Response:", return_tensors='pt')

with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=50)

print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
# One way to reduce air pollution is to reduce the amount of emissions from vehicles. This can be done by implementing stricter emission standards and increasing the use of electric vehicles. Another way to reduce air pollution is to reduce the amount of waste produced by industries.
```
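
Note (not part of the original example): with `device_map='auto'`, accelerate's hooks normally move inputs to the model's device during generation, but you can also place the tokenized batch on the GPU explicitly. A minimal sketch, assuming a single CUDA device:

```python
# Optional: move the tokenized inputs to the GPU explicitly before generating.
# Assumes a single CUDA device; this is a defensive alternative to relying on
# accelerate's device hooks.
batch = {k: v.to('cuda') for k, v in batch.items()}

with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=50)
```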

A function to turn an instruction into a prompt for the model could be written as follows:

```python
def prompt(instruction, input=''):
    # Format an instruction (and optional input) in the Alpaca prompt style
    if input == '':
        return f"Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response: "
    return f"Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Input: {input} ### Response: "
```

Here `input` is optional extra context for the model to act on when completing the instruction.
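
For example, combining the helper with the model and tokenizer loaded above (a minimal sketch; the instruction text and the `### Response:` splitting are illustrative, not taken from the original card):

```python
# Build a prompt from a new instruction and generate a completion.
instruction = "Give three tips for staying healthy."  # hypothetical example instruction
batch = tokenizer(prompt(instruction), return_tensors='pt')

with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=100)

# The decoded text includes the prompt, so keep only the part after the
# final "### Response:" marker.
full_text = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
print(full_text.split("### Response:")[-1].strip())
```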