emre570 committed on
Commit
6277244
1 Parent(s): ca2b2f4

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -24,8 +24,8 @@ You can access the fine-tuning code below.
 
 ## Example Usage
 You can use the adapter model with PEFT.
-
-`from peft import PeftModel, PeftConfig
+```
+from peft import PeftModel, PeftConfig
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 base_model = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
@@ -51,8 +51,8 @@ inputs = tokenizer([
 
 
 outputs = model.generate(**inputs, max_new_tokens=256)
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))`
-
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
 Output:
 
 `Instruction:
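
Note: the hunks above only show fragments of the README's example; the adapter repo id, the tokenizer setup, and the prompt construction fall between the two hunks and are not visible in this diff. A minimal sketch of the workflow the excerpt implies, assuming an instruction-style prompt and a placeholder adapter id (`your-username/your-adapter` is hypothetical, not the actual repo):

```
# Minimal sketch, not the repo's exact README: the adapter id and the
# prompt text are placeholders for details that are not visible in this diff.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit quantized Llama-3 8B base model named in the diff.
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")

# Attach the fine-tuned LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, "your-username/your-adapter")

# Tokenize a prompt and generate, mirroring the generate/decode calls above.
inputs = tokenizer(
    ["Instruction:\n...\n\nInput:\n...\n\nResponse:\n"],
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```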