rAIfle committed 6cbc9ad (parent: c49981d): Update README.md
1 epoch of grimulkan/LimaRP-augmented on LLaMA-8b via unsloth on colab, using the llama-chat template.
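The llama-chat template mentioned above wraps each turn in `[INST] ... [/INST]` markers with an optional `<<SYS>>` block. A minimal sketch of that layout in plain Python (the helper name is hypothetical; real training code would rely on the tokenizer's chat template instead):

```python
# Hypothetical helper illustrating the Llama-2 chat template layout.
# In practice, tokenizer.apply_chat_template handles this formatting.
def format_llama_chat(system: str, user: str, assistant: str) -> str:
    """Render one system/user/assistant exchange in llama-chat form."""
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"{user} [/INST] {assistant} </s>"
    )

prompt = format_llama_chat(
    system="You are a roleplay partner.",
    user="Describe the tavern.",
    assistant="The tavern is dim and loud.",
)
```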
```
model = FastLanguageModel.get_peft_model(
    model,
    r = 64, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none", # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = True, # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,