TroyDoesAI committed on
Commit 16b31f4 • 1 Parent(s): 606a3a4

Update README.md


Added Training Specs

Files changed (1)
  1. README.md +46 -1
README.md CHANGED
@@ -61,4 +61,49 @@ For best results, use full precision with one of the three different instruction
 - "instruction": "Create the mermaid diagram for the following story:"
 - "instruction": "Create the mermaid diagram for the following:"

- Exciting times ahead as we delve into the MermaidLLama revolution! 🚀
+ Exciting times ahead as we delve into the MermaidLLama revolution! 🚀
+
+ LoRA Rank
+ Also called dimension count. Higher values mean a larger adapter file and more control over content; smaller values mean a smaller file and less control. Use 4 or 8 for style, 128 or 256 to teach new material, and 1024+ for fine detail on large datasets. Higher ranks need more VRAM.
+ 2048
+
+ LoRA Alpha
+ This value divided by the rank becomes the scaling of the LoRA; higher means stronger. A good standard value is twice your rank (see the arithmetic sketch after the diff).
+ 4096
+
+ Batch Size
+ Global batch size. The two batch sizes together determine gradient accumulation (gradientAccum = batch / microBatch). Higher gradient accumulation values lead to better-quality training.
+ 1
+
+ Micro Batch Size
+ Per-device batch size (NOTE: multiple devices not yet implemented). Increasing this will increase VRAM usage.
+ 1
+
+ Cutoff Length
+ Cutoff length for text input: essentially, how much text to feed in at a time. Higher values require drastically more VRAM.
+ 4096
+
+ Save every n steps
+ If above 0, a checkpoint of the LoRA will be saved every time this many steps pass.
+ 1000
+
+ Epochs
+ Number of times every entry in the dataset should be fed into training. So 1 means feed each item in once, 5 means feed it in five times, etc.
+ 3
+
+ Learning Rate
+ In scientific notation.
+ 1e-6
+
+ LR Scheduler
+ Learning rate scheduler - defines how the learning rate changes over time. "constant" means it never changes, "linear" means it goes in a straight line from the learning rate down to 0, "cosine" follows a curve, etc. (see the scheduler sketch after the diff).
+ cosine
+
+
+ Target Modules
+ Selects which modules to target in training. Targeting more modules is closer to a full fine-tune at the cost of increased VRAM requirements and adapter size. NOTE: Only works for model_id='llama'; other model types will retain default training behavior and not use these settings.
+
+ Enable q_proj
+ Enable v_proj
+
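
The LoRA Rank/Alpha and Batch Size entries above each state a simple relation. A minimal sketch of both in plain Python, plugging in this commit's values (the variable names are mine, not from the README):

```python
# Relations described in the training specs above, using this commit's values.
lora_rank = 2048          # "LoRA Rank"
lora_alpha = 4096         # "LoRA Alpha"
scaling = lora_alpha / lora_rank
print(scaling)            # 2.0 -- alpha is twice the rank, the "good standard value"

batch_size = 1            # global "Batch Size"
micro_batch_size = 1      # "Micro Batch Size" (per device)
gradient_accum = batch_size // micro_batch_size
print(gradient_accum)     # 1 -- no gradient accumulation with these settings
```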
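The LR Scheduler entry describes three schedule shapes. A small illustration of those curves (my own sketch, not code from the commit; `total_steps` is an arbitrary assumption, and real schedulers often add a warmup phase this omits):

```python
import math

base_lr = 1e-6       # the Learning Rate above
total_steps = 1000   # assumption: any training length works here

def lr_at(step: int, kind: str) -> float:
    t = step / total_steps
    if kind == "constant":
        return base_lr                                       # never changes
    if kind == "linear":
        return base_lr * (1 - t)                             # straight line down to 0
    if kind == "cosine":
        return base_lr * 0.5 * (1 + math.cos(math.pi * t))   # cosine curve down to 0
    raise ValueError(f"unknown scheduler: {kind}")

for kind in ("constant", "linear", "cosine"):
    print(kind, [f"{lr_at(s, kind):.2e}" for s in (0, 250, 500, 750, 1000)])
```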
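Taken together, the specs read like settings from a LoRA training UI. For readers who train with the Hugging Face peft and transformers libraries instead, here is a hedged sketch of a roughly equivalent configuration; the output directory, task type, and the tokenizer note at the end are my assumptions, not values from the commit:

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=2048,                               # LoRA Rank
    lora_alpha=4096,                      # LoRA Alpha (scaling = 4096 / 2048 = 2.0)
    target_modules=["q_proj", "v_proj"],  # Target Modules: q_proj and v_proj enabled
    task_type="CAUSAL_LM",                # assumption: causal language modeling
)

training_args = TrainingArguments(
    output_dir="mermaid-llama-lora",      # assumption: any output path
    per_device_train_batch_size=1,        # Micro Batch Size
    gradient_accumulation_steps=1,        # batch / microBatch = 1 / 1
    num_train_epochs=3,                   # Epochs
    learning_rate=1e-6,                   # Learning Rate
    lr_scheduler_type="cosine",           # LR Scheduler
    save_steps=1000,                      # Save every n steps
)

# Cutoff Length maps to truncating inputs at tokenization time, e.g.:
# tokenizer(text, truncation=True, max_length=4096)
print(lora_config)
print(training_args.lr_scheduler_type, training_args.learning_rate)
```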