nkpz committed
Commit b6b31f0 (1 parent: 43ff150)

Update README.md

Files changed (1): README.md (+5 -1)
README.md CHANGED
@@ -1,10 +1,14 @@
  ---
  license: other
  ---
+ **There is no official 22b model, this is just a weird experiment, and any potential benefits of hacking on the architecture have not been validated in any formal manner**
+
  https://huggingface.co/chargoddard/llama2-22b-blocktriangular trained on one epoch of 52k rows of Stanford Alpaca. About 11 hours on a 3090.

  I had trouble with training using the other 22b method with `BLOCK_DIAGONAL=True`, but with this method, this is the first time I've been able to target all modules without breaking the output.

  `target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "gate_proj", "down_proj"]`

- Trained at 5e-5 with r=32. For more info see https://wandb.ai/nkpz/huggingface/runs/3oy5nbtv/workspace?workspace=user-nkpz
+ Trained at 5e-5 with r=32. For more info see https://wandb.ai/nkpz/huggingface/runs/3oy5nbtv/workspace?workspace=user-nkpz
+
+ It's been responding coherently enough that I'd need to run objective benchmarks to determine whether it's better or worse than stock llama 13b.
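
The `target_modules`, rank, and learning rate above describe a standard PEFT LoRA setup. As a minimal sketch only (this is not the training script from the commit; `lora_alpha`, `lora_dropout`, and the remaining `TrainingArguments` are assumptions), the reported values map onto `peft` roughly like this:

```python
# Minimal sketch: how the reported hyperparameters could map onto a PEFT LoRA
# config for the 22b base model. Values marked "assumption" are not stated in
# the README and are illustrative only.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("chargoddard/llama2-22b-blocktriangular")

lora_config = LoraConfig(
    r=32,                              # rank reported above
    lora_alpha=32,                     # assumption: alpha not stated
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "up_proj", "gate_proj", "down_proj"],
    lora_dropout=0.05,                 # assumption: dropout not stated
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

training_args = TrainingArguments(
    output_dir="llama2-22b-alpaca-lora",  # assumption: output path not stated
    learning_rate=5e-5,                   # learning rate reported above
    num_train_epochs=1,                   # one epoch of the 52k-row Alpaca set
)
```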