FPHam committed on
Commit
e2cb079
·
1 Parent(s): 1c71a1b

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -8,6 +8,6 @@ tags:
 LORA finetune for LLAMA 13B and VICUNA 13B trained on ~500 limericks dataset.
 Since most limericks are dirty - that's what you get with this LORA too.
 
-If anybody can merge this LORA with vicuna 13B and quantize to 4 bit - it would be great!
+The priming is easy, just start:
 
-Version: 0.1
+There was a man in