kaiokendev committed
Commit 44b5268 · Parent: b8e6174

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -14,7 +14,7 @@ Tests have shown that the model does indeed leverage the extended context at 8K,
 #### Using the monkey-patch?
 You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.125 and the maximum sequence length to 16384**
 
-#### Using Oobabooga or Exllama?
+#### Using Oobabooga with Exllama?
 - `python server.py --max_seq_len 16384 --compress_pos_emb 8 --loader exllama_hf`
 
 I trained the LoRA with the following configuration:
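
For context on the settings the changed lines refer to, here is a minimal sketch of linear RoPE position interpolation, the technique this kind of monkeypatch applies. It assumes the standard LLaMA rotary-embedding layout; the class and attribute names are illustrative, not the patch's actual API. A scaling factor of 0.125 compresses 16384 positions into the 2048-position range the base model was trained on:

```python
# A minimal sketch of linear RoPE position interpolation (assumption:
# this mirrors the general idea behind the monkeypatch; names are
# illustrative, not the patch's real API).
import torch


class ScaledRotaryEmbedding(torch.nn.Module):
    def __init__(self, dim, max_seq_len=16384, scale=0.125, base=10000):
        super().__init__()
        # 0.125 = 2048 / 16384: squeeze 16K positions into the 2K range
        # the base model was trained on.
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        t = torch.arange(max_seq_len).float() * scale  # interpolated positions
        freqs = torch.einsum("i,j->ij", t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos())
        self.register_buffer("sin_cached", emb.sin())

    def forward(self, seq_len):
        # Return the cos/sin tables for the first seq_len positions.
        return self.cos_cached[:seq_len], self.sin_cached[:seq_len]
```

The `--compress_pos_emb 8` flag in the Oobabooga command expresses the same interpolation as a divisor rather than a multiplier: positions are divided by 8, which matches the 0.125 scaling factor used with the monkeypatch.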