Commit 44b5268
Parent: b8e6174

Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ Tests have shown that the model does indeed leverage the extended context at 8K,
 #### Using the monkey-patch?
 You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.125 and the maximum sequence length to 16384**
 
-#### Using Oobabooga
+#### Using Oobabooga with Exllama?
 - `python server.py --max_seq_len 16384 --compress_pos_emb 8 --loader exllama_hf`
 
 I trained the LoRA with the following configuration:
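The numbers in the patched README lines fit together: with linear RoPE position interpolation, a scaling factor of 0.125 maps a 16384-token window back into the base Llama 2048-position range (16384 × 0.125 = 2048), which is also why the Oobabooga command passes `--compress_pos_emb 8`, the reciprocal of 0.125. Below is a minimal sketch of that interpolation, assuming a standard rotary-embedding setup; the head dimension and constant names are illustrative and not taken from the repo's monkeypatch.

```python
# Minimal sketch of linear RoPE position interpolation (illustrative, not the
# repo's monkeypatch): position ids are scaled by 0.125 so that a 16384-token
# window maps back into the base model's original 2048-position range.
import torch

SCALING_FACTOR = 0.125   # 1 / 8, matches --compress_pos_emb 8
MAX_SEQ_LEN = 16384      # extended context length
HEAD_DIM = 128           # assumed head dimension, for illustration only
BASE = 10000.0           # standard RoPE base

# Inverse frequencies, as in standard rotary embeddings.
inv_freq = 1.0 / (BASE ** (torch.arange(0, HEAD_DIM, 2).float() / HEAD_DIM))

# The only change linear interpolation makes: scale the position ids.
positions = torch.arange(MAX_SEQ_LEN, dtype=torch.float32) * SCALING_FACTOR

freqs = torch.outer(positions, inv_freq)   # (16384, HEAD_DIM // 2)
cos, sin = freqs.cos(), freqs.sin()        # cached and applied to q/k as usual

# Sanity check: the largest scaled position stays inside the original 2048 range.
assert positions.max() < 2048
```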