Doctor-Shotgun committed 327ab26 (1 parent: 8e8da0d)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -97,11 +97,11 @@ special_tokens:
 
 # limarp-miqu-1-70b-qlora
 
-Experimental limarp qlora trained at 16384 ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on the fixed dequantized miqu-1-70b model by 152334H.
+Experimental limarp qlora trained at 16384 ctx length (greater than size of the longest limarp sample when tokenized via llama's tokenizer) on the fixed dequantized miqu-1-70b model by 152334H.
 
 I wasn't particularly happy with the results I got when I tried applying the lora at varying weights to the miqu-1-70b model. It's possible that this is related to the fact that the model was dequantized from Q5_K_M GGUF, or perhaps due to it already being an instruct-tuned model.
 
-However, I decided to go ahead and release this in case someone else finds a use for it.
+However, I decided to go ahead and release this in case someone else finds a use for it. Provided as-is and YMMV.
 
 ## Model description
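For context, "applying the lora at varying weights" means scaling the low-rank delta before merging it into a base weight matrix. A minimal numpy sketch under assumed illustrative values (the dimensions, `rank`, `lora_alpha`, and the `merge` helper are all hypothetical, not from the actual training config):

```python
import numpy as np

# Illustrative toy dimensions; a real 70b model has much larger matrices.
rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 8, 2
lora_alpha = 16

base = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((rank, d_in))       # lora_A (down-projection)
B = rng.standard_normal((d_out, rank))      # lora_B (up-projection)

def merge(base, A, B, lora_alpha, rank, weight=1.0):
    """Merge the LoRA delta at a chosen strength:
    base + weight * (alpha / rank) * (B @ A)."""
    return base + weight * (lora_alpha / rank) * (B @ A)

# "varying weights": scale the adapter's influence from 0 (off) to 1 (full).
half = merge(base, A, B, lora_alpha, rank, weight=0.5)
full = merge(base, A, B, lora_alpha, rank, weight=1.0)
```

Because the delta is linear in `weight`, a half-weight merge shifts the base exactly halfway toward the full merge, which is why sweeping the weight is a cheap way to probe how strongly the adapter fights the base model's existing instruct tuning.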