Commit 37ad7ea by Pretergeek (parent: a270932): Update README.md
README.md CHANGED

@@ -12,4 +12,4 @@ Openchat-3.5-0106 is an excellent model but was based on Mistral-7B-v0.1 which h
 This is the result. This model is not meant for use; it was created to test whether this method is viable for replacing the base model of fine-tuned models (when the tokenizer and weights have not been changed too much). I am uploading it here for evaluation. I don't expect this model to match the original OpenChat-3.5-0106, since I used a LoRA of rank 512, which is not equivalent to a full fine-tune. I have been able to extract LoRAs of higher rank, but I currently lack the resources to merge them with the model, as the memory requirements exceed what I have at my disposal.
 If you would like to support my work, check my Ko-Fi and/or Patreon:
 * https://ko-fi.com/pretergeek
-* patreon.com/Pretergeek
+* https://patreon.com/Pretergeek
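As a rough illustration of the merge step described above (not the author's exact extraction pipeline; the dimensions and variable names below are made up toy values), folding an extracted LoRA into a base model's weight is just adding a scaled low-rank update to the dense matrix:

```python
import numpy as np

# Toy dimensions; a real 7B-model projection is on the order of 4096x4096,
# which is why merging many high-rank LoRA layers gets memory-hungry.
d, r, alpha = 8, 2, 2.0  # hidden size, LoRA rank, LoRA scaling numerator

rng = np.random.default_rng(0)
W_base = rng.standard_normal((d, d))  # weight from the new base model
A = rng.standard_normal((r, d))       # extracted LoRA down-projection
B = rng.standard_normal((d, r))       # extracted LoRA up-projection

# Merging folds the low-rank update into the dense weight:
#   W_merged = W_base + (alpha / r) * B @ A
W_merged = W_base + (alpha / r) * B @ A

# The update B @ A has rank at most r, so a rank-512 LoRA can only
# approximate, never exactly reproduce, an arbitrary full fine-tune.
delta = W_merged - W_base
print(np.linalg.matrix_rank(delta))  # at most r
```

The same low-rank constraint explains the caveat in the README: the higher the extraction rank, the closer the merged model can get to the original fine-tune, at the cost of larger intermediate matrices during the merge.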