---
license: apache-2.0
---

This model was created as an experiment in using LoRA extraction to replicate [OpenChat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with [Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) as the base model instead of the original [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).

OpenChat-3.5-0106 is an excellent model, but it was built on Mistral-7B-v0.1, which has a context window of 8192 tokens; Mistral-7B-v0.2 has a context window of 32768 tokens. I could have extended OpenChat-3.5's context myself with RoPE scaling and/or YaRN, but that has already been done: there are many models on HF that do exactly that. Instead, I decided to try to replicate OpenChat-3.5-0106 using the LoRA extraction method available in mergekit. These are the steps I followed (illustrative sketches of each step appear after the list):
- Extract a LoRA with rank 512 from OpenChat-3.5-0106, using [One](https://huggingface.co/imone)'s [Mistral_7B_with_EOT_token](https://huggingface.co/imone/Mistral_7B_with_EOT_token) as the base model.
- Replicate imone's work by adding the EOT token to Mistral-7B-v0.2, creating [Mistral-7B-v0.2_EOT](https://huggingface.co/Pretergeek/Mistral-7B-v0.2_EOT).
- Merge the LoRA weights into the Mistral-7B-v0.2_EOT model.
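
Step 1 uses mergekit's `mergekit-extract-lora` command-line tool. A minimal sketch, invoked from Python; the flags are an assumption (they have changed across mergekit releases) and the output path is hypothetical, so check `mergekit-extract-lora --help` for your installed version:

```python
import subprocess

# Extract a rank-512 LoRA that captures the difference between
# OpenChat-3.5-0106 and imone's EOT-token Mistral base.
subprocess.run(
    [
        "mergekit-extract-lora",
        "openchat/openchat-3.5-0106",       # fine-tuned model
        "imone/Mistral_7B_with_EOT_token",  # base model
        "./openchat-3.5-0106-lora-r512",    # output directory (hypothetical)
        "--rank=512",
    ],
    check=True,
)
```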
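
Step 2 can be reproduced with plain transformers. This sketch assumes the EOT token is OpenChat's `<|end_of_turn|>` string and uses a straightforward embedding resize; how imone initialized the new embedding row is not documented here, so treat this as an approximation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistral-community/Mistral-7B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")

# Register the end-of-turn token OpenChat uses to terminate chat turns.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|end_of_turn|>"]})

# Grow the embedding matrix (and LM head) to cover the new token id.
model.resize_token_embeddings(len(tokenizer))

model.save_pretrained("./Mistral-7B-v0.2_EOT")
tokenizer.save_pretrained("./Mistral-7B-v0.2_EOT")
```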
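
For step 3, one possible implementation is peft's `merge_and_unload`, which bakes the adapter weights back into the dense base weights; whether this exact route was used is not stated. The full model plus the adapter must be held in memory at once, which is why higher-rank LoRAs (mentioned below) get expensive:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "./Mistral-7B-v0.2_EOT", torch_dtype=torch.float16
)

# Apply the extracted LoRA on top of the new base, then merge the
# low-rank deltas into the dense weights and drop the adapter wrappers.
merged = PeftModel.from_pretrained(base, "./openchat-3.5-0106-lora-r512")
merged = merged.merge_and_unload()

merged.save_pretrained("./OpenChat-3.5-0106-replica")
AutoTokenizer.from_pretrained("./Mistral-7B-v0.2_EOT").save_pretrained(
    "./OpenChat-3.5-0106-replica"
)
```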

This is the result. This model is not meant for use; it was created to test whether this method is viable for swapping out the base model of a fine-tuned model (when the tokenizer and weights have not changed too much). I am uploading it here for evaluation. I don't expect this model to match the original OpenChat-3.5-0106, since I used a LoRA with rank 512, which won't be equivalent to a full fine-tune. I have been able to extract LoRAs with higher rank, but I currently don't have the resources to merge them into the model, as the memory requirements exceed what I have at my disposal.

If you would like to support my work, check out my Ko-Fi and/or Patreon:
* https://ko-fi.com/pretergeek
* patreon.com/Pretergeek