Update README.md
README.md CHANGED
@@ -17,7 +17,7 @@ By concatenating layers from different LLMs, it can produce models with an exotic
 
 Many thanks to [Abacaj](https://huggingface.co/abacaj) for providing the [fine-tuned weights](https://huggingface.co/abacaj/phi-2-super) that were used in the creation of this base model. You can find the full script for how the model was merged [here](https://huggingface.co/Kquant03/Phi-Stoma/blob/main/mergekit_config.yml)...thanks to [KatyTheCutie](https://huggingface.co/KatyTheCutie) for inspiring me to test out this script.
 
-## This idea was brought to me by [The Face of Goonery](https://huggingface.co/The-Face-Of-Goonery), also known as Caleb Morgan. I have him to thank if fine-tuning this model turns out to be a success
+## This idea was brought to me by [The Face of Goonery](https://huggingface.co/The-Face-Of-Goonery), also known as Caleb Morgan. I have him to thank if fine-tuning this model turns out to be a success...he also helped me to make this model even larger than the prior one.
 # How to run inference:
 
 ```python
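The hunk ends right at the opening ```python fence, so the README's actual inference snippet is not visible in this diff. For orientation, here is a minimal sketch of what inference with a Phi-2-derived merge like this typically looks like using the `transformers` library; the model id is taken from the merge-config link above, while the prompt, dtype, and generation settings are illustrative assumptions rather than the README's own values.

```python
# Hedged sketch: typical transformers inference for a Phi-2-derived merge.
# "Kquant03/Phi-Stoma" comes from the merge-config link above; the prompt and
# generation parameters below are illustrative, not the README's actual example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kquant03/Phi-Stoma"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit the merged model in GPU memory
    device_map="auto",          # let accelerate place the layers automatically
)

prompt = "Explain, in one paragraph, what merging layers from different LLMs does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that older `transformers` releases needed `trust_remote_code=True` to load Phi-2-style checkpoints; recent versions support the Phi architecture natively, so the flag may or may not be required depending on your environment.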