Update README.md
README.md CHANGED
@@ -108,7 +108,6 @@ using the ([Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mist
 This model was trained using the ChatML format, so it should be prompted with the ChatML chat format for inference.
 We chose this format because the base model ([Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca)) was trained with it, and we find the chat format more compelling for practical use than the Alpaca-style instruction format.

-
 We trained for 1 epoch using the following Axolotl config. (Early stopping was not performed during our training.)
 <details><summary>Axolotl config .yaml</summary>

@@ -197,4 +196,6 @@ special_tokens:
 unk_token: "<unk>"
 ```

-</details>
+</details>
+
+[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
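
The first hunk above says inference should use the ChatML chat format. As a rough illustration (not part of the README change itself), the sketch below shows what a ChatML prompt looks like; the `to_chatml` helper and the example messages are hypothetical, and only the `<|im_start|>` / `<|im_end|>` markers come from the ChatML format.

```python
# Minimal sketch of hand-building a ChatML prompt (hypothetical helper, not from the README).
def to_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts into ChatML markup."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # A trailing assistant header cues the model to respond as the assistant.
    return "\n".join(parts) + "\n<|im_start|>assistant\n"

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize ChatML in one sentence."},
])
print(prompt)
```

If the released tokenizer ships a ChatML chat template, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` from `transformers` should produce an equivalent string.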