I'm pretty confident that this is my best model to date. This is a combination o
The following categories were taken out of the Airoboros dataset and added to my own Lamia dataset:

"roleplay", "unalignment", "editor", "writing", "detailed_writing", "stylized_response", "unalign", "cot", "song"

I'm hoping that this can improve the model's narrative/storywriting ability, logic, and intelligence, while reducing any inherent ethical "alignment" that may be present in the base Mistral model from pretraining on ChatGPT-generated data.
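The category filter described above can be sketched roughly as follows. This is a minimal illustration, assuming each Airoboros record is a dict carrying a `category` field — the exact schema and field name are assumptions, not something stated in this README:

```python
# Categories pulled out of Airoboros, as listed above.
KEEP_CATEGORIES = {
    "roleplay", "unalignment", "editor", "writing", "detailed_writing",
    "stylized_response", "unalign", "cot", "song",
}

def extract_categories(records):
    """Return only the records whose 'category' is in KEEP_CATEGORIES.

    Assumes each record is a dict with a "category" key (hypothetical schema).
    """
    return [r for r in records if r.get("category") in KEEP_CATEGORIES]

# Example with made-up records:
sample = [
    {"category": "roleplay", "instruction": "..."},
    {"category": "coding", "instruction": "..."},
]
kept = extract_categories(sample)  # only the "roleplay" record survives
```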
The format is ChatML, and the base model is Yarn Mistral, which increases the context size to a true 16k+ rather than relying on the sliding attention window.
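For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch of building a prompt in that format (the helper name and the system message are illustrative, not part of this model card):

```python
def to_chatml(system: str, user: str) -> str:
    """Build a single-turn ChatML prompt, ending where the model's reply begins."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = to_chatml("You are a helpful assistant.", "Write a short song about rain.")
print(prompt)
```

The prompt deliberately ends after the opening `<|im_start|>assistant` tag, so generation continues as the assistant's reply.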