TehVenom committed
Commit e5e3578
1 Parent(s): 023a9a6

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -16,7 +16,7 @@ For a final model composed of:
 ----
 
 This was done to test the theory of how 'long context' tunes affect attention when merged with a model that was trained for a different purpose, on a shorter context span.
-Different from the first merge [(That sports a 50/50 ratio)](https://huggingface.co/TehVenom/mpt-7b-InstructAndStorywriting-50_50-Merge), this one is lopsided towards the Instruct base model to have another comparison point for the effects of CTX span merging, and to have a model that is primarily focused on Instruct.
+Different from the first merge [(That sports a 50/50 ratio)](https://huggingface.co/TehVenom/mpt-7b-InstructAndStorywriting-50_50-Merge), this one is lopsided towards the Chat base model to have another comparison point for the effects of CTX span merging, and to have a model that is primarily focused on Chatting.
 
 There are two objectives for this merge: the first is to see how much of the 65k-Storywriter model is necessary to raise the ceiling of the final model's context size,
 and the second is to make the base Chat model less dry, and slightly more fun / verbose, and intelligent by adding the literature / Instruct based models into it.
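
For context, the kind of lopsided merge described above is commonly implemented as a per-tensor weighted average of two same-architecture checkpoints. The sketch below shows that general technique, assuming Hugging Face `transformers` and PyTorch; the checkpoint names and the 75/25 ratio are illustrative assumptions, not the exact recipe used for this repo.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint names; the actual bases used for this repo may differ.
BASE = "mosaicml/mpt-7b-chat"          # short-context Chat base
DONOR = "mosaicml/mpt-7b-storywriter"  # 65k long-context donor
ALPHA = 0.75                           # fraction kept from the base model (illustrative)

base = AutoModelForCausalLM.from_pretrained(BASE, trust_remote_code=True)
donor = AutoModelForCausalLM.from_pretrained(DONOR, trust_remote_code=True)

donor_state = donor.state_dict()
merged_state = {}
for name, tensor in base.state_dict().items():
    if name in donor_state and tensor.dtype.is_floating_point:
        # Linear interpolation per tensor: ALPHA * base + (1 - ALPHA) * donor.
        merged_state[name] = ALPHA * tensor + (1.0 - ALPHA) * donor_state[name]
    else:
        # Keep non-float or base-only entries (e.g. integer buffers) as-is.
        merged_state[name] = tensor

base.load_state_dict(merged_state)
base.save_pretrained("./mpt-7b-chat-storywriter-lopsided-merge")
```

Note that plain interpolation only blends the weights; for MPT models the usable context ceiling is also governed by config values such as `max_seq_len`, which would need to be carried over from the long-context donor separately.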