sydonayrex committed
Commit 33a46f0
Parent: 204ccef

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ The base of this model is Mistral Instruct 0.3 that has been supersized using ta
 
  In addition to the layer merging, the model has been further fine tuned using SFT using Unsloth to act as a base for further training and experimentation with DPO or ORPO (current DPO project in the process of being trained using Axolotl.)
 
- If you find the LLM is acting as if it has had a stroke, see if you have flash attn turned off and enable it is so. This seemed to correct any issues I had when running the model in LM Studio.
+ If you find the LLM is acting as if it has had a stroke, see if you have flash attn turned off and enable it if it is. This seemed to correct any issues I had when running the model in LM Studio.
 
  GGUFs are available here:
 
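As a point of reference for the flash attention tip in this change: when running the GGUF outside LM Studio, flash attention can usually be toggled at load time. Below is a minimal sketch using llama-cpp-python, assuming a build recent enough to expose the `flash_attn` flag; the model filename is illustrative, not the actual release artifact.

```python
from llama_cpp import Llama

# Load a GGUF with flash attention enabled (filename is illustrative).
# flash_attn requires a llama.cpp build with flash attention support and
# generally expects the layers to be offloaded to the GPU (n_gpu_layers=-1 = all).
llm = Llama(
    model_path="model.gguf",
    n_gpu_layers=-1,
    flash_attn=True,
    n_ctx=8192,
)

# Quick sanity check that generations are coherent with flash attention on.
out = llm("Q: Name one planet in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```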