DavidAU committed on
Commit a68268d
1 Parent(s): 38fcc09

Update README.md

Files changed (1)
  1. README.md +2 -2
@@ -84,8 +84,8 @@ One version is not stronger than the other, they are different and result in dif
 
  - This model requires "Command-R" template, responds to standard parameters, and has a max context of 128k (131,000).
  - "IQ" quants will be uploaded first, as the Imatrix effect is far stronger in these than "Q" quants. Q quants - all sizes - will upload after the IQ versions.
- - IQ4 is the most powerful/balanced in terms of raw power, however IQ3/IQ2 may be stronger "horror" wise due to increased IMATRIX effects the lower you go in terms of bit level.
- - IQ2s will also be effective due to sheer number of parameters (35 billion) in the model.
+ - IQ4 is the most powerful/balanced in terms of raw power (see examples below), however IQ3/IQ2 may be stronger "horror" wise due to increased IMATRIX effects the lower you go in terms of bit level.
+ - IQ2s will also be effective due to sheer number of parameters (35 billion) in the model. (see examples below)
  - Q4s and Q5s will still be very strong, with Q6 being medium-strong relatively speaking in terms of "horror" changes. This is due to how Imatrix process affects quants of different bit sizes - lower, is stronger, higher is weaker. Again, these are relative.
 
  <b>Optional Enhancement:</B>
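The diff above rests on the claim that lower-bit quants deviate more from the original weights than higher-bit ones. A minimal, self-contained sketch of that trade-off (plain uniform quantization — NOT llama.cpp's actual IQ/Q or imatrix schemes, and the weight values are invented for illustration):

```python
# Uniform symmetric quantization at different bit widths. Illustrates why
# a 2-bit quant ("IQ2"-like) drifts further from the original weights than
# a 4-bit quant ("IQ4"-like). Hypothetical toy data; not llama.cpp's method.

def quantize(weights, bits):
    """Round each weight to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (levels - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

def mean_abs_error(a, b):
    """Average absolute deviation between original and quantized weights."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Toy "weights" spread evenly across [-0.5, 0.5].
weights = [i / 100 - 0.5 for i in range(101)]

err2 = mean_abs_error(weights, quantize(weights, 2))  # 4 levels
err4 = mean_abs_error(weights, quantize(weights, 4))  # 16 levels

print(f"2-bit mean abs error: {err2:.4f}")
print(f"4-bit mean abs error: {err4:.4f}")
# The 2-bit error is larger: fewer levels, bigger change to the model.
```

This is only the generic precision side of the story; the README's point is that the imatrix calibration exploits that larger low-bit drift, so the "horror" shift is strongest in IQ2/IQ3 and mildest around Q6.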