ChuckMcSneed committed
Commit
9498514
1 Parent(s): 1a473cc

Update README.md

Files changed (1)
  1. README.md +6 -3
README.md CHANGED
@@ -14,10 +14,10 @@ Like B-test, but the task is simpler and the distracting text is way more annoyi
  This test is designed around breaking expectations. It consists of a common math trick, but with a twist: there is no math involved, just reading. It also has an extensive section at the end to guide the model into breaking the overtrained conditioning. Models get 1 point for getting the answer right and up to 2 points for the right reasoning.
 
  ## P-test:
- Poems. The model passes each poem test by writing coherently and in rhyme. 1 point for each poem.
+ Poems. The model passes each poem test by writing coherently and in rhyme. 1 point for each poem. 6 in total.
 
  ## S-test:
- Stylized writing. Models are asked to explain a concept in a distinct writing style or as if they are a character. Up to 1 point for each style. Models are penalized for failing to explain the concept or to keep the style all the way through the explanation.
+ Stylized writing. Models are asked to explain a concept in a distinct writing style or as if they are a character. Up to 1 point for each style. Models are penalized for failing to explain the concept or to keep the style all the way through the explanation. 8 in total.
 
  # What does each of the tests measure I dont understand111!!!11!
  BCD=following commands
@@ -39,10 +39,13 @@ What they show is quite interesting:
  - Una-xaberius shows that overtraining on benchmarks leads to loss of creativity and does not make the model smarter
  - Solar-instruct, despite its small size, can still write poems, but is incapable of writing in style
  - ChatGPT can't pass the B-test due to its filter; the C-test and P-test were not performed for that reason. It is incredibly good at stylized writing though, outperforming ALL tested local models. It can't pass the D-test due to overfitting.
+ - Cybertron seems to perform at approximately the same level as Solar-instruct; it was also surprisingly okay at writing poems
+ - Neither Cybertron nor Solar-instruct outperforms 70B models as they claim. Both are unable to follow advanced instructions (BCD tests).
 
  # More tests?
  Feel free to suggest more models for testing by opening a new discussion. Mention the model name, its size and why you want to test it.
 
  # Updates
  2023-12-19
- Added solar-instruct (suspiciously high benchmarks), una-xaberius (known cheater) and ChatGPT. Some tests were not performed with ChatGPT because Sam will ban me for them.
+ Added solar-instruct (suspiciously high benchmarks), una-xaberius (known cheater) and ChatGPT. Some tests were not performed with ChatGPT because Sam will ban me for them.
+ Added cybertron v3 per request of @fblgit.
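
As a reference for the scoring described in the diff (1 point per poem across 6 poems in the P-test, up to 1 point per style across 8 styles in the S-test, and 1 point plus up to 2 points in the D-test), here is a minimal sketch of how those points could be tallied per test. The aggregation (a plain sum clamped to the per-test maximum) is an assumption; the README only gives per-item points and counts, and the `tally` helper and its example inputs are purely illustrative.

```python
# Illustrative sketch only: the README gives per-item points and counts, but no
# explicit aggregation formula; summing per test is an assumption.
from typing import Dict, List

# Maximum points per test, taken from the README: P-test = 6 poems x 1 point,
# S-test = 8 styles x up to 1 point, D-test = 1 (answer) + up to 2 (reasoning).
MAX_POINTS = {"P": 6.0, "S": 8.0, "D": 3.0}

def tally(scores: Dict[str, List[float]]) -> Dict[str, float]:
    """Sum per-item scores for each test, clamped to that test's maximum."""
    return {
        test: min(sum(items), MAX_POINTS.get(test, float("inf")))
        for test, items in scores.items()
    }

# Hypothetical example: a model that rhymes in 4 of 6 poems and only
# half-keeps each of the 8 styles.
print(tally({"P": [1.0, 1.0, 1.0, 1.0, 0.0, 0.0], "S": [0.5] * 8}))
# -> {'P': 4.0, 'S': 4.0}
```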