---
license: wtfpl
---

Since the automatic open-source benchmark leaderboard got flooded with incoherent, overtrained cheater meme models, I decided to take matters into my own hands and create my own set of proprietary tests. The aim of these tests is not to see how smart a model is, but how good it is at executing commands and at creative writing, in a reasonably quantifiable way. All tests are run in koboldcpp with temperature ≈ 0, top-p ≈ 0, and repetition penalty = 1. The model-appropriate prompt format is used, unless it doesn't work.
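For reference, settings like these can be sent to a locally running koboldcpp instance through its KoboldAI-compatible HTTP API. A minimal sketch — the endpoint path and field names follow the standard koboldcpp API, the port is the koboldcpp default, and the prompt is a placeholder, not one of the actual tests:

```python
import json
import urllib.request

# Sampler settings used for all tests: near-greedy, deterministic decoding.
payload = {
    "prompt": "### Instruction:\n<test prompt goes here>\n### Response:\n",  # placeholder
    "max_length": 512,
    "temperature": 0.01,  # ≈ 0: always pick the most likely continuation
    "top_p": 0.01,        # ≈ 0: only the top of the distribution survives
    "rep_pen": 1.0,       # repetition penalty effectively disabled
}

def run_test(url: str = "http://localhost:5001/api/v1/generate") -> str:
    """Send the payload to a local koboldcpp server and return the generated text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

With these settings, reruns of the same prompt should give (nearly) identical output, which is what makes the scoring reproducible.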

Currently I have the following tests:

B-test:

This test is designed to establish the baseline of the model. It consists of a main task and a bunch of text which the model has to ignore while still executing the task. If the model immediately refuses or fails to comply in a logical way, it fails (0/3). After the initial request it gets bombarded with text; it earns 1 point for reaching the first checkpoint (1/3), another point for passing the test fully (2/3), and a final point for exiting the test successfully (3/3).
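The checkpoint scoring above can be written down as a tiny function — the flag names are mine, just labels for the stages described in the text:

```python
def score_b_test(complied: bool, reached_checkpoint: bool,
                 passed_fully: bool, exited: bool) -> int:
    """Score one B-test run on the 0-3 scale described above."""
    if not complied:           # refused or failed to comply logically -> 0/3
        return 0
    score = 0
    if reached_checkpoint:     # survived the first wall of text (1/3)
        score += 1
    if passed_fully:           # passed the test fully (2/3)
        score += 1
    if exited:                 # exited the test successfully (3/3)
        score += 1
    return score
```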

C-test:

Like the B-test, but the task is simpler and the distracting text is far more annoying. Since the task is much simpler, there are fewer points to gain: the model gets 1 point for getting past the main distractions and another point for successfully exiting the task. The model is penalized for writing more than necessary, e.g. "(Note: as an AI language model...)".

D-test:

This test is designed around breaking expectations. It consists of a common math trick, but with a twist: there is no math involved, just reading. It also has an extensive section at the end to guide the model into breaking its overtrained conditioning. A model gets 1 point for getting the answer right and up to 2 points for the right reasoning.

P-test:

Poems. A model passes each poem test by writing coherently and in rhyme: 1 point per poem.

S-test:

Stylized writing. Models are asked to explain a concept in a distinct writing style or as if they were a character, for up to 1 point per style. Models are penalized for failing to explain the concept or to keep the style going all the way through the explanation.

What does each of the tests measure I dont understand111!!!11!

BCD=following commands

PS=creative writing
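The grouping can be expressed as a one-step aggregation — the dictionary of per-test scores here is illustrative, not real results:

```python
# Per-model raw scores keyed by test letter; example values, not actual results.
scores = {"B": 3, "C": 2, "D": 2, "P": 4, "S": 5}

def category_totals(s: dict) -> dict:
    """Split per-test scores into the two categories described above."""
    return {
        "following_commands": s["B"] + s["C"] + s["D"],  # B-, C-, D-tests
        "creative_writing": s["P"] + s["S"],             # P- and S-tests
    }
```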

RESULTS AND DISCUSSION

Here are the results of each test. You can see the pure data in the file LLM-test.csv.

What they show is quite interesting:

  • Goliath is the best at following commands, followed by Qwen and Nous-Hermes
  • Goliath, Xwin and Mixtral are the best at creative writing
  • Qwen is terrible at creative writing but good at following commands; Mixtral is the opposite
  • Size matters! 34B Nous-Capybara is the worst on average, likely due to its size
  • Xwin, Goliath and Mixtral are the best at stylized writing
  • Goliath, Euryale, Xwin and Mixtral are the only ones capable of writing coherent poems most of the time
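Per-model averages like the one behind the "worst on average" claim can be computed straight from the CSV. A sketch — the column layout below is hypothetical (check LLM-test.csv for the actual schema), and the model names and numbers are made up for illustration:

```python
import csv
import io

# Hypothetical miniature of LLM-test.csv; the real file's columns may differ.
sample = """model,B,C,D,P,S
ExampleModel-70B,3,2,2,4,5
ExampleModel-34B,1,1,0,2,3
"""

def averages(csv_text: str) -> dict:
    """Average each model's scores across all test columns."""
    out = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        vals = [float(v) for k, v in row.items() if k != "model"]
        out[row["model"]] = sum(vals) / len(vals)
    return out
```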

More tests?

Feel free to suggest more models for testing by opening a new discussion. Mention the model name, size, and why you want it tested.