froggeric committed
Commit e78630d
1 parent: fc384ba

New results and observations from 2024-04-16

Files changed (1): README.md (+26 -4)
@@ -12,7 +12,7 @@ _"The only difference between Science and screwing around is writing it down."_
 
 # The LLM Creativity benchmark
 
-_Last benchmark update: 12 Mar 2024_
+_Last benchmark update: 16 Apr 2024_
 
 The goal of this benchmark is to evaluate the ability of Large Language Models to be used
 as an **uncensored creative writing assistant**. Human evaluation of the results is done manually,
@@ -31,10 +31,34 @@ The questions can be split half-half in 2 possible ways:
 
 # Results
 
-![image.png](https://cdn-uploads.huggingface.co/production/uploads/65a681d3da9f6df1410562e9/U1nIwW5eUBVZtOvNBfuWK.png)
+![benchmark-results.png](https://cdn-uploads.huggingface.co/production/uploads/65a681d3da9f6df1410562e9/QgWSW4sbG-YV6lte4oVGE.png)
 
 # Remarks about some of the models
 
+[CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)\
+A big step up for open LLM models. Remember, this is a base model, not an instruct model: instead of answering questions, it works best when given the beginning of an answer to complete.
+
+[CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)\
+Amazing at such a small size. Only one third the size of its big brother, yet not far behind, and ahead of most other large models. System prompts tend to create unexpected behaviour, like continuations or forum discussions! Better to avoid them.
+
+[sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5)\
+Fantastic! The first model I have tested that actually understands humour, and it made me laugh a few times. One small drawback: it has a tendency to keep writing beyond what was requested instead of stopping as instructed.
+
+[MarsupialAI/LaDameBlanche-v2-95b](https://huggingface.co/MarsupialAI/LaDameBlanche-v2-95b)\
+Completely unrestricted. Follows instructions well.
+
+[crestf411/daybreak-miqu-1-70b-v1.0-hf](https://huggingface.co/crestf411/daybreak-miqu-1-70b-v1.0-hf)\
+Has some annoying turns of phrase that it likes to use over and over again.
+
+[nsfwthrowitaway69/Venus-120b-v1.2](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2)\
+Self-merge of lzlv.
+
+[nsfwthrowitaway69/Venus-103b-v1.1](https://huggingface.co/nsfwthrowitaway69/Venus-103b-v1.1)\
+Amazing level of detail, and unrushed storytelling. Can produce real gems, but can also fail miserably.
+
+
+**Previously:**
+
 [wolfram/miqu-1-103b](https://huggingface.co/wolfram/miqu-1-103b)\
 Has slightly more difficulty following instructions than the 120b merge. Also produces more annoying repetitions and reuse of expressions.
 The q5_ks quant is a slight improvement over q4_km, but as it uses more memory, it reduces what is available for context. Still, with 96GB I can use a context larger than 16k.
@@ -48,8 +72,6 @@ Very creative, which makes for some great writing, but it also means it has a ha
 [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)\
 Great writing with lots of details, taking sufficient time to develop the plot. The small context size, though, is a limiting factor for consistency.
 
-**Previously:**
-
 [wolfram/miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b)\
 This frankenmerge has dramatically improved over the original 70b miqu, and somehow it has also become less likely to refuse to answer! It's a huge improvement. Still has the same tendencies as the original: it likes to use lists when replying, and double line breaks in the prompt reduce the quality of the reply.
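The command-r-plus remark notes that a base model completes text rather than answering questions. A minimal sketch of the two prompt shapes (the question and seed text here are made-up examples, not part of the benchmark):

```python
def instruct_prompt(question: str) -> str:
    """Prompt shape for an instruct-tuned model: ask the question directly."""
    return f"Question: {question}\nAnswer:"


def base_completion_prompt(question: str, answer_start: str) -> str:
    """Prompt shape for a base model such as c4ai-command-r-plus:
    instead of asking, supply the beginning of the desired answer
    and let the model continue it."""
    return f"{question}\n\n{answer_start}"


prompt = base_completion_prompt(
    "Write the opening paragraph of a gothic short story.",
    "The house at the end of Larkspur Lane had been empty for",
)
```

The base model then continues from "…had been empty for", so you steer it by how you word the seed text rather than by instructions.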
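The miqu-1-103b note trades quant quality against context memory. A back-of-envelope sketch of that trade-off for a 103b model in 96GB (the bits-per-weight figures are rough assumptions for llama.cpp k-quants, not measured values):

```python
def weight_gib(n_params_b: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GiB."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 2**30


# Illustrative averages only: ~4.85 and ~5.5 bits/weight are rough
# figures for llama.cpp q4_K_M and q5_K_S quants of a 103b model.
q4 = weight_gib(103, 4.85)   # roughly 58 GiB of weights
q5 = weight_gib(103, 5.5)    # roughly 66 GiB of weights

# Memory left over for KV cache and overhead in a 96GB machine;
# the larger q5 quant leaves less room, hence a smaller usable context.
headroom_q5 = 96 - q5
headroom_q4 = 96 - q4
```

Under these assumptions the q5 quant still leaves tens of GiB for the KV cache, which is consistent with the remark that a >16k context remains usable.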