# -=- UNDER CONSTRUCTION -=-

_Please note that even though this repo is public, the information provided below is not yet complete._


# The LLM Creativity benchmark

Last updated: 26 Feb 2024

The goal of this benchmark is to evaluate the ability of Large Language Models to be used
as an **uncensored creative writing assistant**. I evaluate the results manually to assess
the quality of the writing.

There are 24 questions: some are standalone, while others are follow-ups to previous questions, forming a multi-turn conversation.
The questions can be split in half in two different ways:

## First split: sfw / nsfw
* **sfw**: 50% are safe questions that should not trigger any guardrail
* **nsfw**: 50% are questions covering a wide range of NSFW and illegal topics, which test for censorship

## Second split: story / smart
* **story**: 50% of the questions are creative writing tasks, covering both sfw and nsfw topics
* **smart**: 50% of the questions test the model's capabilities as an assistant, again covering both sfw and nsfw topics

## What is _not_ included
* roleplay
* mathematics
* coding
* trick questions

# Results

![image/png](https://huggingface.co/datasets/froggeric/creativity/resolve/main/benchmark_results_2024-02-26.png)

# Remarks about some of the models


# Questions type

I will not provide the exact questions used, for various reasons, but I can give a general idea of the areas they cover:


# Other interesting benchmarks

- https://eqbench.com/
- https://chat.lmsys.org/