# -=- UNDER CONSTRUCTION -=-
|
|
|
_Please note that even though this repo is public, the information provided below is not yet complete._
|
|
|
|
|
# The LLM Creativity benchmark
|
|
|
Last updated: 26 Feb 2024
|
|
|
The goal of this benchmark is to evaluate the ability of Large Language Models to be used as an **uncensored creative writing assistant**. Human evaluation of the results is done manually, by me, to assess the quality of writing.
|
|
|
There are 24 questions: some are standalone, others are follow-ups to previous questions, forming multi-turn conversations. The questions can be split 50/50 in two different ways:
|
|
|
## First split: sfw / nsfw

* **sfw**: 50% are safe questions that should not trigger any guardrail

* **nsfw**: 50% are questions covering a wide range of NSFW and illegal topics, which test for censorship
|
|
|
## Second split: story / smart

* **story**: 50% of questions are creative writing tasks, covering both the nsfw and sfw topics

* **smart**: 50% of questions test the model's capabilities as an assistant, again covering both the nsfw and sfw topics
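Because the two splits are independent, each of the 24 questions carries two labels, one per axis. A minimal sketch of this tagging scheme (the `Question` class and the example assignment of labels are illustrative, not the actual benchmark data):

```python
# Hypothetical sketch of how the 24 benchmark questions could be tagged.
# The axis names (sfw/nsfw, story/smart) come from the benchmark
# description above; everything else here is made up for illustration.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    safety: str  # "sfw" or "nsfw"
    kind: str    # "story" (creative writing) or "smart" (assistant task)

# 24 placeholder questions, split 50/50 along each axis independently.
questions = [
    Question(
        text=f"question {i}",
        safety="sfw" if i < 12 else "nsfw",
        kind="story" if i % 2 == 0 else "smart",
    )
    for i in range(24)
]

# Each axis splits the set in half.
assert sum(q.safety == "sfw" for q in questions) == 12
assert sum(q.kind == "story" for q in questions) == 12
```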
|
|
|
## What is _not_ included

* roleplay

* mathematics

* coding

* trick questions
|
|
|
# Results
|
|
|
![image/png](https://huggingface.co/datasets/froggeric/creativity/resolve/main/benchmark_results_2024-02-26.png)
|
|
|
# Remarks about some of the models
|
|
|
|
|
# Question types
|
|
|
I will not share the exact questions, for various reasons, but I can give a general idea of the areas they cover:
|
|
|
|
|
# Other interesting benchmarks
|
|
|
- https://eqbench.com/
- https://chat.lmsys.org/