license: apache-2.0
language:
- en
pretty_name: 100 system prompts for benchmarking large language models
size_categories:
- n<1K
Dataset Card for 100 System Prompts
This dataset is a collection of 100 system prompts for benchmarking large language models.
Dataset Details
Dataset Description
These 100 system prompts test a model's ability to follow grammatical patterns, answer basic multiple-choice questions, act according to a particular persona, memorize information, and speak in French.
Refer to 100_system_prompts.py to run the (system prompt, probe, testing function) triplets; 100_system_prompts.json is provided purely for display purposes. To run two of the tests, you will first need to extract frequent_words.txt and one_syllable_words.txt from frequent_words_and_one_syllable_words.zip.
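Conceptually, each triplet is exercised by sending the system prompt and probe to a model and scoring the response with the triplet's testing function. The sketch below is illustrative only; `query_model` and the toy triplet are hypothetical stand-ins, not the actual contents of 100_system_prompts.py.

```python
def query_model(system_prompt: str, probe: str) -> str:
    """Hypothetical stand-in for a call to the model under evaluation."""
    return "Hello. DONE"  # replace with a real API call


def run_triplet(system_prompt: str, probe: str, test_fn) -> bool:
    """Send the system prompt and probe to the model, then score the
    response with the triplet's testing function."""
    response = query_model(system_prompt, probe)
    return test_fn(response)


# Toy triplet (not from the dataset), just to show the shape:
ok = run_triplet(
    "Always end your reply with the word DONE.",   # toy system prompt
    "Say hello.",                                  # toy probe
    lambda response: response.strip().endswith("DONE"),
)
print(ok)  # True with the placeholder response above
```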
- Curated by: Naomi Bashkansky
- Language(s) (NLP): en
- License: apache-2.0
Dataset Sources
- Repository: https://github.com/likenneth/persona
- Paper: Forthcoming.
Uses
A benchmark for large language models: how well do LLMs follow a system prompt? It tests both basic capability (is a model able to follow the system prompt?) and basic alignment (does a model that can follow the system prompt actually do so?).
It can be used to compare different models, or to guide interventions on a model that make it better at following system prompts.
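One natural way to compare models is to report the fraction of the 100 triplets whose testing function accepts the model's response. The sketch below assumes the triplets have been loaded as (system_prompt, probe, test_fn) tuples and that `model_query_fn` wraps a call to the model under evaluation; both are hypothetical stand-ins for what 100_system_prompts.py actually provides.

```python
def pass_rate(model_query_fn, triplets) -> float:
    """Fraction of (system prompt, probe, testing function) triplets whose
    testing function accepts the model's response.

    Assumes `triplets` is a list of (system_prompt, probe, test_fn) tuples
    and `model_query_fn(system_prompt, probe)` returns the model's reply.
    """
    passed = sum(
        1
        for system_prompt, probe, test_fn in triplets
        if test_fn(model_query_fn(system_prompt, probe))
    )
    return passed / len(triplets)
```

Scores computed this way can be compared across models, or before and after an intervention intended to improve system-prompt following.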
Direct Use
This dataset is released under an open-source license (Apache-2.0). Researchers are especially encouraged to use it.
Dataset Structure
Each of the 100 entries is a triplet: a system prompt, a probe sent to the model, and a Python testing function that checks whether the model's response follows the system prompt. The runnable triplets are defined in 100_system_prompts.py; 100_system_prompts.json mirrors them for display.
Dataset Creation
Curation Rationale
At the time of creation, no benchmark existed for testing how well models follow system prompts.
Source Data
Data Collection and Processing
The authors wrote the system prompts, probes, and testing functions, then ran the system prompts on GPT-4 to check that GPT-4 is (mostly) able to follow them. The testing functions are written in Python.
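To illustrate the general shape of these testing functions, here is a hypothetical example (not one of the actual functions in 100_system_prompts.py). It assumes a system prompt that asks the model to use only one-syllable words, and uses the one_syllable_words.txt list mentioned above.

```python
import string


def follows_one_syllable_prompt(response: str,
                                word_list_path: str = "one_syllable_words.txt") -> bool:
    """Hypothetical testing function: returns True if every word in the
    response appears in the one-syllable word list."""
    with open(word_list_path) as f:
        allowed = {line.strip().lower() for line in f if line.strip()}
    words = (w.strip(string.punctuation).lower() for w in response.split())
    return all(w in allowed for w in words if w)
```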
Who are the source data producers?
Naomi Bashkansky wrote most of the system prompts, and Kenneth Li wrote the rest.
Personal and Sensitive Information
The dataset contains no personal or sensitive information.
Bias, Risks, and Limitations
Limitation: as models become more capable, this benchmark may become outdated or too easy. The ideal benchmark is one that tests a model's alignment (its propensity to follow the system prompt) rather than its ability to do so.
Bias: this dataset is only in English, with the exception of three French prompts.
Citation
BibTeX:
Forthcoming.
APA:
Forthcoming.
Dataset Card Authors
Naomi Bashkansky, Kenneth Li