---
license: apache-2.0
language:
- en
pretty_name: 100 system prompts for benchmarking large language models
size_categories:
- n<1K
---

# Dataset Card for 100 System Prompts for Benchmarking Large Language Models

This dataset is a collection of 100 system prompts for large language models.

## Dataset Details

### Dataset Description

These 100 system prompts test a model's ability to follow grammatical patterns; answer basic multiple choice questions; act according to a particular persona; memorize information; and speak in French.

Files:

- **hundred_system_prompts.py**: refer to this to see the (prompt, probe, function) triplets, as well as the helper functions.
- **hundred_system_prompts.json**: this is purely for display purposes.
- **run_benchmark.py**: this runs the 100 tests on a model, with no context other than the system prompt and the probe.
- **create_json_file.py**: a small script that was used to create the **hundred_system_prompts.json** file.

More info:

- **Curated by:** Naomi Bashkansky
- **Language(s) (NLP):** en
- **License:** apache-2.0

### Dataset Sources

- **Repository:** https://github.com/likenneth/persona
- **Paper:** Forthcoming.

## Uses

A benchmark for large language models: how good are LLMs at following a system prompt? The benchmark tests both basic capability (is the model *able* to follow the system prompt?) and basic alignment (does a model that *can* follow the system prompt actually do so?). It can be used to compare different models, or to guide interventions that make a model better at following system prompts.

### Direct Use

This dataset is released open source. Researchers are especially encouraged to use it.

## Dataset Structure

Each entry is a (prompt, probe, function) triplet:

- "prompt" is given as a system prompt to a large language model.
- "probe" is given as a user inquiry; its purpose is to elicit a response that allows us to check whether the LLM is following the system prompt.
- "function" checks whether the LLM's response to the probe follows the system prompt; it returns a number from 0 (not following) to 1 (following).

An illustrative sketch of how a single entry can be scored appears at the end of this card.

## Dataset Creation

### Curation Rationale

No benchmark of system prompts previously existed.

### Source Data

#### Data Collection and Processing

Process: writing system prompts, probes, and testing functions, then running the system prompts on GPT-4 to check that GPT-4 is (mostly) able to follow them. The testing functions are written in Python.

#### Who are the source data producers?

Naomi Bashkansky made most of the system prompts, and Kenneth Li made the rest.

#### Personal and Sensitive Information

No.

## Bias, Risks, and Limitations

Limitation: as models become more capable, this benchmark may become outdated or too easy. The ideal benchmark is one that tests a model's alignment - its propensity to follow the system prompt - rather than its ability to do so.

Bias: this dataset is only in English, with the exception of three French prompts.

## Citation

**BibTeX:**

Forthcoming.

**APA:**

Forthcoming.

## Dataset Card Authors

Naomi Bashkansky, Kenneth Li

## Dataset Card Contact

naomibashkansky@college.harvard.edu, ke_li@g.harvard.edu
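
## Appendix: Illustrative Scoring Example

A minimal sketch of how one (prompt, probe, function) triplet might be scored, in the style described under Dataset Structure. The field names, the example entry, and the helper functions below are assumptions for illustration only; refer to **hundred_system_prompts.py** and **run_benchmark.py** for the actual triplets and evaluation code.

```python
# Illustrative sketch only. The entry below is hypothetical and the field
# names ("prompt", "probe", "function") follow the structure described in
# this card, not necessarily the exact format of the repository files.

def count_sentences(text: str) -> int:
    """Rough sentence count used by the example checking function below."""
    return sum(text.count(p) for p in ".!?")

# A hypothetical entry in the style of the (prompt, probe, function) triplets.
example_entry = {
    "prompt": "Always answer in exactly one sentence.",
    "probe": "Tell me about the history of the Eiffel Tower.",
    # Returns 1.0 if the response follows the system prompt, 0.0 otherwise.
    "function": lambda response: 1.0 if count_sentences(response) == 1 else 0.0,
}

def evaluate(entry: dict, model_response: str) -> float:
    """Score a model's response against one entry's checking function."""
    return entry["function"](model_response)

if __name__ == "__main__":
    # Prints 1.0: the single-sentence response follows the system prompt.
    print(evaluate(example_entry, "The Eiffel Tower was completed in 1889."))
```

In the actual benchmark, the model is given only the system prompt and the probe, and the checking function maps its response to a score between 0 and 1.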