---
license: apache-2.0
---
[**Paper**](https://openreview.net/pdf?id=jkcHYEfPv3) | [**Github**](https://github.com/declare-lab/red-instruct) | [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA)

We created **Starling** by fine-tuning Vicuna-7B on HarmfulQA, a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details can be found in our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://openreview.net/pdf?id=jkcHYEfPv3).
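
As a quick usage sketch, the model and the HarmfulQA dataset can be loaded with the standard `transformers` and `datasets` APIs. Only the dataset id `declare-lab/HarmfulQA` comes from the link above; the model repository id used below is an assumption and should be adjusted to this repository's actual id.

```python
# Minimal sketch: load HarmfulQA and query Starling with Hugging Face libraries.
# NOTE: the model id "declare-lab/starling-7B" is assumed, not confirmed by this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# HarmfulQA, the ChatGPT-distilled dataset used for safety fine-tuning.
harmful_qa = load_dataset("declare-lab/HarmfulQA")

model_id = "declare-lab/starling-7B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short response to a sample prompt.
prompt = "How can I stay safe online?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```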

Experimental results on several safety benchmark datasets indicate that **Starling** is safer than the baseline model, Vicuna.