---
license: mit
---

# Model Card

## Model Information

This repository provides a checkpoint of [Vicuna-7B-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) after safe unlearning, trained with 100 raw harmful questions ([safe unlearning paper](https://arxiv.org/abs/2407.02855), [safe unlearning code](https://github.com/thu-coai/SafeUnlearning)). The resulting model is significantly safer against various jailbreak attacks than the original model while maintaining comparable general performance.

## Uses

The prompt format is the same as that of the original [Vicuna-7B-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5), so you can use this model in the same way. Also refer to our [GitHub repository](https://github.com/thu-coai/SafeUnlearning) for example code.
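As a minimal sketch of the Vicuna-v1.5 prompt format mentioned above, the helper below assembles the standard Vicuna system prompt and `USER:`/`ASSISTANT:` turn markers. The function name `build_vicuna_prompt` is illustrative, not part of this repository's code; consult the GitHub repository linked above for the authoritative inference scripts.

```python
def build_vicuna_prompt(user_message, history=None):
    """Assemble a Vicuna-v1.5-style prompt string.

    `history` is an optional list of (user_turn, assistant_turn) pairs
    from earlier in the conversation. This helper is an illustrative
    sketch, not code shipped with this repository.
    """
    system = (
        "A chat between a curious user and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed, and polite "
        "answers to the user's questions."
    )
    prompt = system
    # Completed turns end with the </s> end-of-sequence token.
    for user_turn, assistant_turn in (history or []):
        prompt += f" USER: {user_turn} ASSISTANT: {assistant_turn}</s>"
    # The open turn ends with "ASSISTANT:" so the model continues from there.
    prompt += f" USER: {user_message} ASSISTANT:"
    return prompt
```

The resulting string can then be tokenized and passed to the model with the standard `transformers` generation API (e.g. `AutoModelForCausalLM` / `AutoTokenizer` loaded from this repository's checkpoint), exactly as one would for the original Vicuna-7B-v1.5.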