Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models · Paper 2307.14539 · Published Jul 26, 2023
Cross-Modal Safety Alignment: Is Textual Unlearning All You Need? · Paper 2406.02575 · Published May 27