Introducing v0.5 of the AI Safety Benchmark from MLCommons (arXiv:2404.12241, published Apr 18, 2024)
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content (arXiv:2403.13031, published Mar 19, 2024)
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! (arXiv:2310.03693, published Oct 5, 2023)