SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors Paper • 2406.14598 • Published Jun 20, 2024
Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications Paper • 2402.05162 • Published Feb 7, 2024
BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection Paper • 2308.12439 • Published Aug 23, 2023
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! Paper • 2310.03693 • Published Oct 5, 2023
Introducing v0.5 of the AI Safety Benchmark from MLCommons Paper • 2404.12241 • Published Apr 18, 2024
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content Paper • 2403.13031 • Published Mar 19, 2024
Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs Paper • 2305.02440 • Published May 3, 2023
Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset Paper • 2207.00220 • Published Jul 1, 2022
When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset Paper • 2104.08671 • Published Apr 18, 2021