InferAligner: Inference-Time Alignment for Harmlessness through Cross-Model Guidance • arXiv:2401.11206 • Published Jan 20, 2024
The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning • arXiv:2312.01552 • Published Dec 4, 2023
Chain-of-Verification Reduces Hallucination in Large Language Models • arXiv:2309.11495 • Published Sep 20, 2023