Confidence Calibration and Rationalization for LLMs via Multi-Agent Deliberation Paper • 2404.09127 • Published Apr 14 • 1
Training Socially Aligned Language Models in Simulated Human Society Paper • 2305.16960 • Published May 26, 2023 • 2
VLGuard Collection Data and Model weights for VLGuard: https://ys-zong.github.io/VLGuard/ • 13 items • Updated Jun 18 • 1