hallucinations (collection)
- Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models (Paper • 2401.01301)
- Do Language Models Know When They're Hallucinating References? (Paper • 2305.18248)
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations (Paper • 2504.10481 • 84 upvotes)
Madhav Bhagat
codebreach
AI & ML interests
None yet
Recent Activity
- updated the collection "hallucinations" about 16 hours ago
- updated the collection "hallucinations" about 17 hours ago
- liked the Space h2oai/h2ogpt-chatbot2 almost 2 years ago
Organizations
None yet
Collections: 3
Spaces: 1
Models: 0 (none public yet)
Datasets: 0 (none public yet)