Investigating Human-Aligned Large Language Model Uncertainty
Abstract
Recent work has sought to quantify large language model uncertainty to facilitate model control and modulate user trust. Previous work has focused on measures of uncertainty that are theoretically grounded or that reflect the average overt behavior of the model. In this work, we investigate a variety of uncertainty measures to identify those that correlate with human group-level uncertainty. We find that Bayesian measures and a variant of entropy measures, top-k entropy, tend to agree more closely with human behavior as model size increases. Some otherwise strong measures become less human-like with model size, but, via multiple linear regression, we find that combining several uncertainty measures provides comparable human alignment with reduced dependence on model size.
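To make the top-k entropy measure concrete, here is a minimal sketch of how such a quantity could be computed from a model's next-token distribution. The function name, the choice of k, and the use of Hugging Face transformers with a placeholder model are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch: top-k entropy over a model's next-token distribution.
# Assumes a causal LM from Hugging Face transformers; the measure used in
# the paper may differ (choice of k, normalization, token position).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def top_k_entropy(model, tokenizer, prompt: str, k: int = 10) -> float:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]      # next-token logits
    probs = torch.softmax(logits, dim=-1)
    top_p, _ = torch.topk(probs, k)                 # keep only the k largest probabilities
    top_p = top_p / top_p.sum()                     # renormalize over the top-k
    return float(-(top_p * top_p.log()).sum())      # Shannon entropy in nats

model_name = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
print(top_k_entropy(model, tokenizer, "Should the voting age be lowered? Answer yes or no:"))
```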
Community
We asked a simple question: which measures of language model uncertainty actually correlate with human uncertainty evident in sources such as Pew survey data? It turns out that some measures become more similar to human uncertainty as model size increases, while others are more generically applicable.
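As a rough illustration of the combination result mentioned in the abstract, the sketch below fits a multiple linear regression from several per-question uncertainty scores to a human group-level uncertainty target (e.g., the entropy of a survey answer distribution). The feature names, placeholder data, and use of scikit-learn are assumptions for illustration; the paper's exact regression setup is not reproduced here.

```python
# Hypothetical sketch: combining several LLM uncertainty measures with
# multiple linear regression to predict human group-level uncertainty.
# The arrays are placeholders; real values would come from the model's
# uncertainty measures and from survey data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_questions = 200

# One row per question: [top-k entropy, Bayesian measure, verbalized confidence]
X = rng.random((n_questions, 3))
# Human group-level uncertainty per question (synthetic placeholder target)
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=n_questions)

reg = LinearRegression().fit(X, y)
print("coefficients:", reg.coef_)
print("in-sample R^2:", reg.score(X, y))
```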
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- A Survey of Uncertainty Estimation Methods on Large Language Models (2025)
- Scalable Best-of-N Selection for Large Language Models via Self-Certainty (2025)
- An Empirical Analysis of Uncertainty in Large Language Model Evaluations (2025)
- When an LLM is apprehensive about its answers -- and when its uncertainty is justified (2025)
- Semantic Volume: Quantifying and Detecting both External and Internal Uncertainty in LLMs (2025)
- CoT-UQ: Improving Response-wise Uncertainty Quantification in LLMs with Chain-of-Thought (2025)
- Ensemble based approach to quantifying uncertainty of LLM based classifications (2025)