arxiv:2601.11004

NAACL: Noise-AwAre Verbal Confidence Calibration for LLMs in RAG Systems

Published on Jan 16 · Submitted by Jeff on Jan 20 · #3 Paper of the day

AI-generated summary

Large language models suffer from poor confidence calibration in retrieval-augmented generation due to noisy contexts, but a noise-aware calibration framework significantly improves calibration performance.

Abstract

Accurately assessing model confidence is essential for deploying large language models (LLMs) in mission-critical factual domains. While retrieval-augmented generation (RAG) is widely adopted to improve grounding, confidence calibration in RAG settings remains poorly understood. We conduct a systematic study across four benchmarks, revealing that LLMs exhibit poor calibration performance due to noisy retrieved contexts. Specifically, contradictory or irrelevant evidence tends to inflate the model's false certainty, leading to severe overconfidence. To address this, we propose NAACL Rules (Noise-AwAre Confidence CaLibration Rules) to provide a principled foundation for resolving overconfidence under noise. We further design NAACL, a noise-aware calibration framework that synthesizes supervision from about 2K HotpotQA examples guided by these rules. By performing supervised fine-tuning (SFT) with this data, NAACL equips models with intrinsic noise awareness without relying on stronger teacher models. Empirical results show that NAACL yields substantial gains, improving ECE scores by 10.9% in-domain and 8.0% out-of-domain. By bridging the gap between retrieval noise and verbal calibration, NAACL paves the way for both accurate and epistemically reliable LLMs.
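Since the reported gains are expressed as reductions in expected calibration error (ECE), here is a minimal sketch of how ECE can be computed from verbalized confidences paired with answer correctness. The equal-width binning, bin count, and data layout are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of expected calibration error (ECE) over verbalized confidences.
# Assumes each prediction carries a confidence in [0, 1] and a 0/1 correctness flag;
# the binning scheme and bin count are illustrative, not taken from the paper.
from typing import List, Tuple


def expected_calibration_error(
    predictions: List[Tuple[float, int]],  # (verbal confidence, correct?)
    n_bins: int = 10,
) -> float:
    bins = [[] for _ in range(n_bins)]
    for conf, correct in predictions:
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into the last bin
        bins[idx].append((conf, correct))

    total = len(predictions)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(a for _, a in bucket) / len(bucket)
        # Weighted gap between stated confidence and observed accuracy in this bin.
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece


if __name__ == "__main__":
    # Overconfident toy example: high stated confidence, mixed correctness.
    preds = [(0.95, 1), (0.9, 0), (0.9, 1), (0.85, 0), (0.8, 1), (0.8, 0)]
    print(f"ECE: {expected_calibration_error(preds):.3f}")
```

A well-calibrated model would show stated confidences close to the empirical accuracy in each bin, driving this quantity toward zero.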

Community

Paper submitter

This paper addresses the often-overlooked problem of confidence calibration for large language models (LLMs) in retrieval-augmented generation (RAG) settings, where noisy retrieved contexts can severely inflate model overconfidence. The authors systematically study calibration performance across multiple benchmarks and propose Noise-AwAre Confidence CaLibration Rules (NAACL Rules) along with a noise-aware supervised fine-tuning framework (NAACL) that leverages guided supervision to imbue models with intrinsic noise awareness. Empirical results demonstrate consistent reductions in expected calibration error both in-domain and out-of-domain, highlighting the method’s potential to improve epistemic reliability of LLM outputs in factual applications. This work is timely and relevant for enhancing trustworthiness of deployed RAG systems.
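To make the setting concrete, below is a hypothetical sketch of eliciting a verbalized confidence from an LLM in a RAG pipeline, so that (confidence, correctness) pairs can later be scored with a metric like the ECE sketch above. The prompt wording, the `generate` interface, and the parsing logic are assumptions for illustration, not the prompt or pipeline used in the paper.

```python
# Hypothetical sketch: ask a model to answer from retrieved passages and
# verbalize its confidence, so calibration under noisy contexts can be measured.
# The prompt, `generate` callable, and parsing are illustrative assumptions.
import re
from typing import Callable, List, Optional, Tuple

PROMPT_TEMPLATE = """Answer the question using the retrieved passages.
Some passages may be irrelevant or contradictory; do not let them inflate your certainty.

Passages:
{passages}

Question: {question}

Respond exactly in this format:
Answer: <short answer>
Confidence: <number between 0 and 100>"""


def elicit_answer_and_confidence(
    generate: Callable[[str], str],  # any text-in/text-out LLM interface
    question: str,
    passages: List[str],
) -> Tuple[str, Optional[float]]:
    prompt = PROMPT_TEMPLATE.format(
        passages="\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages)),
        question=question,
    )
    output = generate(prompt)

    # Parse the structured response; fall back gracefully if the format is violated.
    answer_match = re.search(r"Answer:\s*(.+)", output)
    conf_match = re.search(r"Confidence:\s*(\d+(?:\.\d+)?)", output)
    answer = answer_match.group(1).strip() if answer_match else output.strip()
    confidence = float(conf_match.group(1)) / 100.0 if conf_match else None
    return answer, confidence
```

Collecting such pairs over clean versus noise-injected retrieval sets is one way to surface the overconfidence under contradictory or irrelevant evidence that the paper studies.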

