Representational Stability of Truth in Large Language Models
Abstract
LLMs exhibit varying levels of stability in encoding truth representations, influenced more by epistemic familiarity than linguistic form, as assessed through perturbation analysis of their activations.
Large language models (LLMs) are widely used for factual tasks such as "What treats asthma?" or "What is the capital of Latvia?". However, it remains unclear how stably LLMs encode distinctions between true, false, and neither-true-nor-false content in their internal probabilistic representations. We introduce representational stability as the robustness of an LLM's veracity representations to perturbations in the operational definition of truth. We assess representational stability by (i) training a linear probe on an LLM's activations to separate true from not-true statements and (ii) measuring how its learned decision boundary shifts under controlled label changes. Using activations from sixteen open-source models and three factual domains, we compare two types of neither statements. The first are fact-like assertions about entities we believe to be absent from any training data. We call these unfamiliar neither statements. The second are nonfactual claims drawn from well-known fictional contexts. We call these familiar neither statements. The unfamiliar statements induce the largest boundary shifts, producing up to 40% flipped truth judgements in fragile domains (such as word definitions), while familiar fictional statements remain more coherently clustered and yield smaller changes (≤ 8.2%). These results suggest that representational stability stems more from epistemic familiarity than from linguistic form. More broadly, our approach provides a diagnostic for auditing and training LLMs to preserve coherent truth assignments under semantic uncertainty, rather than optimizing for output accuracy alone.
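To make the probing setup concrete, here is a minimal sketch of the two-step procedure the abstract describes: fit a linear probe on activations under one labeling of the "neither" statements, refit under the alternative labeling, and count how many truth judgements flip. All data, variable names, and the choice of logistic regression are illustrative assumptions, not the paper's released code.

```python
# Hypothetical sketch: train a linear truth probe on cached LLM activations,
# then measure how its decision boundary shifts when "neither" statements are
# relabeled from not-true to true (a controlled change in the definition of truth).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in activations: rows are statements, columns are hidden dimensions.
d = 256
acts_true = rng.normal(0.5, 1.0, size=(200, d))       # true statements
acts_false = rng.normal(-0.5, 1.0, size=(200, d))      # false statements
acts_neither = rng.normal(0.0, 1.0, size=(100, d))     # neither-true-nor-false statements

X = np.vstack([acts_true, acts_false, acts_neither])

def fit_probe(neither_label: int) -> LogisticRegression:
    """Fit a linear probe with the 'neither' statements assigned the given label."""
    y = np.concatenate([
        np.ones(len(acts_true)),                       # true  -> 1
        np.zeros(len(acts_false)),                     # false -> 0
        np.full(len(acts_neither), neither_label),     # neither -> 0 or 1
    ])
    return LogisticRegression(max_iter=1000).fit(X, y)

# Two operational definitions of truth: "neither" counts as not-true vs. true.
probe_a = fit_probe(neither_label=0)
probe_b = fit_probe(neither_label=1)

# Stability proxy: fraction of statements whose predicted truth value flips
# when the labeling of the neither set changes.
flips = probe_a.predict(X) != probe_b.predict(X)
print(f"Flipped truth judgements: {flips.mean():.1%}")
```

In this reading, a small flip rate indicates that the model's veracity representation is robust to how the ambiguous statements are operationally labeled, which is how the abstract's up-to-40% versus ≤ 8.2% comparison should be interpreted.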
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LLM Knowledge is Brittle: Truthfulness Representations Rely on Superficial Resemblance (2025)
- Layer of Truth: Probing Belief Shifts under Continual Pre-Training Poisoning (2025)
- How Language Models Conflate Logical Validity with Plausibility: A Representational Analysis of Content Effects (2025)
- Large Language Models Do NOT Really Know What They Don't Know (2025)
- Emergence of Linear Truth Encodings in Language Models (2025)
- PragWorld: A Benchmark Evaluating LLMs' Local World Model under Minimal Linguistic Alterations and Conversational Dynamics (2025)
- The Illusion of Certainty: Uncertainty quantification for LLMs fails under ambiguity (2025)