Parrot: Persuasion and Agreement Robustness Rating of Output Truth -- A Sycophancy Robustness Benchmark for LLMs
Abstract
PARROT evaluates the robustness of large language models against social pressure and sycophancy, revealing significant variability in model behavior and confidence shifts across different domains and authority templates.
This study presents PARROT (Persuasion and Agreement Robustness Rating of Output Truth), a robustness focused framework designed to measure the degradation in accuracy that occurs under social pressure exerted on users through authority and persuasion in large language models (LLMs) the phenomenon of sycophancy (excessive conformity). PARROT (i) isolates causal effects by comparing the neutral version of the same question with an authoritatively false version using a double-blind evaluation, (ii) quantifies confidence shifts toward the correct and imposed false responses using log-likelihood-based calibration tracking, and (iii) systematically classifies failure modes (e.g., robust correct, sycophantic agreement, reinforced error, stubborn error, self-correction, etc.) using an eight-state behavioral taxonomy. We evaluated 22 models using 1,302 MMLU-style multiple-choice questions across 13 domains and domain-specific authority templates. Findings show marked heterogeneity: advanced models (e.g., GPT-5, GPT-4.1, Claude Sonnet 4.5) exhibit low "follow rates" (leq 11%, GPT-5: 4\%) and minimal accuracy loss, while older/smaller models show severe epistemic collapse (GPT-4: 80\%, Qwen 2.5-1.5B: 94\%). The danger is not limited to response changes; weak models reduce confidence in the correct response while increasing confidence in the imposed incorrect response. While international law and global knowledge at the domain level exhibit high fragility, elementary mathematics is relatively resilient. Consequently, we argue that the goal of "resistance to overfitting pressure" should be addressed as a primary objective alongside accuracy, harm avoidance, and privacy for safe deployment in the real world.
Community
This study presents PARROT (Persuasion and Agreement Robustness Rating of
Output Truth), a robustness focused framework designed to measure the
degradation in accuracy that occurs under social pressure exerted on users
through authority and persuasion in large language models (LLMs) the phenomenon
of sycophancy (excessive conformity). PARROT (i) isolates causal effects by
comparing the neutral version of the same question with an authoritatively
false version using a double-blind evaluation, (ii) quantifies confidence
shifts toward the correct and imposed false responses using
log-likelihood-based calibration tracking, and (iii) systematically classifies
failure modes (e.g., robust correct, sycophantic agreement, reinforced error,
stubborn error, self-correction, etc.) using an eight-state behavioral
taxonomy. We evaluated 22 models using 1,302 MMLU-style multiple-choice
questions across 13 domains and domain-specific authority templates
We will release both soon.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Beacon: Single-Turn Diagnosis and Mitigation of Latent Sycophancy in Large Language Models (2025)
- Critical or Compliant? The Double-Edged Sword of Reasoning in Chain-of-Thought Explanations (2025)
- Improving Metacognition and Uncertainty Communication in Language Models (2025)
- Cross-Lingual Stability and Bias in Instruction-Tuned Language Models for Humanitarian NLP (2025)
- The Silent Judge: Unacknowledged Shortcut Bias in LLM-as-a-Judge (2025)
- Layer of Truth: Probing Belief Shifts under Continual Pre-Training Poisoning (2025)
- AI Debaters are More Persuasive when Arguing in Alignment with Their Own Beliefs (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face checkout this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot
recommend
Models citing this paper 0
No model linking this paper
Datasets citing this paper 0
No dataset linking this paper
Spaces citing this paper 0
No Space linking this paper
Collections including this paper 0
No Collection including this paper