The Consistency Critic: Correcting Inconsistencies in Generated Images via Reference-Guided Attentive Alignment
Abstract
ImageCritic addresses detail inconsistency in image generation through reference-guided post-editing, using an attention alignment loss and a detail encoder.
Previous works have explored various customized generation tasks given a reference image, but they still struggle to reproduce consistent fine-grained details. In this paper, we aim to resolve the inconsistency problem in generated images through a reference-guided post-editing approach, and we present ImageCritic. We first construct a dataset of reference-degraded-target triplets obtained via VLM-based selection and explicit degradation, which effectively simulates the inaccuracies and inconsistencies commonly observed in existing generation models. Building on a thorough examination of the model's attention mechanisms and intrinsic representations, we then devise an attention alignment loss and a detail encoder to precisely rectify inconsistencies. ImageCritic can also be integrated into an agent framework that automatically detects inconsistencies and corrects them with multi-round, local editing in complex scenarios. Extensive experiments demonstrate that ImageCritic effectively resolves detail-related issues across various customized generation scenarios, providing significant improvements over existing methods.
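To make the idea concrete, below is a minimal PyTorch sketch of what a reference-guided attention alignment loss of this kind could look like. The abstract does not give the exact formulation, so the tensor shapes, the masked L2 objective, and the names `gen_attn`, `ref_attn`, and `region_mask` are illustrative assumptions, not ImageCritic's actual implementation.

```python
# Hypothetical sketch of an attention alignment loss, assuming attention
# probability maps from the editing model and a reference-conditioned pass.
import torch


def attention_alignment_loss(gen_attn: torch.Tensor,
                             ref_attn: torch.Tensor,
                             region_mask: torch.Tensor) -> torch.Tensor:
    """Penalize divergence between the edited image's attention maps and the
    reference-guided attention maps inside the inconsistent region.

    gen_attn, ref_attn: (batch, heads, queries, keys) attention probabilities.
    region_mask: (batch, 1, queries, 1) binary mask over query tokens that
                 cover the region flagged as inconsistent.
    """
    diff = (gen_attn - ref_attn) ** 2          # element-wise squared error
    masked = diff * region_mask                # restrict loss to the edit region
    return masked.sum() / region_mask.sum().clamp(min=1.0)


# Usage sketch: add the term to the standard denoising objective with a small
# weight so detail correction does not override the base editing behavior.
# loss = denoising_loss + 0.1 * attention_alignment_loss(gen_attn, ref_attn, mask)
```

In such a setup, the alignment term would act only on the masked region, steering attention toward the reference details while leaving the rest of the image untouched.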