RAISECity: A Multimodal Agent Framework for Reality-Aligned 3D World Generation at City-Scale
Abstract
RAISECity generates high-quality, city-scale 3D worlds with real-world alignment, using an agentic framework with multimodal tools, iterative refinement, and advanced representations.
City-scale 3D generation is of great importance for the development of embodied intelligence and world models. Existing methods, however, face significant challenges regarding quality, fidelity, and scalability in 3D world generation. We therefore propose RAISECity, a Reality-Aligned Intelligent Synthesis Engine that creates detailed, City-scale 3D worlds. We introduce an agentic framework that leverages diverse multimodal foundation tools to acquire real-world knowledge, maintain robust intermediate representations, and construct complex 3D scenes. This agentic design, featuring dynamic data processing, iterative self-reflection and refinement, and the invocation of advanced multimodal tools, minimizes cumulative errors and enhances overall performance. Extensive quantitative experiments and qualitative analyses validate the superior performance of RAISECity in real-world alignment, shape precision, texture fidelity, and aesthetic quality, achieving over a 90% win rate against existing baselines for overall perceptual quality. This combination of 3D quality, reality alignment, scalability, and seamless compatibility with computer-graphics pipelines makes RAISECity a promising foundation for applications in immersive media, embodied intelligence, and world models.
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MajutsuCity: Language-driven Aesthetic-adaptive City Generation with Controllable 3D Assets and Layouts (2025)
- Yo'City: Personalized and Boundless 3D Realistic City Scene Generation via Self-Critic Expansion (2025)
- Text2Traffic: A Text-to-Image Generation and Editing Method for Traffic Scenes (2025)
- ShapeCraft: LLM Agents for Structured, Textured and Interactive 3D Modeling (2025)
- Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery (2025)
- MetaFind: Scene-Aware 3D Asset Retrieval for Coherent Metaverse Scene Generation (2025)
- HiGS: Hierarchical Generative Scene Framework for Multi-Step Associative Semantic Spatial Composition (2025)