Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation • Paper 2510.24821 • Published 3 days ago
Every Attention Matters: An Efficient Hybrid Architecture for Long-Context Reasoning • Paper 2510.19338 • Published 10 days ago
Art of Focus: Page-Aware Sparse Attention and Ling 2.0's Quest for Efficient Context Length Scaling • Article by RichardBian and 19 others • Published 11 days ago
Ring-flash-linear-2.0: A Highly Efficient Hybrid Architecture for Test-Time Scaling • Article by RichardBian and 8 others • Published 22 days ago
Ring • Collection • Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI, derived from Ling. • 5 items • Updated 18 days ago