Rethinking LLM Evaluation: Can We Evaluate LLMs with 200x Less Data? Paper • 2510.10457 • Published Oct 12 • 2
Efficient Multi-modal Large Language Models via Progressive Consistency Distillation Paper • 2510.00515 • Published Oct 1 • 39
Data Whisperer: Efficient Data Selection for Task-Specific LLM Fine-Tuning via Few-Shot In-Context Learning Paper • 2505.12212 • Published May 18
Compute Only 16 Tokens in One Timestep: Accelerating Diffusion Transformers with Cluster-Driven Feature Caching Paper • 2509.10312 • Published Sep 12
Socratic-Zero: Bootstrapping Reasoning via Data-Free Agent Co-evolution Paper • 2509.24726 • Published Sep 29 • 19
Winning the Pruning Gamble: A Unified Approach to Joint Sample and Token Pruning for Efficient Supervised Fine-Tuning Paper • 2509.23873 • Published Sep 28 • 67
Reasoning Like an Economist: Post-Training on Economic Problems Induces Strategic Generalization in LLMs Paper • 2506.00577 • Published May 31 • 11
Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More Paper • 2502.11494 • Published Feb 17
Shifting AI Efficiency From Model-Centric to Data-Centric Compression Paper • 2505.19147 • Published May 25 • 144