From f(x) and g(x) to f(g(x)): LLMs Learn New Skills in RL by Composing Old Ones • arXiv:2509.25123 • Sep 2025
Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models • arXiv:2306.02320 • Jun 4, 2023
UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset • arXiv:2402.04588 • Feb 7, 2024
AIR: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset • arXiv:2504.03612 • Apr 4, 2025
Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment • arXiv:2402.19085 • Feb 29, 2024
RLPR: Extrapolating RLVR to General Domains without Verifiers • arXiv:2506.18254 • Jun 23, 2025
From AI for Science to Agentic Science: A Survey on Autonomous Scientific Discovery • arXiv:2508.14111 • Aug 18, 2025
Towards a Unified View of Large Language Model Post-Training • arXiv:2509.04419 • Sep 4, 2025
HiPhO: How Far Are (M)LLMs from Humans in the Latest High School Physics Olympiad Benchmark? • arXiv:2509.07894 • Sep 9, 2025
A Survey of Reinforcement Learning for Large Reasoning Models • arXiv:2509.08827 • Sep 10, 2025
Inference-Time Alignment Control for Diffusion Models with Reinforcement Learning Guidance • arXiv:2508.21016 • Aug 28, 2025