Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences Paper • 2404.03715 • Published Apr 4, 2024 • 62
Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF Paper • 2405.21046 • Published May 31, 2024 • 4
Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts Paper • 2406.12845 • Published Jun 18, 2024 • 1
Trajectory Bellman Residual Minimization: A Simple Value-Based Method for LLM Reasoning Paper • 2505.15311 • Published May 21, 2025
Breaking the Capability Ceiling of LLM Post-Training by Reintroducing Markov States Paper • 2603.19987 • Published 13 days ago • 9
Understanding Behavior Cloning with Action Quantization Paper • 2603.20538 • Published 13 days ago • 2
Towards Principled Representation Learning from Videos for Reinforcement Learning Paper • 2403.13765 • Published Mar 20, 2024