Entropy Minimization: Qwen3-4B-Base trained on DAPO-14k
This is the Qwen3-4B-Base model trained with Entropy Minimization on the DAPO-14k training set. It was presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
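A minimal usage sketch with the Hugging Face transformers library is shown below. The repository id used here is a hypothetical placeholder (the card does not state it), and `device_map="auto"` assumes the accelerate package is installed; adjust both to your setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual checkpoint name.
model_id = "<org>/Qwen3-4B-Base-EM-DAPO14k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; places weights on available devices
)

prompt = "Solve step by step: what is 12 * 17?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```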
Co-rewarding is a novel self-supervised reinforcement learning (RL) framework designed to improve the training stability for eliciting reasoning in large language models (LLMs). It achieves this by seeking complementary supervision from multiple views. Specifically, Co-rewarding is instantiated in two ways:
- Co-rewarding-I: A data-side approach that derives reward signals from contrastive agreement across semantically analogous questions.
- Co-rewarding-II: A model-side approach that uses a slowly-updated reference teacher whose pseudo labels provide the supervision for self-distillation.
Both instantiations introduce reward discrepancies across views, which makes it harder for training to collapse onto trivial reasoning solutions. An illustrative sketch of the two reward signals is given below.
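The following is a minimal, illustrative sketch of the two ideas, not the official implementation: a majority-vote agreement reward across rollouts on a rephrased question (Co-rewarding-I) and an EMA update of a slowly-moving reference teacher that supplies pseudo labels (Co-rewarding-II). Function names, signatures, and the EMA coefficient are assumptions for illustration; see the official repository for the actual training code.

```python
from collections import Counter

import torch


def agreement_reward(answer: str, peer_answers: list[str]) -> float:
    """Co-rewarding-I sketch: reward 1.0 if `answer` matches the majority-voted
    answer extracted from rollouts on a semantically analogous (rephrased) question."""
    if not peer_answers:
        return 0.0
    majority, _ = Counter(peer_answers).most_common(1)[0]
    return 1.0 if answer == majority else 0.0


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, tau: float = 0.995) -> None:
    """Co-rewarding-II sketch: slowly move the reference teacher toward the
    current student, so the teacher's pseudo labels lag behind and stay stable."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(tau).add_(p_s, alpha=1.0 - tau)
```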
For more details on the Co-rewarding framework, training procedures, and other related models and datasets, please refer to the official GitHub repository.