Reasoning-Aware GRPO using Process Mining

BAELAB, Pusan National University, Busan, Korea

Taekyhun Park*, Yongjae Lee*, Hyerim Bae

🌟 Github | 📥 1.5B Download | 📥 7B Download | 📄 Arxiv Paper Link

Abstract

Reinforcement learning (RL)-based post-training has been crucial for enabling multi-step reasoning in large reasoning models (LRMs), yet current reward schemes are typically outcome-centric. We propose PM4GRPO, a reasoning-aware Group Relative Policy Optimization (GRPO) that augments the standard answer/format rewards with signals over the reasoning procedure. To this end, process mining techniques are used to compute a scalar conformance reward that measures how closely a policy model's reasoning aligns with that of the pretrained teacher model. Empirical results on five benchmarks demonstrate that PM4GRPO significantly outperforms existing methodologies for GRPO-based post-training. These results highlight that leveraging process mining for reasoning-aware GRPO effectively enhances the reasoning capabilities of policy models.

Illustration of PM4GRPO
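
The sketch below illustrates the reward idea described in the abstract: a group of sampled completions each receives a combined answer/format/conformance reward, which is then normalized group-relatively as in GRPO. It is a minimal, assumption-laden illustration, not the authors' implementation: the function names (`conformance_reward`, `pm4grpo_rewards`), the weights `w_ans`/`w_fmt`/`w_conf`, and the use of a simple sequence-similarity proxy in place of process-mining conformance checking (e.g., alignment-based fitness against the teacher's reasoning trace) are all hypothetical choices made for the example.

```python
# Minimal sketch of a PM4GRPO-style reward, assuming reasoning traces are lists of step labels.
# The conformance proxy and all weights are assumptions, not the paper's exact method.
from difflib import SequenceMatcher
from statistics import mean, pstdev
from typing import List


def conformance_reward(policy_steps: List[str], teacher_steps: List[str]) -> float:
    """Scalar in [0, 1]: how closely the policy's reasoning trace follows the teacher's.
    A sequence-similarity ratio stands in for process-mining conformance checking."""
    return SequenceMatcher(None, policy_steps, teacher_steps).ratio()


def pm4grpo_rewards(answers_ok: List[bool], formats_ok: List[bool],
                    policy_traces: List[List[str]], teacher_trace: List[str],
                    w_ans: float = 1.0, w_fmt: float = 0.5, w_conf: float = 0.5) -> List[float]:
    """Combine answer/format rewards with the reasoning-conformance signal
    for one group of sampled completions (weights are illustrative)."""
    return [
        w_ans * float(a) + w_fmt * float(f) + w_conf * conformance_reward(t, teacher_trace)
        for a, f, t in zip(answers_ok, formats_ok, policy_traces)
    ]


def group_relative_advantages(rewards: List[float]) -> List[float]:
    """Standard GRPO normalization: each completion's reward relative to its group."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + 1e-8) for r in rewards]


# Example: a group of three sampled completions for one prompt.
teacher = ["restate", "decompose", "solve_subgoal", "verify", "answer"]
traces = [
    ["restate", "decompose", "solve_subgoal", "verify", "answer"],  # follows the teacher
    ["decompose", "answer"],                                        # skips steps
    ["restate", "guess", "answer"],                                 # deviates mid-trace
]
rewards = pm4grpo_rewards([True, True, False], [True, False, True], traces, teacher)
print(group_relative_advantages(rewards))
```

In this sketch, a completion that reaches the right answer but skips the teacher's reasoning steps is ranked below one that both answers correctly and conforms to the teacher's trace, which is the effect the conformance reward is meant to add on top of outcome-only rewards.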
