Tip

GRPO was first introduced in DeepSeekMath (February 2024) but received much wider recognition after DeepSeek R1’s success.

In contrast to methods like PPO, GRPO foregoes the critic model, which is typically the same size as the policy model, and instead estimates the baseline from group scores. Specifically, for each question $q$, GRPO samples a group of outputs $\{o_1, o_2, \ldots, o_G\}$ from the old policy $\pi_{\theta_{old}}$ and then optimizes the policy model $\pi_\theta$ by maximizing the following objective:

$$
\mathcal{J}_{GRPO}(\theta) = \mathbb{E}_{\,q \sim P(Q),\ \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{old}}(O \mid q)}
\left[ \frac{1}{G} \sum_{i=1}^{G} \left( \min\!\left( \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{old}}(o_i \mid q)}\, A_i,\ \mathrm{clip}\!\left( \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{old}}(o_i \mid q)},\ 1-\varepsilon,\ 1+\varepsilon \right) A_i \right) - \beta\, \mathbb{D}_{KL}\!\left( \pi_\theta \,\|\, \pi_{ref} \right) \right) \right],
$$

where $\varepsilon$ and $\beta$ are hyper-parameters.

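Read as code, the objective for a single group is just a few tensor operations. The sketch below is a minimal PyTorch rendering under assumed inputs (log-probabilities of each sampled output under the current, old, and reference policies); the function name, argument names, and the default values of `eps` and `beta` are illustrative, not taken from DeepSeek's implementation. The KL estimate and the advantages it consumes are spelled out in the two sketches further down.

```python
import torch

def grpo_surrogate(logprobs, old_logprobs, advantages, kl, eps=0.2, beta=0.04):
    """Negative GRPO objective for a single group of G sampled outputs.

    logprobs, old_logprobs: log pi_theta(o_i|q) and log pi_theta_old(o_i|q), shape [G]
    advantages: group-relative advantages A_i, shape [G]
    kl: per-output estimate of D_KL[pi_theta || pi_ref], shape [G]
    eps, beta: clipping range and KL coefficient (illustrative values)
    """
    # Importance ratio pi_theta / pi_theta_old, computed in log space for stability.
    ratio = torch.exp(logprobs - old_logprobs)
    clipped_ratio = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)

    # PPO-style pessimistic term: keep the smaller of the clipped/unclipped products.
    surrogate = torch.minimum(ratio * advantages, clipped_ratio * advantages)

    # Average over the group and negate, since optimizers minimize.
    return -(surrogate - beta * kl).mean()
```
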
In the DeepSeek series, the KL divergence is approximated by the following unbiased estimator (Schulman, 2020):

$$
\mathbb{D}_{KL}\!\left( \pi_\theta \,\|\, \pi_{ref} \right) = \frac{\pi_{ref}(o_i \mid q)}{\pi_\theta(o_i \mid q)} - \log \frac{\pi_{ref}(o_i \mid q)}{\pi_\theta(o_i \mid q)} - 1,
$$

which is guaranteed to be non-negative.
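
In code the estimator is a two-liner over log-probabilities; the helper below is a hypothetical one, not taken from any DeepSeek release.

```python
import torch

def kl_estimate(logprobs, ref_logprobs):
    """Schulman's estimator of D_KL[pi_theta || pi_ref]: r - log(r) - 1, with r = pi_ref / pi_theta."""
    log_ratio = ref_logprobs - logprobs   # log(pi_ref / pi_theta)
    ratio = torch.exp(log_ratio)
    # x - log(x) - 1 >= 0 for every x > 0, so each estimate is non-negative.
    return ratio - log_ratio - 1.0
```

Averaged over outputs sampled from $\pi_\theta$, this quantity is an unbiased estimate of the true KL divergence, and unlike the naive negative-log-ratio estimator it never produces negative values.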

$A_i$ is the advantage, derived from the rewards $\{r_1, r_2, \ldots, r_G\}$ corresponding to the outputs within each group:

$$
A_i = \frac{r_i - \mathrm{mean}\left(\{r_1, r_2, \ldots, r_G\}\right)}{\mathrm{std}\left(\{r_1, r_2, \ldots, r_G\}\right)}.
$$
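
The normalization is a one-liner over the $G$ scalar rewards; in the sketch below, the function name and the small `std_eps` guard against a zero standard deviation (e.g. when every output in the group gets the same reward) are my own additions.

```python
import torch

def group_advantages(rewards, std_eps=1e-6):
    """A_i = (r_i - mean(r)) / std(r) for one group of rewards, shape [G]."""
    return (rewards - rewards.mean()) / (rewards.std() + std_eps)

# Example: binary correctness rewards for a group of 4 sampled answers.
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
print(group_advantages(rewards))   # correct answers get positive advantage, wrong ones negative
```

Because the baseline is just the group's own mean reward, outputs that beat their siblings receive positive advantages and the rest negative ones, with no learned value network involved.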