Abstract
Group Sequence Policy Optimization (GSPO) is a reinforcement learning algorithm that improves the training efficiency and performance of large language models by defining importance ratios at the sequence level and performing sequence-level clipping, rewarding, and optimization.
This paper introduces Group Sequence Policy Optimization (GSPO), our stable, efficient, and performant reinforcement learning algorithm for training large language models. Unlike previous algorithms that adopt token-level importance ratios, GSPO defines the importance ratio based on sequence likelihood and performs sequence-level clipping, rewarding, and optimization. We demonstrate that GSPO achieves superior training efficiency and performance compared to the GRPO algorithm, notably stabilizes Mixture-of-Experts (MoE) RL training, and has the potential for simplifying the design of RL infrastructure. These merits of GSPO have contributed to the remarkable improvements in the latest Qwen3 models.
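To make the abstract's sequence-level formulation concrete, here is a minimal sketch of a GSPO-style clipped objective in PyTorch. It assumes per-token log-probabilities under the current and old policies have already been gathered for a group of G sampled responses; the tensor names, padding convention, and clipping range `eps` are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of a GSPO-style sequence-level clipped objective.
# Assumes per-token log-probs under the new and old policies are padded with 0
# beyond each response's length, so padded positions cancel in the difference.
import torch


def gspo_loss(logp_new, logp_old, advantages, lengths, eps=0.2):
    """Sequence-level clipped policy-gradient loss.

    logp_new, logp_old: (G, T) per-token log-probabilities.
    advantages:         (G,) group-relative advantage per response.
    lengths:            (G,) number of valid tokens in each response.
    """
    # Sequence-level importance ratio, length-normalized in log space:
    # s_i = (pi_new(y_i | x) / pi_old(y_i | x)) ** (1 / |y_i|)
    log_ratio = (logp_new - logp_old).sum(dim=-1) / lengths
    s = torch.exp(log_ratio)

    # PPO-style clipping applied to the whole sequence, not to single tokens.
    unclipped = s * advantages
    clipped = torch.clamp(s, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

The key difference from token-level methods such as GRPO is that `s` is a single length-normalized number per response, so clipping and optimization act on whole sequences rather than individual tokens.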
Community
This paper introduces Group Sequence Policy Optimization (GSPO), a stable, efficient, and performant RL algorithm for training the latest Qwen3 models (Instruct, Coder, and Thinking).
Awesome!
beautiful 🔥
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- StaQ it! Growing neural networks for Policy Mirror Descent (2025)
- Bingo: Boosting Efficient Reasoning of LLMs via Dynamic and Significance-based Reinforcement Learning (2025)
- On-Policy RL with Optimal Reward Baseline (2025)
- RePO: Replay-Enhanced Policy Optimization (2025)
- DeepVideo-R1: Video Reinforcement Fine-Tuning via Difficulty-aware Regressive GRPO (2025)
- Truncated Proximal Policy Optimization (2025)
- KDRL: Post-Training Reasoning LLMs via Unified Knowledge Distillation and Reinforcement Learning (2025)
To address the high-variance issue in token-level importance sampling and the information loss in GSPO's sequence-level approach, I propose a Subsequence-level Clipped Importance Sampling method. For a sequence split into K subsequences, compute an importance weight per subsequence and apply a trust-region constraint (clipping plus a KL penalty) to each weight; one possible formulation is sketched below. This reduces variance by limiting the number of product terms, retains local information via subsequence granularity, and ensures stability through clipping and KL constraints, outperforming GSPO in flexibility and efficiency.
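One possible instantiation of the proposed per-subsequence weight, written by analogy with GSPO's length-normalized sequence-level ratio; the equal-chunk splitting, tensor names, and clipping range `eps` are illustrative assumptions rather than the commenter's exact formulation.

```python
# Hypothetical sketch of subsequence-level clipped importance weights.
# The split into K contiguous chunks and all names here are assumptions.
import torch


def subsequence_weights(logp_new, logp_old, num_chunks, eps=0.2):
    """Clipped, length-normalized importance weights per subsequence.

    logp_new, logp_old: (G, T) per-token log-probabilities for G responses.
    num_chunks:         K, the number of subsequences each response is split into.
    Returns a (G, K) tensor of clipped subsequence weights.
    """
    ratios = []
    for chunk_new, chunk_old in zip(
        logp_new.chunk(num_chunks, dim=-1), logp_old.chunk(num_chunks, dim=-1)
    ):
        # Length-normalized log-ratio over this subsequence:
        # w_k = (pi_new(chunk) / pi_old(chunk)) ** (1 / |chunk|)
        log_ratio = (chunk_new - chunk_old).sum(dim=-1) / chunk_new.shape[-1]
        ratios.append(torch.exp(log_ratio))
    w = torch.stack(ratios, dim=-1)  # (G, K)

    # Trust-region-style constraint via clipping, as in PPO/GSPO; the KL
    # penalty mentioned in the comment would be added separately to the loss.
    return torch.clamp(w, 1.0 - eps, 1.0 + eps)
```

With K = 1 this reduces to GSPO's single per-sequence ratio, and with K equal to the sequence length it recovers per-token ratios, so the subsequence granularity interpolates between the two.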
Awesome work!!
Thank you so much for the prompt response! It helped a lot!