Optimizing collaborative filtering recommender systems with the GRPO reinforcement learning algorithm

https://doi.org/10.55214/2576-8484.v9i8.9471

Authors

Jeon, H., Lee, C.-H., Choi, J.-G., & Kwon, H.-J.

Abstract

Collaborative filtering recommender systems primarily focus on short-term prediction accuracy but exhibit limitations with respect to long-term user satisfaction and content diversity. In this paper, we recast learning from user-item interaction data as reinforcement learning with verifiable rewards and apply the Group Relative Policy Optimization (GRPO) algorithm, originally proposed in the large language model domain, to collaborative filtering model fine-tuning for the first time. GRPO updates the policy directly, without a separate critic network, balancing exploration and exploitation while optimizing long-term user engagement. In experiments on Amazon review datasets covering the Baby Products, Video Games, and Industrial & Scientific categories, the GRPO-optimized model achieved up to a 15.16% improvement in Recall@10 over baseline models. We further found that user embeddings from graph-based collaborative filtering architectures contribute positively to GRPO optimization, whereas positional embeddings from sequential collaborative filtering architectures impede it. These findings empirically validate GRPO as a robust approach for optimizing recommender system models.
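For readers unfamiliar with GRPO, the sketch below illustrates the critic-free, group-relative update the abstract describes; it is not the authors' implementation. For each user, a group of G recommendation lists is sampled from the current policy and each list is scored with a verifiable reward (here assumed to be Recall@10 against held-out interactions); advantages are then computed relative to the group's own mean and standard deviation, which removes the need for a learned value network. Function names and the reward choice are illustrative assumptions.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Group-relative advantages: each sampled output's reward is
    normalized by its group's mean and std, so no critic is needed."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_loss(log_probs: torch.Tensor,
              old_log_probs: torch.Tensor,
              rewards: torch.Tensor,
              clip_eps: float = 0.2) -> torch.Tensor:
    """PPO-style clipped surrogate over one group of G sampled outputs.

    log_probs / old_log_probs: log-likelihoods of the G sampled
    recommendation lists under the current and sampling policies.
    rewards: verifiable reward per list, e.g. Recall@10 on held-out
    interactions (an assumption for this sketch).
    """
    adv = grpo_advantages(rewards)                       # shape (G,)
    ratio = torch.exp(log_probs - old_log_probs)         # importance ratio
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()
```

In practice, each list's log-probability would sum the model's log-probabilities of the recommended items, and the original GRPO formulation also adds a KL penalty toward the pre-fine-tuned model, omitted here for brevity.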

How to Cite

Jeon, H., Lee, C.-H., Choi, J.-G., & Kwon, H.-J. (2025). Optimizing collaborative filtering recommender systems with the GRPO reinforcement learning algorithm. Edelweiss Applied Science and Technology, 9(8), 871–881. https://doi.org/10.55214/2576-8484.v9i8.9471

Issue

Vol. 9 No. 8 (2025)

Section

Articles

Published

2025-08-15