Request-Level Training Paradigms for Efficient Large-Scale Recommendation Systems
Keywords:
Recommendation Systems, Request-Level Training, Ranking Systems, Machine Learning Infrastructure, Personalization

Abstract
Modern recommendation systems operate at a massive scale and are expected to provide increasingly personalized experiences under strict latency, storage, and computational constraints. Traditional training pipelines typically structure data at the level of individual impressions, which has historically been effective but introduces substantial redundancy in both data representation and computation. This redundancy becomes a significant bottleneck as platforms grow in complexity and volume. Request-level training paradigms offer an alternative by redefining the basic unit of learning from the individual impression to the full request or grouped interaction context. This article examines the conceptual foundations, architectural implications, and systems benefits of request-level training in large-scale recommendation systems. It argues that aligning training data structures more closely with real interaction patterns enables better computational efficiency, richer contextual modeling, and more scalable personalization. The discussion also explores the practical infrastructure changes required for adopting this paradigm and considers its broader implications for the future of machine learning systems used in retrieval and ranking.
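The shift the abstract describes, from one training row per impression to one example per request, can be illustrated with a minimal sketch. The field names and log layout below are hypothetical, chosen only to show how shared request context is stored once per request rather than duplicated across every impression:

```python
from collections import defaultdict

# Hypothetical impression-level log: the shared request context
# (user_id, context features) is repeated once per candidate item shown.
impressions = [
    {"request_id": "r1", "user_id": "u42", "context": {"hour": 9},  "item_id": "a", "clicked": 1},
    {"request_id": "r1", "user_id": "u42", "context": {"hour": 9},  "item_id": "b", "clicked": 0},
    {"request_id": "r1", "user_id": "u42", "context": {"hour": 9},  "item_id": "c", "clicked": 0},
    {"request_id": "r2", "user_id": "u7",  "context": {"hour": 21}, "item_id": "b", "clicked": 1},
]

def group_by_request(rows):
    """Collapse impression rows into one training example per request:
    shared fields are stored once, per-item fields become parallel lists."""
    grouped = defaultdict(lambda: {"items": [], "labels": []})
    for row in rows:
        example = grouped[row["request_id"]]
        example["user_id"] = row["user_id"]      # shared context, stored once
        example["context"] = row["context"]
        example["items"].append(row["item_id"])  # per-item fields stay per-item
        example["labels"].append(row["clicked"])
    return dict(grouped)

requests = group_by_request(impressions)
```

In this toy form, four impression rows collapse into two request-level examples; at production scale the same grouping lets shared user and context features be fetched and encoded once per request during training, which is the computational saving the abstract refers to.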