Paper Title

Multi-armed Bandits with Cost Subsidy

Paper Authors

Deeksha Sinha, Karthik Abinav Sankararaman, Abbas Kazerouni, Vashist Avadhanula

Paper Abstract

In this paper, we consider a novel variant of the multi-armed bandit (MAB) problem, MAB with cost subsidy, which models many real-life applications where the learning agent has to pay to select an arm and is concerned with optimizing both cumulative costs and rewards. We present two applications, the intelligent SMS routing problem and the ad audience optimization problem, faced by several businesses (especially online platforms), and show how our problem uniquely captures key features of these applications. We show that naive generalizations of existing MAB algorithms, such as Upper Confidence Bound and Thompson Sampling, do not perform well for this problem. We then establish a fundamental lower bound on the performance of any online learning algorithm for this problem, highlighting the hardness of our problem in comparison to the classical MAB problem. We also present a simple variant of the explore-then-commit algorithm and establish near-optimal regret bounds for it. Lastly, we perform extensive numerical simulations to understand the behavior of a suite of algorithms on various instances and offer practical guidance on when to employ each algorithm.
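To make the explore-then-commit idea mentioned in the abstract concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not the paper's exact algorithm: the function name etc_cost_subsidy, the fixed per-arm exploration budget n_explore, the assumption that arm costs are known, and the Bernoulli toy instance are all illustrative choices (the paper tunes the exploration length to obtain its regret bound). The sketch explores each arm a fixed number of times, then commits to the cheapest arm whose empirical reward is within a (1 - alpha) factor of the best empirical reward, where alpha is the subsidy factor.

```python
import numpy as np

rng = np.random.default_rng(0)

def etc_cost_subsidy(pull, costs, horizon, alpha, n_explore):
    """Explore-then-commit sketch for MAB with a cost subsidy.

    pull(i)   -> stochastic reward of arm i in [0, 1]
    costs[i]  -> cost of pulling arm i (assumed known, for illustration)
    alpha     -> subsidy factor: any arm with mean reward at least
                 (1 - alpha) * max_j mu_j counts as acceptable
    n_explore -> number of pulls per arm in the exploration phase
    """
    n_arms = len(costs)
    sum_r = np.zeros(n_arms)
    n_pulls = np.zeros(n_arms)
    t = 0

    # Exploration phase: round-robin until every arm has n_explore samples.
    while t < horizon and n_pulls.min() < n_explore:
        i = int(np.argmin(n_pulls))
        sum_r[i] += pull(i)
        n_pulls[i] += 1
        t += 1

    # Commit phase: cheapest arm whose empirical reward clears the
    # subsidized threshold (1 - alpha) * (best empirical reward).
    mu_hat = sum_r / np.maximum(n_pulls, 1)
    feasible = np.where(mu_hat >= (1 - alpha) * mu_hat.max())[0]
    best = int(feasible[np.argmin(costs[feasible])])
    for _ in range(horizon - t):
        pull(best)
    return best

# Toy instance: 3 Bernoulli arms. With alpha = 0.1, arm 1 is the
# cheapest arm whose reward is within 10% of the best arm's reward.
mu = np.array([0.9, 0.85, 0.5])
costs = np.array([1.0, 0.2, 0.1])
chosen = etc_cost_subsidy(lambda i: rng.binomial(1, mu[i]),
                          costs, horizon=10_000, alpha=0.1, n_explore=300)
print("committed to arm", chosen)  # arm 1 with high probability
```

Note the contrast with classical MAB: a standard UCB or Thompson Sampling learner would converge to arm 0 (highest reward), whereas under the cost-subsidy objective the cheaper, nearly-as-good arm 1 is the right target.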
