Paper Title

Thompson Sampling Algorithms for Mean-Variance Bandits

Paper Authors

Qiuyu Zhu, Vincent Y. F. Tan

Paper Abstract

The multi-armed bandit (MAB) problem is a classical learning task that exemplifies the exploration-exploitation tradeoff. However, standard formulations do not take into account risk. In online decision making systems, risk is a primary concern. In this regard, the mean-variance risk measure is one of the most common objective functions. Existing algorithms for mean-variance optimization in the context of MAB problems have unrealistic assumptions on the reward distributions. We develop Thompson Sampling-style algorithms for mean-variance MAB and provide comprehensive regret analyses for Gaussian and Bernoulli bandits with fewer assumptions. Our algorithms achieve the best known regret bounds for mean-variance MABs and also attain the information-theoretic bounds in some parameter regimes. Empirical simulations show that our algorithms significantly outperform existing LCB-based algorithms for all risk tolerances.
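To make the abstract's idea concrete, below is a minimal Python sketch of a Thompson Sampling-style mean-variance bandit for Gaussian arms. It assumes the objective MV_i = rho * mu_i - sigma_i^2 with risk tolerance rho, and a Normal-Gamma-style posterior over each arm's mean and precision; the function name mean_variance_ts, the prior, and all parameter choices are illustrative assumptions, not the paper's exact MVTS algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_variance_ts(true_means, true_vars, rho=1.0, horizon=2000):
    """Hypothetical sketch: Thompson Sampling on the mean-variance objective
    MV_i = rho * mu_i - sigma_i^2 for Gaussian arms (not the paper's exact MVTS)."""
    k = len(true_means)
    # Per-arm sufficient statistics: pull count, sample mean, sum of squared deviations.
    n = np.zeros(k)
    mean = np.zeros(k)
    ss = np.zeros(k)
    for t in range(horizon):
        if t < 2 * k:
            a = t % k  # play each arm twice to initialize the statistics
        else:
            # Sample a precision tau_i from a Gamma posterior, then a mean mu_i
            # given that precision, and score each arm by its sampled MV.
            tau = rng.gamma(shape=n / 2.0, scale=2.0 / np.maximum(ss, 1e-12))
            mu = rng.normal(mean, 1.0 / np.sqrt(n * tau))
            a = int(np.argmax(rho * mu - 1.0 / tau))
        r = rng.normal(true_means[a], np.sqrt(true_vars[a]))
        # Welford-style online update of the chosen arm's statistics.
        n[a] += 1
        delta = r - mean[a]
        mean[a] += delta / n[a]
        ss[a] += delta * (r - mean[a])
    return n  # pull counts per arm

# Toy usage: arm 1 has the higher mean but much higher variance, so for rho = 1
# the sampler should concentrate its pulls on the low-variance arm 0.
counts = mean_variance_ts(true_means=[0.5, 0.6], true_vars=[0.2, 0.9], rho=1.0)
print(counts)
```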
