Paper Title
Online PAC-Bayes Learning
Paper Authors
Paper Abstract
Most PAC-Bayesian bounds hold in the batch learning setting where data is collected at once, prior to inference or prediction. This somewhat departs from many contemporary learning problems where data streams are collected and the algorithms must dynamically adjust. We prove new PAC-Bayesian bounds in this online learning framework, leveraging an updated definition of regret, and we revisit classical PAC-Bayesian results with a batch-to-online conversion, extending their remit to the case of dependent data. Our results hold for bounded losses, potentially \emph{non-convex}, paving the way to promising developments in online learning.