Paper Title

Parsimonious Learning-Augmented Caching

Paper Authors

Sungjin Im, Ravi Kumar, Aditya Petety, Manish Purohit

Paper Abstract

Learning-augmented algorithms -- in which traditional algorithms are augmented with machine-learned predictions -- have emerged as a framework to go beyond worst-case analysis. The overarching goal is to design algorithms that perform near-optimally when the predictions are accurate, yet retain certain worst-case guarantees irrespective of the accuracy of the predictions. This framework has been successfully applied to online problems such as caching, where the predictions can be used to alleviate uncertainties. In this paper we introduce and study the setting in which the learning-augmented algorithm can utilize the predictions parsimoniously. We consider the caching problem -- which has been extensively studied in the learning-augmented setting -- and show that one can achieve quantitatively similar results while using only a sublinear number of predictions.
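For intuition, below is a minimal Python sketch of the general idea the abstract describes: a cache that spends a limited budget of prediction queries, evicting Belady-style (the page whose predicted next request is furthest away) while budget remains, and falling back to LRU once it is exhausted. This is an illustrative assumption for exposition only, not the paper's actual algorithm; the predictor interface, the query budget, and the budget accounting are all hypothetical.

from collections import OrderedDict

class PredictionAugmentedCache:
    """Toy prediction-budgeted cache (illustrative sketch, not the paper's algorithm)."""

    def __init__(self, capacity, predictor, query_budget):
        self.capacity = capacity
        self.predictor = predictor        # assumed interface: (page, time) -> predicted next request time
        self.query_budget = query_budget  # maximum number of prediction queries allowed
        self.queries_used = 0
        self.cache = OrderedDict()        # recency order, used for the LRU fallback

    def access(self, page, time):
        """Serve one request; return True on a hit, False on a miss."""
        if page in self.cache:
            self.cache.move_to_end(page)  # refresh recency
            return True
        if len(self.cache) >= self.capacity:
            self._evict(time)
        self.cache[page] = True
        return False

    def _evict(self, time):
        if self.queries_used + len(self.cache) <= self.query_budget:
            # Budget remains: query a prediction for each cached page and
            # evict the one requested furthest in the future (Belady-style).
            self.queries_used += len(self.cache)
            victim = max(self.cache, key=lambda p: self.predictor(p, time))
        else:
            # Budget exhausted: fall back to plain LRU (least recently used).
            victim = next(iter(self.cache))
        del self.cache[victim]

A short usage example with a perfect predictor derived from a known request sequence:

# Hypothetical example: perfect next-arrival predictions over a fixed trace.
requests = [1, 2, 3, 1, 2, 4, 1, 2]

def next_arrival(page, now):
    future = [t for t, p in enumerate(requests) if p == page and t > now]
    return future[0] if future else float("inf")

cache = PredictionAugmentedCache(capacity=2, predictor=next_arrival, query_budget=6)
misses = sum(not cache.access(p, t) for t, p in enumerate(requests))

The point of the sketch is only to make the "parsimonious" constraint concrete: predictions are a metered resource, so the algorithm must decide when a query is worth spending rather than consulting the predictor on every request.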
