Paper Title
Just-in-Time Learning for Bottom-Up Enumerative Synthesis
Paper Authors
Paper Abstract
A key challenge in program synthesis is the astronomical size of the search space the synthesizer has to explore. In response to this challenge, recent work proposed to guide synthesis using learned probabilistic models. Obtaining such a model, however, might be infeasible for a problem domain where no high-quality training data is available. In this work we introduce an alternative approach to guided program synthesis: instead of training a model ahead of time, we show how to bootstrap one just in time, during synthesis, by learning from partial solutions encountered along the way. To make the best use of the model, we also propose a new program enumeration algorithm we dub guided bottom-up search, which extends the efficient bottom-up search with guidance from probabilistic models. We implement this approach in a tool called Probe, which targets problems in the popular syntax-guided synthesis (SyGuS) format. We evaluate Probe on benchmarks from the literature and show that it achieves significant performance gains both over unguided bottom-up search and over a state-of-the-art probability-guided synthesizer, which had been trained on a corpus of existing solutions. Moreover, we show that these performance gains do not come at the cost of solution quality: programs generated by Probe are only slightly more verbose than the shortest solutions and perform no unnecessary case-splitting.
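To make the abstract's description of guided bottom-up search and just-in-time learning more concrete, below is a minimal sketch in Python. It is not Probe's implementation: the toy grammar, the surprisal-based cost discretization, the doubling update rule, and all helper names (`probe_sketch`, `enumerate_once`, etc.) are assumptions made for illustration only. The overall loop, however, mirrors the idea described above: enumerate programs bottom-up in order of cost with observational-equivalence pruning, and whenever a round yields only a partial solution (a program matching some of the examples), reward the productions it uses and restart enumeration with costs re-derived from the updated probabilities.

```python
# A minimal, hypothetical sketch of guided bottom-up search with
# just-in-time learning (NOT Probe's actual code or grammar).
import math

# Toy grammar over integer expressions: op -> (arity, semantics).
GRAMMAR = {
    "x":     (0, lambda env: env["x"]),
    "1":     (0, lambda env: 1),
    "plus":  (2, lambda a, b: a + b),
    "times": (2, lambda a, b: a * b),
}

def evaluate(prog, env):
    # Programs are nested tuples: ("plus", ("x",), ("1",)) means x + 1.
    op, *args = prog
    arity, fn = GRAMMAR[op]
    return fn(env) if arity == 0 else fn(*(evaluate(a, env) for a in args))

def ops_used(prog):
    op, *args = prog
    return {op}.union(*(ops_used(a) for a in args)) if args else {op}

def costs_from_probs(probs):
    # Discretized "surprisal" costs: likelier productions become cheaper.
    return {op: max(1, round(-math.log2(p))) for op, p in probs.items()}

def enumerate_once(examples, costs, max_cost):
    """One round of cost-ordered bottom-up enumeration with observational
    equivalence pruning. Returns (solution, best_partial_solution)."""
    goal = tuple(out for _, out in examples)
    bank, seen, best_partial = {}, set(), (None, 0)
    for cost in range(1, max_cost + 1):
        bank[cost] = []
        for op, (arity, _) in GRAMMAR.items():
            if arity == 0 and costs[op] == cost:
                candidates = [(op,)]
            elif arity == 2:
                rest = cost - costs[op]
                candidates = [(op, p1, p2)
                              for c1 in range(1, rest)
                              for p1, _ in bank.get(c1, [])
                              for p2, _ in bank.get(rest - c1, [])]
            else:
                candidates = []
            for prog in candidates:
                outs = tuple(evaluate(prog, env) for env, _ in examples)
                if outs in seen:
                    continue                      # observationally equivalent
                seen.add(outs)
                bank[cost].append((prog, outs))
                matched = sum(a == b for a, b in zip(outs, goal))
                if matched == len(goal):
                    return prog, best_partial     # full solution found
                if matched > best_partial[1]:
                    best_partial = (prog, matched)
    return None, best_partial

def probe_sketch(examples, rounds=5, max_cost=12):
    """Just-in-time learning loop: after each failed round, reward the
    productions used by the best partial solution and re-derive costs."""
    probs = {op: 1 / len(GRAMMAR) for op in GRAMMAR}      # uniform prior
    for _ in range(rounds):
        solution, (partial, _) = enumerate_once(
            examples, costs_from_probs(probs), max_cost)
        if solution is not None:
            return solution
        if partial is not None:
            for op in ops_used(partial):                  # reward useful ops
                probs[op] *= 2.0
            total = sum(probs.values())
            probs = {op: p / total for op, p in probs.items()}
    return None

# Example: synthesize f(x) = x*(x+1) from three input-output pairs.
examples = [({"x": 2}, 6), ({"x": 3}, 12), ({"x": 4}, 20)]
print(probe_sketch(examples))  # e.g. ('plus', ('x',), ('times', ('x',), ('x',)))
```

In this toy run the first enumeration round already finds a program equivalent to x*(x+1); on harder tasks the intent, as described in the abstract, is that partial solutions reshape the production costs so that later rounds reach deep but likely programs much sooner than unguided search would.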