Paper Title

Neural Regression For Scale-Varying Targets

Authors

Adam Khakhar, Jacob Buckman

Abstract


In this work, we demonstrate that a major limitation of regression using a mean-squared error loss is its sensitivity to the scale of its targets. This makes learning difficult in settings where targets take on values of varying scales. A recently proposed alternative loss function, known as histogram loss, avoids this issue. However, its computational cost grows linearly with the number of buckets in the histogram, which renders prediction with real-valued targets intractable. To address this issue, we propose a novel approach to training deep learning models on real-valued regression targets, autoregressive regression, which learns a high-fidelity distribution by utilizing an autoregressive target decomposition. We demonstrate that this training objective allows us to solve regression tasks involving targets with different scales.
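A minimal sketch (not from the paper) of the scale sensitivity the abstract describes: under mean-squared error, the per-example gradient scales linearly with the absolute error, so targets on a larger scale dominate the loss and its gradients even when the relative error is identical. The function name here is illustrative.

```python
# Illustration only: MSE gradient magnitude grows with target scale.
def mse_grad(pred, target):
    # d/dpred of (pred - target)^2
    return 2.0 * (pred - target)

# Two targets differing only in scale; both predictions are off by
# the same relative amount (10%).
g_small = mse_grad(0.9, 1.0)       # roughly -0.2
g_large = mse_grad(900.0, 1000.0)  # -200.0

# Same relative error, but the large-scale target's gradient is
# about 1000x larger, so it dominates any mixed-scale batch.
print(abs(g_large) / abs(g_small))
```

This is the failure mode that scale-invariant objectives such as histogram loss, and the paper's autoregressive regression, are meant to avoid.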
