Paper Title
Deep Active Learning Using Barlow Twins
Paper Authors
Paper Abstract
The generalisation performance of a convolutional neural network (CNN) is largely determined by the quantity, quality, and diversity of the training images. In many real-world applications, data is easy to acquire but expensive and time-consuming to label, yet all the training data needs to be annotated before training. The goal of active learning is to draw the most informative samples from the unlabeled pool, which can be used for training after annotation. With a totally different objective, self-supervised learning has been gaining meteoric popularity by closing the performance gap with supervised methods on large computer vision benchmarks. Self-supervised learning (SSL) has been shown to produce low-level representations that are invariant to distortions of the input sample and can encode invariance to artificially created distortions, e.g. rotation, solarization, cropping, etc. SSL approaches rely on simpler and more scalable frameworks for learning. In this paper, we unify these two families of approaches from the angle of active learning using the self-supervised learning manifold and propose Deep Active Learning using Barlow Twins (DALBT), an active learning method for all datasets that combines a classifier trained jointly with the self-supervised loss framework of Barlow Twins, in a setting where the model can encode the invariance of artificially created distortions, e.g. rotation, solarization, cropping, etc.
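The abstract describes the method only at a high level: a classifier trained jointly with the Barlow Twins self-supervised loss, with active learning drawing the most informative unlabeled samples for annotation. The sketch below is a minimal, hypothetical illustration of that combination in PyTorch; the names (`JointModel`, `joint_step`, `select_most_informative`), the entropy-based acquisition heuristic, and all hyperparameters are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def barlow_twins_loss(z1, z2, lambda_offdiag=5e-3):
    """Barlow Twins loss: push the cross-correlation matrix of two embedding
    batches toward the identity (invariance on the diagonal, redundancy
    reduction off the diagonal)."""
    n, _ = z1.shape
    # Normalise each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / n                      # d x d cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag


class JointModel(nn.Module):
    """Shared encoder with a projector head (for the SSL term) and a
    classifier head (for the supervised term on labelled samples)."""
    def __init__(self, encoder, feat_dim, proj_dim, num_classes):
        super().__init__()
        self.encoder = encoder
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, proj_dim), nn.ReLU(),
            nn.Linear(proj_dim, proj_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        return self.projector(h), self.classifier(h)


def joint_step(model, x1, x2, labels, ssl_weight=1.0):
    """One training step on a labelled batch: two augmented views x1/x2
    (e.g. rotated, solarized, cropped) feed the Barlow Twins term, while
    view x1 also feeds the cross-entropy classification term."""
    z1, logits = model(x1)
    z2, _ = model(x2)
    return F.cross_entropy(logits, labels) + ssl_weight * barlow_twins_loss(z1, z2)


@torch.no_grad()
def select_most_informative(model, unlabeled_loader, budget):
    """Score unlabeled samples by predictive entropy and return the indices
    of the top `budget` candidates for annotation. This is a common
    uncertainty heuristic used here as a placeholder; DALBT's actual
    acquisition criterion may differ."""
    scores = []
    for x, idx in unlabeled_loader:          # loader assumed to yield (image, index)
        _, logits = model(x)
        p = logits.softmax(dim=1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
        scores += list(zip(entropy.tolist(), idx.tolist()))
    scores.sort(reverse=True)
    return [i for _, i in scores[:budget]]
```

In this reading, the Barlow Twins term supplies invariance to the artificially created distortions mentioned in the abstract, the cross-entropy term is computed only on the currently labelled pool, and the acquisition step transfers the highest-scoring unlabeled samples to the annotator after each training round.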