Paper Title


Not all domains are equally complex: Adaptive Multi-Domain Learning

Authors

Ali Senhaji, Jenni Raitoharju, Moncef Gabbouj, Alexandros Iosifidis

Abstract


Deep learning approaches are highly specialized and require training separate models for different tasks. Multi-domain learning looks at ways to learn a multitude of different tasks, each coming from a different domain, at once. The most common approach in multi-domain learning is to form a domain-agnostic model, the parameters of which are shared among all domains, and to learn a small number of extra domain-specific parameters for each new domain. However, different domains come with different levels of difficulty; parameterizing the models of all domains with the same augmented version of the domain-agnostic model leads to unnecessarily inefficient solutions, especially for easy-to-solve tasks. We propose an adaptive parameterization approach to deep neural networks for multi-domain learning. The proposed approach performs on par with the original approach while requiring far fewer parameters, leading to efficient multi-domain learning solutions.
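The efficiency argument in the abstract can be made concrete with a parameter count. The sketch below is not the authors' code; the layer sizes, the low-rank adapter form, and the per-domain ranks are illustrative assumptions. It compares a fixed-size parameterization (every domain gets the same adapter) against an adaptive one (adapter size scales with domain difficulty) on top of a shared backbone.

```python
# Hedged sketch of the parameter accounting behind adaptive multi-domain
# learning. All sizes and the low-rank adapter form are assumptions made
# for illustration, not the paper's actual architecture.

def backbone_params(dim: int, layers: int) -> int:
    # Shared domain-agnostic model: `layers` square weight matrices.
    return layers * dim * dim

def adapter_params(dim: int, rank: int) -> int:
    # One low-rank residual adapter per layer: two dim x rank factors.
    return 2 * dim * rank

dim, layers = 256, 10
shared = backbone_params(dim, layers)          # paid once, shared by all domains

# Fixed parameterization: every domain gets the same rank-32 adapter stack.
fixed = layers * adapter_params(dim, rank=32)  # per-domain cost, same for all

# Adaptive parameterization: easy domains get small adapters, hard ones larger.
ranks = {"easy": 2, "medium": 8, "hard": 32}
adaptive = {d: layers * adapter_params(dim, r) for d, r in ranks.items()}

print(shared)    # 655360
print(fixed)     # 163840
print(adaptive)  # {'easy': 10240, 'medium': 40960, 'hard': 163840}
```

Under these toy numbers, the easy domain costs 16x fewer domain-specific parameters than the fixed scheme while the hard domain keeps its full capacity, which is the trade-off the abstract describes.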
