Title
An Identifiable Double VAE For Disentangled Representations
Authors
Abstract
A large part of the literature on learning disentangled representations focuses on variational autoencoders (VAEs). Recent developments demonstrate that disentanglement cannot be obtained in a fully unsupervised setting without inductive biases on models and data. However, Khemakhem et al. (AISTATS 2020) suggest that employing a particular form of factorized prior, conditionally dependent on auxiliary variables complementing the input observations, can be one such bias, resulting in an identifiable model with guarantees on disentanglement. Working along this line, we propose a novel VAE-based generative model with theoretical guarantees on identifiability. We obtain our conditional prior over the latents by learning an optimal representation, which additionally strengthens their regularization. We also extend our method to semi-supervised settings. Experimental results indicate superior performance with respect to state-of-the-art approaches, according to several established metrics proposed in the disentanglement literature.
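To make the key mechanism concrete, the sketch below illustrates the idea behind a conditional prior over the latents: the prior p(z | u) is a diagonal Gaussian whose parameters are produced from an auxiliary variable u (e.g. a label), and the KL term of the ELBO pulls the encoder's posterior toward this u-dependent prior rather than a fixed N(0, I). This is a minimal illustration of the general construction in Khemakhem et al., not the paper's actual architecture; the linear "prior network" weights and all shapes here are hypothetical.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( q(z|x,u) || p(z|u) ) between diagonal Gaussians,
    summed over latent dimensions, one value per sample."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0,
        axis=-1,
    )

rng = np.random.default_rng(0)
# Hypothetical linear "prior network": maps auxiliary u to prior parameters.
W_mu = rng.normal(size=(3, 5))
W_lv = rng.normal(size=(3, 5)) * 0.1
u = rng.normal(size=(4, 3))             # auxiliary variables (e.g. labels)
mu_p, logvar_p = u @ W_mu, u @ W_lv     # conditional prior p(z | u)
mu_q = rng.normal(size=(4, 5))          # stand-ins for encoder outputs q(z | x, u)
logvar_q = rng.normal(size=(4, 5)) * 0.1
kl = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)  # per-sample KL penalty
```

In a full model this KL term would be added to the reconstruction loss and minimized jointly with the encoder, decoder, and prior network; making the prior depend on u is what breaks the rotational symmetry of an unconditional Gaussian prior and enables identifiability.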