Paper Title
Canonical Mean Filter for Almost Zero-Shot Multi-Task Classification
Paper Authors
Paper Abstract
The support set is key to providing a conditional prior for fast adaptation of the model in few-shot tasks. However, the strict form of the support set makes its construction difficult in practical applications. Motivated by ANIL, we rethink the role of adaptation in the feature extractor of CNAPs, a state-of-the-art representative few-shot method. To investigate this role, we design the Almost Zero-Shot (AZS) task, which fixes the support set in place of the common scheme of providing a task-specific support set as the conditional prior for each task. The AZS experimental results suggest that adaptation contributes little in the feature extractor. However, CNAPs are not robust to randomly selected support sets and perform poorly on some datasets of Meta-Dataset, because the simple mean operator produces scattered mean embeddings. To enhance the robustness of CNAPs, we propose the Canonical Mean Filter (CMF) module, which makes the mean embeddings compact and stable in feature space by mapping the support sets into a canonical form. CMF makes CNAPs robust to any fixed support set, even a random matrix. This property allows CNAPs to remove the mean encoder and the parameter adaptation network at the test stage, while CNAP-CMF on AZS tasks maintains performance comparable to one-shot tasks. This yields a large parameter reduction: precisely, 40.48% of the parameters are dropped at the test stage. CNAP-CMF also outperforms CNAPs on one-shot tasks because it addresses the problem of unstable intra-task performance. Classification performance, visualization, and clustering results verify that CMF makes CNAPs better and simpler.
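To make the contrast concrete, below is a minimal PyTorch sketch of the two conditioning schemes described in the abstract: the standard encoder that takes the simple mean of the support-set embeddings, and a CMF-style module that first maps the support embeddings into a canonical form before averaging. The class names MeanEncoder and CanonicalMeanFilter, and the two-layer filter architecture, are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch (not the authors' implementation) contrasting the usual
# CNAPs-style task conditioning with the AZS setting described in the abstract.
import torch
import torch.nn as nn


class MeanEncoder(nn.Module):
    """Standard scheme: the task representation is the simple mean of the
    support-set embeddings, which can scatter widely across tasks."""

    def forward(self, support_embeddings: torch.Tensor) -> torch.Tensor:
        # support_embeddings: (num_support, dim) -> (dim,)
        return support_embeddings.mean(dim=0)


class CanonicalMeanFilter(nn.Module):
    """Hypothetical CMF-style module: maps the support embeddings into a
    canonical form so the resulting mean embedding stays compact and stable,
    even when the support set is fixed or a random matrix (AZS setting)."""

    def __init__(self, dim: int):
        super().__init__()
        # Illustrative two-layer filter; the real architecture is in the paper.
        self.filter = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, support_embeddings: torch.Tensor) -> torch.Tensor:
        canonical = self.filter(support_embeddings)  # map into canonical form
        return canonical.mean(dim=0)                 # then average


dim = 64
task_support = torch.randn(25, dim)  # usual: embeddings of a task-specific support set
azs_support = torch.randn(5, dim)    # AZS: one fixed (here random) set reused for all tasks

mean_enc = MeanEncoder()
cmf = CanonicalMeanFilter(dim)
print(mean_enc(task_support).shape)  # torch.Size([64])
print(cmf(azs_support).shape)        # torch.Size([64]) -- same conditioning interface
```

Because the CMF output no longer depends on a carefully constructed, task-specific support set, a conditioning path of this shape could in principle be frozen or dropped at test time, which is consistent with the abstract's reported parameter reduction.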