Paper Title
Efficient Image Representation Learning with Federated Sampled Softmax
Paper Authors
Paper Abstract
Learning image representations on decentralized data can bring many benefits in cases where data cannot be aggregated across data silos. Softmax cross entropy loss is highly effective and commonly used for learning image representations. Using a large number of classes has proven to be particularly beneficial for the descriptive power of such representations in centralized learning. However, doing so on decentralized data with Federated Learning is not straightforward, as the demand on FL clients' computation and communication increases proportionally with the number of classes. In this work we introduce federated sampled softmax (FedSS), a resource-efficient approach for learning image representations with Federated Learning. Specifically, the FL clients sample a set of classes and optimize only the corresponding model parameters with respect to a sampled softmax objective that approximates the global full softmax objective. We examine the loss formulation and empirically show that our method significantly reduces the number of parameters transferred to and optimized by the client devices, while performing on par with the standard full softmax method. This work opens the possibility of efficiently learning image representations on decentralized data with a large number of classes in the federated setting.
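To illustrate the core idea behind the abstract, the following is a minimal sketch of a sampled softmax loss in NumPy. It is not the paper's exact formulation (in particular, it uses uniform negative sampling and omits any log-q sampling correction); the function names and shapes are illustrative assumptions. A client only needs the rows of the output-layer weight matrix for the sampled classes, which is what reduces communication and computation.

```python
import numpy as np

def full_softmax_loss(embedding, class_weights, true_class):
    """Standard cross entropy over all classes (the global objective)."""
    logits = class_weights @ embedding
    shifted = logits - logits.max()  # numerical stability
    return -(shifted[true_class] - np.log(np.exp(shifted).sum()))

def sampled_softmax_loss(embedding, class_weights, true_class, num_sampled, rng):
    """Approximate the full softmax by scoring only the true class plus a
    uniform sample of negative classes (no sampling correction, for brevity)."""
    num_classes = class_weights.shape[0]
    # Uniformly sample negatives, excluding the true class.
    candidates = np.delete(np.arange(num_classes), true_class)
    negatives = rng.choice(candidates, size=num_sampled, replace=False)
    # Only these rows of the output layer would be sent to the client.
    sampled_ids = np.concatenate(([true_class], negatives))
    logits = class_weights[sampled_ids] @ embedding
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[0]  # index 0 is the true class

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 16))   # output-layer weights: 100 classes, dim 16
x = rng.normal(size=16)          # an image embedding
loss = sampled_softmax_loss(x, W, true_class=7, num_sampled=10, rng=rng)
```

When `num_sampled` covers all remaining classes, the sampled loss coincides with the full softmax loss; with a small sample it is a cheap stochastic approximation of it.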