Paper Title
Privacy-Preserving Federated Recurrent Neural Networks
Paper Authors
Paper Abstract
We present RHODE, a novel system that enables privacy-preserving training of and prediction on Recurrent Neural Networks (RNNs) in a cross-silo federated learning setting by relying on multiparty homomorphic encryption. RHODE preserves the confidentiality of the training data, the model, and the prediction data; and it mitigates federated learning attacks that target the gradients under a passive-adversary threat model. We propose a packing scheme, multi-dimensional packing, for better utilization of Single Instruction, Multiple Data (SIMD) operations under encryption. With multi-dimensional packing, RHODE enables the efficient processing, in parallel, of a batch of samples. To avoid the exploding gradients problem, RHODE provides several clipping approximations for performing gradient clipping under encryption. We experimentally show that the model performance with RHODE remains similar to non-secure solutions for both homogeneous and heterogeneous data distributions among the data holders. Our experimental evaluation shows that RHODE scales linearly with the number of data holders and the number of timesteps, and sub-linearly and sub-quadratically with the number of features and the number of hidden units of RNNs, respectively. To the best of our knowledge, RHODE is the first system that provides the building blocks for the training of RNNs and their variants under encryption in a federated learning setting.
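The batched SIMD idea behind the packing scheme can be illustrated in plaintext. The following is a minimal NumPy sketch of the general concept only: the batch dimensions, the column-wise slot layout, and the variable names are illustrative assumptions, not RHODE's actual multi-dimensional packing, which operates on encrypted slot vectors under multiparty homomorphic encryption.

```python
# Plaintext sketch: packing a batch so that one slot-wise operation
# processes every sample of the batch at once (the property SIMD
# evaluation under encryption exploits). Illustrative assumptions only.
import numpy as np

batch_size, n_features, n_hidden = 4, 3, 2
X = np.arange(batch_size * n_features, dtype=float).reshape(batch_size, n_features)
W = np.full((n_features, n_hidden), 0.5)

# Column-wise packing: one slot vector per feature, holding that feature's
# value for every sample in the batch (stand-in for a ciphertext's slots).
packed = [X[:, j].copy() for j in range(n_features)]

# One slot-wise multiply-and-accumulate per feature then handles the whole
# batch in parallel.
out = np.zeros((batch_size, n_hidden))
for h in range(n_hidden):
    acc = np.zeros(batch_size)
    for j in range(n_features):
        acc += packed[j] * W[j, h]   # element-wise: all samples at once
    out[:, h] = acc

assert np.allclose(out, X @ W)       # matches the standard batched product
```

Under encryption, the per-slot multiplications above become SIMD operations on ciphertext slots, which is why a suitable packing layout lets a batch of samples be processed in parallel rather than sample by sample.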