Title
BoMaNet: Boolean Masking of an Entire Neural Network
Authors
Abstract
Recent work on stealing machine learning (ML) models from inference engines with physical side-channel attacks warrants an urgent need for effective side-channel defenses. This work proposes the first $\textit{fully-masked}$ neural network inference engine design. Masking uses secure multi-party computation to split the secrets into random shares and to decorrelate the statistical relation of secret-dependent computations to side channels (e.g., the power draw). In this work, we construct secure hardware primitives to mask $\textit{all}$ the linear and non-linear operations in a neural network. We address the challenge of masking integer addition by converting each addition into a sequence of XOR and AND gates and by augmenting Trichina's secure Boolean masking style. We improve the traditional Trichina AND gates by adding pipelining elements for better glitch resistance, and we architect the whole design to sustain a throughput of 1 masked addition per cycle. We implement the proposed secure inference engine on a Xilinx Spartan-6 (XC6SLX75) FPGA. The results show that masking incurs an overhead of 3.5\% in latency and 5.9$\times$ in area. Finally, we demonstrate the security of the masked design with 2M traces.
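As a rough illustration of the masking primitives the abstract refers to, the sketch below shows a first-order, Trichina-style masked AND gate in C together with a share-wise XOR; since XOR is linear over GF(2), only the AND gate needs fresh randomness, and integer addition can be decomposed into these two gate types. The function and variable names are our own assumptions for illustration; this is not the authors' pipelined FPGA/RTL implementation.

```c
/*
 * Minimal sketch of first-order Boolean masking, assuming the standard
 * Trichina construction; names are illustrative, not the paper's RTL.
 * A secret bit x is held as two shares with x = x1 ^ x2.
 */
#include <stdint.h>
#include <stdio.h>

/* XOR is linear over GF(2), so it is masked "for free", share by share. */
static void masked_xor(uint8_t a1, uint8_t a2, uint8_t b1, uint8_t b2,
                       uint8_t *c1, uint8_t *c2) {
    *c1 = a1 ^ b1;
    *c2 = a2 ^ b2;
}

/*
 * Trichina-style masked AND: a fresh random bit r becomes one output share,
 * and the cross terms are folded onto r so that no intermediate value equals
 * the unmasked a, b, or a&b. (In hardware, the evaluation order and the
 * pipeline registers the paper adds keep glitches from recombining shares;
 * in software this only demonstrates the algebra.)
 */
static void trichina_and(uint8_t a1, uint8_t a2, uint8_t b1, uint8_t b2,
                         uint8_t r, uint8_t *c1, uint8_t *c2) {
    *c1 = r;
    *c2 = (((r ^ (a1 & b1)) ^ (a1 & b2)) ^ (a2 & b1)) ^ (a2 & b2);
}

int main(void) {
    /* Exhaustively check both gadgets over all share/randomness values. */
    for (int v = 0; v < 32; v++) {
        uint8_t a1 = v & 1, a2 = (v >> 1) & 1;
        uint8_t b1 = (v >> 2) & 1, b2 = (v >> 3) & 1;
        uint8_t r  = (v >> 4) & 1;
        uint8_t a = a1 ^ a2, b = b1 ^ b2;
        uint8_t x1, x2, y1, y2;

        masked_xor(a1, a2, b1, b2, &x1, &x2);
        trichina_and(a1, a2, b1, b2, r, &y1, &y2);

        if ((x1 ^ x2) != (a ^ b) || (y1 ^ y2) != (a & b)) {
            printf("share recombination mismatch\n");
            return 1;
        }
    }
    printf("masked XOR/AND recombine correctly for all inputs\n");
    return 0;
}
```

A masked integer adder in the spirit the abstract describes would chain such gadgets to compute the sum and carry bits of each addition directly on shares, which is why sustaining one masked addition per cycle requires pipelining the AND gadgets.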