Paper Title

Exploiting and Defending Against the Approximate Linearity of Apple's NeuralHash

Paper Authors

Jagdeep Singh Bhatia, Kevin Meng

Paper Abstract

Perceptual hashes map images with identical semantic content to the same $n$-bit hash value, while mapping semantically different images to different hashes. These algorithms have important applications in cybersecurity, such as copyright infringement detection, content fingerprinting, and surveillance. Apple's NeuralHash is one such system that aims to detect the presence of illegal content on users' devices without compromising consumer privacy. We make the surprising discovery that NeuralHash is approximately linear, which inspires the development of novel black-box attacks that can (i) evade detection of "illegal" images, (ii) generate near-collisions, and (iii) leak information about hashed images, all without access to model parameters. These vulnerabilities pose serious threats to NeuralHash's security goals; to address them, we propose a simple fix using classical cryptographic standards.
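To make the approximate-linearity claim concrete: NeuralHash maps an image through a neural embedding network, then projects and thresholds the result into an $n$-bit hash. The sketch below is a minimal probe for that property, assuming a toy linear stand-in for the embedding; `embed`, `linearity_gap`, and the projection `W` are illustrative placeholders, not Apple's actual model or the paper's code.

```python
import numpy as np

# Toy stand-in for NeuralHash's embedding network. The real pipeline maps an
# image to a feature vector that is then projected and thresholded into a
# 96-bit hash; here a random linear map plays the embedding's role.
rng = np.random.default_rng(0)
W = rng.normal(size=(128, 3 * 64 * 64))

def embed(image: np.ndarray) -> np.ndarray:
    """Flatten the image and apply the (toy) embedding map."""
    return W @ image.ravel()

def linearity_gap(x: np.ndarray, y: np.ndarray, alpha: float = 0.5) -> float:
    """Relative gap between embedding a blend and blending the embeddings.

    An exactly linear model gives 0; the paper's empirical finding is that
    the real NeuralHash network keeps this gap small, i.e. it behaves
    approximately linearly in its input.
    """
    blended = embed(alpha * x + (1 - alpha) * y)
    combined = alpha * embed(x) + (1 - alpha) * embed(y)
    return float(np.linalg.norm(blended - combined) / np.linalg.norm(combined))

x = rng.random((3, 64, 64))
y = rng.random((3, 64, 64))
print(f"linearity gap: {linearity_gap(x, y):.3f}")  # 0.000 for this linear toy
```

Running such a probe over many image pairs is one way to quantify how linear a black-box model is. Approximate linearity is what makes the abstract's attacks plausible: if blending or perturbing inputs moves the hash predictably, an attacker can steer an image's hash away from a target (evasion), toward one (near-collisions), or recover information about a hashed image, all without access to model parameters.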
