Paper Title


Transferable Unlearnable Examples

Paper Authors

Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang

Paper Abstract


With more people publishing their personal data online, unauthorized data usage has become a serious concern. Unlearnable strategies have been introduced to prevent third parties from training on such data without permission: they add perturbations to users' data before publishing, aiming to invalidate any model trained on the perturbed published dataset. However, these perturbations are generated for a specific training setting and a target dataset, and their unlearnable effect degrades significantly when they are applied in other training settings or on other datasets. To tackle this issue, we propose a novel unlearnable strategy based on a Classwise Separability Discriminant (CSD), which aims to better transfer the unlearnable effect to other training settings and datasets by enhancing linear separability. Extensive experiments demonstrate the transferability of the proposed unlearnable examples across training settings and datasets.
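The abstract only names the Classwise Separability Discriminant (CSD) and states that the perturbations enhance linear separability; it does not give the formula. Below is a minimal sketch of how such a separability score could be measured, assuming CSD resembles the classic Fisher criterion (between-class scatter divided by within-class scatter). The function name, the toy data, and the classwise offsets standing in for the perturbations are all illustrative, not the paper's actual method.

```python
import numpy as np

def classwise_separability(features: np.ndarray, labels: np.ndarray) -> float:
    """Fisher-style separability score: ratio of between-class scatter
    to within-class scatter. Larger values mean the classes are more
    linearly separable in this feature space."""
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        class_feats = features[labels == c]
        class_mean = class_feats.mean(axis=0)
        # Between-class scatter: distance of the class mean from the
        # overall mean, weighted by class size.
        between += len(class_feats) * np.sum((class_mean - overall_mean) ** 2)
        # Within-class scatter: spread of samples around their class mean.
        within += np.sum((class_feats - class_mean) ** 2)
    return between / (within + 1e-12)

# Toy usage: random features score low; adding a class-specific offset
# (a stand-in for a classwise perturbation) raises the score, i.e. the
# classes become more linearly separable, which is the property a model
# can latch onto instead of the real image content.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))
labels = rng.integers(0, 10, size=200)
print(classwise_separability(feats, labels))                    # low
offsets = 5.0 * rng.normal(size=(10, 16))
print(classwise_separability(feats + offsets[labels], labels))  # higher
```

The intuition this sketch captures is why such perturbations make data unlearnable: if each class carries its own easily separable perturbation pattern, a model trained on the published data learns the pattern rather than the underlying features, and therefore fails on clean data.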
