Improving robustness of convolutional neural networks using element-wise activation scaling
[Abstract] Recent works reveal that re-calibrating the intermediate activations of adversarial examples can improve the adversarial robustness of CNN models. State-of-the-art methods exploit this feature at the channel level to help CNN models defend against adversarial attacks, where each intermediate activation is uniformly scaled by a factor. However, we conduct a more fine-grained analysis of intermediate activations and observe that adversarial examples change only a portion of the elements within an activation. This observation motivates us to investigate a new method of re-calibrating the intermediate activations of CNNs to improve robustness. Instead of uniformly scaling each activation, we individually adjust each element within an activation, and thus propose Element-Wise Activation Scaling, dubbed EWAS, to improve CNNs' adversarial robustness. EWAS is a simple yet very effective method for enhancing robustness. Experimental results on ResNet-18 and WideResNet with CIFAR10 and SVHN show that EWAS significantly improves robust accuracy. In particular, for ResNet-18 on CIFAR10, EWAS increases the adversarial accuracy by 37.65% to 82.35% against the C&W attack. The code and trained models are available at https://github.com/ieslab-ynu/EWAS. © 2023 Elsevier B.V. All rights reserved.
[Publication date] 2023-12-01