Paper Title: Generating Attachable Adversarial Patches to Make the Object Identification Wrong Based on Neural Networks
Authors: Shi-Jinn Horng, Huang Huang
Article Citation: Shi-Jinn Horng, Huang Huang (2023), "Generating Attachable Adversarial Patches to Make the Object Identification Wrong Based on Neural Networks", International Journal of Advance Computational Engineering and Networking (IJACEN), Volume-11, Issue-4, pp. 35-42.
Abstract: An adversarial example is an input that causes a network to misclassify through a small perturbation, one that is often harmless to human cognition but fatal to neural networks. At present, no defense can resist every kind of perturbation attack, which raises further doubts about the robustness of network architectures. This research proposes three sub-models for attacking neural networks. The attack-scope model effectively reduces the attack range and guides the adversarial algorithm to conduct an accurate perturbation attack. The adversarial attack models generate different adversarial patches through adversarial algorithms. These patches are compact and can be manufactured physically, so a patch can be attached directly to the original image to efficiently and accurately disturb the target model. Generating a small patch achieves a disturbance success rate of 70.1%. Moreover, the proposed method can be applied to different neural networks.
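The core ideas in the abstract, restricting the perturbation to a small attachable region (the attack scope) and optimizing only those pixels to flip the model's prediction, can be illustrated with a minimal toy sketch. Everything below is hypothetical: a hand-rolled linear scorer stands in for the target neural network, and plain gradient ascent stands in for the paper's adversarial algorithms; sizes, positions, and step counts are illustrative assumptions, not the paper's settings.

```python
import random

random.seed(0)
H, W = 8, 8        # toy grayscale image size (assumption, not from the paper)
PH, PW = 3, 3      # patch size
TOP, LEFT = 2, 2   # where the patch is attached -- the restricted "attack scope"

# Toy stand-in for a classifier: one linear score per class over the
# flattened image. The real paper attacks actual neural networks.
weights = [[random.gauss(0, 1) for _ in range(H * W)] for _ in range(2)]

def margin(img, target=1, other=0):
    """Target-class score minus the original-class score."""
    flat = [p for row in img for p in row]
    score = lambda c: sum(w * p for w, p in zip(weights[c], flat))
    return score(target) - score(other)

def attach(img, patch):
    """Paste the patch onto a copy of the image at (TOP, LEFT)."""
    out = [row[:] for row in img]
    for i in range(PH):
        for j in range(PW):
            out[TOP + i][LEFT + j] = patch[i][j]
    return out

img = [[random.random() for _ in range(W)] for _ in range(H)]
patch = [[random.random() for _ in range(PW)] for _ in range(PH)]
before = attach(img, patch)  # image with the unoptimized patch

# Gradient ascent restricted to the patch pixels: for this linear model the
# gradient of the margin at pixel (r, c) is simply weights[1] - weights[0]
# at that position, so each step nudges the patch toward the target class
# while clipping pixels to the valid [0, 1] range.
for _ in range(50):
    for i in range(PH):
        for j in range(PW):
            idx = (TOP + i) * W + (LEFT + j)
            g = weights[1][idx] - weights[0][idx]
            patch[i][j] = min(1.0, max(0.0, patch[i][j] + 0.1 * g))

adv = attach(img, patch)
print(round(margin(before), 3), round(margin(adv), 3))
```

Because only the patch region is ever modified, the rest of the image is untouched, which is what makes the patch "attachable": it can be printed and placed on the physical object rather than requiring per-pixel access to the whole input.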
Keywords - Deep Learning, Neural Network, Adversarial Attack, Adversarial Patch
Type: Research paper
Published: Volume-11, Issue-4
DOIONLINE NO - IJACEN-IRAJ-DOIONLINE-19644
Copyright: © Institute of Research and Journals
Published on 2023-07-10