
Latest article | Interpretable adversarial defense based on causal inference

2022-06-10 17:54:00 Zhiyuan community

Machine Intelligence Research

Deep learning models are vulnerable to adversarial attacks. In sensitive and security-critical scenarios, defending against such attacks is essential. However, deep learning methods still lack effective defense mechanisms, and most existing approaches are only expedients tailored to specific adversarial examples. A research team from the Institute of Automation, Chinese Academy of Sciences, used causal inference to explore the working mechanism of adversarial examples, building a causal model that describes their generation and behavior. The results are published in the third issue of MIR.

 

Image from Springer

 

Full-text overview

 

Deep learning has opened a new era of artificial intelligence. In computer vision, deep learning methods have achieved remarkable success in image classification, object detection, and image segmentation, and deep neural networks have demonstrated a powerful ability to map raw data to high-level features through nonlinear transformations. However, adversarial examples cast a shadow over this success: "powerful" deep learning modules are vulnerable to a variety of adversarial attack algorithms. Attackers can use well-designed perturbations, imperceptible to humans, to corrupt the predictions of state-of-the-art models. This problem hinders the application of deep methods in sensitive and security-critical scenarios. Defense against adversarial attacks has therefore attracted much attention and become an important research topic.
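To make the attack concrete, here is a minimal sketch of a gradient-sign (FGSM-style) perturbation on a toy linear "model". The linear model and its parameters are stand-ins for illustration only, not the networks or attacks studied in the paper.

```python
import numpy as np

# Toy linear "model": class scores are W @ x.  A hypothetical stand-in for
# a deep network, small enough that the gradient is available in closed form.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))          # 3 classes, 16-dimensional input
x = rng.normal(size=16)
true_label = int(np.argmax(W @ x))    # treat the clean prediction as the label

# For a linear model, the gradient of the true-class score w.r.t. the input
# is just W[true_label].  Stepping against its sign is the FGSM direction:
# it is guaranteed to lower the true-class score while staying small per-pixel.
epsilon = 0.5
x_adv = x - epsilon * np.sign(W[true_label])
```

Each input coordinate moves by at most `epsilon`, yet the true-class score drops by `epsilon * ||W[true_label]||_1`, which is how a perturbation that is tiny per component can still overwhelm the prediction.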

 

There has been extensive research on defending against adversarial attacks. However, it remains unclear how adversarial examples corrupt deep learning models, and their underlying working mechanism deserves further exploration. As a result, most existing methods are only expedients for specific adversarial examples. For example, adversarial training, which introduces adversarial examples into the training process as a defense, has received much attention. However, the generalization ability of adversarial-training-based methods is very limited, and this limitation is especially pronounced against unseen attacks.

 

To defend against adversarial attacks, it is necessary to reveal the working mechanism of adversarial examples. The research team from the Institute of Automation, Chinese Academy of Sciences, used causal inference to explore this mechanism. Compared with purely statistical methods, causal inference can model the relationships between variables more naturally.

 

The paper establishes a causal model to describe the generation and behavior of adversarial examples. The causal model makes it possible to estimate the causal effect between the output of a deep neural network and sub-regions of the adversarial example, which purely data-driven, statistical methods cannot achieve. The tampered predictions can thus be attributed to specific sub-regions, opening up the possibility of explaining adversarial examples and revealing their working mechanism.
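One intuitive way such region-wise causal effects can be probed is by intervention: replace a sub-region with a baseline value and measure how the predicted-class score changes. The sketch below illustrates this idea on a toy linear scorer; the patch-masking intervention, patch size, and model are assumptions for illustration, not the estimator used in the paper.

```python
import numpy as np

def region_effects(predict, image, patch=4, baseline=0.0):
    """Interventional probe: overwrite each patch with `baseline` (a do-style
    intervention) and record how much the predicted-class score drops.
    `predict` maps a 2-D image to a vector of class scores."""
    scores = predict(image)
    cls = int(np.argmax(scores))
    h, w = image.shape
    effects = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            probe = image.copy()
            probe[i:i + patch, j:j + patch] = baseline   # intervene on one region
            effects[i // patch, j // patch] = scores[cls] - predict(probe)[cls]
    return cls, effects

# Toy "model": each class score is the inner product with a fixed template,
# so a patch's effect is exactly its additive contribution to the score.
rng = np.random.default_rng(1)
templates = rng.normal(size=(2, 8, 8))
predict = lambda img: np.array([float((t * img).sum()) for t in templates])

img = rng.normal(size=(8, 8))
cls, effects = region_effects(predict, img)
```

Because the toy model is linear and the baseline is zero, the per-patch effects sum exactly to the predicted-class score, which makes the probe easy to sanity-check; for a real network the effects are only an approximation of each region's influence.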

Adversarial examples: adding a small perturbation to the original image can manipulate the model output.

 

The main contributions of the article are as follows:

 

1) The paper establishes a causal model to explain the generation and behavior of adversarial examples. The causal model makes it possible to estimate the causal relationship between the output of a deep neural network and sub-regions of the input sample.

 

2) Based on causal inference, the article reveals the working mechanism of adversarial examples. The causal effects of different sub-regions of an adversarial example may be inconsistent, or even opposite in sign. Usually, only a small number of sub-regions play a decisive role in fooling the recognition model.

 

3) Based on these findings, the paper proposes simple and effective strategies for defending against adversarial attacks. These strategies make it possible to detect and identify adversarial examples without additional models or training.
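The finding that adversarial examples show inconsistent region-wise effects suggests one possible shape for such a detector: given per-region effect scores for the predicted class (e.g. obtained by masking interventions), flag inputs where too many regions pull against the prediction. The rule and the threshold below are illustrative assumptions, not the strategies proposed in the paper.

```python
import numpy as np

def looks_adversarial(effects, ratio=0.25):
    """Hypothetical detector inspired by the paper's finding: clean inputs
    tend to be dominated by same-sign region contributions, while adversarial
    inputs show mixed, inconsistent effects.  Flags the input when more than
    `ratio` of the regions oppose the predicted class.  The threshold is an
    assumption chosen for illustration."""
    signs = np.sign(np.asarray(effects).ravel())
    opposing = float(np.mean(signs < 0))   # fraction of regions opposing the prediction
    return bool(opposing > ratio)
```

Note that this requires no extra model and no training, matching the spirit of the proposed strategies: the signal comes entirely from how the effects distribute over regions.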

 

 

Full-text download

Towards Interpretable Defense Against Adversarial Attacks via Causal Inference

Min Ren, Yun-Long Wang, Zhao-Feng He

https://www.mi-research.net/en/article/doi/10.1007/s11633-022-1330-7

https://link.springer.com/article/10.1007/s11633-022-1330-7

 


About Machine Intelligence Research

 

Machine Intelligence Research (MIR, formerly International Journal of Automation and Computing), sponsored by the Institute of Automation, Chinese Academy of Sciences, was officially launched in 2022. Rooted in China and oriented to the world, MIR focuses on serving national strategic needs, publishing the latest original research papers, reviews, and commentaries in the field of machine intelligence. It comprehensively reports fundamental theories and cutting-edge research achievements in international machine intelligence, promotes international academic exchange and disciplinary development, and serves the progress of artificial intelligence science and technology. The journal was selected for the "China Science and Technology Journal Excellence Action Plan" and is indexed by ESCI, EI, Scopus, the Chinese Core Journals of Science and Technology, CSCD, and other databases.

 

 

Past MIR highlights
High-quality review collection | Covering evolutionary computation, deep audio-visual learning, object tracking...
High-quality special-issue collection | Covering palmprint and palm-vein recognition, gesture recognition, self-supervised learning...
AI frontiers | Focusing on knowledge mining, 5G, reinforcement learning, and more; from Lenovo Research, the Institute of Automation (CAS), and other teams
He Huiguang's team, Institute of Automation, CAS | A new RGEC-based network
Lenovo CTO Rui Yong's team | Knowledge mining: a cross-domain overview
Editor-in-chief Academician Tan Tieniu's message: MIR's first issue officially published!
Zhan Zhihui's team, South China University of Technology | Review: evolutionary computation for expensive optimization
Yin Xucheng's team, University of Science and Technology Beijing | Few-shot image classification based on weakly correlated knowledge integration
Zhang Min-Ling's team, Southeast University | Multi-dimensional classification based on selective feature augmentation

 

 

Database indexing news
Good news | MIR indexed by ESCI!
Good news | MIR indexed by the EI and Scopus databases
New Year's good news! MIR included in the "Chinese Core Journals of Science and Technology"

 

Click on " Read the original " Download the third good article for free

Original site

Copyright notice
This article was created by [Zhiyuan community]. Please retain the original link when reposting. Thanks.
https://yzsam.com/2022/161/202206101700172399.html