
U.S. Air Force Research Laboratory, "Exploring the Vulnerability and Robustness of Deep Learning Systems": a new 85-page technical report (2022)

2022-07-07 03:24:00 Zhiyuan community

Deep neural networks have pushed modern computer vision systems to new performance levels on a variety of challenging tasks. Despite these gains in accuracy and efficiency, the highly parameterized, nonlinear nature of deep networks makes them very difficult to interpret and prone to failure in the presence of adversarial or anomalous data. This vulnerability makes it worrisome to integrate such models into real-world systems. The project follows two main lines of effort: (1) exploring the vulnerability of deep neural networks by developing state-of-the-art adversarial attacks, and (2) improving model robustness in challenging operational settings (for example, open-world target recognition and federated learning). Nine papers were published under this effort, each advancing the state of the art in its respective area.

Deep neural networks have driven enormous progress in machine learning, particularly in computer vision. While most recent work on these models focuses on improving task accuracy and efficiency, far less is understood about the robustness of deep networks. The highly parameterized nature of deep networks is both a blessing and a curse. On the one hand, it enables performance far beyond that of traditional machine learning models. On the other hand, DNNs are very difficult to interpret and cannot provide a precise notion of uncertainty. Therefore, before integrating these powerful models into our most trusted systems, it is important to continue studying and exposing their vulnerabilities.

Our first line of research explores the fragility of DNNs by crafting powerful adversarial attacks against a variety of models. From the attacker's perspective, adversarial attacks are not only attention-grabbing; they are also a tool that helps us better understand and explain complex model behavior. Adversarial attacks also provide a challenging robustness benchmark against which future models can be tested. Our philosophy is that to create highly robust models, we must first try to fully understand all of the ways in which current models can fail. Each piece of work is motivated and explained in Section 3.1. Section 3.1.1 first discusses an early project on efficient model-poisoning attacks, which highlights a key weakness of models with exposed training pipelines. Next, we introduce a series of projects that develop and build on the new idea of feature-space attacks; such attacks prove far more powerful than existing output-space attacks in the more realistic black-box setting. These papers are covered in Sections 3.1.2-3.1.4. Section 3.1.5 considers a previously unexamined attack setting in which the class distributions of the black-box target model and the attacker's surrogate model do not overlap; we show that even in this challenging situation, an adaptation of our feature-distribution attack can pose a major threat to the black-box model. Finally, Section 3.1.6 covers a new class of black-box adversarial attacks against reinforcement learning agents, a largely unexplored area that is becoming increasingly popular in control-based applications. A concrete sketch of the feature-space attack idea follows this paragraph. Note that the experiments, results, and analysis of these projects are presented in the corresponding subsections of Section 4.0.
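To make the idea of a feature-space attack concrete, here is a minimal PGD-style sketch in PyTorch that perturbs a source image so that its intermediate features move toward those of a target-class image under an L-infinity budget. The function name, hyperparameters, and the simple MSE feature loss are illustrative assumptions, not the report's actual feature-distribution attack.

```python
import torch
import torch.nn.functional as F

def feature_space_attack(feature_extractor, x_src, x_tgt,
                         eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style loop: push x_src's intermediate features toward those of
    x_tgt while keeping the perturbation inside an L-infinity ball."""
    for p in feature_extractor.parameters():       # surrogate stays frozen
        p.requires_grad_(False)
    with torch.no_grad():
        tgt_feat = feature_extractor(x_tgt)        # fixed target-class features
    delta = torch.zeros_like(x_src, requires_grad=True)
    for _ in range(steps):
        feat = feature_extractor(x_src + delta)
        loss = F.mse_loss(feat, tgt_feat)          # distance in feature space
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()     # descend: shrink the feature gap
            delta.clamp_(-eps, eps)                # respect the L-inf budget
            delta.copy_((x_src + delta).clamp(0, 1) - x_src)  # keep pixels valid
        delta.grad.zero_()
    return (x_src + delta).detach()
```

Here `feature_extractor` stands for any surrogate network truncated at an intermediate layer (for example, the early blocks of a pretrained ResNet), and `x_src`, `x_tgt` are batched image tensors scaled to [0, 1]. Because the loss is defined on features rather than output logits, such perturbations transfer better to unseen black-box models, which is the property the papers in Sections 3.1.2-3.1.5 exploit.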

The goal of our second line of research is to directly enhance the robustness of DNNs. As detailed in the first line of effort, adversarial attacks currently pose a major risk to DNN-based systems. Before we can trust these models enough to integrate them into our most critical systems (such as defense technologies), we must ensure that all possible forms of data corruption and variation have been accounted for. Section 3.2.1 considers the first case: formulating a principled defense against data-inversion attacks in a distributed learning environment. Section 3.2.2 then substantially improves the accuracy and robustness of automatic target recognition (ATR) models in open-world environments, where incoming data cannot be guaranteed to contain only the categories present in the training distribution (a simple illustration of this rejection problem is sketched below). Section 3.2.3 goes further and develops a memory-limited online learning algorithm that uses samples from the deployment environment to strengthen the robustness of ATR models in the open world. Again, the experiments, results, and discussion of these works are included in Section 4.0.
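As one concrete illustration of the open-world ATR problem, the sketch below shows a simple maximum-softmax-probability baseline: inputs whose top-class confidence falls below a threshold are rejected as "unknown" rather than being forced into a training-set category. The function, threshold, and unknown label are illustrative assumptions only; the report's ATR models use more capable open-set and online-learning machinery.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_open_set(model, x, threshold=0.5, unknown_label=-1):
    """Classify a batch but reject low-confidence inputs as 'unknown',
    so samples from classes outside the training distribution are not
    silently mapped onto a known category."""
    logits = model(x)
    probs = F.softmax(logits, dim=1)
    conf, preds = probs.max(dim=1)
    preds[conf < threshold] = unknown_label   # below threshold -> unknown
    return preds, conf
```

A memory-limited online learner of the kind described in Section 3.2.3 could then draw on such rejected or low-confidence deployment samples, subject to its memory budget, to adapt the model over time.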

https://apps.dtic.mil/sti/pdfs/AD1170105.pdf


Copyright notice
This article was created by [Zhiyuan community]. Please include a link to the original article when reposting. Thank you.
https://yzsam.com/2022/188/202207061943471022.html
