
Discussion on the Dimension of the Adversarial Subspace

2022-07-05 04:24:00 PaperWeekly


PaperWeekly Original · Author | Sun Yudao

Affiliation | Beijing University of Posts and Telecommunications

Research direction | GAN-based image generation, adversarial example generation for sentiment analysis


Introduction

Adversarial examples are one of the main threats to deep learning models: they cause the target classifier to misclassify, and they live in dense adversarial subspaces embedded in the input space around specific samples. This article discusses the dimension of the adversarial subspace, that is, how large the subspace around a given sample is for a single model, and how large the shared subspace around a given sample is across multiple models.


The Adversarial Subspace

Given a clean sample x and its true label y, let f_θ denote a neural network classifier with parameters θ and J(x, y; θ) its loss function, and write the adversarial example as x' = x + η. By the multivariate Taylor expansion (keeping the first-order term):

$$ J(x+\eta,\, y;\, \theta) \;\approx\; J(x,\, y;\, \theta) + \eta^{\top} \nabla_x J(x,\, y;\, \theta) $$

The optimization objective for the perturbation is therefore:

$$ \max_{\|\eta\| \le \epsilon} J(x+\eta,\, y;\, \theta) \;\approx\; J(x,\, y;\, \theta) + \max_{\|\eta\| \le \epsilon} \eta^{\top} \nabla_x J(x,\, y;\, \theta) $$

To first order, the loss grows fastest along the input gradient, so the adversarial example can be computed as

$$ x' = x + \epsilon \cdot \frac{\nabla_x J(x,\, y;\, \theta)}{\|\nabla_x J(x,\, y;\, \theta)\|_2} $$

under an $\ell_2$ budget, or as the fast gradient sign step $x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(x, y; \theta))$ under an $\ell_\infty$ budget.

Here ε is the size of the adversarial perturbation. The formula shows that a clean sample x can enter the adversarial subspace by moving along the gradient g = ∇_x J(x, y; θ). The figure below illustrates this in more detail: panels (a), (b) and (c) show the classifier's predictions on samples generated by moving a clean sample along different pairs of directions, where each small square is one sample, white means the sample is classified correctly, and any other color means it is assigned to a different class. Panels (d), (e) and (f) show the corresponding decompositions of the movement directions.

[Figure: panels (a)–(c) show the classifier's decisions on samples moved along different direction pairs; panels (d)–(f) show the corresponding direction decompositions.]

Panel (d) shows the case where the two chosen orthogonal directions are the adversarial gradient direction and a random direction; as panel (a) shows, moving the clean sample along the gradient direction enters the adversarial subspace, while moving along the random direction produces no adversarial examples. Panel (e) shows two orthogonal directions that are each at an angle to the gradient; panel (b) shows that moving along either of them can still reach the adversarial subspace, just not by the fastest route. Panel (f) shows two random orthogonal directions; as panel (c) shows, the clean sample then rarely enters the adversarial subspace, and the scattered misclassifications in that panel are unrelated to adversarial perturbations; they come from the model's own training errors.
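To make the gradient step above concrete, here is a minimal numpy sketch of the fast-gradient attack on a toy linear softmax classifier. The model, the data, and the helper names (`loss_and_grad`, `fgsm`) are illustrative assumptions, not code from the original article.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(x, y, W, b):
    """Cross-entropy loss J(x, y; theta) of a linear softmax model and its gradient w.r.t. the input x."""
    p = softmax(W @ x + b)
    loss = -np.log(p[y])
    dz = p.copy()
    dz[y] -= 1.0           # dJ/dlogits = softmax(logits) - one_hot(y)
    return loss, W.T @ dz  # chain rule back to the input x

def fgsm(x, y, W, b, eps):
    """One step along the gradient: x' = x + eps * sign(grad_x J), the l_inf variant."""
    _, g = loss_and_grad(x, y, W, b)
    return x + eps * np.sign(g)

# toy check: the loss at the perturbed point should (almost always) be larger
rng = np.random.default_rng(0)
d, n_classes = 20, 3
W, b = rng.normal(size=(n_classes, d)), rng.normal(size=n_classes)
x, y = rng.normal(size=d), 1
x_adv = fgsm(x, y, W, b, eps=0.25)
print(loss_and_grad(x, y, W, b)[0], "->", loss_and_grad(x_adv, y, W, b)[0])
```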


Adversarial Subspace Dimension for a Single Model

From the first-order Taylor expansion of the loss at the adversarial example in the previous section, we have approximately:

$$ J(x+\eta,\, y;\, \theta) - J(x,\, y;\, \theta) \;\approx\; \eta^{\top} \nabla_x J(x,\, y;\, \theta) $$

where we write g = ∇_x J(x, y; θ) for the input gradient. The goal is, for a given model, to find perturbations η that make the loss grow by at least γ and to determine the dimension of the adversarial subspace they span. Mathematically:

$$ k_{\max} \;=\; \max \Big\{ k \;:\; \exists\, r_1, \dots, r_k \in \mathbb{R}^d,\ \ r_i^{\top} r_j = 0 \ (i \neq j),\ \ \|r_i\|_2 \le \epsilon,\ \ r_i^{\top} g \ge \gamma \Big\} $$

Here the perturbations r_i belong to the adversarial subspace spanned by these k orthogonal vectors, and k is the dimension of that subspace. The following theorem then holds; a detailed proof is given below.

Theorem 1: Given ε > 0 and γ > 0, there exist k orthogonal vectors r_1, …, r_k with ‖r_i‖₂ ≤ ε and r_i^⊤ g ≥ γ if and only if k ≤ ε²‖g‖₂²/γ² (and k ≤ d). The maximum adversarial subspace dimension is therefore k_max = min(d, ⌊ε²‖g‖₂²/γ²⌋).

Proof:

Necessity: Suppose r_1, …, r_k are orthogonal and satisfy ‖r_i‖₂ ≤ ε and r_i^⊤ g ≥ γ for every i. Let v_i = r_i/‖r_i‖₂, so that v_1, …, v_k are orthonormal.

1. If k = 1, the inner-product (cosine) formula gives

$$ g^{\top} r_1 = \|g\|_2 \,\|r_1\|_2 \cos\theta_1, $$

where cos θ_1 is the cosine of the angle between g and r_1. Since |cos θ_1| ≤ 1 and ‖r_1‖₂ ≤ ε, we have

$$ \gamma \;\le\; g^{\top} r_1 \;\le\; \epsilon \,\|g\|_2 . $$

Squaring then gives

$$ 1 \;\le\; \frac{\epsilon^2 \|g\|_2^2}{\gamma^2}. $$

2. If 1 < k ≤ d, first extend the orthonormal set {v_1, …, v_k} to an orthonormal basis of the whole space:

$$ \{v_1, \dots, v_k, v_{k+1}, \dots, v_d\}. $$

Expanding g in this basis,

$$ g = \sum_{i=1}^{d} (g^{\top} v_i)\, v_i, $$

and therefore

$$ \|g\|_2^2 = \sum_{i=1}^{d} (g^{\top} v_i)^2 \;\ge\; \sum_{i=1}^{k} (g^{\top} v_i)^2 . $$

Because g^⊤ r_i ≥ γ, we have

$$ g^{\top} v_i = \frac{g^{\top} r_i}{\|r_i\|_2} \;\ge\; \frac{\gamma}{\|r_i\|_2}. $$

Because ‖r_i‖₂ ≤ ε and γ > 0, this gives g^⊤ v_i ≥ γ/ε, so

$$ \|g\|_2^2 \;\ge\; \sum_{i=1}^{k} \frac{\gamma^2}{\epsilon^2} = \frac{k\,\gamma^2}{\epsilon^2}. $$

Finally,

$$ k \;\le\; \frac{\epsilon^2 \|g\|_2^2}{\gamma^2}. $$

Sufficiency: Suppose k ≤ min(d, ε²‖g‖₂²/γ²). Let e_1, …, e_d denote the standard basis vectors, and let

$$ z = \Big(\underbrace{\tfrac{1}{\sqrt{k}}, \dots, \tfrac{1}{\sqrt{k}}}_{k},\, 0, \dots, 0\Big)^{\top}, \qquad \|z\|_2 = 1. $$

Let R be a rotation (orthogonal) matrix that maps z to the unit gradient direction:

$$ R\, z = \frac{g}{\|g\|_2}, \qquad R^{\top} R = I. $$

Now define the perturbations

$$ r_i = \epsilon\, R\, e_i, \qquad i = 1, \dots, k, $$

where R e_i is the i-th column of R. Because R is orthogonal, the r_i are mutually orthogonal with ‖r_i‖₂ = ε. Moreover,

$$ g^{\top} r_i = \epsilon\, \|g\|_2 \Big(R^{\top}\tfrac{g}{\|g\|_2}\Big)^{\top} e_i = \epsilon\, \|g\|_2\, z_i = \frac{\epsilon\, \|g\|_2}{\sqrt{k}} \;\ge\; \gamma, $$

where the last inequality uses k ≤ ε²‖g‖₂²/γ². This completes the proof.

The proof above yields a clean and rather elegant conclusion: the dimension k of the adversarial subspace is inversely proportional to the square of the required loss growth γ, which matches intuition. The larger the required growth, the more the adversarial subspace collapses toward the gradient, because the gradient is the direction of fastest loss increase.
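As a sanity check on Theorem 1, the sketch below computes the bound k_max = min(d, ⌊ε²‖g‖₂²/γ²⌋) and then builds the k orthogonal perturbations exactly as in the sufficiency argument, using a Householder reflection as the orthogonal map that sends z to g/‖g‖₂. The function names and the numbers in the example are illustrative assumptions.

```python
import numpy as np

def max_adversarial_dim(g, eps, gamma):
    """Theorem 1 bound: k_max = min(d, floor(eps^2 * ||g||_2^2 / gamma^2))."""
    return min(g.size, int(np.floor((eps * np.linalg.norm(g) / gamma) ** 2)))

def orthogonal_adversarial_directions(g, eps, k):
    """Build r_1..r_k (columns of the returned matrix) with ||r_i||_2 = eps,
    mutually orthogonal, and g^T r_i = eps * ||g||_2 / sqrt(k)."""
    d = g.size
    u = g / np.linalg.norm(g)
    z = np.zeros(d)
    z[:k] = 1.0 / np.sqrt(k)          # the vector z from the sufficiency proof
    w = z - u
    if np.linalg.norm(w) < 1e-12:     # u already equals z: no reflection needed
        H = np.eye(d)
    else:                             # Householder reflection H (orthogonal, H z = u)
        H = np.eye(d) - 2.0 * np.outer(w, w) / (w @ w)
    return eps * H[:, :k]             # columns H e_1, ..., H e_k are orthonormal

# example with d = 100, eps = 0.5, gamma = 0.2 (illustrative numbers)
rng = np.random.default_rng(0)
g = rng.normal(size=100)
eps, gamma = 0.5, 0.2
k = max_adversarial_dim(g, eps, gamma)
R = orthogonal_adversarial_directions(g, eps, k)
print("k_max =", k)
print("orthogonal with norm eps:", np.allclose(R.T @ R, eps**2 * np.eye(k)))
print("each direction grows the linearized loss by >= gamma:", bool((g @ R >= gamma - 1e-9).all()))
```

A reflection rather than a rotation is used here purely for convenience; any orthogonal matrix that maps z to g/‖g‖₂ works in the construction.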


Adversarial Subspace Dimension Across Multiple Models

In the black-box setting, attacks often exploit the transferability of adversarial examples: adversarial examples generated on a known model f_1 are used to attack an unknown classifier f_2. The main reason this works is that the two different models share an overlapping adversarial subspace, which is what gives the adversarial examples their transferability.

Assume r_1^(1), …, r_{k_1}^(1) are orthogonal perturbations of a sample x that make the loss of model f_1 grow by at least γ_1, and r_1^(2), …, r_{k_2}^(2) are orthogonal perturbations of the same sample that make the loss of model f_2 grow by at least γ_2, each set spanning the adversarial subspace of its own model. The dimension of the adversarial subspace shared by the two models is then the maximum number of orthogonal perturbations that satisfy both constraints simultaneously, that is, ‖r_i‖₂ ≤ ε, r_i^⊤ g_1 ≥ γ_1 and r_i^⊤ g_2 ≥ γ_2, where g_1 and g_2 are the two models' input gradients.


Following the same line of derivation, one can likewise study the dimension of the overlapping adversarial subspace for three or more models.
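As a small illustration of why overlapping subspaces imply transferability, the sketch below crafts a gradient-direction perturbation using only the first model's input gradient and checks, to first order, how much it increases the loss of a second model whose gradient is correlated with the first. The two gradients are synthetic stand-ins for real models' gradients; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, eps, gamma = 100, 0.5, 0.2

# hypothetical input gradients of two different models at the same clean sample x;
# correlating them stands in for the two models' overlapping decision boundaries
g1 = rng.normal(size=d)
g2 = 0.8 * g1 + 0.6 * rng.normal(size=d)

# perturbation crafted on model 1 only: an l2 step of size eps along g1
r = eps * g1 / np.linalg.norm(g1)

# first-order loss growth on each model: J_i(x + r) - J_i(x) ~ r^T g_i
print("growth on model 1:", r @ g1)
print("growth on model 2:", r @ g2, "(transfers to first order if >=", gamma, ")")
```

When the gradients of the two models point in similar directions, a perturbation from model 1's adversarial subspace also lands in model 2's, which is exactly the overlap discussed above.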

Acknowledgements

We thank TCCI (Tianqiao Academy of Brain Sciences) for its support of PaperWeekly. TCCI focuses on brain discovery, brain function, and brain health.

