
ICLR 2022 | A Pre-trained Language Model Based on an Adversarial Self-Attention Mechanism

2022-07-07 12:46:00 PaperWeekly


Author | Zeng Weihao

Affiliation | Beijing University of Posts and Telecommunications

Research focus | Dialogue summarization


Paper title:

Adversarial Self-Attention For Language Understanding

Venue:

ICLR 2022

Paper link:

https://arxiv.org/pdf/2206.12608.pdf


Introduction

This paper proposes the Adversarial Self-Attention mechanism (ASA), which uses adversarial training to reconstruct the Transformer's attention so that the model is trained under a corrupted model structure.

The problems it tries to address:

  1. There is ample evidence that self-attention benefits from allowing bias, i.e., adding a certain degree of prior (such as masking or smoothing of the distribution) to the original attention structure. Such priors enable the model to learn useful knowledge from smaller corpora, but they are generally task-specific, which makes it hard to extend the model to a rich set of tasks.

  2. Adversarial training improves model robustness by adding perturbations to the input. The authors found that perturbing only the input embeddings can hardly confuse the attention maps: the model's attention barely changes before and after the perturbation.

To address these problems, the authors propose ASA, which has the following advantages:

  1. It maximizes the empirical training risk, learning a biased (or adversarial) structure by automating the process of building the prior.

  2. The adversarial structure is learned from the input data, which distinguishes ASA from conventional adversarial training and from other self-attention variants.

  3. A gradient reversal layer is used to combine the model and the adversary into a single whole.

  4. ASA is naturally interpretable.



Preliminary

Let $x$ denote the input features; in conventional adversarial training it is usually the token sequence or the token embeddings, and let $y$ denote the ground truth. For a model parameterized by $\theta$, the model prediction can be written as $f(x;\theta)$.


2.1 Adversarial training

The purpose of adversarial training is to improve the robustness of the model by pulling the prediction made under perturbation close to the target distribution:

$$\min_{\theta}\;\mathcal{D}\big[f(x+\delta;\theta)\,\big\|\,y\big]$$

where $\delta$ denotes the adversarial perturbation, $f(x+\delta;\theta)$ is the model prediction after the perturbation, and $y$ is the target distribution.

The adversarial perturbation $\delta$ is obtained by maximizing the empirical training risk:

$$\delta=\arg\max_{\|\delta\|\le\epsilon}\;\mathcal{D}\big[f(x+\delta;\theta)\,\big\|\,y\big]$$

where $\|\delta\|\le\epsilon$ is the constraint imposed on $\delta$: the hope is that a small $\delta$ already causes a large disturbance to the model. Together, the two expressions above describe the adversarial process.
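To make the two steps concrete, here is a minimal PyTorch-style sketch of one-step (FGM-like) adversarial training on the input embeddings. This is not the paper's code: the `inputs_embeds` keyword, the choice of KL divergence for $\mathcal{D}$, and the single-step inner maximization are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def adversarial_perturbation(model, embeds, target_dist, eps=1e-2):
    """One-step approximation of the inner maximization:
    delta ~= argmax_{||delta|| <= eps} D[f(x + delta; theta) || y]."""
    delta = torch.zeros_like(embeds, requires_grad=True)
    logits = model(inputs_embeds=embeds + delta)          # assumed model signature
    risk = F.kl_div(F.log_softmax(logits, dim=-1), target_dist, reduction="batchmean")
    (grad,) = torch.autograd.grad(risk, delta)
    # Step along the risk gradient and project back onto the eps-ball.
    return (eps * grad / (grad.norm() + 1e-12)).detach()

# Outer minimization: a normal training step on the perturbed embeddings.
# delta = adversarial_perturbation(model, embeds, target_dist)
# logits = model(inputs_embeds=embeds + delta)
# loss = F.kl_div(F.log_softmax(logits, dim=-1), target_dist, reduction="batchmean")
# loss.backward()
```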

2.2 General Self-Attention

Self-attention is defined as:

$$\mathrm{SA}(x,\mathcal{M})=\mathrm{softmax}\!\left(\mathcal{M}\odot\frac{QK^{\top}}{\sqrt{d}}\right)V$$

In the most common self-attention mechanism, $\mathcal{M}$ is a trivial all-ones matrix; in previous studies it encodes some prior knowledge used to smooth the output distribution of the attention structure.

In this paper, the authors instead define $\mathcal{M}$ as a binary matrix whose elements lie in $\{0,1\}$.

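As a reference point for what follows, here is a minimal sketch of single-head self-attention with such a binary matrix folded in. The paper's exact formulation may differ; this sketch uses the common convention of setting masked scores to $-\infty$ before the softmax, so entries of $\mathcal{M}$ equal to 0 are simply excluded from attention.

```python
import math
import torch
import torch.nn.functional as F

def masked_self_attention(x, w_q, w_k, w_v, m):
    """Single-head self-attention with a binary mask m (1 = keep, 0 = mask).
    x: (batch, seq, d); w_q / w_k / w_v: (d, d); m: (batch, seq, seq)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    scores = scores.masked_fill(m == 0, float("-inf"))  # drop masked attention units
    return F.softmax(scores, dim=-1) @ v
```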


Adversarial Self-Attention Mechanism


3.1 Optimization

The purpose of ASA is to mask the attention units to which the model is most vulnerable. Which units are most vulnerable depends on the model input, so the adversary can be expressed as "meta-knowledge" learned from the input, i.e., the adversarial mask $\hat{\mathcal{M}}$ is estimated from $x$. ASA attention can then be expressed as:

$$\mathrm{ASA}(x)=\mathrm{SA}\big(x,\hat{\mathcal{M}}(x)\big)$$

Similar to adversarial training, the model is trained to minimize the following divergence:

$$\min_{\theta}\;\mathcal{D}\big[f(x,\hat{\mathcal{M}};\theta)\,\big\|\,y\big]$$

The mask $\hat{\mathcal{M}}$ is estimated by maximizing the empirical risk:

$$\hat{\mathcal{M}}=\arg\max_{\|\mathcal{M}\|\le\epsilon}\;\mathcal{D}\big[f(x,\mathcal{M};\theta)\,\big\|\,y\big]$$

where $\epsilon$ denotes the decision boundary of $\mathcal{M}$, used to prevent ASA from damaging model training.

Considering that $\mathcal{M}$ has the form of an attention mask, it is more natural to constrain it through the masked units; moreover, because the specific value of $\epsilon$ is difficult to measure, the hard constraint is converted into an unconstrained penalty:

$$\hat{\mathcal{M}}=\arg\max_{\mathcal{M}}\;\mathcal{D}\big[f(x,\mathcal{M};\theta)\,\big\|\,y\big]-\frac{t}{n}\,\big\|\mathbf{1}-\mathcal{M}\big\|_{1}$$

where $t$ controls the degree of the adversary and $n$ is the total number of attention units.
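A minimal sketch of this penalized inner objective follows. It assumes a hypothetical `asa_mask` argument for passing the mask into the model and uses a KL divergence against the target distribution for $\mathcal{D}$; both choices are illustrative, not the paper's API.

```python
import torch.nn.functional as F

def asa_inner_objective(model, inputs, target_dist, mask, t=0.1):
    """Surrogate for the penalized maximization: divergence under the adversarial
    mask minus t times the fraction of masked (zero) attention units."""
    logits = model(inputs, asa_mask=mask)                  # hypothetical hook
    risk = F.kl_div(F.log_softmax(logits, dim=-1), target_dist, reduction="batchmean")
    masked_fraction = (1.0 - mask).float().mean()          # ||1 - M||_1 / n
    return risk - t * masked_fraction
```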


3.2 Implementation

The authors propose a simple and fast implementation of ASA.

[Figure: the ASA implementation]

For the $l$-th self-attention layer, the mask $\hat{\mathcal{M}}$ can be obtained from the layer's input hidden states. Specifically, linear layers transform the hidden states into two representations (analogous to keys and queries), whose dot product yields a score matrix; the score matrix is then binarized with the reparameterization trick.
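A minimal sketch of such an adversary module is given below, assuming two linear projections and a Gumbel-Sigmoid relaxation with a straight-through estimator as one way to realize the binary reparameterization step; the layer sizes and temperature are illustrative.

```python
import torch
import torch.nn as nn

class ASAMaskGenerator(nn.Module):
    """Sketch of the adversary: project the hidden states twice, take a dot
    product, and binarize the scores with a Gumbel-Sigmoid relaxation."""
    def __init__(self, d_model, tau=1.0):
        super().__init__()
        self.proj_a = nn.Linear(d_model, d_model)
        self.proj_b = nn.Linear(d_model, d_model)
        self.tau = tau

    def forward(self, hidden):                        # hidden: (batch, seq, d)
        scores = self.proj_a(hidden) @ self.proj_b(hidden).transpose(-2, -1)
        # Reparameterization: add logistic noise, squash, straight-through binarize.
        u = torch.rand_like(scores)
        noise = torch.log(u + 1e-9) - torch.log(1.0 - u + 1e-9)
        soft = torch.sigmoid((scores + noise) / self.tau)
        hard = (soft > 0.5).float()
        return hard + soft - soft.detach()            # binary forward, soft gradient
```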

Because adversarial training normally involves two objectives, the inner maximization and the outer minimization, it requires at least two backward passes. To speed up training, the authors adopt a Gradient Reversal Layer (GRL) to merge the two processes.
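A standard gradient reversal layer in PyTorch looks roughly like the sketch below: it is the identity in the forward pass and flips the gradient in the backward pass, so a single backward pass lets the model minimize the loss while the adversary effectively maximizes it.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) the gradient in the
    backward pass, so the adversary maximizes what the model minimizes."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: pass the generated mask through grad_reverse before feeding it to
# the model, so the mask generator receives the reversed gradient.
# mask = grad_reverse(ASAMaskGenerator(d_model)(hidden))
```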

3.3 Training

The training objective is as follows:

$$\mathcal{L}=\mathcal{L}_{\text{task}}+\mathcal{L}_{\text{ASA}}+\mathcal{R}\big(\hat{\mathcal{M}}\big)$$

where $\mathcal{L}_{\text{task}}$ is the task-specific loss, $\mathcal{L}_{\text{ASA}}$ is the loss under the ASA adversary, and $\mathcal{R}(\hat{\mathcal{M}})$ is the constraint on $\hat{\mathcal{M}}$.
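Putting the pieces together, one training step might look like the sketch below. The `asa_mask` argument and the helpers `mask_generator`, `grad_reverse`, and `model.embed` are assumptions carried over from the earlier sketches, not the paper's API.

```python
import torch.nn.functional as F

def training_loss(model, mask_generator, inputs, labels, t=0.1):
    """Task loss on the clean pass + task loss under the adversarial mask
    (reversed gradients train the adversary) + a penalty that limits masking."""
    task_loss = F.cross_entropy(model(inputs), labels)

    raw_mask = mask_generator(model.embed(inputs))               # adversary's proposal
    asa_logits = model(inputs, asa_mask=grad_reverse(raw_mask))  # hypothetical hook
    asa_loss = F.cross_entropy(asa_logits, labels)

    penalty = t * (1.0 - raw_mask).mean()                        # fraction of masked units
    return task_loss + asa_loss + penalty
```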


Experiments


4.1 Results

[Table: fine-tuning results of ASA-enhanced models vs. the original BERT and RoBERTa]

As the table above shows, the ASA-enhanced models consistently outperform the original BERT and RoBERTa under fine-tuning. ASA performs particularly well on small datasets such as STS-B and DREAM (which are generally considered easy to overfit), while still bringing solid gains on larger datasets such as MNLI, QNLI, and QQP. This indicates that ASA improves not only the generalization ability of the model but also its language representation ability.

As shown in the table below, ASA also substantially improves the robustness of the model.

[Table: robustness evaluation results]

4.2 Analytical experiments

1. VS. Naive smoothing

ASA is compared with other attention-smoothing methods:

[Table: ASA vs. naive attention smoothing]

2. VS. Adversarial training

ASA is compared with other adversarial training methods:

[Table: ASA vs. adversarial training methods]

4.3 Visualization

1. Why ASA improves generalization

The adversary lowers the attention paid to keywords and lets non-keywords receive more attention. ASA therefore prevents the model from making lazy predictions and pushes it to learn from contaminated clues, which improves generalization.

[Figure: attention visualization]

2. Bottom layers are more vulnerable 

As can be seen, the masking proportion gradually decreases from the bottom layers to the top layers; a higher masking proportion means that a layer is more vulnerable.

[Figure: masking proportion across layers]


Conclusion

This paper proposes the Adversarial Self-Attention mechanism (ASA) to improve the generalization and robustness of pre-trained language models. Extensive experiments show that the proposed method improves model robustness in both the pre-training and fine-tuning stages.

