
What does the inside of the neural network "alchemy furnace" look like? An Oxford PhD thesis explains

2022-06-26 16:27:00 Xiaobai learns vision


A neural network is like an "alchemy furnace": feed it a lot of data, and it may produce seemingly magical results.


Once the "alchemy" succeeds, the network can even make predictions on data it has never seen before.

In the process, however, the neural network becomes a "black box": it clearly does something, but we cannot see how it works.

For simple image classification this may be acceptable; but when a network is used in medicine to predict disease, its "verdict" cannot simply be trusted.

It would be far better if we could understand how it works.

To that end, Oana-Maria Camburu, a doctoral student at the University of Oxford, wrote her thesis "Explaining Deep Neural Networks".

In it, she pries these "black boxes" open one by one and explains in detail how neural networks can be interpreted.

Why open the neural network's "black box"?

The most intuitive reason neural networks work is that they are composed of a large number of nonlinear functions.


These nonlinear functions allow the network to learn features at many levels of abstraction from the raw data.

It is precisely these nonlinear functions, however, that make it hard for humans to understand how such networks work.

As a result, neural networks are "not very popular" in areas such as disease prediction, credit scoring, and criminal justice.

Doctors and legal researchers tend to prefer interpretable models such as linear regression and decision trees, because neural networks really have run into trouble in disease prediction:

In one case, a neural network was used to predict pneumonia outcomes, and one of the patient features was a history of asthma.


The trained network predicted that patients with a history of asthma were less likely to die of pneumonia.

The truth is just the opposite: asthma itself can make pneumonia worse.

The data show fewer pneumonia deaths among asthma patients only because asthma tends to be detected early, so those patients receive treatment for their pneumonia sooner.

Deploying such a network in practice would be extremely dangerous.

Beyond that, neural networks can also reproduce gender stereotypes and racial prejudice.


For example, studies have shown that some corpora and models are "biased" toward men when predicting recidivism.

Beyond wrong predictions and racial or gender bias, neural networks are also fragile.

From tiny image perturbations that fool classifiers to hidden voice commands that deceive speech and NLP models, there is no shortage of cases where neural networks have been "cracked".
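The image-perturbation attack mentioned above can be sketched in a few lines. This is a toy illustration, not code from the thesis: the "classifier" is a hand-written linear model with made-up weights, and the perturbation follows the FGSM idea of nudging each input feature against the sign of the model's weights.

```python
import numpy as np

# Toy linear classifier: class 1 if w . x > 0. Weights and input are invented.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])  # correctly classified as class 1

def predict(x):
    return int(w @ x > 0)

# FGSM-style perturbation: step each feature in the direction that
# lowers the score, i.e. -sign(w), scaled by a small epsilon.
eps = 0.15
x_adv = x - eps * np.sign(w)

print(predict(x))      # original input: class 1
print(predict(x_adv))  # perturbed input: the prediction flips to class 0
```

With a real network, the sign of the loss gradient with respect to the input plays the role of `np.sign(w)` here; the point is that a perturbation far too small to matter to a human can flip the model's decision.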

To make neural networks applicable in more areas, and to help us better understand how they work, the author explains them from two angles.

Two ways to explain neural networks

"Explaining after the fact"

The first approach is feature-based explanation, also called post-hoc explanation, because it explains a network's input features only after the network has been trained.

It produces after-the-fact explanations over the tokens of a text or the superpixels of an image.


This approach is widely used today and is less prone to explanation bias, but the faithfulness of the explanation method itself still needs to be verified.
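A minimal sketch of the token-level, feature-based idea: score each token by how much the model's output drops when that token is occluded (removed). The "model" here is an invented bag-of-words sentiment scorer; all token names and weights are hypothetical, not from the thesis.

```python
# Hypothetical bag-of-words sentiment model: unknown tokens score 0.
WEIGHTS = {"great": 2.0, "boring": -1.5, "plot": 0.1}

def model_score(tokens):
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def occlusion_importance(tokens):
    # Importance of a token = drop in model output when it is removed.
    base = model_score(tokens)
    return {t: base - model_score([u for u in tokens if u != t])
            for t in tokens}

tokens = ["great", "plot", "boring"]
imp = occlusion_importance(tokens)
print(imp)  # "great" gets the largest positive importance
```

Real post-hoc methods are more sophisticated, but they share this shape: the explainer probes a trained, fixed model from the outside, which is why the explanations it produces have to be checked for faithfulness.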

The underlying idea is to study whether, and how strongly, the explanations produced by external explanation methods correlate with the natural-language explanations generated by the model itself.

In the thesis, the author introduces a new verification procedure for judging the faithfulness of explanation methods.

Let the neural network explain itself

But what if the neural network could "explain itself" while it is being trained?

That is the second approach in the thesis: embed a module in the model that generates an explanation alongside each prediction.
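As a toy sketch of such a built-in explanation module (not the thesis's actual architecture): alongside each prediction, the model emits a template explanation naming the feature that contributed most to its score. The feature names, weights, and risk threshold below are all invented for illustration.

```python
# Hypothetical features and learned weights for a toy risk model.
FEATURES = ["fever", "cough", "asthma_history"]
WEIGHTS = [1.2, 0.8, -0.5]

def predict_and_explain(x):
    # Prediction and explanation are produced together, by the same model.
    contributions = [w * v for w, v in zip(WEIGHTS, x)]
    score = sum(contributions)
    label = "high risk" if score > 1.0 else "low risk"
    top = FEATURES[max(range(len(x)), key=lambda i: abs(contributions[i]))]
    explanation = f"predicted {label} mainly because of '{top}'"
    return label, explanation

print(predict_and_explain([1.0, 1.0, 0.0]))
```

In the thesis's setting the explanation module generates free-form natural language rather than filling a template, but the design choice is the same: the explanation is an output of the model, not a separate tool bolted on afterwards.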


Whether the network's self-explanations are correct, however, still requires human judgment.

Here, too, the author introduces a method for evaluating the explanations the model generates itself, and thus how well the network explains its own behavior.


Readers interested in the detailed architectures and explanation methods can find the thesis at the link below.

About the author


Oana-Maria Camburu is from Romania and is a doctoral student at the University of Oxford, working on machine learning and artificial intelligence.

In high school, Oana-Maria Camburu won a silver medal at the IMO (International Mathematical Olympiad). She has interned at Mapu and Google, and during her doctoral studies her papers were accepted at top venues including ACL, EMNLP, and IJCNLP.

Thesis link:
https://arxiv.org/abs/2010.01496

Copyright notice: this article was created by [Xiaobai learns vision]; please include the original link when reposting. Original source:
https://yzsam.com/2022/177/202206261547494918.html