Li Hongyi Machine Learning Team-Learning Check-in Activity, Day 06 --- Convolutional Neural Networks
2022-07-27 05:27:00 【Charleslc's blog】
Written at the front
I signed up for a team-learning check-in activity, and this is now Task 6. Today's topic is convolutional neural networks. I had heard of CNNs before but never studied them carefully, so I am taking this opportunity to study them properly.
Reference video: https://www.bilibili.com/video/av59538266
Reference notes: https://github.com/datawhalechina/leeml-notes
Why CNN?
CNNs are generally used for image processing. Of course, you could also use an ordinary fully connected neural network for image processing; you don't have to use a CNN. But doing so would give the hidden layers far too many parameters. What a CNN does is simplify the architecture of the neural network.
CNN architecture
The CNN architecture is as follows. First, an image is fed in. The image passes through a convolution layer, then a max pooling operation, then convolution again, then max pooling again. This process can be repeated many times. How many times must be decided in advance; it is part of the network architecture, just like deciding how many layers a neural network has. How many convolution layers and how many max pooling layers you use is something you decide when you design the network architecture. After the convolution and max pooling stages you decided on are done, there is one more step called flatten; the output of flatten is then fed into an ordinary fully connected feedforward network, which produces the image recognition result.
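To make this pipeline concrete, here is a minimal PyTorch sketch of the convolution → max pooling → convolution → max pooling → flatten → fully connected structure. The channel counts, kernel sizes, input size, and 10-class output are illustrative assumptions, not values fixed by the lecture.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # convolution -> max pooling, repeated twice (the depth is a design choice)
        self.features = nn.Sequential(
            nn.Conv2d(1, 25, kernel_size=3),   # 25 filters of size 3x3 (assumed)
            nn.MaxPool2d(2),
            nn.Conv2d(25, 50, kernel_size=3),  # 50 filters of size 3x3 (assumed)
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(50 * 5 * 5, 10)  # assumes a 28x28 input and 10 classes

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)   # the "flatten" step
        return self.classifier(x)

x = torch.randn(1, 1, 28, 28)    # one 28x28 grayscale image
print(SimpleCNN()(x).shape)      # torch.Size([1, 10])
```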
Convolution
Property 1:
As shown below, suppose the input to our network is a 6×6 image, where 1 means there is ink and 0 means no ink. The convolution layer consists of a set of filters (each filter is in fact equivalent to one neuron in a fully connected layer). Each filter is really just a matrix (3×3 here), and the values inside each filter (every element of the matrix) are parameters of the network (these parameters are learned; they do not need to be designed by hand).
If a filter is 3×3, that means it detects a 3×3 pattern: when detecting, it only needs to look within a 3×3 region to decide whether that pattern appears.
The first filter is a 3×3 matrix. Start at the top-left corner of the image and compute the inner product of the 9 values of the image patch with the 9 values of the filter; the result is 3. Then move the filter by the stride (a design choice of your own); let's set the stride to 1 first. The results are as shown on the right of the figure.
After this is done, the original 6×6 matrix becomes a 4×4 matrix after the convolution process. If you look at the filter's values, the main diagonal is 1, 1, 1, so its job is to detect whether 1, 1, 1 appears consecutively from upper-left to lower-right anywhere in this image. For example, it appears here (the blue lines in the figure), so this filter tells you: the maximum values appear at the top-left and bottom-left.
A convolution layer contains many filters (what we just saw is the result of one filter). The other filters have different parameters (filter 2 in the figure) but do exactly the same thing as filter 1: place filter 2 at the top-left corner, take the inner product to get −1, and so on. After convolving filter 2 with the input image, you get another 4×4 matrix. The red 4×4 matrix together with the blue one is called the feature map. However many filters you have, that is how many images you get (if you have 100 filters, you get 100 4×4 images).
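Here is a minimal NumPy sketch of this sliding inner-product operation, using a diagonal filter like the one in the example; the exact 6×6 image values below are assumptions reconstructed for illustration.

```python
import numpy as np

# a 6x6 binary image: 1 = ink, 0 = no ink (illustrative values)
image = np.array([
    [1, 0, 0, 0, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 1, 0],
])

# filter 1: its diagonal is 1, 1, 1, so it detects that diagonal pattern
filt = np.array([
    [ 1, -1, -1],
    [-1,  1, -1],
    [-1, -1,  1],
])

stride = 1
out_size = (image.shape[0] - filt.shape[0]) // stride + 1  # (6 - 3) / 1 + 1 = 4
out = np.zeros((out_size, out_size), dtype=int)
for i in range(out_size):
    for j in range(out_size):
        patch = image[i*stride:i*stride+3, j*stride:j*stride+3]
        out[i, j] = np.sum(patch * filt)  # inner product of 9 values with 9 values

print(out)  # a 4x4 matrix; the value 3 marks where the diagonal pattern appears
```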
For a color image it's the same idea. A color picture is composed of RGB channels, so an image is just several matrices stacked together; accordingly, each filter is a stack of matrices of the same depth, so it covers all channels at once.
The relationship between convolution and fully connected layers

Convolution is just a fully connected layer with some of the weights removed. The output of the convolution is really just the output of a hidden layer's neurons. If you link the two pictures together, convolution is fully connected with some weights taken away.
In a fully connected layer, one neuron would be connected to all of the input (treating the 36 pixels as input, this neuron would be connected to all 36 inputs). But now it is connected to only 9 inputs (to detect one pattern you don't need to look at the whole image; looking at 9 inputs is enough), so fewer parameters are used.
With stride = 1 (move one position), taking the inner product again gives another value, −1. Suppose this −1 is the output of another neuron; this neuron is connected to inputs 2, 3, 4, 8, 9, 10, 14, 15, and 16. Among the 9 connections, the same weight is drawn in the same color.
When we do this, we say: these two neurons, which in a fully connected network would each have their own weights, are forced to share one set of weights when we do convolution. First the number of weights each neuron connects to is reduced, and then the two neurons are forced to share the same weights. This is called weight sharing, and with it the number of parameters we use is even smaller than before.
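A quick back-of-the-envelope comparison makes the savings concrete (a sketch; the 16 output positions come from sliding a 3×3 filter over the 6×6 image with stride 1, giving a 4×4 output):

```python
# fully connected: each of the 16 output neurons sees all 36 pixels
fc_params = 16 * 36     # 576 weights (biases ignored)

# local connectivity only: each output neuron sees just one 3x3 patch
local_params = 16 * 9   # 144 weights

# local connectivity + weight sharing: all 16 neurons share one 3x3 filter
conv_params = 9         # 9 weights per filter

print(fc_params, local_params, conv_params)  # 576 144 9
```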
Max pooling

Compared with convolution, max pooling is simple. From filter 1 we obtained a 4×4 matrix, and from filter 2 we obtained another 4×4 matrix. Next, group the output values in fours (2×2 blocks). Within each group you can take either the average or the maximum, i.e., combine four values into one. This makes your image smaller.


Doing one convolution and one max pooling turns the original 6×6 image into a 2×2 image. The depth of these 2×2 pixels depends on how many filters you have (if you have 50 filters, the depth is 50). The result is a new but smaller image, where each filter corresponds to one channel.
This operation can be repeated many times: one round of convolution + max pooling gives you a new, smaller image, and you can keep applying convolution + max pooling to get an even smaller image.
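A minimal NumPy sketch of 2×2 max pooling on one 4×4 feature map (the values below are illustrative, roughly matching the filter 1 output from the earlier example):

```python
import numpy as np

# one 4x4 feature map produced by a filter (illustrative values)
fmap = np.array([
    [ 3, -1, -3, -1],
    [-3,  1,  0, -3],
    [-3, -3,  0,  1],
    [ 3, -2, -2, -1],
])

# 2x2 max pooling: split into 2x2 blocks and keep each block's maximum
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[3 0]
#  [3 1]]
```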
Flatten

Flatten just straightens the feature map out into a vector. After straightening, you can feed it to a fully connected feedforward network, and that completes the whole process.
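Continuing the earlier numbers (50 filters, each leaving a 2×2 map), flatten is just a reshape, sketched below:

```python
import numpy as np

feature_maps = np.random.randn(50, 2, 2)  # 50 filters, each giving a 2x2 map
vector = feature_maps.flatten()           # straighten into a single vector
print(vector.shape)                       # (200,) -- the input to the fully connected network
```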
A hands-on CNN implementation will be added next time.