Training a deep convolutional neural network (CNN) with TensorFlow for image recognition, part 2
2022-07-03 13:27:00 【Haibao 7】
Continued from the previous post: https://blog.csdn.net/dongbao520/article/details/125456950
Convolutional neural networks
• The visual cortex contains neurons with local receptive fields: some neurons respond to lines, others to lines of a particular orientation, and some have larger receptive fields that combine the lower-level patterns into more complex ones
• In 1998, Yann LeCun et al. introduced the LeNet-5 architecture, widely used for handwritten digit recognition; it contains fully connected layers and sigmoid activation functions, as well as convolutional layers and pooling layers
A convolutional neural network (Convolutional Neural Network, CNN) is a class of feedforward neural networks (Feedforward Neural Networks) that contain convolution operations and have a deep structure; it is one of the representative algorithms of deep learning [1-2]. CNNs have representation learning capability and can classify input information in a shift-invariant way according to its hierarchical structure, so they are also called "shift-invariant artificial neural networks" (Shift-Invariant Artificial Neural Networks, SIANN).
A CNN is built by imitating the biological visual perception (visual perception) mechanism and can be trained with both supervised and unsupervised learning. The sharing of convolution-kernel parameters within a hidden layer and the sparsity of inter-layer connections allow a CNN to learn grid-like topology features (such as pixels and audio) with little computation; the results are stable and no additional feature engineering (feature engineering) on the data is required.
Related topics: receptive fields, pre-trained networks, and reusing TensorFlow models.
The most important building block of a CNN is the convolutional layer
• Neurons in the first convolutional layer are not connected to every pixel of the input image, only to the pixels within their receptive fields; likewise, each neuron in the second convolutional layer is connected only to neurons located within a small rectangle of the first convolutional layer
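This local connectivity can be sketched in plain NumPy (a minimal illustration of the idea, not the TensorFlow implementation): each output neuron reads only one small input patch, never the whole image.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid-mode 2D cross-correlation: each output value depends only on
    one small patch of the input (its receptive field), not on every pixel."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]  # receptive field of output (i, j)
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0  # simple 3x3 averaging kernel
result = conv2d_valid(image, kernel)
print(result.shape)  # (3, 3): a 5x5 input shrinks under a 3x3 valid convolution
```

Note that the output is smaller than the input because no padding is applied here; padding modes are discussed below.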
Convolution layer diagram:

Convolution examples:

With a stride of 2, the result is:
Filters (convolution kernels)
• Vertical-line filter: the middle column is 1, the surrounding columns are 0
• Horizontal-line filter: the middle row is 1, the surrounding rows are 0
• e.g. as a 7*7 matrix
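A minimal NumPy sketch of these two filters (the 9x9 toy image and the single centre-patch measurement are illustrative assumptions): an image containing a vertical stripe excites the vertical-line filter strongly and the horizontal-line filter weakly.

```python
import numpy as np

# 7x7 vertical-line filter: centre column is 1, everything else 0
vertical = np.zeros((7, 7))
vertical[:, 3] = 1.0

# 7x7 horizontal-line filter: centre row is 1, everything else 0
horizontal = np.zeros((7, 7))
horizontal[3, :] = 1.0

# toy 9x9 image with a bright vertical stripe in column 4
img = np.zeros((9, 9))
img[:, 4] = 1.0

def response(image, kernel):
    """Apply the kernel once, to a patch near the image centre."""
    kh, kw = kernel.shape
    patch = image[1:1 + kh, 1:1 + kw]
    return float(np.sum(patch * kernel))

print(response(img, vertical))    # 7.0 -> strong response to the stripe
print(response(img, horizontal))  # 1.0 -> weak response
```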

• Within a feature map, all neurons share the same parameters (weights and bias); this is weight sharing
• Different feature maps have different parameters
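The effect of weight sharing on the parameter count can be checked with simple arithmetic (the layer sizes below are made-up examples, not from the post):

```python
def conv_params(kh, kw, in_channels, feature_maps):
    """Parameters in a convolutional layer under weight sharing:
    each feature map shares one (kh x kw x in_channels) kernel plus 1 bias,
    regardless of the spatial size of the input."""
    return (kh * kw * in_channels + 1) * feature_maps

# e.g. 3x3 kernels, 3 input channels, 32 feature maps
print(conv_params(3, 3, 3, 32))  # 896 parameters

# compare: one fully connected layer between two 28x28 maps (no sharing)
dense_params = (28 * 28) * (28 * 28)
print(dense_params)  # 614656 weights, before biases
```

The convolutional layer's count does not grow with the image size, which is exactly what weight sharing buys.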

Convolution training process

Padding modes
• VALID: no zero padding is applied; pixels at the right or bottom of the image may be ignored, depending on the stride setting
• SAME: zero padding is added where necessary; in this mode the number of output neurons equals the number of input neurons divided by the stride, rounded up, e.g. ceil(13/5) = 3

Pooling
The goal is downsampling (subsampling, shrinking) to reduce the computational load, memory usage, and number of parameters (it also helps prevent overfitting).
• Reducing the size of the input image also lets the network tolerate small image translations, making it less sensitive to location
• As in convolutional layers, each neuron in a pooling layer is connected only to the outputs of neurons in a small receptive-field region of the previous layer; we must define the size, the stride, and the padding type
• Pooling neurons have no weights; they simply aggregate the inputs by taking the maximum or the average
• A 2*2 pooling kernel with a stride of 2 and no padding passes only the maximum value down

Height and width are halved, so the area shrinks by a factor of 4 and 75% of the input values are dropped.
• In general, the pooling layer works on each input channel independently, so the output depth equals the input depth
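A minimal NumPy sketch of 2x2 max pooling with stride 2 (illustrative only, not the TensorFlow kernel): each 2x2 block contributes a single value, so 75% of the inputs are discarded.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling, stride 2, no padding: keep only the largest value
    in each 2x2 block, halving height and width."""
    h, w = x.shape
    trimmed = x[:h - h % 2, :w - w % 2]  # drop an odd trailing row/column
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[ 1,  2,  5,  6],
              [ 3,  4,  7,  8],
              [ 9, 10, 13, 14],
              [11, 12, 15, 16]], dtype=float)
print(max_pool_2x2(x))
# [[ 4.  8.]
#  [12. 16.]]
```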
CNN architectures
• A typical CNN architecture stacks a few convolutional layers:
• Usually a convolutional layer is followed by a ReLU layer, then a pooling layer, then more convolution + ReLU layers, then another pooling layer; the image gets smaller and smaller as it flows through the network, but also deeper and deeper, i.e. it has more feature maps
• At the end, a regular feedforward network is appended: a few fully connected layers with ReLU, and finally an output layer that makes the prediction, e.g. a softmax layer that outputs class probabilities
• A common mistake is to use convolution kernels that are too large: two stacked 3*3 kernels have the same effect as one 5*5 kernel, with fewer parameters and less computation
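The arithmetic behind stacking small kernels can be checked directly (biases and channel counts ignored for simplicity):

```python
# Weight counts per input/output channel pair:
one_5x5 = 5 * 5          # 25 weights for a single 5x5 kernel
two_3x3 = 2 * (3 * 3)    # 18 weights for two stacked 3x3 kernels
print(one_5x5, two_3x3)  # 25 18

def stacked_rf(layers, kernel=3):
    """Receptive field of `layers` stacked convolutions with the given
    kernel size and stride 1: each layer adds (kernel - 1) pixels."""
    return 1 + layers * (kernel - 1)

print(stacked_rf(2))  # 5  -> two 3x3 layers see a 5x5 region
print(stacked_rf(4))  # 9  -> four 3x3 layers would be needed for 9x9
```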
To be continued ..