MXNet implementation of GoogLeNet (networks with parallel connections)
2022-07-04 18:44:00 【Yinque Guangqian】
Paper: Going Deeper with Convolutions
As new AI papers are published, we can see how neural networks have developed, and more and more people study the structure of the human brain: from the early perceptron to fully connected layers (a dense structure), and then to convolutional neural networks (a sparse structure). As networks grow deeper and more complex, the advantages of sparse structure become more pronounced. Why? Because the connections between neurons in the human brain are themselves sparse, much like Hebbian theory: "neurons that fire together, wire together." Neurons that activate (fire) together become wired together. This is very interesting: neurons in the brain transmit signals by firing, and if different neurons frequently fire at the same time, the connection between them becomes stronger.
The model design in the paper follows a practical intuition: visual information should be processed at different scales and then aggregated, so that the next stage can abstract features from multiple scales at the same time. The conclusion also points out that approximating the expected optimal sparse structure with readily available dense building blocks is a feasible way to improve neural networks for computer vision. That is exactly what GoogLeNet, the "network with parallel connections," means: this new idea breaks with the earlier practice of simply stacking layers in series and going deeper, and moves in a sparser direction, which I think is the most important contribution of the paper.
Let's first look at two figures for an intuitive feel: one shows an Inception block, the other the full GoogLeNet model. Because GoogLeNet is fairly deep, the figure is kept from getting too large; pay attention to the direction of the arrows I drew and to the blocks distinguished by color.
The Inception block
As the figure above shows, the core component is the Inception block, which contains four parallel branches. The code implementation is as follows:
import d2lzh as d2l
from mxnet import gluon,init,nd
from mxnet.gluon import nn
# Four parallel branches, concatenated along the channel dimension
class Inception(nn.Block):
    def __init__(self,c1,c2,c3,c4,**kwargs):
        super(Inception,self).__init__(**kwargs)
        # Branch 1: 1x1 convolution
        self.p1=nn.Conv2D(c1,kernel_size=1,activation='relu')
        # Branch 2: 1x1 convolution followed by 3x3 convolution
        self.p2_1=nn.Conv2D(c2[0],kernel_size=1,activation='relu')
        self.p2_2=nn.Conv2D(c2[1],kernel_size=3,padding=1,activation='relu')
        # Branch 3: 1x1 convolution followed by 5x5 convolution
        self.p3_1=nn.Conv2D(c3[0],kernel_size=1,activation='relu')
        self.p3_2=nn.Conv2D(c3[1],kernel_size=5,padding=2,activation='relu')
        # Branch 4: 3x3 max pooling followed by 1x1 convolution
        self.p4_1=nn.MaxPool2D(pool_size=3,strides=1,padding=1)
        self.p4_2=nn.Conv2D(c4,kernel_size=1,activation='relu')
    def forward(self,x):
        p1=self.p1(x)
        p2=self.p2_2(self.p2_1(x))
        p3=self.p3_2(self.p3_1(x))
        p4=self.p4_2(self.p4_1(x))
        return nd.concat(p1,p2,p3,p4,dim=1)  # concatenate along the channel dimension
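Before assembling the full network, it can help to check what a single Inception block does to the channel dimension. Below is a minimal sanity check of the class above (my own sketch, not from the paper); the dummy input shape is chosen arbitrarily, using the first block configuration that appears in B3 further down:
# Sanity check: the output channel count of an Inception block is the sum of
# its four branch widths, while the spatial size is unchanged.
blk=Inception(64,(96,128),(16,32),32)
blk.initialize()
X=nd.random.uniform(shape=(1,192,28,28))  # arbitrary input; 192 channels here
print(blk(X).shape)  # (1, 256, 28, 28), since 64+128+32+32=256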
Build the full GoogLeNet model
# Five blocks
B1=nn.Sequential()
B1.add(nn.Conv2D(64,kernel_size=7,strides=2,padding=3,activation='relu'),
       nn.MaxPool2D(pool_size=3,strides=2,padding=1))
B2=nn.Sequential()
B2.add(nn.Conv2D(64,kernel_size=1,activation='relu'),
       nn.Conv2D(192,kernel_size=3,padding=1,activation='relu'),
       nn.MaxPool2D(pool_size=3,strides=2,padding=1))
B3=nn.Sequential()
B3.add(Inception(64,(96,128),(16,32),32),
       Inception(128,(128,192),(32,96),64),  # output channels: 128+192+96+64=480
       nn.MaxPool2D(pool_size=3,strides=2,padding=1))
B4=nn.Sequential()
B4.add(Inception(192,(96,208),(16,48),64),
       Inception(160,(112,224),(24,64),64),
       Inception(128,(128,256),(24,64),64),
       Inception(112,(144,288),(32,64),64),
       Inception(256,(160,320),(32,128),128),
       nn.MaxPool2D(pool_size=3,strides=2,padding=1))
B5=nn.Sequential()
B5.add(Inception(256,(160,320),(32,128),128),
       Inception(384,(192,384),(48,128),128),
       nn.GlobalAvgPool2D())
net=nn.Sequential()
net.add(B1,B2,B3,B4,B5,nn.Dense(10))
# View the output shape of each layer
X=nd.random.uniform(shape=(1,1,96,96))
net.initialize()
for layer in net:
    X=layer(X)
    print(layer.name,'output shape:',X.shape)
'''
sequential0 output shape: (1, 64, 24, 24)
sequential1 output shape: (1, 192, 12, 12)
sequential2 output shape: (1, 480, 6, 6)
sequential3 output shape: (1, 832, 3, 3)
sequential4 output shape: (1, 1024, 1, 1)
dense0 output shape: (1, 10)
'''
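For reference, these channel counts can be read off the last Inception block of each stage, just like the 480 noted in B3: after B4 it is 256+320+128+128=832, and after B5 it is 384+384+128+128=1024.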
Training the model
Limited by the GPU, we again take the Fashion-MNIST dataset as an example; note that the real focus of learning here is the new structural ideas of this network model, not the absolute accuracy.
lr,num_epochs,batch_size,ctx=0.1,5,128,d2l.try_gpu()
net.initialize(force_reinit=True,ctx=ctx,init=init.Xavier())
trainer=gluon.Trainer(net.collect_params(),'sgd',{'learning_rate':lr})
train_iter,test_iter=d2l.load_data_fashion_mnist(batch_size,resize=96)
d2l.train_ch5(net,train_iter,test_iter,batch_size,trainer,ctx,num_epochs)
'''
epoch 1, loss 2.1157, train acc 0.210, test acc 0.511, time 154.6 sec
epoch 2, loss 0.8424, train acc 0.666, test acc 0.782, time 143.6 sec
epoch 3, loss 0.5345, train acc 0.802, test acc 0.847, time 143.9 sec
epoch 4, loss 0.4107, train acc 0.846, test acc 0.870, time 144.0 sec
epoch 5, loss 0.3557, train acc 0.865, test acc 0.875, time 142.4 sec
'''
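As a quick follow-up, here is a minimal sketch of how the trained network might be used for prediction and saved for later reuse. It assumes net, test_iter and ctx from above are still in scope; the filename googlenet.params is just a placeholder, and d2l.get_fashion_mnist_labels comes from the same d2lzh package imported above.
# Predict on one test batch and save the trained weights (a sketch).
for X,y in test_iter:
    break  # take a single batch
X,y=X.as_in_context(ctx),y.as_in_context(ctx)
preds=net(X).argmax(axis=1)  # index of the highest-scoring class
true_labels=d2l.get_fashion_mnist_labels(y.asnumpy())
pred_labels=d2l.get_fashion_mnist_labels(preds.asnumpy())
print(list(zip(true_labels,pred_labels))[:6])
net.save_parameters('googlenet.params')  # placeholder filename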