Building a Neural Network with TensorFlow
2022-07-28 06:14:00 【Jiyu Wangchuan】
1. Steps for Building a Neural Network with tf.keras
The six steps:
- import — import TensorFlow and any other required modules
- train, test — prepare the training and test data
- model = tf.keras.models.Sequential — define the network structure layer by layer
- model.compile — choose the optimizer, loss function, and metrics
- model.fit — train the model
- model.summary — print the network structure and parameter counts
models.Sequential()
model = tf.keras.models.Sequential([network structure])  # describe each layer in order
Example layer types:
- Flatten layer (flattens the input into one dimension):
tf.keras.layers.Flatten()
- Fully connected (Dense) layer:
tf.keras.layers.Dense(units, activation="activation function", kernel_regularizer=regularizer)
activation (given as a string), options: 'relu', 'softmax', 'sigmoid', 'tanh'
kernel_regularizer options: tf.keras.regularizers.l1(), tf.keras.regularizers.l2()
- Convolution layer:
tf.keras.layers.Conv2D(filters=number of kernels, kernel_size=kernel size, strides=stride, padding='valid' or 'same')
- LSTM layer: tf.keras.layers.LSTM()
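The four activation options named above are simple functions; a minimal pure-Python sketch (independent of TensorFlow) of what each one computes:

```python
import math

def relu(x):
    # max(0, x): passes positives through, zeroes out negatives
    return max(0.0, x)

def sigmoid(x):
    # squashes x into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # squashes x into (-1, 1)
    return math.tanh(x)

def softmax(xs):
    # turns a vector of scores into a probability distribution
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

print(relu(-3.0))                           # 0.0
print(sigmoid(0.0))                         # 0.5
print(sum(softmax([2.0, 1.0, 0.1])))        # 1.0 (probabilities sum to one)
```

Softmax is the usual choice for the last layer of a classifier, since its output can be read as class probabilities; relu is the common default for hidden layers.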
model.compile()
model.compile(optimizer=optimizer,
              loss=loss function,
              metrics=['metric'])
Optimizer options:
'sgd' or tf.keras.optimizers.SGD(lr=learning rate, momentum=momentum)
'adagrad' or tf.keras.optimizers.Adagrad(lr=learning rate)
'adadelta' or tf.keras.optimizers.Adadelta(lr=learning rate)
'adam' or tf.keras.optimizers.Adam(lr=learning rate, beta_1=0.9, beta_2=0.999)
(recent TensorFlow versions use learning_rate instead of lr)
loss options:
'mse' or tf.keras.losses.MeanSquaredError()
'sparse_categorical_crossentropy' or tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
(use from_logits=False when the network output already passes through softmax; True when it outputs raw logits)
metrics options:
'accuracy': y_ (label) and y (prediction) are both plain numbers, e.g. y_=[1], y=[1]
'categorical_accuracy': y_ and y are both one-hot style (probability distributions), e.g. y_=[0,1,0], y=[0.256,0.695,0.048]
'sparse_categorical_accuracy': y_ is a numeric label and y is a probability distribution, e.g. y_=[1], y=[0.256,0.695,0.048]
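The difference between the three metrics can be illustrated without TensorFlow. A sketch of the per-sample comparison each one performs (helper names here are hypothetical, not Keras API):

```python
def argmax(xs):
    # index of the largest score = the predicted class
    return max(range(len(xs)), key=lambda i: xs[i])

def accuracy(y_true, y_pred):
    # 'accuracy': label and prediction are both plain numbers
    return 1.0 if y_true == y_pred else 0.0

def categorical_accuracy(y_true, y_pred):
    # 'categorical_accuracy': label is one-hot, prediction is a distribution
    return 1.0 if argmax(y_true) == argmax(y_pred) else 0.0

def sparse_categorical_accuracy(y_true, y_pred):
    # 'sparse_categorical_accuracy': label is an integer, prediction is a distribution
    return 1.0 if y_true == argmax(y_pred) else 0.0

print(categorical_accuracy([0, 1, 0], [0.256, 0.695, 0.048]))   # 1.0
print(sparse_categorical_accuracy(1, [0.256, 0.695, 0.048]))    # 1.0
```

Since MNIST labels are stored as integers (0–9) and the network outputs a softmax distribution over 10 classes, the example in section 3 uses 'sparse_categorical_accuracy'.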
model.fit()
model.fit(training-set inputs, training-set labels,
          batch_size=..., epochs=...,
          validation_data=(validation-set inputs, validation-set labels),
          validation_split=fraction of the training set to reserve for validation,
          validation_freq=validate once every this many epochs)
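validation_split takes the validation samples from the end of the training data, before any shuffling. A quick sketch of the resulting set sizes, assuming Keras's convention of splitting at int(n * (1 - split)):

```python
def split_counts(n_samples, validation_split):
    # the first part trains, the tail validates (split happens before shuffling)
    split_at = int(n_samples * (1 - validation_split))
    return split_at, n_samples - split_at

print(split_counts(60000, 0.2))  # (48000, 12000)
```

Note that validation_data and validation_split are alternatives: if validation_data is given, validation_split is ignored.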
model.summary()

2. Custom Models (Subclassing Model)
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        # define the network building blocks (layers) here
    def call(self, x):
        # call the building blocks to implement forward propagation
        return y

model = MyModel()
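The pattern works because the base class's __call__ delegates to the subclass's call(), which is also how tf.keras.Model dispatches forward propagation. A TensorFlow-free analogue of the mechanism:

```python
# Minimal stand-in for tf.keras.Model: __call__ forwards to call()
class Model:
    def __call__(self, x):
        return self.call(x)

class Doubler(Model):
    def __init__(self):
        super().__init__()
        self.scale = 2          # "layer" state is defined in __init__

    def call(self, x):
        return self.scale * x   # forward propagation lives in call()

model = Doubler()
print(model(21))  # 42 — calling the model runs call()
```

This is why you invoke the model as model(x) rather than model.call(x): in real Keras, __call__ also handles bookkeeping (building layers, tracking weights) before delegating to call().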
Example:
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense

class IrisModel(Model):
    def __init__(self):
        super(IrisModel, self).__init__()
        self.d1 = Dense(3, activation='softmax',
                        kernel_regularizer=tf.keras.regularizers.l2())
    def call(self, x):
        y = self.d1(x)
        return y

model = IrisModel()
3. Handwritten-Digit Classification with tf.keras (MNIST)
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

model.fit(x_train, y_train, batch_size=32, epochs=5,
          validation_data=(x_test, y_test), validation_freq=1)
model.summary()
A typical run produces:
60000/60000 [==============================] - 5s 89us/sample - loss: 0.0455 - sparse_categorical_accuracy: 0.9861 - val_loss: 0.0806 - val_sparse_categorical_accuracy: 0.9752
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) multiple 0
_________________________________________________________________
dense (Dense) multiple 100480
_________________________________________________________________
dense_1 (Dense) multiple 1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
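The parameter counts in the summary can be checked by hand: a Dense layer holds inputs × units weights plus one bias per unit, and Flatten adds no parameters.

```python
def dense_params(n_inputs, n_units):
    # weight matrix (n_inputs x n_units) plus one bias per unit
    return n_inputs * n_units + n_units

d1 = dense_params(28 * 28, 128)   # Flatten turns a 28x28 image into 784 inputs
d2 = dense_params(128, 10)        # 10 output classes
print(d1, d2, d1 + d2)            # 100480 1290 101770 — matches the summary
```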