Deep Learning --- The Weights of the Three-Good Students' Scores (2)
2022-06-29 18:48:00 【knighthood2001】
🥰 Blog home page: knighthood2001
Likes and comments are welcome!
I love Python and look forward to making progress and growing together with you!
In the previous article, Deep Learning (First Look at TensorFlow 2) --- The Weights of the Three-Good Students' Scores (1), we saw that the neural network we built can already run, but it obviously cannot be put to real use yet, because its final results contain errors. Before a neural network is put into use, it must go through a training process. So how do we train a neural network?
Steps for training a neural network
① Input data: for example, feed in x1, x2, x3 from our example, i.e., a student's three scores for moral education, intellectual education, and physical education.
② Compute the result: the neural network computes a result from the input data and the current values of its variable parameters; in this article's example, that result is y.
③ Compute the error: compare the computed result y with the result we expect (the standard answer, which we will call yTrain for now) and see how large the error (loss) is. In our example, yTrain is each student's known total score.
④ Adjust the variable parameters of the neural network: based on the size of the error, use the back-propagation algorithm to adjust the variable parameters of the neural network accordingly (in this chapter's example, w1, w2, w3).
⑤ Train again: after adjusting the variable parameters, repeat the steps above and train again, until the error drops below the level we consider acceptable; only then is the training of the neural network complete. The sketch below walks through these five steps in plain Python.
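To make the five steps concrete before we look at the TensorFlow code, here is a minimal, self-contained sketch of the same loop in plain Python. It is an illustration under assumptions, not this article's method: it uses squared error and plain gradient descent as stand-ins for the tf.abs loss and RMSPropOptimizer used below, and it trains on only one made-up sample, so it cannot recover the true weights 0.6, 0.3, 0.1; it only shows the mechanics of steps ① through ⑤.
# Hypothetical plain-Python sketch of the five training steps.
x = (90, 80, 70)        # step 1: input data (one student's scores)
t = 85.0                # the target (yTrain)
w = [0.1, 0.1, 0.1]     # variable parameters, same starting values as the article
lr = 0.000001           # learning rate (chosen small for stability)
for step in range(10000):                          # step 5: repeat the training
    y = sum(xi * wi for xi, wi in zip(x, w))       # step 2: compute the result
    loss = (y - t) ** 2                            # step 3: compute the error
    grad_y = 2.0 * (y - t)                         # d(loss)/dy for squared error
    for i in range(3):                             # step 4: adjust the parameters
        w[i] -= lr * grad_y * x[i]                 # d(loss)/dw_i = grad_y * x_i
print(y, w)  # y converges toward the target 85.0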
The program written in the previous article already implements the first two steps of this process; now let's implement the remaining steps.
Full code
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
x1 = tf.compat.v1.placeholder(dtype=tf.float32)
x2 = tf.compat.v1.placeholder(dtype=tf.float32)
x3 = tf.compat.v1.placeholder(dtype=tf.float32)
# Set the standard answer
yTrain = tf.compat.v1.placeholder(dtype=tf.float32)
w1 = tf.Variable(0.1, dtype=tf.float32)
w2 = tf.Variable(0.1, dtype=tf.float32)
w3 = tf.Variable(0.1, dtype=tf.float32)
n1 = x1 * w1
n2 = x2 * w2
n3 = x3 * w3
y = n1 + n2 + n3
loss = tf.abs(y - yTrain)
optimizer = tf.compat.v1.train.RMSPropOptimizer(0.001)
train = optimizer.minimize(loss)
sess = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
sess.run(init)
for i in range(10000):
    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 90, x2: 80, x3: 70, yTrain: 85})
    print(result)
    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 98, x2: 95, x3: 87, yTrain: 96})
    print(result)
Changes
① We define a placeholder, yTrain, which during training is fed the expected result for each set of input data. This expected value is usually called the "target result", or simply the "target".
# Target result (the target)
yTrain = tf.compat.v1.placeholder(dtype=tf.float32)
② Ⅰ After the result y has been computed, we use tf.abs(y - yTrain) to compute the error (loss).
Ⅱ We then define an optimizer object, optimizer. An optimizer is the object that adjusts the variable parameters of a neural network. Here we use RMSPropOptimizer, and the argument 0.001 is the optimizer's learning rate. For now, the learning rate can be understood simply as the factor that determines how large an adjustment the optimizer makes to each parameter.
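As a rough illustration of what the learning rate does, here is a single hand-computed parameter update. The numbers are made up, and plain gradient descent is used instead of RMSProp (whose update rule is more elaborate), but the role of the learning rate as a step-size factor is the same.
w = 0.1                # current value of one variable parameter
gradient = 2.5         # hypothetical gradient of loss with respect to w
learning_rate = 0.001  # the value we pass to the optimizer
w = w - learning_rate * gradient  # a larger learning rate means a larger step
print(w)               # 0.0975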
Ⅲ After defining the optimizer, we define a training object, train, which expresses how we intend to train this neural network. We define it as optimizer.minimize(loss), i.e., we ask the optimizer to adjust the variable parameters so as to minimize loss.
loss = tf.abs(y - yTrain)
optimizer = tf.compat.v1.train.RMSPropOptimizer(0.001)
train = optimizer.minimize(loss)
Now we can train. The training code looks very similar to the earlier computation.
for i in range(10000):
    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 90, x2: 80, x3: 70, yTrain: 85})
    print(result)
    result = sess.run([train, x1, x2, x3, w1, w2, w3, y, yTrain, loss], feed_dict={x1: 98, x2: 95, x3: 87, yTrain: 96})
    print(result)
There are two main differences from the earlier code. First, feed_dict now carries one extra value, yTrain, i.e., the target result we specify for each group of input data x1, x2, x3. Second, the first argument to sess.run (the array of results we want returned) now includes the train object. Having train in the result array tells the program to execute the training step that train contains; in that process, y, loss, and the rest are computed anyway, so even if the result array held nothing but train, the other values would still be calculated. We simply would not see them.
Only when the training object is included in the result array can a call to sess.run be called "training"; otherwise it merely "runs" the neural network, that is, performs a "computation".
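Here is a small sketch of that distinction, assuming the session and tensors defined in the code above: fetching only y evaluates the network, while also fetching train makes the optimizer update the weights.
feed = {x1: 90, x2: 80, x3: 70, yTrain: 85}
# "Computation": y is evaluated, but w1, w2, w3 are NOT changed
print(sess.run(y, feed_dict=feed))
# "Training": fetching train makes the optimizer update w1, w2, w3;
# y and loss are still computed internally even though we do not fetch them
sess.run(train, feed_dict=feed)
print(sess.run([w1, w2, w3]))  # the weights have moved slightly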
Although the two training calls use different values of x1, x2, x3, neural network training is adaptive: the variable parameters are adjusted gradually during training so that the computation error shrinks across all of the input data.
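One way to make that explicit, assuming the same graph and session as above, is to keep the training samples in an ordinary Python list instead of hard-coding two sess.run calls. The loop below is equivalent to the article's training loop, minus the printing.
# Both samples from this article: (x1, x2, x3, yTrain)
samples = [
    (90, 80, 70, 85),
    (98, 95, 87, 96),
]
for i in range(10000):
    for a, b, c, t in samples:
        sess.run(train, feed_dict={x1: a, x2: b, x3: c, yTrain: t})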
We wrap this in a for loop and run it for 10,000 rounds (the range(10000) in the code above). The last two results are as follows:
[Screenshot of the last two training results omitted]
The loss has shrunk to between 0.023246765 and 0.0332489, and the values of w1, w2, w3 are already very close to the 0.6, 0.3, 0.1 we expect (the weights we assumed in the previous article).
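Once training has finished, the network can be used as a calculator for new data. A quick check, assuming the trained session above and a made-up student with scores 92, 88, 76: the output should be close to 0.6*92 + 0.3*88 + 0.1*76 = 89.2.
# No yTrain is needed here: we are only computing y, not training
print(sess.run(y, feed_dict={x1: 92, x2: 88, x3: 76}))  # roughly 89.2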
In a later article, I will explain how to optimize this neural network model.