Tensorflow customize the whole training process
2022-07-06 01:35:00 【@zhou】
Create a machine learning problem
f(x) = 3x + 7
A machine learning problem involves the following steps:
- Obtain the training data.
- Define the model.
- Define the loss function.
- Loop over the training data, computing the loss against the target values.
- Compute the gradients of that loss and use an optimizer to adjust the variables to fit the data.
- Evaluate the results.
Build data
Supervised learning uses inputs (usually denoted x) and outputs (denoted y, commonly called labels). The goal is to learn from paired inputs and outputs so that the output can be predicted from the input. In TensorFlow almost every input is represented as a tensor, usually a vector; in supervised learning the output (the value to be predicted) is also a tensor. Here we synthesize data by adding Gaussian (normally distributed) noise to points on a line, and then visualize it.
import numpy as np
import matplotlib.pyplot as plt

x = np.random.random([1000]) * 5            # 1000 inputs drawn uniformly from [0, 5)
noise = np.random.normal(0.0, 1.0, [1000])  # Gaussian noise, as described above
y = 3 * x + 7 + noise                       # noisy targets around the true line

plt.scatter(x, y, c="b")
plt.show()
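Before fitting with TensorFlow, a quick sanity check (not part of the original post) with np.polyfit confirms that the synthetic data really does lie near the line y = 3x + 7:

```python
import numpy as np

rng = np.random.default_rng(42)              # seeded so the check is reproducible
x = rng.random(1000) * 5
y = 3 * x + 7 + rng.normal(0.0, 1.0, 1000)   # same recipe as above

# Fit a degree-1 polynomial: returns (slope, intercept)
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)  # both should be close to 3 and 7
```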

Define a custom model
We subclass tf.Module and define two variables with trainable=True, so that they show up in the model's trainable_variables.
import tensorflow as tf

class selfmodel(tf.Module):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.v1 = tf.Variable(1.0, trainable=True)
        self.v2 = tf.Variable(2.0, trainable=True)

    def __call__(self, x):
        return self.v1 * x + self.v2
Define the loss function
We use the mean squared error (MSE) to compute the loss.
def loss(target_y, predicted_y):
    return tf.reduce_mean(tf.square(target_y - predicted_y))
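As a quick hand check (an equivalent numpy computation, not the article's code): for targets [1, 2] and predictions [1, 4], the squared errors are 0 and 4, so their mean is 2.0.

```python
import numpy as np

def mse(target, predicted):
    # Same reduction as tf.reduce_mean(tf.square(target - predicted))
    return float(np.mean(np.square(np.asarray(target) - np.asarray(predicted))))

print(mse([1.0, 2.0], [1.0, 4.0]))  # -> 2.0
```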
Define the training loop
We train for epochs steps to fit the two variables v1 and v2, recording their values after every step so that they can be visualized at the end.
def train(model, x, y, epochs, optimizer):
    v1, v2 = [], []
    for j in range(epochs):
        with tf.GradientTape() as gd:
            y_pred = model(x)              # the forward pass must run inside the tape
            loss_score = loss(y, y_pred)
        grad = gd.gradient(loss_score, model.trainable_variables)
        optimizer.apply_gradients(zip(grad, model.trainable_variables))
        v1.append(model.v1.numpy())
        v2.append(model.v2.numpy())
    return (model, v1, v2)
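For intuition about what the optimizer is doing, the same loop can be written with hand-derived gradients in plain numpy (a sketch, not the article's code): for the MSE loss, dL/dv1 = -2·mean(x·(y - ŷ)) and dL/dv2 = -2·mean(y - ŷ).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1000) * 5
y = 3 * x + 7 + rng.normal(0.0, 1.0, 1000)

v1, v2, lr = 1.0, 2.0, 0.01   # same initial values as selfmodel; 0.01 is SGD's default rate
for _ in range(5000):
    err = y - (v1 * x + v2)
    g1 = -2.0 * np.mean(x * err)   # dL/dv1
    g2 = -2.0 * np.mean(err)       # dL/dv2
    v1 -= lr * g1
    v2 -= lr * g2

print(round(v1, 2), round(v2, 2))  # close to the true 3 and 7
```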
Final results
If epochs is set too small, v1 and v2 will not have converged to the correct values.
opt = tf.keras.optimizers.SGD()
model = selfmodel()
epochs = 1000
(model, v1, v2) = train(model, x, y, epochs, opt)

# Plot the trajectories of v1 and v2 against the true values
plt.plot(range(epochs), v1, "r",
         range(epochs), v2, "b")
plt.plot([3] * epochs, "r--",
         [7] * epochs, "b--")
plt.legend(["W", "b", "True W", "True b"])
plt.show()

Problems in the code
# An earlier version of this code raised an error: grad came back as (None, None)
# because y_pred = model(x) was written outside the `with` block.
# The tape never recorded the forward pass, so when the loss was differentiated
# with respect to model.trainable_variables, no gradient path was found.
y_pred = model(x)                  # WRONG: forward pass happens outside the tape
with tf.GradientTape() as t:
    l = loss(y, y_pred)
grad = t.gradient(l, model.trainable_variables)   # -> (None, None)
optimizer = tf.keras.optimizers.SGD()
optimizer.apply_gradients(zip(grad, model.trainable_variables))  # raises an error
# Correct version
with tf.GradientTape() as t:
    y_pred = model(x)              # forward pass recorded by the tape
    l = loss(y, y_pred)
grad = t.gradient(l, model.trainable_variables)
print(model.trainable_variables)
optimizer = tf.keras.optimizers.SGD()
optimizer.apply_gradients(zip(grad, model.trainable_variables))
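The failure mode can be reproduced in isolation with a single variable (a minimal sketch, not the article's code): the tape can only differentiate through operations it actually recorded.

```python
import tensorflow as tf

v = tf.Variable(1.0)

# Forward pass outside the tape: the multiplication is never recorded.
y_out = v * 3.0
with tf.GradientTape() as tape:
    loss_out = y_out ** 2
grad_out = tape.gradient(loss_out, v)
print(grad_out)  # None

# Forward pass inside the tape: the whole computation is recorded.
with tf.GradientTape() as tape:
    y_in = v * 3.0
    loss_in = y_in ** 2
grad_in = tape.gradient(loss_in, v)
print(grad_in)  # d(9v^2)/dv = 18v = 18.0 at v = 1
```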