TensorFlow 2.0 Deep Learning: A Simple Tutorial with Runnable Code
2022-07-26 23:44:00 【Hekai】
I previously wrote PyTorch, but my current work requires the TensorFlow 2.0 framework, so after this period of study I am summing it up. TensorFlow 2.0 offers two training modes you can use.
Simple mode
TensorFlow's simple deep-learning mode encapsulates every operation as much as possible, keeping things easy where they can be; you only need to know the input data format and how the model receives data.
Preparing the data
# Import the package
import tensorflow as tf
# A particularly nice TensorFlow mechanism is that common datasets come pre-packaged
mnist = tf.keras.datasets.mnist
# x_train, y_train and x_test, y_test here are plain NumPy arrays
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Scale the pixel values from [0, 255] to [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0
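As a quick sanity check (an illustrative addition, not part of the original post), the loaded arrays have these shapes:

# Illustrative check: MNIST ships as 60,000 training and 10,000 test images of 28x28 pixels
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)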
Model
TensorFlow's syntax is encapsulated throughout; many APIs are prepackaged, so you don't have to write them yourself, though finding the right one is a little troublesome.
# Building the model is as simple as listing layers, just like PyTorch's `Sequential`;
# the layers execute in order
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
# Configure the optimizer, loss function, and metrics
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
Run
Running it is even easier.
# Train
model.fit(x_train, y_train, epochs=5)
# Evaluate on the test set
model.evaluate(x_test, y_test, verbose=2)
Of course, you can also attach custom callback functions to control early stopping, model checkpointing, learning-rate scheduling, and so on, as sketched below.
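For illustration, here is a minimal sketch of the built-in Keras callbacks for those three tasks; the monitored metric, patience values, and checkpoint path are assumptions, not from the original post:

# A minimal sketch of common built-in Keras callbacks; the monitored metric,
# patience values, and checkpoint path below are illustrative assumptions
callbacks = [
    # Stop training when validation loss has not improved for 2 epochs
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2),
    # Save the best model seen so far to disk
    tf.keras.callbacks.ModelCheckpoint('best_model.h5', save_best_only=True),
    # Halve the learning rate when validation loss plateaus
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=1),
]
model.fit(x_train, y_train, epochs=5, validation_split=0.1, callbacks=callbacks)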
Expert mode
In expert mode there is much more that we can set up ourselves. Let's look at it step by step.
Preparing the data
Note that, unlike PyTorch, TensorFlow does not emit the data as one-to-one <x, y> pairs; it takes a column of x and a column of y separately.
# Import the packages
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

# Make the simplest possible data
x = [i for i in range(1, 10000)]
y = [(i+1)*(i+2) for i in x]
# To become TensorFlow tensors, the lists are first converted to NumPy arrays
# (float32, since the Dense layers expect floating-point input)
x = np.array(x, dtype=np.float32)
y = np.array(y, dtype=np.float32)

# This is similar to PyTorch's Dataset:
# batch is the batch size
# shuffle's argument is the buffer size used when randomizing the sample order
# map applies further processing to each element;
# its inputs must match from_tensor_slices((x, y)), its outputs are up to you
def fun(x, y):
    # Add a trailing feature dimension so the Dense layers accept the input
    x = tf.expand_dims(x, -1)
    y = tf.expand_dims(y, -1)
    return x, x + 2, y

# ds is where TensorFlow keeps the data, similar to PyTorch's DataLoader;
# it can be iterated with a for loop. Shuffling before batching reorders
# individual samples rather than whole batches.
ds = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(5000).batch(1000).map(fun)
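To verify the pipeline, the dataset can be iterated directly; this check is an illustrative addition, not part of the original post:

# Illustrative check (not the author's code): each element of ds is the
# (x, x+2, y) triple produced by fun, with a leading batch dimension
for bx, bx1, by in ds.take(1):
    print(bx.shape, bx1.shape, by.shape)  # (1000, 1) (1000, 1) (1000, 1)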
Model
This model is also much like PyTorch's, except that you inherit from Model and override the method call, which is the forward pass.
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.d = Dense(32, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(32, activation='relu')
        # A single output unit, since y is a scalar regression target
        self.d2 = Dense(1)

    # map above returns two x values, so call receives them as two parameters
    def call(self, x, x1):
        x = self.d(x)
        x = self.flatten(x)
        x1 = self.d1(x1)
        return self.d2(x + x1)

# Create a model instance
model = MyModel()
# Then define the loss function and the optimizer
loss_object = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam()
# A metric that accumulates the running mean of the training loss
train_loss = tf.keras.metrics.Mean(name='train_loss')
Run
Usually a training-step function is defined and decorated with @tf.function, which compiles it into a graph and speeds up training, but an ordinary Python print inside it can no longer show the data flowing through (a sketch of this caveat follows the training loop).
@tf.function
def train_step(x, x1, y):
    with tf.GradientTape() as tape:
        # training=True is only needed if there are layers with different
        # behavior during training versus inference (e.g. Dropout).
        predictions = model(x, x1, training=True)
        loss = loss_object(y, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)

# Number of epochs to train
EPOCHS = 500
for epoch in range(EPOCHS):
    # Reset the metric at the start of each epoch
    train_loss.reset_states()
    for x, x1, y in ds:
        train_step(x, x1, y)
    print(f'Epoch {epoch + 1}, Loss: {train_loss.result()}')
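To illustrate the printing caveat mentioned above (this sketch is an addition, not the author's code): inside a @tf.function, an ordinary Python print runs only once while the function is being traced into a graph, whereas tf.print becomes part of the graph and runs on every call:

# Illustrative sketch: Python print fires only at trace time,
# tf.print executes on every call
@tf.function
def traced_step(t):
    print('traced: runs once, at graph-build time')
    tf.print('runs every step, value:', t)
    return t * 2

traced_step(tf.constant(1.0))
traced_step(tf.constant(2.0))  # no Python print output the second time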
That's basically it. TensorFlow's API is especially rich; once you are familiar with it, many functions don't have to be written by hand, which is quite convenient, but it does take effort to learn what is available.