TensorFlow 2.1 Basic Concepts and Common Functions
2022-07-28 06:13:00 【Jiyu Wangchuan】
1. Basic Concepts
1.1 The concept of a Tensor
In TensorFlow, a Tensor is a multidimensional array (list); the number of dimensions of a tensor is described by its rank.
| Dimension | Rank | Name | Example |
|---|---|---|---|
| 0-D | 0 | scalar | s = 1, 2, 3 |
| 1-D | 1 | vector | v = [1, 2, 3] |
| 2-D | 2 | matrix | m = [[1,2,3],[4,5,6],[7,8,9]] |
| n-D | n | tensor | t = [[[...]]] (n pairs of brackets) |
A tensor can represent arrays (lists) of rank 0 through n.
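As a quick illustration, here is a minimal sketch (the variable names are arbitrary) that uses tf.rank and the .shape attribute to inspect tensors of different orders:
import tensorflow as tf

s = tf.constant(1)                                   # 0-D: scalar, rank 0
v = tf.constant([1, 2, 3])                           # 1-D: vector, rank 1
m = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # 2-D: matrix, rank 2

print(tf.rank(s), s.shape)  # rank 0, shape ()
print(tf.rank(v), v.shape)  # rank 1, shape (3,)
print(tf.rank(m), m.shape)  # rank 2, shape (3, 3)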
1.2 Tensor data types
- tf.int, tf.float: integer and floating-point types, e.g. tf.int32, tf.float32, tf.float64
- tf.bool: Boolean type, e.g. tf.constant([True, False])
- tf.string: string type, e.g. tf.constant("Hello, world!")
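Each dtype can be inspected through the .dtype attribute; a minimal sketch (the variable names are arbitrary):
import tensorflow as tf

i = tf.constant([1, 2], dtype=tf.int32)        # integer tensor
f = tf.constant([1.0, 2.0], dtype=tf.float64)  # floating-point tensor
b = tf.constant([True, False])                 # Boolean tensor
s = tf.constant("Hello, world!")               # string tensor
print(i.dtype, f.dtype, b.dtype, s.dtype)
# <dtype: 'int32'> <dtype: 'float64'> <dtype: 'bool'> <dtype: 'string'>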
1.3 Creating a Tensor
- Create a tensor directly:
tf.constant(tensor content, dtype=data type (optional))
import tensorflow as tf
a=tf.constant([1,5],dtype=tf.int64)
print(a)
print(a.dtype)
print(a.shape)
Output:
tf.Tensor([1 5], shape=(2,), dtype=int64)
<dtype: 'int64'>
(2,)
- Convert NumPy data to the Tensor data type:
tf.convert_to_tensor(data name, dtype=data type (optional))
import tensorflow as tf
import numpy as np
a = np.arange(0, 5)
b = tf.convert_to_tensor(a, dtype=tf.int64)
print(a)
print(b)
Output:
[0 1 2 3 4]
tf.Tensor([0 1 2 3 4], shape=(5,), dtype=int64)
- Initialize with a function:
  - Create a tensor of all 0s: tf.zeros(dimension)
  - Create a tensor of all 1s: tf.ones(dimension)
  - Create a tensor filled with a specified value: tf.fill(dimension, value)

The dimension argument is written as follows:
  - one-dimensional: write the number directly
  - two-dimensional: use [rows, columns]
  - multidimensional: use [n, m, j, k, ...]
a = tf.zeros([2, 3])
b = tf.ones(4)
c = tf.fill([2, 2], 9)
print(a)
print(b)
print(c)
Output:
tf.Tensor([[0. 0. 0.] [0. 0. 0.]], shape=(2, 3), dtype=float32)
tf.Tensor([1. 1. 1. 1.], shape=(4,), dtype=float32)
tf.Tensor([[9 9] [9 9]], shape=(2, 2), dtype=int32)
- Generate normally distributed random numbers (the default mean is 0 and the default standard deviation is 1):
tf.random.normal(dimension, mean=mean, stddev=standard deviation)
- Generate random numbers from a truncated normal distribution:
tf.random.truncated_normal(dimension, mean=mean, stddev=standard deviation)
In tf.random.truncated_normal, if a generated value falls outside (μ-2σ, μ+2σ), it is regenerated, which ensures that generated values stay near the mean.
d = tf.random.normal([2, 2], mean=0.5, stddev=1)
print(d)
e = tf.random.truncated_normal([2, 2], mean=0.5, stddev=1)
print(e)
Output:
tf.Tensor(
[[0.7925745 0.643315 ]
[1.4752257 0.2533372]], shape=(2, 2), dtype=float32)
tf.Tensor(
[[ 1.3688478 1.0125661 ]
[ 0.17475659 -0.02224463]], shape=(2, 2), dtype=float32)
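As a sanity check on the truncation rule above (a sketch added here, not part of the original example), every truncated sample should fall within two standard deviations of the mean:
import tensorflow as tf

e = tf.random.truncated_normal([1000], mean=0.5, stddev=1)
# all samples lie in (mean - 2*stddev, mean + 2*stddev) = (-1.5, 2.5)
print(tf.reduce_min(e) > -1.5, tf.reduce_max(e) < 2.5)  # both True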
- Generate uniformly distributed random numbers in [minval, maxval):
tf.random.uniform(dimension, minval=minimum, maxval=maximum)
f = tf.random.uniform([2, 2], minval=0, maxval=1)
print(f)
Output:
tf.Tensor(
[[0.28219545 0.15581512]
[0.77972126 0.47817433]], shape=(2, 2), dtype=float32)
2. Common Functions
2.1 Statistical functions
- Force-convert a tensor to the specified data type:
tf.cast(tensor, dtype=data type)
- Compute the minimum of the elements in a tensor:
tf.reduce_min(tensor)
- Compute the maximum of the elements in a tensor:
tf.reduce_max(tensor)
x1 = tf.constant([1., 2., 3.], dtype=tf.float64)
print(x1)
x2 = tf.cast(x1, tf.int32)
print(x2)
print(tf.reduce_min(x2), tf.reduce_max(x2))
Output:
tf.Tensor([1. 2. 3.], shape=(3,), dtype=float64)
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32)
In a 2-D tensor or array, the dimension an operation runs over can be controlled by setting axis to 0 or 1:
- axis=0 operates across rows, i.e. down each column (vertically)
- axis=1 operates across columns, i.e. along each row (horizontally)
- If axis is not specified, all elements participate in the calculation

- Compute the mean of a tensor along the specified axis:
tf.reduce_mean(tensor, axis=axis)
- Compute the sum of a tensor along the specified axis:
tf.reduce_sum(tensor, axis=axis)
x = tf.constant([[1, 2, 3], [2, 2, 3]])
print(x)
print(tf.reduce_mean(x))
print(tf.reduce_sum(x, axis=1))
Output:
tf.Tensor([[1 2 3] [2 2 3]], shape=(2, 3), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor([6 7], shape=(2,), dtype=int32)
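To make the effect of axis concrete, here is a small sketch on the same tensor (an added illustration, not from the original):
import tensorflow as tf

x = tf.constant([[1, 2, 3], [2, 2, 3]])
print(tf.reduce_sum(x, axis=0))  # down each column: tf.Tensor([3 4 6], shape=(3,), dtype=int32)
print(tf.reduce_sum(x, axis=1))  # across each row:  tf.Tensor([6 7], shape=(2,), dtype=int32)
print(tf.reduce_sum(x))          # no axis, all elements: tf.Tensor(13, shape=(), dtype=int32)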
2.2 Mathematical operation functions
2.2.1 Arithmetic
- Element-wise addition of two tensors: tf.add(tensor1, tensor2)
- Element-wise subtraction of two tensors: tf.subtract(tensor1, tensor2)
- Element-wise multiplication of two tensors: tf.multiply(tensor1, tensor2)
- Element-wise division of two tensors: tf.divide(tensor1, tensor2)
Only tensors of the same shape can take part in these four element-wise operations (TensorFlow will also broadcast compatible shapes; see the sketch after the output below).
a = tf.ones([1, 3])
b = tf.fill([1, 3], 3.)
print(a)
print(b)
print(tf.add(a,b))
print(tf.subtract(a,b))
print(tf.multiply(a,b))
print(tf.divide(b,a))
Output:
tf.Tensor([[1. 1. 1.]], shape=(1, 3), dtype=float32)
tf.Tensor([[3. 3. 3.]], shape=(1, 3), dtype=float32)
tf.Tensor([[4. 4. 4.]], shape=(1, 3), dtype=float32)
tf.Tensor([[-2. -2. -2.]], shape=(1, 3), dtype=float32)
tf.Tensor([[3. 3. 3.]], shape=(1, 3), dtype=float32)
tf.Tensor([[3. 3. 3.]], shape=(1, 3), dtype=float32)
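A brief sketch of the broadcasting behavior mentioned above (an added illustration; broadcasting a Python scalar against a tensor is standard TensorFlow behavior):
import tensorflow as tf

a = tf.ones([1, 3])
# the scalar 3. is broadcast to the shape of a before multiplying
print(tf.multiply(a, 3.))  # tf.Tensor([[3. 3. 3.]], shape=(1, 3), dtype=float32)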
2.2.2 Square, power, and square root
- Compute the square of a tensor: tf.square(tensor)
- Compute the n-th power of a tensor: tf.pow(tensor, n)
- Compute the square root of a tensor: tf.sqrt(tensor)
a = tf.fill([1, 2], 3.)
print(a)
print(tf.pow(a, 3))
print(tf.square(a))
print(tf.sqrt(a))
Output:
tf.Tensor([[3. 3.]], shape=(1, 2), dtype=float32)
tf.Tensor([[27. 27.]], shape=(1, 2), dtype=float32)
tf.Tensor([[9. 9.]], shape=(1, 2), dtype=float32)
tf.Tensor([[1.7320508 1.7320508]], shape=(1, 2), dtype=float32)
2.2.3 Matrix multiplication
Multiply two matrices: tf.matmul(matrix1, matrix2)
a = tf.ones([3, 2])
b = tf.fill([2, 3], 3.)
print(tf.matmul(a, b))
Output:
tf.Tensor(
[[6. 6. 6.]
[6. 6. 6.]
[6. 6. 6.]], shape=(3, 3), dtype=float32)
2.3 Other common functions
1. tf.Variable()
tf.Variable() marks a variable as "trainable"; gradient information is recorded for marked variables during backpropagation. In neural network training, this function is often used to mark the parameters to be trained.
w = tf.Variable(tf.random.normal([2, 2], mean=0, stddev=1))
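A small sketch of the difference between a marked variable and an ordinary constant (an added illustration, using tf.GradientTape, which is introduced below):
import tensorflow as tf

w = tf.Variable(tf.constant(3.0))  # trainable: watched by the tape automatically
c = tf.constant(3.0)               # not watched by default

with tf.GradientTape() as tape:
    y = w * c
dw, dc = tape.gradient(y, [w, c])
print(dw)  # tf.Tensor(3.0, shape=(), dtype=float32): gradient is recorded
print(dc)  # None: constants are not tracked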
2. tf.data.Dataset.from_tensor_slices()
Slices the first dimension of the input tensors to generate (input feature, label) pairs and build a dataset:
data = tf.data.Dataset.from_tensor_slices((input features, labels))
features = tf.constant([12,23,10,17])
labels = tf.constant([0, 1, 1, 0])
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
print(dataset)
for element in dataset:
    print(element)
Output:
<TensorSliceDataset shapes: ((), ()), types: (tf.int32, tf.int32)>
(<tf.Tensor: id=9, shape=(), dtype=int32, numpy=12>, <tf.Tensor: id=10, shape=(), dtype=int32, numpy=0>)
(<tf.Tensor: id=11, shape=(), dtype=int32, numpy=23>, <tf.Tensor: id=12, shape=(), dtype=int32, numpy=1>)
(<tf.Tensor: id=13, shape=(), dtype=int32, numpy=10>, <tf.Tensor: id=14, shape=(), dtype=int32, numpy=1>)
(<tf.Tensor: id=15, shape=(), dtype=int32, numpy=17>, <tf.Tensor: id=16, shape=(), dtype=int32, numpy=0>)
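In training code the dataset is usually shuffled and batched before iteration; a minimal sketch using the standard tf.data methods shuffle and batch (an added illustration, with an arbitrary buffer size and batch size):
import tensorflow as tf

features = tf.constant([12, 23, 10, 17])
labels = tf.constant([0, 1, 1, 0])
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
# shuffle with a buffer covering the whole dataset, then form batches of 2
for x, y in dataset.shuffle(buffer_size=4).batch(2):
    print(x.numpy(), y.numpy())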
3. tf.GradientTape()
The with block records the computation process, and the tape's gradient method computes the gradient of a tensor:
with tf.GradientTape() as tape:
    w = tf.Variable(tf.constant(3.0))
    loss = tf.pow(w, 2)  # loss = w^2, so loss' = 2w
grad = tape.gradient(loss, w)
print(grad)
Output:
tf.Tensor(6.0, shape=(), dtype=float32)
4. enumerate()
enumerate is a Python built-in function that traverses every element of a sequence (such as a list, tuple, or string) and pairs each one as index, element; it is often used in for loops.
seq = ['one', 'two', 'three']
for i, element in enumerate(seq):
    print(i, element)
Output:
0 one
1 two
2 three
5. tf.one_hot()
One-hot encoding: in classification problems, labels are often represented with one-hot encoding, where 1 means "is this class" and 0 means "is not".
tf.one_hot() converts the input data into one-hot form:
tf.one_hot(data to convert, depth=number of classes)
classes = 3
labels = tf.constant([1, 0, 2])  # element values range from 0 (minimum) to 2 (maximum)
output = tf.one_hot(labels, depth=classes)
print(output)
Output:
tf.Tensor(
[[0. 1. 0.]
[1. 0. 0.]
[0. 0. 1.]], shape=(3, 3), dtype=float32)
6. tf.nn.softmax()
$$\mathrm{softmax}(y_i)=\frac{e^{y_i}}{\sum_{j=0}^{n-1}e^{y_j}}$$
When the n outputs $(y_0, y_1, \ldots, y_{n-1})$ of an n-class classifier pass through softmax(), they conform to a probability distribution:
$$\forall x,\ P(X=x)\in[0,1] \quad \text{and} \quad \sum_x P(X=x)=1$$
y = tf.constant([1.01, 2.01, -0.66])
y_pro = tf.nn.softmax(y)
print("After softmax, y_pro is:", y_pro)
Output:
After softmax, y_pro is: tf.Tensor([0.25598174 0.69583046 0.0481878], shape=(3,), dtype=float32)
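To confirm the probability-distribution property stated above, the outputs can be summed (a quick check added here, not part of the original):
import tensorflow as tf

y = tf.constant([1.01, 2.01, -0.66])
y_pro = tf.nn.softmax(y)
# each probability is in [0, 1] and they sum to 1 (up to float rounding)
print(tf.reduce_sum(y_pro))  # approximately tf.Tensor(1.0, shape=(), dtype=float32)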
7. assign_sub()
Subtraction assignment: updates the value of a parameter in place and returns it.
Before calling assign_sub, use tf.Variable to define the variable w as trainable (self-updating):
w.assign_sub(amount to subtract from w)
w = tf.Variable(4)
w.assign_sub(1)
print(w)
Output:
<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=3>
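Putting tf.GradientTape and assign_sub together gives one step of gradient descent; a minimal sketch (the learning rate lr = 0.2 is an arbitrary illustrative choice):
import tensorflow as tf

w = tf.Variable(tf.constant(5.0))
lr = 0.2  # illustrative learning rate

with tf.GradientTape() as tape:
    loss = tf.square(w)        # loss = w^2, so dloss/dw = 2w
grad = tape.gradient(loss, w)  # 2 * 5.0 = 10.0
w.assign_sub(lr * grad)        # w <- 5.0 - 0.2 * 10.0 = 3.0
print(w.numpy())               # 3.0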
8. tf.argmax()
Returns the index of the maximum value of a tensor along the specified axis:
tf.argmax(tensor, axis=axis)
import numpy as np
test = np.array([[1, 2, 3], [2, 3, 4], [5, 4, 3], [8, 7, 2]])
print(test)
print(tf.argmax(test, axis=0))  # index of the max in each column (down)
print(tf.argmax(test, axis=1))  # index of the max in each row (across)
Output:
[[1 2 3]
[2 3 4]
[5 4 3]
[8 7 2]]
tf.Tensor([3 3 1], shape=(3,), dtype=int64)
tf.Tensor([2 2 0 0], shape=(4,), dtype=int64)
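argmax is often combined with softmax to turn network outputs into predicted classes; a minimal sketch (the logit values here are made up for illustration):
import tensorflow as tf

logits = tf.constant([[1.01, 2.01, -0.66],
                      [-0.3, 0.1, 2.5]])  # two samples, three classes
probs = tf.nn.softmax(logits)
pred = tf.argmax(probs, axis=1)  # index of the largest probability in each row
print(pred)  # tf.Tensor([1 2], shape=(2,), dtype=int64)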
9. tf.where()
tf.where(condition, A, B): where the condition is true, take the element from A; where it is false, take the element from B.
a = tf.constant([1, 2, 3, 1, 1])
b = tf.constant([0, 1, 3, 4, 5])
c = tf.where(tf.greater(a, b), a, b)  # where a > b, take the element of a; otherwise take the element of b
print("c:", c)
Output:
c: tf.Tensor([1 2 3 4 5], shape=(5,), dtype=int32)
10. np.random.RandomState.rand()
Returns random numbers in [0, 1):
np.random.RandomState.rand(dimension)  # if dimension is empty, returns a scalar
import numpy as np
rdm = np.random.RandomState(seed=1)  # a fixed seed generates the same random numbers on every run
a = rdm.rand()      # returns a random scalar
b = rdm.rand(2, 3)  # returns a 2-row, 3-column matrix of random numbers
print("a:",a)
print("b:",b)
Output:
a: 0.417022004702574
b: [[7.20324493e-01 1.14374817e-04 3.02332573e-01]
[1.46755891e-01 9.23385948e-02 1.86260211e-01]]
11. np.vstack()
Stacks two arrays vertically:
np.vstack((array1, array2))
import numpy as np
a = np.array([1,2,3])
b = np.array([4,5,6])
c = np.vstack((a,b))
print("c:\n",c)
Output:
c:
[[1 2 3]
[4 5 6]]
12. np.mgrid[]
np.mgrid[start:stop:step, start:stop:step, ...]
x.ravel(): flattens x into a one-dimensional array
np.c_[array1, array2, ...]: pairs up corresponding values of the returned grids
import numpy as np
x, y = np.mgrid[1:3:1, 2:4:0.5]
grid = np.c_[x.ravel(), y.ravel()]
print("x:",x)
print("y:",y)
print('grid:\n', grid)
Output:
x: [[1. 1. 1. 1.]
 [2. 2. 2. 2.]]
y: [[2.  2.5 3.  3.5]
 [2.  2.5 3.  3.5]]
grid:
[[1. 2. ]
[1. 2.5]
[1. 3. ]
[1. 3.5]
[2. 2. ]
[2. 2.5]
[2. 3. ]
[2. 3.5]]