Tensorflow knowledge points
2022-07-28 10:40:00 【Upper back left love】
==I feel my TensorFlow knowledge is quite weak, so I spend some time every day on TensorFlow basics, learning what each function does and how the data is transformed inside it.==
Reference, TensorFlow 2.0 tutorials: https://github.com/czy36mengfei/tensorflow2_tutorials_chinese
At this stage the project still uses TensorFlow 1.5, so for now I am learning the basics of TensorFlow 1.5.
TensorFlow 1.x uses static graphs, which separate definition from execution: the program first builds the graph structure, and the data is then computed following the order of the graph. TensorFlow 2.0 defaults to dynamic graphs (eager execution), so unlike 1.x you can inspect results while debugging.
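A minimal sketch of this build-then-run workflow (assuming TensorFlow 1.x imported as tf):

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Definition phase: build the graph; nothing is computed yet.
a = tf.constant(2.0, name="a")
b = tf.constant(3.0, name="b")
c = tf.add(a, b, name="c")

# Execution phase: a session runs the graph and produces values.
with tf.Session() as sess:
    print(sess.run(c))  # 5.0
```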
## Randomly generating mini-batches of size batch_size
import numpy as np

def batch_generator(X, y, batch_size):
    """Yield random mini-batches of (X, y), reshuffling after each full pass."""
    size = X.shape[0]
    X_copy = X.copy()
    y_copy = y.copy()
    indices = np.arange(size)      # array([0, 1, 2, ..., size-1])
    np.random.shuffle(indices)     # shuffle the row order once up front
    X_copy = X_copy[indices]
    y_copy = y_copy[indices]
    i = 0
    while True:
        if i + batch_size <= size:
            yield X_copy[i:i+batch_size], y_copy[i:i+batch_size]
            i += batch_size
        else:
            # one pass over the data is finished: reshuffle and start over
            i = 0
            indices = np.arange(size)
            np.random.shuffle(indices)
            X_copy = X_copy[indices]
            y_copy = y_copy[indices]
            continue
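A hypothetical usage sketch of the generator above, with made-up toy data (the shapes and values are illustrative):

```python
import numpy as np

X = np.arange(20).reshape(10, 2)     # 10 samples, 2 features (toy data)
y = np.arange(10)                    # 10 labels

gen = batch_generator(X, y, batch_size=4)
X_batch, y_batch = next(gen)         # draw one mini-batch
print(X_batch.shape, y_batch.shape)  # (4, 2) (4,)
```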
## tf.placeholder(dtype, shape=None, name=None)
- dtype: data type, commonly tf.float32
- shape: None means the shape is unconstrained; [None, 3] means 3 columns with an undetermined number of rows
- feed_dict: supplies the actual input data when the graph is run

Example: compute 3 × 4 = 12
import tensorflow as tf

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)

with tf.Session() as sess:
    print(sess.run(output, feed_dict={input1: [3.], input2: [4.]}))  # [12.]
Summary: feed_dict assigns values to the tensors created with placeholder. sess.run() does not evaluate the entire graph; it computes only the fetches in its first argument, so sess.run([loss, output], ...) evaluates just those two tensors and the parts of the graph they depend on.
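A minimal sketch of fetching several tensors at once (the graph is my own toy example; `loss` and `output` are illustrative names):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
w = tf.Variable(tf.random_normal([3, 1]))
output = tf.matmul(x, w)
loss = tf.reduce_mean(tf.square(output))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Only `loss`, `output`, and the ops they depend on are evaluated.
    loss_val, out_val = sess.run([loss, output], feed_dict={x: [[1., 2., 3.]]})
    print(loss_val, out_val)
```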
tf.name_scope and tf.variable_scope each open up their own namespace in the model, and variables are managed within their respective scopes.
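A sketch of the difference (my own example): tf.variable_scope prefixes variables created with tf.get_variable, while tf.name_scope prefixes ops and tf.Variable but not tf.get_variable:

```python
import tensorflow as tf

with tf.name_scope("ns"):
    v1 = tf.Variable(1.0, name="v1")       # name-scoped: "ns/v1:0"
    v2 = tf.get_variable("v2", shape=[1])  # NOT name-scoped: "v2:0"

with tf.variable_scope("vs"):
    v3 = tf.get_variable("v3", shape=[1])  # variable-scoped: "vs/v3:0"

print(v1.name, v2.name, v3.name)
```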
- tf.Variable:
tf.Variable(initializer, name=...), where the initializer argument can be tf.random_normal, tf.constant, and so on.
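For example (a minimal sketch; the shapes and names are my own):

```python
import tensorflow as tf

w = tf.Variable(tf.random_normal([2, 3], stddev=0.1), name="w")
b = tf.Variable(tf.constant(0.0, shape=[3]), name="b")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables must be initialized before use
    print(sess.run(w))
```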
## tf.random_uniform((4,4), minval=low, maxval=high, dtype=tf.float32): returns a 4×4 matrix uniformly distributed between low and high
with tf.Session() as sess:
    print(sess.run(tf.random_uniform(
        (4, 4), minval=-0.5,
        maxval=0.5, dtype=tf.float32)))
- tf.nn.embedding_lookup:
Selects the elements of a tensor corresponding to the given indices.
tf.nn.embedding_lookup(tensor, ids): tensor is the input tensor, ids are the indices into it.
For example, ids = [1, 3] selects the rows at index positions 1 and 3:
c = np.random.random([10, 1])
b = tf.nn.embedding_lookup(c, [1, 3])

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())  # deprecated alias of tf.global_variables_initializer()
    print(sess.run(b))
    print(c)
Result:
b: [[0.60539241]
    [0.48570814]]
c: [[0.12925993]
    [0.60539241]
    [0.85519417]
    [0.48570814]
    [0.42811298]
    [0.70807729]
    [0.64743353]
    [0.35472522]
    [0.30595551]
    [0.67203577]]
- tf.summary.histogram():
Records the distribution of a tensor during training; visualizing the tensor's histogram at different time points (e.g., in TensorBoard) shows how the distribution changes over time.
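A minimal sketch of writing a histogram summary for TensorBoard (the log directory and names are made up):

```python
import tensorflow as tf

w = tf.Variable(tf.random_normal([100]), name="w")
tf.summary.histogram("w_hist", w)   # record the distribution of w
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("/tmp/logs", sess.graph)  # hypothetical log dir
    summary = sess.run(merged)
    writer.add_summary(summary, global_step=0)
    writer.close()
```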
- TensorFlow offers several ways to compute cross entropy, each returning a per-sample loss:
tf.nn.sigmoid_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, name=None)
The most important parameters are logits and labels. Internally, the logits are first passed through the sigmoid function and the cross entropy is then computed, using a numerically optimized formulation. The output is the loss of each sample in a batch, usually averaged afterwards with tf.reduce_mean(loss). Similar functions include (see the sketch after this list):
- tf.nn.sparse_softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, name=None)
- tf.nn.weighted_cross_entropy_with_logits(labels,logits, pos_weight, name=None)
- tf.nn.softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, dim=-1, name=None)
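A sketch of the sigmoid variant (toy values of my own), showing that the output is a per-element loss that is then averaged with tf.reduce_mean:

```python
import tensorflow as tf

logits = tf.constant([[1.0, -1.0], [0.5, 2.0]])
labels = tf.constant([[1.0,  0.0], [0.0, 1.0]])

# labels and logits have the same shape; the result is a per-element loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
mean_loss = tf.reduce_mean(loss)

with tf.Session() as sess:
    print(sess.run(loss))       # shape (2, 2), same as logits
    print(sess.run(mean_loss))  # scalar batch loss
```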
- tf.cast and tf.reduce_mean:
tf.cast(A, tf.float32): changes the data type of a tensor.
tf.reduce_mean(): computes the mean of a tensor along a given dimension, mainly used to reduce a dimension away, e.g., replacing a loss matrix with its average loss value along that dimension, as shown below.
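A sketch combining the two, e.g. turning boolean comparisons into an accuracy (toy values of my own):

```python
import tensorflow as tf

correct = tf.constant([True, False, True, True])
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))  # bool -> float32, then mean

losses = tf.constant([[1.0, 2.0], [3.0, 4.0]])
mean_per_sample = tf.reduce_mean(losses, axis=1)  # average along dimension 1

with tf.Session() as sess:
    print(sess.run(accuracy))         # 0.75
    print(sess.run(mean_per_sample))  # [1.5 3.5]
```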