
6. Dropout application

2022-07-08 01:01:00 booze-J

1. Without Dropout

Starting from the model-building code in 4. Cross entropy, we add some hidden layers to the network and additionally print the loss and accuracy on the training set.
The model in 4. Cross entropy was:

# Build the model: 784 input neurons, 10 output neurons
model = Sequential([
        # Output is 10, input is 784; biases initialized to 1, softmax activation
        Dense(units=10,input_dim=784,bias_initializer='one',activation="softmax"),
])

After adding hidden layers, it becomes:

# Build the model: 784 input neurons, 10 output neurons
model = Sequential([
        # First hidden layer: 200 neurons, input is 784; biases initialized to 1, tanh activation
        Dense(units=200,input_dim=784,bias_initializer='one',activation="tanh"),
        # Second hidden layer: 100 neurons
        Dense(units=100,bias_initializer='one',activation="tanh"),
        # Output layer: 10 neurons, softmax activation
        Dense(units=10,bias_initializer='one',activation="softmax")
])

Run results:
[Screenshot of the training log omitted]
Comparing with the results of 4. Cross entropy, you can see that adding the hidden layers improves the test accuracy considerably, but the model also overfits slightly: the training accuracy pulls a bit ahead of the test accuracy.
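As an optional sanity check (not in the original post), model.summary() prints the layer shapes and parameter counts of this deeper network; each Dense layer has inputs × units weights plus units biases:

# Optional check (not in the original): inspect the architecture.
model.summary()
# Expected parameter counts:
#   Dense(200): 784*200 + 200 = 157,000
#   Dense(100): 200*100 + 100 =  20,100
#   Dense(10):  100*10  + 10  =   1,010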

2. With Dropout

Add Dropout:

# Build the model: 784 input neurons, 10 output neurons
model = Sequential([
        # First hidden layer: 200 neurons, input is 784; biases initialized to 1, tanh activation
        Dense(units=200,input_dim=784,bias_initializer='one',activation="tanh"),
        # Randomly drop 40% of the activations during training
        Dropout(0.4),
        # Second hidden layer: 100 neurons
        Dense(units=100,bias_initializer='one',activation="tanh"),
        # Randomly drop 40% of the activations during training
        Dropout(0.4),
        # Output layer: 10 neurons, softmax activation
        Dense(units=10,bias_initializer='one',activation="softmax")
])

To use Dropout, you need to import it first: from tensorflow.keras.layers import Dropout.
Run results:
[Screenshot of the training log omitted]
This example does not show that dropout always gives better results, but in some cases it does.
With dropout, though, the test accuracy and the training accuracy end up close to each other, so the overfitting is far less pronounced. Notice also that the accuracy reported during training is lower than the accuracy the finished model gets when evaluated on the training set. The reason is that with dropout each training step only uses a subset of the neurons; at evaluation time, after training, all neurons are used, so the measured accuracy is higher.
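A minimal sketch of this train-versus-test behaviour, assuming TensorFlow 2.x (Keras uses "inverted dropout": activations that survive are scaled by 1/(1-rate) during training so the expected output is unchanged):

import numpy as np
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.4)
x = np.ones((1, 10), dtype="float32")

# Training mode: about 40% of the activations are zeroed,
# the survivors are scaled by 1/(1-0.4) ≈ 1.667.
print(drop(x, training=True).numpy())

# Inference mode (what model.evaluate uses): identity map, every neuron participates.
print(drop(x, training=False).numpy())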

Complete code

1. Complete code without Dropout

The code was written to run in jupyter-notebook. The code blocks below are split according to the notebook's cells; to run the article's code, just paste the blocks into jupyter-notebook in order.
1. Import third-party libraries

import numpy as np
# Use the tensorflow.keras namespace consistently; mixing standalone keras
# imports with tensorflow.keras ones can break on newer versions
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Dropout
from tensorflow.keras.optimizers import SGD

2. Load and preprocess the data

# Load the data
(x_train,y_train),(x_test,y_test) = mnist.load_data()
# (60000, 28, 28)
print("x_shape:\n",x_train.shape)
# (60000,) -- the labels are not one-hot encoded yet; that happens below
print("y_shape:\n",y_train.shape)
# (60000, 28, 28) -> (60000, 784); the -1 lets reshape() infer that axis, and dividing by 255.0 normalizes pixels to [0, 1]
x_train = x_train.reshape(x_train.shape[0],-1)/255.0
x_test = x_test.reshape(x_test.shape[0],-1)/255.0
# Convert the labels to one-hot format
y_train = to_categorical(y_train,num_classes=10)
y_test = to_categorical(y_test,num_classes=10)
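For intuition, a tiny optional check (not one of the original notebook cells) of what the one-hot conversion produces:

# to_categorical maps class indices to one-hot rows, e.g.:
print(to_categorical([0, 2], num_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]]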

3. Build, train, and evaluate the model

# Build the model: 784 input neurons, 10 output neurons
model = Sequential([
        # First hidden layer: 200 neurons, input is 784; biases initialized to 1, tanh activation
        Dense(units=200,input_dim=784,bias_initializer='one',activation="tanh"),
        # Second hidden layer: 100 neurons
        Dense(units=100,bias_initializer='one',activation="tanh"),
        # Output layer: 10 neurons, softmax activation
        Dense(units=10,bias_initializer='one',activation="softmax")
])
# Define the optimizer (the older lr= argument is deprecated in newer Keras)
sgd = SGD(learning_rate=0.2)

# Set the optimizer and loss function, and track accuracy during training
model.compile(
    optimizer=sgd,
    loss="categorical_crossentropy",
    metrics=['accuracy']
)
# Train the model
model.fit(x_train,y_train,batch_size=32,epochs=10)

# Evaluate the model
# Loss and accuracy on the test set
loss,accuracy = model.evaluate(x_test,y_test)
print("\ntest loss",loss)
print("test_accuracy:",accuracy)

# Loss and accuracy on the training set
loss,accuracy = model.evaluate(x_train,y_train)
print("\ntrain loss",loss)
print("train_accuracy:",accuracy)

2. Complete code with Dropout

The code was written to run in jupyter-notebook. The code blocks below are split according to the notebook's cells; to run the article's code, just paste the blocks into jupyter-notebook in order.
1. Import third-party libraries

import numpy as np
# Use the tensorflow.keras namespace consistently; mixing standalone keras
# imports with tensorflow.keras ones can break on newer versions
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Dropout
from tensorflow.keras.optimizers import SGD

2. Load and preprocess the data

# Load the data
(x_train,y_train),(x_test,y_test) = mnist.load_data()
# (60000, 28, 28)
print("x_shape:\n",x_train.shape)
# (60000,) -- the labels are not one-hot encoded yet; that happens below
print("y_shape:\n",y_train.shape)
# (60000, 28, 28) -> (60000, 784); the -1 lets reshape() infer that axis, and dividing by 255.0 normalizes pixels to [0, 1]
x_train = x_train.reshape(x_train.shape[0],-1)/255.0
x_test = x_test.reshape(x_test.shape[0],-1)/255.0
# Convert the labels to one-hot format
y_train = to_categorical(y_train,num_classes=10)
y_test = to_categorical(y_test,num_classes=10)

3. Build, train, and evaluate the model

# Build the model: 784 input neurons, 10 output neurons
model = Sequential([
        # First hidden layer: 200 neurons, input is 784; biases initialized to 1, tanh activation
        Dense(units=200,input_dim=784,bias_initializer='one',activation="tanh"),
        # Randomly drop 40% of the activations during training
        Dropout(0.4),
        # Second hidden layer: 100 neurons
        Dense(units=100,bias_initializer='one',activation="tanh"),
        # Randomly drop 40% of the activations during training
        Dropout(0.4),
        # Output layer: 10 neurons, softmax activation
        Dense(units=10,bias_initializer='one',activation="softmax")
])
# Define the optimizer (the older lr= argument is deprecated in newer Keras)
sgd = SGD(learning_rate=0.2)

# Set the optimizer and loss function, and track accuracy during training
model.compile(
    optimizer=sgd,
    loss="categorical_crossentropy",
    metrics=['accuracy']
)
# Train the model
model.fit(x_train,y_train,batch_size=32,epochs=10)

# Evaluate the model
# Loss and accuracy on the test set
loss,accuracy = model.evaluate(x_test,y_test)
print("\ntest loss",loss)
print("test_accuracy:",accuracy)

# Loss and accuracy on the training set
loss,accuracy = model.evaluate(x_train,y_train)
print("\ntrain loss",loss)
print("train_accuracy:",accuracy)

Copyright notice
This article was created by [booze-J]. Please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/189/202207072310361769.html