7. Regularization application
2022-07-08 01:01:00 [booze-J]
I. Application of regularization
Starting from the model-construction code in 6. Dropout application (the version that does not use Dropout), we add L2 regularization. Take the model from 6. Dropout application:
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # The first hidden layer has 200 neurons: input is 784, biases initialized to 1, tanh activation
    Dense(units=200,input_dim=784,bias_initializer='one',activation="tanh"),
    # The second hidden layer has 100 neurons
    Dense(units=100,bias_initializer='one',activation="tanh"),
    # Output layer: 10 neurons with softmax activation
    Dense(units=10,bias_initializer='one',activation="softmax")
])
and amend it as follows:
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # The first hidden layer has 200 neurons: input is 784, biases initialized to 1, tanh activation, L2 weight regularization
    Dense(units=200,input_dim=784,bias_initializer='one',activation="tanh",kernel_regularizer=l2(0.0003)),
    # The second hidden layer has 100 neurons
    Dense(units=100,bias_initializer='one',activation="tanh",kernel_regularizer=l2(0.0003)),
    # Output layer: 10 neurons with softmax activation
    Dense(units=10,bias_initializer='one',activation="softmax",kernel_regularizer=l2(0.0003))
])
Before using L2 regularization, you need to import it first: from keras.regularizers import l2.
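What l2(0.0003) actually adds to the loss is the penalty 0.0003 times the sum of squared weights of that layer. Here is a minimal pure-numpy sketch of that computation (the weight matrix is a made-up toy example, not taken from the trained model):

```python
import numpy as np

def l2_penalty(weights, lam=0.0003):
    # L2 (weight decay) penalty as contributed by kernel_regularizer=l2(lam):
    # lam * sum of squared weights
    return lam * np.sum(np.square(weights))

# A tiny hypothetical 2x2 weight matrix for illustration
w = np.array([[1.0, -2.0],
              [0.5,  0.0]])
print(l2_penalty(w))  # 0.0003 * (1 + 4 + 0.25 + 0) = 0.001575
```

Because the penalty grows with the magnitude of the weights, gradient descent is pushed toward smaller weights, which is what discourages overfitting.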
Running results:
The running results show that the earlier overfitting has been noticeably reduced. Note, however, that when the model is not very complex relative to the dataset, adding regularization may not improve results much.
Complete code
The code runs in jupyter-notebook. The code blocks below appear in the order in which they are divided in the notebook; to run the article's code, simply paste them into jupyter-notebook in that order.
1. Import third-party libraries
import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from keras.regularizers import l2
2. Load and preprocess the data
# Load the data
(x_train,y_train),(x_test,y_test) = mnist.load_data()
# (60000, 28, 28)
print("x_shape:\n",x_train.shape)
# (60000,) -- not yet one-hot encoded; we handle that below
print("y_shape:\n",y_train.shape)
# (60000, 28, 28) -> (60000, 784); passing -1 to reshape() lets the remaining dimension be computed automatically; dividing by 255.0 normalizes the pixel values
x_train = x_train.reshape(x_train.shape[0],-1)/255.0
x_test = x_test.reshape(x_test.shape[0],-1)/255.0
# Convert the labels to one-hot format
y_train = np_utils.to_categorical(y_train,num_classes=10)
y_test = np_utils.to_categorical(y_test,num_classes=10)
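The flatten/normalize and one-hot steps above can be sketched with plain numpy, no MNIST download required (the two 28x28 "images" below are random stand-in data, not real MNIST):

```python
import numpy as np

# Stand-in batch of two 28x28 "images" (hypothetical data)
x = np.random.randint(0, 256, size=(2, 28, 28)).astype("float64")
y = np.array([3, 7])

# Flatten: (2, 28, 28) -> (2, 784); -1 lets numpy infer 784. Then scale to [0, 1].
x_flat = x.reshape(x.shape[0], -1) / 255.0
print(x_flat.shape)  # (2, 784)

# One-hot encoding, equivalent to np_utils.to_categorical(y, num_classes=10):
# row i has a 1 at column y[i] and 0 elsewhere
one_hot = np.eye(10)[y]
print(one_hot[0])  # 1.0 at index 3, 0.0 everywhere else
```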
3. Build and train the model
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # The first hidden layer has 200 neurons: input is 784, biases initialized to 1, tanh activation, L2 weight regularization
    Dense(units=200,input_dim=784,bias_initializer='one',activation="tanh",kernel_regularizer=l2(0.0003)),
    # The second hidden layer has 100 neurons
    Dense(units=100,bias_initializer='one',activation="tanh",kernel_regularizer=l2(0.0003)),
    # Output layer: 10 neurons with softmax activation
    Dense(units=10,bias_initializer='one',activation="softmax",kernel_regularizer=l2(0.0003))
])
# Define the optimizer (newer Keras versions use learning_rate instead of lr)
sgd = SGD(lr=0.2)
# Set the optimizer and loss function, and compute accuracy during training
model.compile(
    optimizer=sgd,
    loss="categorical_crossentropy",
    metrics=['accuracy']
)
# Train the model
model.fit(x_train,y_train,batch_size=32,epochs=10)
# Evaluate the model
# Loss and accuracy on the test set
loss,accuracy = model.evaluate(x_test,y_test)
print("\ntest loss",loss)
print("test_accuracy:",accuracy)
# Loss and accuracy on the training set
loss,accuracy = model.evaluate(x_train,y_train)
print("\ntrain loss",loss)
print("train_accuracy:",accuracy)
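With kernel_regularizer on every Dense layer, the loss reported during training is the categorical cross-entropy plus the L2 penalty of each layer. The following pure-numpy sketch illustrates that sum; the tiny weight matrices and predictions are hypothetical stand-ins, not values from the trained model:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # Mean over samples of -sum(y_true * log(y_pred))
    return float(np.mean(-np.sum(y_true * np.log(y_pred + eps), axis=1)))

lam = 0.0003
# Hypothetical tiny weight matrices standing in for the three Dense layers
weights = [np.ones((4, 3)), np.ones((3, 2)), np.ones((2, 2))]
l2_total = sum(lam * np.sum(np.square(w)) for w in weights)

y_true = np.array([[0.0, 1.0], [1.0, 0.0]])
y_pred = np.array([[0.2, 0.8], [0.9, 0.1]])

data_loss = categorical_crossentropy(y_true, y_pred)
total_loss = data_loss + l2_total
print(round(l2_total, 6))      # 0.0003 * (12 + 6 + 4) = 0.0066
print(total_loss > data_loss)  # True: the penalty strictly increases the loss
```

This is also why the evaluated loss of a regularized model is slightly higher than the cross-entropy alone would suggest, even when accuracy is unchanged.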