7. Regularization application
2022-07-08 01:01:00 【booze-J】
I. Application of regularization
Starting from the network model construction code in 6. Dropout application (the version that does not use Dropout), we add regularization. Change the model from 6. Dropout application:
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # First hidden layer: 200 neurons, input dimension 784, biases initialized to 1, tanh activation
    Dense(units=200, input_dim=784, bias_initializer='one', activation="tanh"),
    # Second hidden layer: 100 neurons
    Dense(units=100, bias_initializer='one', activation="tanh"),
    # Output layer: 10 neurons, softmax activation
    Dense(units=10, bias_initializer='one', activation="softmax")
])
to:
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # First hidden layer: 200 neurons, input dimension 784, biases initialized to 1, tanh activation,
    # with L2 regularization (coefficient 0.0003) applied to the kernel
    Dense(units=200, input_dim=784, bias_initializer='one', activation="tanh", kernel_regularizer=l2(0.0003)),
    # Second hidden layer: 100 neurons
    Dense(units=100, bias_initializer='one', activation="tanh", kernel_regularizer=l2(0.0003)),
    # Output layer: 10 neurons, softmax activation
    Dense(units=10, bias_initializer='one', activation="softmax", kernel_regularizer=l2(0.0003))
])
Before using L2 regularization, you need to import it: from keras.regularizers import l2.
Running results:
The running results show that the earlier overfitting has been noticeably reduced. Note that this model is not very complex relative to the dataset, so the gain from adding regularization may be limited.
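Intuitively, an L2 penalty adds a term proportional to the squared weight norm to the loss, which shrinks weights toward zero during gradient descent (weight decay) and thus limits model complexity. A minimal NumPy sketch on toy linear-regression data (the data and the penalty coefficient here are illustrative, not from the post):

```python
import numpy as np

np.random.seed(0)
# Toy regression problem: y = X @ w_true + noise
X = np.random.randn(100, 5)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * np.random.randn(100)

def fit(lam, lr=0.1, steps=500):
    """Gradient descent on MSE + lam * ||w||^2."""
    w = np.zeros(5)
    for _ in range(steps):
        # Gradient of the data term plus the gradient of the L2 penalty
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)
w_l2 = fit(lam=0.5)
# The regularized solution has a smaller weight norm
print(np.linalg.norm(w_plain), np.linalg.norm(w_l2))
```

The same shrinkage happens inside Keras when `kernel_regularizer` is set, just applied per layer.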
Complete code
The code runs on jupyter-notebook. The code blocks below follow the order in which they are split in the notebook; to reproduce the results, paste them into jupyter-notebook in order and run.
1. Import third-party library
import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from keras.regularizers import l2
2. Loading data and data preprocessing
# Load data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# (60000, 28, 28)
print("x_shape:\n", x_train.shape)
# (60000,) -- not yet one-hot encoded; handled below
print("y_shape:\n", y_train.shape)
# (60000, 28, 28) -> (60000, 784); passing -1 lets reshape() infer that dimension; divide by 255.0 to normalize
x_train = x_train.reshape(x_train.shape[0], -1) / 255.0
x_test = x_test.reshape(x_test.shape[0], -1) / 255.0
# Convert labels to one-hot format
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)
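np_utils.to_categorical turns integer class labels into one-hot vectors. A minimal NumPy equivalent, shown purely for illustration of what the conversion does:

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Convert integer labels, e.g. [5, 0, 3], to one-hot rows."""
    out = np.zeros((len(labels), num_classes))
    # Set a single 1 per row at the column given by the label
    out[np.arange(len(labels)), labels] = 1
    return out

y = np.array([5, 0, 3])
print(to_one_hot(y, num_classes=10))
```

Each row contains a single 1 at the index of its label, which is the target format expected by categorical_crossentropy.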
3. Train and evaluate the model
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # First hidden layer: 200 neurons, input dimension 784, biases initialized to 1, tanh activation,
    # with L2 regularization (coefficient 0.0003) applied to the kernel
    Dense(units=200, input_dim=784, bias_initializer='one', activation="tanh", kernel_regularizer=l2(0.0003)),
    # Second hidden layer: 100 neurons
    Dense(units=100, bias_initializer='one', activation="tanh", kernel_regularizer=l2(0.0003)),
    # Output layer: 10 neurons, softmax activation
    Dense(units=10, bias_initializer='one', activation="softmax", kernel_regularizer=l2(0.0003))
])
# Define the optimizer
sgd = SGD(lr=0.2)
# Set the optimizer, the loss function, and the metrics computed during training
model.compile(
optimizer=sgd,
loss="categorical_crossentropy",
metrics=['accuracy']
)
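With kernel_regularizer=l2(0.0003), Keras adds 0.0003 * sum(W**2) for each layer's kernel to the cross-entropy loss, so the reported training loss is the sum of both terms. A NumPy sketch of this combined objective (the predictions and kernel shapes here are made-up toy values):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # Mean over samples of -sum(y_true * log(y_pred))
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

def l2_penalty(kernels, lam=0.0003):
    # lam * sum of squared entries, summed over every layer's kernel
    return sum(lam * np.sum(W ** 2) for W in kernels)

y_true = np.array([[0.0, 1.0], [1.0, 0.0]])
y_pred = np.array([[0.1, 0.9], [0.8, 0.2]])
kernels = [np.ones((3, 2)), np.ones((2, 2))]

total = categorical_crossentropy(y_true, y_pred) + l2_penalty(kernels)
print(total)
```

Because the penalty grows with the squared weights, the optimizer is pushed toward smaller weights, which is what counteracts overfitting.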
# Train the model
model.fit(x_train, y_train, batch_size=32, epochs=10)

# Evaluate the model
# Loss and accuracy on the test set
loss, accuracy = model.evaluate(x_test, y_test)
print("\ntest loss", loss)
print("test_accuracy:", accuracy)
# Loss and accuracy on the training set
loss, accuracy = model.evaluate(x_train, y_train)
print("\ntrain loss", loss)
print("train_accuracy:", accuracy)