7. Regularization application
2022-07-08 01:01:00 【booze-J】
I. Application of regularization
In "6. Dropout application", start from the version of the code that does not use Dropout and add regularization to the model-construction step. That is, change the model definition from:
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # The first hidden layer: 200 outputs, 784 inputs, biases initialized to 1, tanh activation
    Dense(units=200, input_dim=784, bias_initializer='one', activation="tanh"),
    # The second hidden layer has 100 neurons
    Dense(units=100, bias_initializer='one', activation="tanh"),
    # The output layer: 10 neurons with softmax activation
    Dense(units=10, bias_initializer='one', activation="softmax")
])
to:
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # The first hidden layer: 200 outputs, 784 inputs, biases initialized to 1, tanh activation, l2 kernel regularizer
    Dense(units=200, input_dim=784, bias_initializer='one', activation="tanh", kernel_regularizer=l2(0.0003)),
    # The second hidden layer has 100 neurons
    Dense(units=100, bias_initializer='one', activation="tanh", kernel_regularizer=l2(0.0003)),
    # The output layer: 10 neurons with softmax activation
    Dense(units=10, bias_initializer='one', activation="softmax", kernel_regularizer=l2(0.0003))
])
Before using l2 regularization, you need to import it: from keras.regularizers import l2.
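As a quick illustration (not part of the original article), l2(0.0003) adds 0.0003 times the sum of squared kernel weights to the training loss. A minimal sketch, assuming a TensorFlow backend where the regularizer can be called eagerly:

import numpy as np
from keras.regularizers import l2

reg = l2(0.0003)
w = np.ones((4, 3), dtype="float32")  # hypothetical weight matrix with 12 entries
penalty = reg(w)                      # 0.0003 * sum(w**2) = 0.0003 * 12
print(float(penalty))                 # -> 0.0036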
Running results:

From the running results you can see that the overfitting has clearly been reduced. Note that if the model is not very complex relative to the dataset, adding regularization may not improve results much.
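If you want to see how sensitive this is to the coefficient, one option (my sketch, not from the article; it reuses the data loading from the complete listing below) is to rebuild the model with a few l2 values and compare the train/test accuracy gap:

from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2
from tensorflow.keras.optimizers import SGD

def build_model(coef):
    # Same architecture as in the article, with a configurable l2 coefficient
    model = Sequential([
        Dense(units=200, input_dim=784, bias_initializer='one',
              activation="tanh", kernel_regularizer=l2(coef)),
        Dense(units=100, bias_initializer='one',
              activation="tanh", kernel_regularizer=l2(coef)),
        Dense(units=10, bias_initializer='one',
              activation="softmax", kernel_regularizer=l2(coef)),
    ])
    model.compile(optimizer=SGD(lr=0.2),
                  loss="categorical_crossentropy",
                  metrics=['accuracy'])
    return model

# Assumes x_train/y_train/x_test/y_test from the preprocessing step below:
# for coef in [0.0, 0.0001, 0.0003, 0.001]:
#     m = build_model(coef)
#     m.fit(x_train, y_train, batch_size=32, epochs=10, verbose=0)
#     _, train_acc = m.evaluate(x_train, y_train, verbose=0)
#     _, test_acc = m.evaluate(x_test, y_test, verbose=0)
#     print(coef, round(train_acc - test_acc, 4))  # smaller gap -> less overfitting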
Complete code
The code runs in jupyter-notebook. The code blocks in this article are split the way they are divided into jupyter-notebook cells; to run the code, paste the blocks into jupyter-notebook in order.
1. Import third-party libraries
import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from keras.regularizers import l2
2. Load and preprocess the data
# Load the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# (60000, 28, 28)
print("x_shape:\n", x_train.shape)
# (60000,) -- not one-hot encoded yet; converted below
print("y_shape:\n", y_train.shape)
# (60000, 28, 28) -> (60000, 784); passing -1 to reshape() lets that dimension be computed automatically; dividing by 255.0 normalizes the pixel values
x_train = x_train.reshape(x_train.shape[0], -1) / 255.0
x_test = x_test.reshape(x_test.shape[0], -1) / 255.0
# Convert the labels to one-hot format
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)
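Two small sanity checks (illustrative, not in the original article) for the tricks used above: -1 in reshape() infers the flattened size from the remaining dimensions, and to_categorical turns integer labels into one-hot rows:

import numpy as np
from keras.utils import np_utils

demo = np.arange(12).reshape(2, 2, 3)          # pretend two 2x3 "images"
print(demo.reshape(demo.shape[0], -1).shape)   # (2, 6): the -1 is inferred as 2*3
print(np_utils.to_categorical([0, 3], num_classes=10))
# [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]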
3. Build and train the model
# Create the model: 784 input neurons, 10 output neurons
model = Sequential([
    # The first hidden layer: 200 outputs, 784 inputs, biases initialized to 1, tanh activation, l2 kernel regularizer
    Dense(units=200, input_dim=784, bias_initializer='one', activation="tanh", kernel_regularizer=l2(0.0003)),
    # The second hidden layer has 100 neurons
    Dense(units=100, bias_initializer='one', activation="tanh", kernel_regularizer=l2(0.0003)),
    # The output layer: 10 neurons with softmax activation
    Dense(units=10, bias_initializer='one', activation="softmax", kernel_regularizer=l2(0.0003))
])
# Define the optimizer
sgd = SGD(lr=0.2)
# Set the optimizer and loss function, and report accuracy during training
model.compile(
    optimizer=sgd,
    loss="categorical_crossentropy",
    metrics=['accuracy']
)
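For reference, categorical_crossentropy with one-hot labels reduces to -sum(y_true * log(y_pred)) per sample; a tiny hand computation (my example, not from the article):

import numpy as np

y_true = np.array([[0., 1., 0.]])       # one-hot label for class 1
y_pred = np.array([[0.1, 0.8, 0.1]])    # hypothetical softmax output
print(-np.sum(y_true * np.log(y_pred)) / len(y_true))  # -log(0.8) ≈ 0.2231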
# Train the model
model.fit(x_train, y_train, batch_size=32, epochs=10)
# Evaluate the model
# Loss and accuracy on the test set
loss, accuracy = model.evaluate(x_test, y_test)
print("\ntest loss", loss)
print("test_accuracy:", accuracy)
# Loss and accuracy on the training set
loss, accuracy = model.evaluate(x_train, y_train)
print("\ntrain loss", loss)
print("train_accuracy:", accuracy)