Keras crash Guide
2022-07-05 16:44:00 【Small margin, rush】
Reference: the Morvan Python (莫烦Python) tutorials on Bilibili (B站)
Introduction
Keras is a deep learning framework written in pure Python that runs on top of Theano/TensorFlow; it is a high-level neural network API.
Installation
- Before installing Keras, make sure that Numpy and Scipy are already installed.
- Keras runs on top of TensorFlow or Theano, so install TensorFlow or Theano yourself first.
- Install Keras:
pip3 install keras
Backends
Keras can run on either of two backends: one is Theano, the other is TensorFlow.
You can check which backend is in use simply via import keras.
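For example, a quick check (a minimal sketch; keras.backend.backend() returns the name of the backend currently in use):

import keras                      # importing keras also prints e.g. "Using TensorFlow backend."
print(keras.backend.backend())    # 'tensorflow' or 'theano'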
To modify the backend (the default is also recorded in the ~/.keras/keras.json configuration file), add an environment variable assignment before import keras in your Python code:
import os
os.environ['KERAS_BACKEND']='theano'
Regression
Neural networks can also be used for regression problems. models.Sequential is used to build the network layer by layer; layers.Dense indicates that a layer is a fully connected layer.
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt # Visualization module
# create some data
X = np.linspace(-1, 1, 200)
np.random.shuffle(X) # randomize the data
Y = 0.5 * X + 2 + np.random.normal(0, 0.05, (200, ))
# plot data
plt.scatter(X, Y)
plt.show()
X_train, Y_train = X[:160], Y[:160]     # train on the first 160 data points
X_test, Y_test = X[160:], Y[160:]       # test on the last 40 data points
Then use Sequential to create the model, and use model.add to add layers; the layer added here is a Dense fully connected layer.
During training, model.train_on_batch trains on X_train and Y_train one batch at a time, and its return value is the cost (loss). To test the model, use model.evaluate.
model = Sequential()
model.add(Dense(units=1, input_dim=1))
# choose loss function and optimizing method
model.compile(loss='mse', optimizer='sgd')
# training
print('Training -----------')
for step in range(301):
    cost = model.train_on_batch(X_train, Y_train)
    if step % 100 == 0:
        print('train cost: ', cost)
# test
print('\nTesting ------------')
cost = model.evaluate(X_test, Y_test, batch_size=40)
print('test cost:', cost)
W, b = model.layers[0].get_weights()
print('Weights=', W, '\nbiases=', b)
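To see how well the fitted line matches the data, you can plot the model's predictions on the test set (a short sketch reusing the model, X_test and Y_test defined above):

# plot the prediction against the test data
Y_pred = model.predict(X_test)
plt.scatter(X_test, Y_test)
plt.plot(X_test, Y_pred, 'r-')
plt.show()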
Classification
Data preprocessing and the relevant modules: models.Sequential builds the network layer by layer; layers.Dense marks a fully connected layer; layers.Activation applies an activation function; optimizers.RMSprop is the optimizer, RMSprop, a method that accelerates neural network training.
The first step adds a Dense layer: 32 is the output dimension and 784 is the input dimension. The 32 features produced by the first layer are passed to the activation unit, which uses the relu function; after the activation the data becomes nonlinear. This data is then passed to the next Dense layer, which we define with 10 output features, and finally fed into a softmax function for classification.
RMSprop is used as the optimizer (its parameters include the learning rate, etc.), and model.compile activates (compiles) the network.
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import RMSprop

# Another way to build your neural net
model = Sequential([
    Dense(32, input_dim=784),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])

# Another way to define your optimizer
rmsprop = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)

# We add metrics to get more results you want to see
model.compile(optimizer=rmsprop,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
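The data-preprocessing and training steps are not shown above; a minimal sketch using the MNIST data shipped with Keras (the 784-dimensional input corresponds to flattened 28x28 images, and the variable names are illustrative) could look like this:

from keras.datasets import mnist
from keras.utils import np_utils

# load MNIST, flatten to 784 features, scale to [0, 1], one-hot the 10 class labels
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], -1) / 255.
X_test = X_test.reshape(X_test.shape[0], -1) / 255.
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)

# train and evaluate
model.fit(X_train, y_train, epochs=2, batch_size=32)
loss, accuracy = model.evaluate(X_test, y_test)
print('test loss:', loss, 'test accuracy:', accuracy)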
CNN
The first step is data preprocessing and model setup. Then add the first convolutional layer: the number of filters is 32, the kernel size is 5x5, and the padding method is 'same', which keeps the height and width of the data unchanged.
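The preprocessing and model creation are not shown in the original; a minimal sketch, assuming the same MNIST data reshaped to the channels_first layout expected by the layers below (variable names are illustrative):

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Activation, Flatten, Dense
from keras.datasets import mnist
from keras.utils import np_utils

# reshape MNIST to (samples, channels, height, width) for data_format='channels_first'
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 1, 28, 28) / 255.
X_test = X_test.reshape(-1, 1, 28, 28) / 255.
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)

model = Sequential()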
model.add(Convolution2D(
batch_input_shape=(64, 1, 28, 28),
filters=32,
kernel_size=5,
strides=1,
padding='same', # Padding method
data_format='channels_first',
))
model.add(Activation('relu'))
The first pooling layer (pooling, i.e. downsampling) halves the height and width, so the output data shape becomes (32, 14, 14).
model.add(MaxPooling2D(
pool_size=2,
strides=2,
padding='same', # Padding method
data_format='channels_first',
))
Add the second convolutional layer and pooling layer:
model.add(Convolution2D(64, 5, strides=1, padding='same', data_format='channels_first'))
model.add(Activation('relu'))
model.add(MaxPooling2D(2, 2, 'same', data_format='channels_first'))
After the processing above, the data shape is (64, 7, 7). The data then needs to be flattened into one dimension before adding the fully connected layer and the output layer. Finally, set the adam optimizer, the loss function, and the metrics to observe in the output.
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
adam = Adam()   # requires: from keras.optimizers import Adam; default parameters, adjust the learning rate as needed
model.compile(optimizer=adam,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
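Training then follows the usual pattern; a minimal sketch, reusing the preprocessed MNIST arrays from the sketch above (hypothetical names X_train, y_train, X_test, y_test):

# train for one epoch with the batch size fixed by batch_input_shape above;
# if the fixed batch dimension causes a shape error on the last partial batch,
# use batch_input_shape=(None, 1, 28, 28) in the first layer instead
model.fit(X_train, y_train, epochs=1, batch_size=64)

loss, accuracy = model.evaluate(X_test, y_test, batch_size=64)
print('test loss:', loss)
print('test accuracy:', accuracy)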
Autoencoder
A neural network has to take in a huge amount of input information; for example, when the input is a high-definition picture, the input can reach tens of millions of values, and making the network learn directly from tens of millions of sources is hard work.
An autoencoder compresses the original data X and then decompresses it back into a reconstruction; by comparing the reconstruction with the original X and computing the prediction error, backpropagation gradually improves the autoencoder's accuracy.
from keras.layers import Input, Dense
from keras.models import Model

# in order to plot in a 2D figure
encoding_dim = 2    # encoding_dim: the dimension the data will be compressed to

# this is our input placeholder
input_img = Input(shape=(784,))
Build the encoded layers and the decoded layers, then use autoencoder to chain the two together; autoencoder is the model that will actually be trained.
# encoder layers
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(10, activation='relu')(encoded)
encoder_output = Dense(encoding_dim)(encoded)
# decoder layers
decoded = Dense(10, activation='relu')(encoder_output)
decoded = Dense(64, activation='relu')(decoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='tanh')(decoded)
# construct the autoencoder model
autoencoder = Model(inputs=input_img, outputs=decoded)
The next step is to compile the autoencoder model. The optimizer is adam and the loss function is mse.
# compile autoencoder
autoencoder.compile(optimizer='adam', loss='mse')
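Training then uses the images themselves as both input and target; below is a minimal sketch, assuming MNIST scaled to [-0.5, 0.5] to match the tanh output layer (variable names are illustrative):

from keras.datasets import mnist
import matplotlib.pyplot as plt

# the labels are only kept for colouring the scatter plot; the autoencoder itself is unsupervised
(x_train, _), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255. - 0.5
x_test = x_test.astype('float32') / 255. - 0.5
x_train = x_train.reshape((x_train.shape[0], -1))
x_test = x_test.reshape((x_test.shape[0], -1))

# train: the input is also the reconstruction target
autoencoder.fit(x_train, x_train, epochs=20, batch_size=256, shuffle=True)

# a separate encoder model exposes the 2-D codes, which can then be plotted
encoder = Model(inputs=input_img, outputs=encoder_output)
encoded_imgs = encoder.predict(x_test)
plt.scatter(encoded_imgs[:, 0], encoded_imgs[:, 1], c=y_test)
plt.show()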
Saving and loading
After training a model, you can print its predictions and then save the model.
Saving takes only one line, model.save; give it a file name and the model is stored in HDF5 (h5) format.
# save
print('test before save: ', model.predict(X_test[0:2]))
model.save('my_model.h5') # HDF5 file; run pip3 install h5py if you don't have it
del model # deletes the existing model
"""
test before save: [[ 1.87243938] [ 2.20500779]]
"""
Loading the model
# load
from keras.models import load_model

model = load_model('my_model.h5')
print('test after load: ', model.predict(X_test[0:2]))
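If the whole model is not needed, Keras can also save just the weights or just the architecture; a minimal sketch (the file name is illustrative):

# save / load only the weights; the architecture has to be rebuilt separately
model.save_weights('my_model_weights.h5')
model.load_weights('my_model_weights.h5')

# save / load only the architecture as a JSON string
from keras.models import model_from_json
json_string = model.to_json()
model = model_from_json(json_string)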