
2. Nonlinear regression

2022-07-08 01:01:00 booze-J


The code runs in jupyter-notebook. The code blocks in this article are written in the order of the notebook's cell divisions, so to run the code, just paste each block directly into a jupyter-notebook cell.

1. Import third-party libraries

import numpy as np
import matplotlib.pyplot as plt
# Sequential: the sequential model
from tensorflow.keras.models import Sequential
# Dense: the fully connected layer
from tensorflow.keras.layers import Dense,Activation
from tensorflow.keras.optimizers import SGD
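These imports use tensorflow.keras throughout. If you are unsure which version is installed, a quick check (a minimal sketch, assuming TensorFlow 2.x, which bundles tensorflow.keras):

import tensorflow as tf
print(tf.__version__)  # the code below assumes a 2.x release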

2. Randomly generate a dataset

# Use numpy to generate 200 random points
# evenly spaced in the range -0.5 to 0.5
x_data = np.linspace(-0.5,0.5,200)
# Gaussian noise with mean 0 and standard deviation 0.02
noise = np.random.normal(0,0.02,x_data.shape)
# y = x^2 + noise
y_data = np.square(x_data) + noise

# Show the random points
plt.scatter(x_data,y_data)
plt.show()

Running result: [figure: scatter plot of the 200 noisy points]

3. Nonlinear regression

# Build a sequential model
model = Sequential()
# Press shift+tab in jupyter-notebook to display a function's parameters

# 1-10-1 network: 1 input neuron, 10 hidden neurons, 1 output neuron
model.add(Dense(units=10,input_dim=1))

# Add an activation function. Dense layers use a linear activation by default,
# but this is nonlinear regression, so the activation function must be changed.
# Mode one: pass the activation parameter directly
# model.add(Dense(units=10,input_dim=1,activation="relu"))

# Mode two: add a separate Activation layer
model.add(Activation("tanh"))

model.add(Dense(units=1))
# model.add(Dense(units=1,activation="relu"))
model.add(Activation("tanh"))

# Define the optimization algorithm; raising the learning rate reduces the number of iterations needed
sgd = SGD(learning_rate=0.3)  # lr= is the older, deprecated spelling

# sgd: Stochastic Gradient Descent. The default SGD learning rate is small,
#      so more iterations, and therefore more time, are needed.
# mse: Mean Squared Error, mean((y_true - y_pred)^2)
model.compile(optimizer=sgd,loss='mse')


# Train for 3001 batches (an equivalent model.fit call is sketched after the results below)
for step in range(3001):
    # Train on the whole dataset as one batch each time
    cost = model.train_on_batch(x_data,y_data)
    # Print the cost every 500 batches
    if step%500==0:
        print("cost:",cost)

# Print the weights and biases of the first layer
# W has shape (1, 10) and b has shape (10,) for the 1->10 layer
W,b = model.layers[0].get_weights()
print("W:",W)
print("b:",b)

# Feed x_data into the network to get the predicted values
y_pred = model.predict(x_data)

# Show the random points
plt.scatter(x_data,y_data)
# Show the prediction as a red curve
plt.plot(x_data,y_pred,"r-",lw=3)
plt.show()

Running result: [figure: the scatter points with the fitted curve drawn as a red line]
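The manual train_on_batch loop above can also be written with the standard Keras fit API; a minimal sketch (batch_size=200 covers the whole 200-point dataset, so each epoch is one gradient step, matching the loop above):

# Equivalent training using model.fit: one full-batch gradient step per epoch
model.fit(x_data, y_data, batch_size=200, epochs=3001, verbose=0)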
Notes

  • 1. For nonlinear regression, remember to change the activation function, because Dense layers use a linear activation by default (see the sketch after these notes).
  • 2. The default SGD learning rate is small, so many iterations, and therefore more time, are needed; passing a custom learning rate to SGD speeds this up.
  • 3. Overall the code is very similar to the linear regression code; only the dataset, the network model, the activation functions, and the optimizer are changed.
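For reference, here is the same network written with Mode one (the inline activation parameter); a minimal sketch using the same tanh activations and learning rate as the code above:

# Compact version of the 1-10-1 model with inline activations
model = Sequential()
model.add(Dense(units=10, input_dim=1, activation="tanh"))
model.add(Dense(units=1, activation="tanh"))
model.compile(optimizer=SGD(learning_rate=0.3), loss="mse")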

Copyright notice
This article was created by [booze-J]. When reposting, please include a link to the original. Thanks.
https://yzsam.com/2022/189/202207072310362033.html