Neural network and deep learning-5-perceptron-pytorch
2022-07-08 01:54:00 【Bai Xiaosheng in Ming Dynasty】
Reference:
《Neural Networks and Deep Learning》
Preface:
The perceptron was proposed by Frank Rosenblatt in 1957 and is a widely used linear classifier.
It is an error-driven learning algorithm.
One Perceptron
1.1 Parameter learning
The algorithm tries to find a set of parameters $w$ such that for every sample $(x^{(n)}, y^{(n)})$ in the training set, $y^{(n)} w^\top x^{(n)} > 0$.
1.2 Loss function
The perceptron loss on a sample $(x, y)$ is $\mathcal{L}(w; x, y) = \max(0, -y\, w^\top x)$: it is zero when the sample is classified correctly and grows with the size of the error otherwise. The weights are learned with stochastic gradient descent: each time a sample is misclassified, the update is $w \leftarrow w + y x$ (learning rate taken as 1).
1.3 Algorithm flow
Initialize $w = 0$; repeatedly sweep over the (shuffled) training set, and for every misclassified sample (i.e. $y\, w^\top x \le 0$) apply the update $w \leftarrow w + y x$; stop when a full pass produces no errors or a maximum number of passes is reached.
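As a concrete illustration, here is a minimal NumPy sketch of this loop (the function name, the variable names, and the max_iter cutoff are mine, not from the original post):

import numpy as np

def perceptron_train(X, y, max_iter=100):
    # X: (m, n) array of augmented samples; y: (m,) labels in {-1, +1}
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        errors = 0
        for i in np.random.permutation(len(X)):
            if y[i] * (w @ X[i]) <= 0:  # misclassified or on the boundary
                w = w + y[i] * X[i]     # error-driven update
                errors += 1
        if errors == 0:                 # a clean pass: converged
            break
    return w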
1.4 Convergence of the algorithm
In 1963, Novikoff proved that the algorithm converges on linearly separable data sets.
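A standard form of this convergence bound (my summary of the usual statement, not reproduced from the original post): if every sample satisfies $\|x^{(n)}\| \le R$ and the data are separable with margin $\gamma$ (i.e. there exists a unit-norm $w^*$ with $y^{(n)} {w^*}^\top x^{(n)} \ge \gamma$ for all $n$), then the number of weight updates $K$ satisfies

$$K \le \frac{R^2}{\gamma^2}.$$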
Shortcomings:
Generalization ability is poor.
The hyperplane found depends on the order in which samples are visited; different orderings yield different hyperplanes.
If the data are not linearly separable, the algorithm never converges.
Two Voting perceptron
The weight vector learned by the perceptron depends on the order of the training samples.
To improve the robustness and generalization ability of the perceptron, we can save all $K$ weight vectors produced during learning and give each of them a confidence coefficient $c_k$; the final classification result is decided by a vote among the $K$ perceptrons with different weights. This model is known as the voting perceptron.
Let $w_k$ be the weight vector after the $k$-th update, $\tau_k$ the iteration count (the number of samples trained so far) at the $k$-th update, and $\tau_{k+1}$ the iteration count at the next update. The confidence coefficient of $w_k$ is set to the number of iterations between $\tau_k$ and $\tau_{k+1}$, i.e. $c_k = \tau_{k+1} - \tau_k$. The larger the confidence coefficient $c_k$ is, the more samples $w_k$ classified correctly after the $k$-th update, and the more trustworthy it is. The voting perceptron then predicts

$$\hat{y} = \mathrm{sgn}\Big(\sum_{k=1}^{K} c_k\, \mathrm{sgn}(w_k^\top x)\Big).$$
This is an ensemble learning idea: $K$ classifiers vote to decide one result.
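As a minimal sketch of the vote (assuming `ws` is the list of saved weight vectors $w_k$ and `cs` the list of confidence coefficients $c_k$; both names are mine):

import numpy as np

def vote_predict(ws, cs, x):
    # weighted vote over the K saved perceptrons
    s = sum(c * np.sign(w @ x) for w, c in zip(ws, cs))
    return 1 if s > 0 else -1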
Three Average Perceptron
Let $T$ be the total number of iterations; the average weight vector over the $T$ iterations is

$$\bar{w} = \frac{1}{T}\sum_{t=1}^{T} w_t,$$

and the prediction is $\hat{y} = \mathrm{sgn}(\bar{w}^\top x)$. Averaging is much cheaper than voting, since only one extra weight vector needs to be stored.
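A corresponding sketch of the average perceptron under the same assumptions (the accumulator `w_sum` is my own naming):

import numpy as np

def average_perceptron_train(X, y, max_iter=100):
    # X: (m, n) augmented samples; y: (m,) labels in {-1, +1}
    w = np.zeros(X.shape[1])
    w_sum = np.zeros(X.shape[1])
    t = 0
    for _ in range(max_iter):
        for i in np.random.permutation(len(X)):
            if y[i] * (w @ X[i]) <= 0:
                w = w + y[i] * X[i]
            w_sum += w        # accumulate w_t at every iteration
            t += 1
    return w_sum / t          # \bar{w} = (1/T) * sum_t w_t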
Four Generating a linearly separable data set
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 6 12:07:09 2022
@author: chengxf2
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
import csv

def saveCsv(trainData):
    # newline='' keeps csv.writer from inserting blank lines on Windows
    with open('trainData.csv', 'w', newline='') as f:
        wr = csv.writer(f)
        wr.writerows(trainData)

'''
Classification generator
Reference: https://cloud.tencent.com/developer/ask/sof/1961912/answer/2665673
args of make_classification
    n_samples: total number of generated samples
    n_features: dimension of each sample (n_informative + n_redundant + n_repeated + noise features)
    n_classes: number of classes
    n_informative: number of informative features
    n_redundant: number of redundant features
    n_repeated: number of duplicated features
    shuffle: whether to shuffle the samples
    n_clusters_per_class: number of clusters each class is composed of
return
    data: sample array X
    feature: sample labels y
'''
def makeTrain(batch=1000):
    separable = False
    trainData = []
    while not separable:
        samples = make_classification(n_samples=batch, n_features=2, n_redundant=0,
                                      n_informative=1, n_clusters_per_class=1,
                                      flip_y=0)  # flip_y=0: no randomly flipped labels
        red = samples[0][samples[1] == 0]
        blue = samples[0][samples[1] == 1]
        # accept only data sets that are separable along one of the coordinate axes
        separable = any([red[:, k].max() < blue[:, k].min() or
                         red[:, k].min() > blue[:, k].max() for k in range(2)])
    data = samples[0]
    feature = samples[1]
    for i in range(batch):
        item = list(data[i])
        item.append(feature[i])  # each CSV row is [x1, x2, label]
        trainData.append(item)
    plt.plot(red[:, 0], red[:, 1], 'r.')
    plt.plot(blue[:, 0], blue[:, 1], 'b.')
    plt.show()
    return trainData

data = makeTrain(100)
saveCsv(data)
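A quick sanity check, of my own, that the CSV has the layout the next section expects (two feature columns plus a 0/1 label):

import csv
with open('trainData.csv') as f:
    rows = [row for row in csv.reader(f) if row]
print(len(rows), 'rows; first row:', rows[0])  # e.g. ['1.23', '-0.45', '1']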
Five Parameter learning example for a two-class perceptron
Train on the data set generated above.
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 6 14:08:27 2022
@author: chengxf2
"""
import numpy as np
import torch
import csv
import os

# Perceptron (an artificial network that mimics how the brain recognizes things and differences)
class Perceptron():

    def __init__(self):
        self.m = 0            # number of samples
        self.n = 0            # sample dimension (after augmentation)
        self.maxIter = 10     # maximum number of iterations
        self.fileName = "trainData.csv"

    '''
    Load the data set from the csv file
    args
        self.fileName: file path
    '''
    def loadData(self):
        if not os.path.exists(self.fileName):
            print("\n ------ the file path does not exist -----")
            return None
        feature = []    # sample labels Y: sklearn uses {0, 1}, converted to {-1, 1}
        trainData = []  # training set X
        with open(self.fileName) as f:
            f_csv = csv.reader(f)
            for row in f_csv:
                if not row:
                    continue
                Y = int(row[-1]) * 2 - 1          # map {0, 1} to {-1, 1}
                X = [float(v) for v in row[0:-1]]
                X.append(1)                        # augment with the bias feature
                trainData.append(X)
                feature.append(Y)
        self.m, self.n = np.shape(trainData)
        print("\n ---- step 1: load the data set ---------")
        return torch.FloatTensor(trainData), torch.IntTensor(feature)

    '''
    Prediction
    '''
    def forecast(self, w, x):
        hatY = torch.matmul(w.T, x)
        sgnY = 0
        if hatY > 0:
            sgnY = 1
        elif hatY < 0:
            sgnY = -1
        return hatY, sgnY

    '''
    Test
    '''
    def test(self, trainData, feature, w):
        err = 0
        for i in range(self.m):
            x = trainData[i].view(-1, 1)   # column vector
            y = feature[i]                 # corresponding label
            predY, sgnY = self.forecast(w, x)
            if sgnY * y <= 0:              # misclassified
                err += 1
        print("\n number of misclassified samples:", err)

    '''
    Training
    '''
    def train(self, trainData, feature):
        w = torch.zeros((self.n, 1))   # weight vector (bias included)
        t = 0                          # pass counter
        print("\n ----- step 2: training begins ---------------")
        while True:
            perm = torch.randperm(self.m)   # shuffle the data set
            k = 0                           # misclassifications in this pass
            t += 1
            for i in range(self.m):
                index = perm[i]
                x = trainData[index].view(-1, 1)   # column vector
                y = feature[index]                 # corresponding label
                hatY, sgnY = self.forecast(w, x)
                if y * sgnY <= 0:    # prediction is wrong
                    k += 1
                    w = w + y * x    # error-driven update
            print("\n k:%d t:%d" % (k, t), "w:", w.view(-1))
            if k == 0:               # a clean pass: the perceptron has converged
                print("\n --- converged, stop training ---------")
                break
            if t == self.maxIter:
                print("\n --- reached maxIter, stop training ---------")
                break
        return w

if __name__ == "__main__":
    model = Perceptron()
    trainData, feature = model.loadData()
    w = model.train(trainData, feature)
    model.test(trainData, feature, w)
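On a linearly separable data set, k should drop to 0 within a few passes. To visualize the result one could plot the learned boundary $w_0 x_1 + w_1 x_2 + w_2 = 0$; a sketch of mine, reusing the tensors from the script above:

import matplotlib.pyplot as plt

X = trainData[:, :2].numpy()
labels = feature.numpy()
plt.plot(X[labels == -1, 0], X[labels == -1, 1], 'r.')
plt.plot(X[labels == 1, 0], X[labels == 1, 1], 'b.')
w0, w1, w2 = w.view(-1).tolist()
xs = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
if w1 != 0:  # boundary: w0*x1 + w1*x2 + w2 = 0  =>  x2 = -(w0*x1 + w2)/w1
    plt.plot(xs, -(w0 * xs + w2) / w1, 'g-')
plt.show()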