Comparing MLP, GRNN, SVM, and Deep Learning Neural Networks for Recognizing Healthy and Unhealthy Human Data
2022-08-01 03:44:00 【fpga and matlab】
I. Theoretical Basis
The multi-layer perceptron (MLP) neural network consists of an input layer, one or more hidden layers, and an output layer; its block diagram is shown in the figure below:
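As a minimal, hedged sketch of such an MLP classifier in MATLAB (separate from the program in Section III; `patternnet` from the Deep Learning Toolbox is assumed, and the data below are placeholders):

% Minimal MLP sketch: input layer -> one hidden layer of 10 neurons -> output layer.
% X: d-by-N feature matrix, T: 2-by-N one-hot labels (healthy / unhealthy). Placeholder data.
X = rand(8, 60);
T = full(ind2vec(randi(2, 1, 60)));
net = patternnet(10);                 % one hidden layer with 10 neurons
net.trainParam.showWindow = false;    % suppress the training GUI
net = train(net, X, T);               % train with backpropagation
pred = vec2ind(net(X));               % predicted class indices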
The structure of the generalized regression neural network (GRNN) is shown in Figure 6. The whole network contains four layers of neurons: the first layer is the input layer of the GRNN, the second is the pattern layer, the third is the summation layer, and the fourth is the output layer.
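A similarly minimal GRNN sketch, assuming the legacy `newgrnn` function from the Deep Learning Toolbox (the four layers listed above are created internally; `spread` sets the width of the pattern-layer radial basis functions, and the data are placeholders):

% Minimal GRNN sketch: the network is built in one pass, with no iterative training.
P = rand(8, 40);                 % training inputs, one column per sample (placeholder)
T = double(rand(1, 40) > 0.5);   % training targets: 0 = unhealthy, 1 = healthy (placeholder)
spread = 0.5;                    % RBF spread of the pattern layer
net = newgrnn(P, T, spread);
Y = sim(net, P);                 % regression-style output
pred = Y > 0.5;                  % threshold to obtain a class decision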
The support vector machine (SVM) is a machine learning method proposed by Vapnik et al. on the basis of statistical learning theory. Its basic idea is: when the two classes are linearly separable, find the optimal separating hyperplane between them in the original space; when they are not linearly separable, introduce slack variables and use a nonlinear mapping to transform the low-dimensional input space into a high-dimensional feature space in which the samples become linearly separable, so that a linear algorithm can be applied in that feature space to analyze the nonlinear samples and the optimal separating hyperplane can be found there. Following the principle of structural risk minimization, the SVM constructs the optimal separating hyperplane in the feature space, so that the classifier reaches a global optimum and the expected risk over the whole sample space is bounded above with a given probability.
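The following is a minimal SVM sketch using `fitcsvm` from the Statistics and Machine Learning Toolbox, with an RBF kernel standing in for the nonlinear mapping to a high-dimensional feature space described above (placeholder data; not the routine used in Section III):

% Minimal SVM sketch with an RBF kernel; rows of X are samples.
X = rand(60, 8);                        % 60 samples, 8 features (placeholder)
y = randi([0 1], 60, 1);                % 0 = unhealthy, 1 = healthy (placeholder)
mdl = fitcsvm(X, y, 'KernelFunction', 'rbf', 'Standardize', true);
pred = predict(mdl, X);                 % class predictions
acc = mean(pred == y);                  % resubstitution accuracy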
At present, almost all neural network techniques are based on shallow architectures. Learning models with shallow structures are simple, and their intermediate learning process is not observable. The main drawback of such techniques on their own is that they cannot achieve good learning performance on complex real-world applications involving natural signals such as human speech, images, and vision. Humans, by contrast, process this kind of complex information by extracting its internal structure through a deep architecture and building internal representations from rich sensory input. Studying deep neural network architectures therefore facilitates the learning, training, and testing of complex signals. Deep learning neural networks arose against this background; they mainly include network structures based on restricted Boltzmann machines and network structures based on convolution operations. The overall structure of a deep learning neural network is shown in the figure below:
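As a minimal sketch of a deep network in MATLAB, the example below stacks two autoencoders and a softmax classifier (`trainAutoencoder`, `trainSoftmaxLayer`, `stack`); this substitutes greedy autoencoder pretraining for the RBM-based structure mentioned above, and all data are placeholders:

% Minimal deep-network sketch: two stacked autoencoders + softmax output,
% followed by supervised fine-tuning. X: d-by-N features, T: 2-by-N one-hot labels.
X = rand(8, 60);
T = full(ind2vec(randi(2, 1, 60)));
ae1 = trainAutoencoder(X, 6, 'MaxEpochs', 100);    % first hidden layer (6 units)
f1 = encode(ae1, X);
ae2 = trainAutoencoder(f1, 4, 'MaxEpochs', 100);   % second hidden layer (4 units)
f2 = encode(ae2, f1);
soft = trainSoftmaxLayer(f2, T, 'MaxEpochs', 100); % output classifier
deepnet = stack(ae1, ae2, soft);                   % assemble the deep network
deepnet = train(deepnet, X, T);                    % fine-tune end to end
pred = vec2ind(deepnet(X));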
II. Case Background
1. Problem Description
Feature selection is a difficult problem in machine learning and is essentially a combinatorial optimization problem. The most straightforward way to solve it is search: in theory, an exhaustive search can enumerate all possible feature combinations and output the subset that optimizes the evaluation criterion, but its amount of computation grows exponentially with the number of features. Feature selection therefore has to be carried out with a specific search strategy over the feature set; the basic steps are shown in the block diagram below.
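As a hedged sketch of such a search strategy (not the block diagram referenced above and not the `func_feature_selection0` routine used in Section III), a sequential forward search can be written with `sequentialfs` from the Statistics and Machine Learning Toolbox, using the cross-validated error of a simple classifier as the evaluation criterion; all data are placeholders:

% Sequential forward feature selection: greedily add the feature that most
% reduces the cross-validated misclassification count of an SVM.
X = rand(60, 20);                       % 60 samples, 20 candidate features (placeholder)
y = randi([0 1], 60, 1);                % class labels (placeholder)
crit = @(Xtr, ytr, Xte, yte) sum(yte ~= predict(fitcsvm(Xtr, ytr), Xte));
opts = statset('Display', 'iter');
[selected, history] = sequentialfs(crit, X, y, 'cv', 5, 'direction', 'forward', 'options', opts);
find(selected)                          % indices of the retained features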
2. Approach and Workflow
Machine learning is an important branch of computer intelligence algorithms. Machine learning research draws on physiology, cognitive science, and related fields to understand the mechanism of human learning, builds computational or cognitive models of the human learning process, develops various learning theories and methods, studies general learning algorithms and analyzes them theoretically, and builds task-oriented learning systems for specific applications. This paper compares the recognition performance of four commonly used machine learning algorithms: the MLP neural network, the GRNN neural network, the SVM, and the deep learning neural network. The feature data used are a set of physical characteristics of healthy and unhealthy people. For these data, the paper also proposes a feature selection method that extracts the most effective features from a large feature set as the identification data for healthy and unhealthy populations. Finally, the four algorithms are tested in MATLAB; the simulation results show that, after feature selection, the recognition algorithm based on the deep neural network achieves a recognition rate above 96%.
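As a hedged sketch of this evaluation workflow (using `cvpartition` in place of the `func_crossvalidation` helper shown in Section III, and placeholder data), each algorithm can be scored by its mean recognition rate over k folds:

% k-fold evaluation loop: train on k-1 folds, test on the held-out fold,
% and average the recognition rate over all folds.
X = rand(100, 10);                          % placeholder feature matrix (samples x features)
y = randi([0 1], 100, 1);                   % placeholder labels: 0 = unhealthy, 1 = healthy
k = 5;
cvp = cvpartition(y, 'KFold', k);
rate = zeros(k, 1);
for i = 1:k
    tr = training(cvp, i);  te = test(cvp, i);
    mdl = fitcsvm(X(tr,:), y(tr));          % any of the four classifiers could go here
    rate(i) = mean(predict(mdl, X(te,:)) == y(te));
end
mean(rate)                                  % average recognition rate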
III. Partial MATLAB Code
MLP
for i = 1:k
    % k-fold cross-validation: split the shuffled data into training and test sets
    [Traindata,Trainaim,Testdata,Testaim] = func_crossvalidation(data_random,indices,i,row,col);
    Testdata = [Testdata;Testdata(30:end,:)];
    Testaim  = [Testaim;Testaim(30:end,:)];
    Traindata = Traindata(1:30,:);
    Trainaim  = Trainaim(1:30,:);
    %%
    % Without feature selection
    [Ftrain,Ftest] = func_feature_selection0(Traindata,Testdata);
    %%
    % MLP recognition
    [Preaim,Preaim2,Rate] = func_machine_Learing_method(Ftrain,Trainaim,Ftest,Testaim);
    % ROC & PR curves
    % draw_prc(2*Testaim'-1, 2*Preaim2'-1,2);
end
GRNN
for i = 1:k
    % k-fold cross-validation: split the shuffled data into training and test sets
    [Traindata,Trainaim,Testdata,Testaim] = func_crossvalidation(data_random,indices,i,row,col);
    Testdata = [Testdata;Testdata(30:end,:)];
    Testaim  = [Testaim;Testaim(30:end,:)];
    Traindata = Traindata(1:30,:);
    Trainaim  = Trainaim(1:30,:);
    %%
    % Without feature selection
    [Ftrain,Ftest] = func_feature_selection0(Traindata,Testdata);
    %%
    % GRNN recognition
    [Preaim,Preaim2,Rate] = func_machine_Learing_method(Ftrain,Trainaim,Ftest,Testaim);
    % ROC & PR curves
    % draw_prc(2*Testaim'-1, 2*Preaim2'-1,2);
end
SVM
for i = 1:k
    % k-fold cross-validation: split the shuffled data into training and test sets
    [Traindata,Trainaim,Testdata,Testaim] = func_crossvalidation(data_random,indices,i,row,col);
    Testdata = [Testdata;Testdata(30:end,:)];
    Testaim  = [Testaim;Testaim(30:end,:)];
    Traindata = Traindata(1:30,:);
    Trainaim  = Trainaim(1:30,:);
    %%
    % Without feature selection
    [Ftrain,Ftest] = func_feature_selection0(Traindata,Testdata);
    %%
    % SVM recognition
    [Preaim,Preaim2,Rate] = func_machine_Learing_method(Ftrain,Trainaim,Ftest,Testaim);
    % ROC & PR curves
    % draw_prc(2*Testaim'-1, 2*Preaim2'-1,2);
end
Deep learning
for i = 1:k
    % k-fold cross-validation: split the shuffled data into training and test sets
    [Traindata,Trainaim,Testdata,Testaim] = func_crossvalidation(data_random,indices,i,row,col);
    Testdata = [Testdata;Testdata(30:end,:)];
    Testaim  = [Testaim;Testaim(30:end,:)];
    Traindata = Traindata(1:30,:);
    Trainaim  = Trainaim(1:30,:);
    %%
    % Deep learning recognition
    [Ftrain,Ftest] = func_feature_selection0(Traindata,Testdata);  % without feature selection
    %%
    [Preaim,Preaim2,Rate] = func_machine_Learing_method(Ftrain,Trainaim,Ftest,Testaim);
    % ROC & PR curves
    % draw_prc(2*Testaim'-1, 2*Preaim2'-1,2);
end
IV. Simulation Results and Analysis
MLP
Comparing the ROC and PR curves shows that recognition accuracy improves after feature selection. Finally, the Forward and Backward feature selection schemes presented in this paper are compared in simulation; their recognition rates are 93.6709% and 69.6203%, respectively.
GRNN
Comparing the ROC and PR curves shows that recognition accuracy improves after feature selection. Finally, the Forward and Backward feature selection schemes presented in this paper are compared in simulation; their recognition rates are 89.8734% and 74.6835%, respectively.
SVM
Comparing the ROC and PR curves shows that recognition accuracy improves after feature selection. Finally, the Forward and Backward feature selection schemes presented in this paper are compared in simulation; their recognition rates are 88.6076% and 71.3241%, respectively.
Deep learning:
The ROC and PR curves show that recognition accuracy improves after feature selection.
Comparing the final results of the above four algorithms, the final recognition rates are as follows:
The comparison of the four algorithms shows that, among the feature data, the 3rd, 23rd, 19th, and 64th features have strong discriminative ability; using these four features, a higher recognition rate can be obtained. In terms of performance, the deep learning neural network outperforms the GRNN neural network, which outperforms the MLP neural network, which in turn outperforms the SVM.