Three Schemes for Multi-class Classification with SVM
2022-07-06 21:01:00 【wx5d786476cd8b2】
SVM is a binary classifier
The SVM algorithm was originally designed for binary classification. To handle problems with more than two classes, a suitable multi-class classifier has to be constructed on top of it.
There are currently two main approaches to building a multi-class SVM classifier:
(1) The direct method modifies the objective function itself, combining the parameters of several decision surfaces into a single optimization problem, so that multi-class classification is achieved "in one shot" by solving that problem. This looks simple, but its computational complexity is high and it is difficult to implement, so it is only suitable for small problems.
(2) The indirect method builds the multi-class classifier by combining several binary classifiers. The two common variants are one-against-one and one-against-all.
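Both indirect strategies are available off the shelf as meta-estimators that wrap any binary classifier. A minimal sketch using scikit-learn on the 10-class digits dataset (scikit-learn and this dataset are assumptions for illustration; the original post does not use them):

```python
# Sketch: one-vs-rest vs. one-vs-one as scikit-learn meta-estimators.
from sklearn.datasets import load_digits
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)                      # 10 classes (digits 0-9)
ovr = OneVsRestClassifier(LinearSVC(max_iter=5000)).fit(X, y)
ovo = OneVsOneClassifier(LinearSVC(max_iter=5000)).fit(X, y)

print(len(ovr.estimators_))   # k binary SVMs: 10
print(len(ovo.estimators_))   # k(k-1)/2 binary SVMs: 45
```

The difference in model count (k versus k(k-1)/2) is exactly the trade-off the two sections below discuss.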
One-versus-rest (one-versus-rest, abbreviated OVR SVMs)
During training, the samples of each class in turn are taken as the positive class and all remaining samples as the negative class, so k classes yield k SVMs. At prediction time, an unknown sample is assigned to the class whose decision function gives the largest value.
Suppose there are four classes (that is, four labels): A, B, C, D.
The training sets are then extracted separately as follows:
(1) the vectors of A form the positive set; the vectors of B, C, D form the negative set;
(2) the vectors of B form the positive set; the vectors of A, C, D form the negative set;
(3) the vectors of C form the positive set; the vectors of A, B, D form the negative set;
(4) the vectors of D form the positive set; the vectors of A, B, C form the negative set.
Training on these four sets separately produces four model files.
At test time, each test vector is evaluated against all four models.
Each test therefore yields four decision values f1(x), f2(x), f3(x), f4(x).
The final prediction is the class whose value is the largest of the four.
Evaluation:
This approach has a drawback. Because each training set is split roughly 1:(k-1) between positive and negative samples, the resulting classifiers are biased, which limits the method's practicality. A common mitigation when extracting the data sets is to take only one third of the complete negative set as the training negative set.
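The OVR procedure above can be sketched in a few lines. This is a minimal illustration; the toy data and scikit-learn's LinearSVC are assumptions, not part of the original post:

```python
# One-vs-rest sketch: train k binary SVMs, predict by the largest decision value.
import numpy as np
from sklearn.svm import LinearSVC

# Toy data: four well-separated clusters, one per class A=0, B=1, C=2, D=3.
rng = np.random.RandomState(0)
centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]])
X = np.vstack([c + 0.1 * rng.randn(20, 2) for c in centers])
y = np.repeat([0, 1, 2, 3], 20)

# Train one binary SVM per class: class k is the positive set, the rest negative.
classifiers = [
    LinearSVC(max_iter=5000).fit(X, (y == k).astype(int)) for k in range(4)
]

def predict_ovr(x):
    # f1(x)..f4(x): signed distance to each class's separating hyperplane.
    scores = [clf.decision_function(x.reshape(1, -1))[0] for clf in classifiers]
    return int(np.argmax(scores))      # the class with the largest value wins
```

The argmax over decision values is the "maximum classification function value" rule described above.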
One-versus-one (one-versus-one, abbreviated OVO SVMs, also called pairwise)
Here an SVM is designed for every pair of classes, so k classes require k(k-1)/2 SVMs.
When classifying an unknown sample, each pairwise classifier votes, and the class that receives the most votes is taken as the prediction.
The multi-class classification in LibSVM is based on this method.
Suppose again there are four classes A, B, C, D. During training, the vectors of the pairs A,B; A,C; A,D; B,C; B,D; C,D are used as training sets, which produces six models. At test time, each sample is run through all six models and the results are combined by voting to obtain the final prediction.
The voting proceeds as follows:
A = B = C = D = 0;
(A,B)-classifier: if A wins, then A = A + 1; otherwise B = B + 1;
(A,C)-classifier: if A wins, then A = A + 1; otherwise C = C + 1;
...
(C,D)-classifier: if C wins, then C = C + 1; otherwise D = D + 1;
The decision is Max(A, B, C, D).
Evaluation: this method works well, but when there are many classes the number of models grows as n(n-1)/2, so the training cost is still considerable.
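The pairwise training and voting described above can be sketched as follows (the toy data and scikit-learn's SVC are assumptions for illustration; they are not part of the original post):

```python
# One-vs-one sketch: k(k-1)/2 pairwise SVMs, majority vote decides the class.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

# Toy data: four well-separated clusters, one per class A=0, B=1, C=2, D=3.
rng = np.random.RandomState(0)
centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]])
X = np.vstack([c + 0.1 * rng.randn(20, 2) for c in centers])
y = np.repeat([0, 1, 2, 3], 20)

# Train one SVM per pair of classes: 4*(4-1)/2 = 6 classifiers.
pair_clfs = {}
for i, j in combinations(range(4), 2):
    mask = (y == i) | (y == j)                         # keep only classes i and j
    pair_clfs[(i, j)] = SVC(kernel="linear").fit(X[mask], y[mask])

def predict_ovo(x):
    votes = np.zeros(4, dtype=int)                     # A = B = C = D = 0
    for clf in pair_clfs.values():
        winner = int(clf.predict(x.reshape(1, -1))[0]) # pairwise "match"
        votes[winner] += 1                             # the winner gets one vote
    return int(np.argmax(votes))                       # Max(A, B, C, D)
```

scikit-learn's own `SVC`, which wraps libsvm, applies this one-vs-one scheme internally for multi-class problems, so in practice the voting does not need to be hand-rolled.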