Feature generation

Hello everyone, good to see you again. I'm the Full Stack King.

Criteria for good features

Distinctiveness: patterns from different categories can be separated in the feature space.

Invariance: patterns from the same category should vary little in the feature space under transformation, deformation, and noise. Choose features that are highly discriminative while also exhibiting a degree of invariance.

Some methods of feature generation:

1. Time domain, frequency domain, joint time-frequency: correlation coefficients, FFT, DCT, wavelets, Gabor filters
2. Statistical, structural, hybrid: histograms, attributed graphs
3. Low-level, mid-level, high-level: color, gradient (Roberts, Prewitt, Sobel, difference + smoothing, HOG), texture (Haar-like, LBP), shape, semantics
4. Model-based: ARMA, LPC
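
Below is a minimal sketch of two of the low-level descriptors named above, HOG and LBP, using scikit-image. The test image and parameter values are illustrative choices, not prescribed by this article.

```python
import numpy as np
from skimage import data
from skimage.feature import hog, local_binary_pattern

img = data.camera()  # 512x512 grayscale test image (placeholder data)

# HOG: gradient orientation histograms over cells, block-normalized
hog_vec = hog(img, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2), feature_vector=True)

# LBP: per-pixel texture codes, summarized as a histogram
lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)

print(hog_vec.shape, lbp_hist.shape)
```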

Three examples

A. SIFT

1. Build a Gaussian pyramid and take differences of adjacent levels to generate the DoG, an approximation of the LoG
2. Find the extrema, then refine the extremum locations using derivatives
3. Use the Hessian matrix (or an autocorrelation function) to remove edge responses and unstable points
4. Build gradient-based descriptors

For detailed steps, see Image Local Invariant Features and Descriptors (《图像局部不变性特征与描述》) and http://underthehood.blog.51cto.com/2531780/658350, which also includes SIFT code.
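
As a rough sketch of the four steps above, OpenCV ships a SIFT implementation that performs the whole pipeline internally (cv2.SIFT_create is available in OpenCV 4.4 and later; the image path below is a placeholder):

```python
import cv2

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

sift = cv2.SIFT_create()
# detectAndCompute runs all four steps internally:
# DoG pyramid -> extremum refinement -> edge/low-contrast rejection -> 128-D descriptors
keypoints, descriptors = sift.detectAndCompute(img, None)

print(len(keypoints), descriptors.shape)  # e.g. (N, 128)
```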

B. Bag of Words

1. Cluster features to build a dictionary (codebook)
2. Map each image's features onto the dictionary, then train and classify with an SVM or another classifier

In more detail, the pipeline is:

1. Feature extraction
2. Codebook generation
3. Coding (hard or soft assignment)
4. Pooling (average or max)
5. Classification
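
The following is a small sketch of steps 3 and 4 only, contrasting hard vs. soft coding and average vs. max pooling. The function name, the exponential soft-assignment weighting, and the random placeholder data are illustrative assumptions, not taken from the article.

```python
import numpy as np

def encode_and_pool(descriptors, codebook, soft=False, pooling="max", beta=1.0):
    # descriptors: (num_patches, D); codebook: (K, D) visual words
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)  # (num_patches, K)
    if soft:
        # soft assignment: exponentially down-weight farther words (shift by the row
        # minimum for numerical stability), then normalize each patch's weights
        codes = np.exp(-beta * (d - d.min(axis=1, keepdims=True)))
        codes /= codes.sum(axis=1, keepdims=True)
    else:
        # hard assignment: one-hot vector for the nearest word
        codes = np.zeros_like(d)
        codes[np.arange(len(d)), d.argmin(axis=1)] = 1
    return codes.max(axis=0) if pooling == "max" else codes.mean(axis=0)

rng = np.random.default_rng(0)
desc = rng.normal(size=(200, 128))      # 200 patches, 128-D (placeholder data)
codebook = rng.normal(size=(100, 128))  # 100 visual words (placeholder)
print(encode_and_pool(desc, codebook, soft=True, pooling="max").shape)  # (100,)
```

Hard assignment with average pooling reproduces the normalized histogram described in the passage quoted below.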

“Bag of Words is now also very popular in computer vision for describing image features.

The general idea is this: suppose there are 5 classes of images, with 10 images in each class. Each image is divided into patches (either by rigid grid cutting or by keypoint detection, as in SIFT). Each image is thus represented by many patches, and each patch is represented by a feature vector; with SIFT, an image may have several hundred patches, and each patch's feature vector has 128 dimensions.

The next step is to build the Bag of Words model. Suppose the dictionary size is 100, that is, 100 words. Run the K-means algorithm over all patches with k = 100; when K-means converges, we have the final centroid of each cluster. These 100 centroids (each 128-dimensional) are the 100 words of the dictionary, and the dictionary is built.

How is the dictionary used once built? Like this: first initialize a histogram h with 100 bins, all set to 0. Each image has many patches, so compute the distance from each patch to each centroid, see which centroid each patch is closest to, and add 1 to the corresponding bin of h. After all the patches of the image have been processed, we obtain a 100-bin histogram; normalize it, and use this 100-dimensional vector to represent the image.

After all the images have been computed, we can carry out classification, clustering, training, and prediction.”
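
Putting the quoted procedure together, here is a rough end-to-end sketch assuming SIFT patches, a 100-word dictionary built with K-means, and a linear SVM (scikit-learn and OpenCV APIs; file paths and labels are placeholders):

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def sift_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create().detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

train_paths = ["class0_img0.jpg", "class0_img1.jpg", "class1_img0.jpg"]  # placeholder paths
train_labels = [0, 0, 1]                                                 # placeholder labels

# 1. Dictionary: cluster all patches from all training images into k = 100 words
all_desc = np.vstack([sift_descriptors(p) for p in train_paths])
kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(all_desc)

# 2. Encode each image as a normalized 100-bin histogram of nearest words
def bow_histogram(path):
    words = kmeans.predict(sift_descriptors(path))
    h = np.bincount(words, minlength=100).astype(np.float64)
    return h / max(h.sum(), 1.0)

X = np.array([bow_histogram(p) for p in train_paths])

# 3. Train a classifier on the histograms
clf = LinearSVC().fit(X, train_labels)
```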

C. Image saliency

1. Multi-scale contrast
2. Center-surround histogram
3. Color spatial distribution
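
As an illustration of the first cue only (multi-scale contrast), here is a rough sketch that scores each pixel by its squared difference from a local mean at several Gaussian blur scales. This follows the general idea rather than any specific paper's exact formulation; the scales and the image path are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0  # placeholder path

saliency = np.zeros_like(img)
for sigma in (2, 4, 8, 16):                       # illustrative scales
    local_mean = cv2.GaussianBlur(img, (0, 0), sigma)
    saliency += (img - local_mean) ** 2            # contrast against the local mean

saliency = cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)
cv2.imwrite("saliency.png", (saliency * 255).astype(np.uint8))
```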
