
Research on a 3D model retrieval method based on a two-channel attention residual network - Zhou Jie - paper notes

2022-06-25 07:32:00 Programmer base camp

2020 Master's thesis

1、 Innovations:

(1) An attention residual convolutional neural network (RVCNN) is designed to perform feature extraction and classification of 3D models. Multi-head attention and residual connections are applied to the convolutional neural network, and a weighted loss function defined from the cross-entropy loss and the center loss is used to optimize network performance (a minimal sketch of such a weighted loss is given after this list).
(2) To improve the feature extraction ability of RVCNN, a two-channel network model (double-RVCNN) is proposed on the basis of the single-channel network model (single-RVCNN).
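The thesis defines a weighted loss from the cross-entropy loss and the center loss, but the exact weighting formula is not reproduced in these notes. Below is a minimal PyTorch sketch of one common form, L = L_ce + λ·L_center; the weight `lam`, the simple center-loss implementation, and the class/feature dimensions are assumptions for illustration, not the author's exact definition.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Simple center loss: mean squared distance between features and their class centers."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Select each sample's class center and penalize the distance to it.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

class WeightedLoss(nn.Module):
    """Weighted combination of cross-entropy loss and center loss (lam is a hypothetical weight)."""
    def __init__(self, num_classes, feat_dim, lam=0.01):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.center = CenterLoss(num_classes, feat_dim)
        self.lam = lam

    def forward(self, logits, features, labels):
        return self.ce(logits, labels) + self.lam * self.center(features, labels)

# Usage sketch: logits come from the classification head, features from the penultimate layer.
criterion = WeightedLoss(num_classes=40, feat_dim=256, lam=0.01)
logits, feats = torch.randn(4, 40), torch.randn(4, 256)
labels = torch.randint(0, 40, (4,))
loss = criterion(logits, feats, labels)
```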

2、 Advantages:

Combining voxelization, the 3D Radon transform, and the double-RVCNN convolutional neural network outperforms the comparison methods in both retrieval and classification performance, showing that combining deep learning with traditional 3D model feature extraction methods can further improve 3D model retrieval and classification performance.

3、 Shortcomings

The network has a large number of parameters.
Some information may be lost during model voxelization and the 3D Radon transform, which may leave the neural network's input data with insufficient information.

4、 Algorithm principle

A convolutional neural network can effectively learn key features from a large number of samples. The multi-head attention mechanism and the residual idea are applied to a convolutional neural network to design an attention residual convolutional neural network (RVCNN) for extracting 3D model features. The 3D Radon transform is applied to the voxel data to obtain a 3D Radon feature matrix, and the 3D voxel model and the 3D Radon feature matrix data sets are then input to RVCNN as two separate channels, building double-RVCNN (the two-channel attention residual convolutional network).
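As a rough illustration of the two-channel idea (not the thesis's exact architecture), the sketch below passes a voxel grid and a 3D-Radon-style feature volume through two small 3D convolutional branches and fuses the two feature vectors by concatenation before classification; the branch layout, the fusion scheme, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class Branch3D(nn.Module):
    """One channel: a small 3D CNN that maps a volumetric input to a feature vector."""
    def __init__(self, in_channels=1, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # -> (B, 64, 1, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class DoubleChannelNet(nn.Module):
    """Two-channel sketch: voxel branch + Radon-feature branch, fused by concatenation."""
    def __init__(self, num_classes=40, feat_dim=256):
        super().__init__()
        self.voxel_branch = Branch3D(feat_dim=feat_dim)
        self.radon_branch = Branch3D(feat_dim=feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, voxels, radon_feats):
        f = torch.cat([self.voxel_branch(voxels), self.radon_branch(radon_feats)], dim=1)
        return self.classifier(f)

# e.g. 32^3 voxel grids and a same-sized Radon feature volume (shapes are assumptions)
net = DoubleChannelNet()
logits = net(torch.rand(2, 1, 32, 32, 32), torch.rand(2, 1, 32, 32, 32))
```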

5、 Experimental design

(1) The influence of the number of network layers on the performance evaluation indices

Some of the convolution and ** layers of RVCNN are removed to obtain the network cnet.
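The source does not specify which layers are removed (the word is elided), so the sketch below only illustrates one simple way to build such a shallower comparison network: dropping the trailing block of a toy sequential backbone. The block layout and the number of blocks removed are assumptions.

```python
import torch.nn as nn

def make_blocks(channels=(1, 32, 64, 128)):
    """A toy stack of 3D conv blocks standing in for the RVCNN backbone."""
    blocks = []
    for c_in, c_out in zip(channels[:-1], channels[1:]):
        blocks.append(nn.Sequential(
            nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
        ))
    return nn.Sequential(*blocks)

full_backbone = make_blocks()        # stands in for the full RVCNN backbone
cnet_backbone = full_backbone[:-1]   # shallower variant: drop the last block
```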

(2) The influence of the multi-head attention module on the performance evaluation indices

The first group of experiments analyzes, within a single multi-head attention module, the effect of the number of heads on the performance indices.
Performance is best when the number of heads is 2.
The second group of experiments examines the effect of the position and the number of multi-head attention modules on the performance indices.
The experiments show that placing the multi-head attention module at position a is appropriate.
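As an illustration of a 2-head attention module inserted into a convolutional network (the placement and dimensions are assumptions, not the thesis's exact module), the sketch below treats each spatial position of a 3D feature map as a token and applies torch.nn.MultiheadAttention with num_heads=2.

```python
import torch
import torch.nn as nn

class AttentionOverFeatureMap(nn.Module):
    """Self-attention over the spatial positions of a 3D feature map (num_heads=2)."""
    def __init__(self, channels, num_heads=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=num_heads,
                                          batch_first=True)

    def forward(self, x):                      # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)     # (B, D*H*W, C): one token per voxel position
        out, _ = self.attn(seq, seq, seq)      # multi-head self-attention
        return out.transpose(1, 2).reshape(b, c, d, h, w)

# e.g. a 64-channel feature map from an earlier conv stage (shapes are assumptions)
feat = torch.rand(2, 64, 8, 8, 8)
attended = AttentionOverFeatureMap(channels=64, num_heads=2)(feat)
```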

(3) The influence of residual blocks on the performance evaluation indices

Here, residual block a consists of a1 and a2, where a1 corresponds to the left part of Figure 3.8(a) in Section 3.3.2 and a2 to its right part; residual block b consists of b1 and b2, where b1 corresponds to the left part of Figure 3.8(b) and b2 to its right part.
The highest values are obtained when a1, a2, b1, and b2 are all used.
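The exact structures of a1, a2, b1, and b2 are given in Figure 3.8 of the thesis and are not reproduced here; the sketch below is only a generic 3D residual block illustrating the skip-connection idea that these blocks are built on.

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Generic residual block: two 3D conv layers plus a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))   # the skip connection adds the input back

x = torch.rand(2, 64, 8, 8, 8)
y = ResidualBlock3D(64)(x)
```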

(4) Weighted loss function test and comparison


(5) Experimental comparison of the single-channel network model and the two-channel network model



Copyright notice
This article was created by [Programmer base camp]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/02/202202201233271016.html