
Multimodal point cloud fusion and visual localization based on images and laser

2022-07-07 18:59:00 biyezuopinvip

Resource download address: https://download.csdn.net/download/sheziqiong/85941708
Abstract (Chinese)
In recent years, 3D scene reconstruction and localization has become an important research direction in computer vision. With the continuous development of autonomous driving and industrial robotics, the requirements for reconstruction accuracy and localization accuracy keep rising. How to use data collected by various sensors to accurately reconstruct a scene and localize within it is therefore a valuable and promising research direction.
This field currently faces several challenges. For reconstruction, mapping a large scene in one pass with traditional methods produces drift error and is inefficient, while reconstructing sub-regions separately depends on accurate fusion techniques. For localization, changes in lighting make the environmental details at localization time differ from those at map reconstruction time, which makes matching difficult; moreover, visual localization based on a traditional monocular camera has a small field of view and sometimes cannot capture enough features. Together, these challenges limit mapping accuracy and localization robustness.
To address these challenges, this thesis presents a reconstruction and localization pipeline: sensor data are first used to generate local point clouds of the scene; the local point clouds are then registered and fused into a high-precision point cloud of the whole scene; finally, a panoramic camera is used to localize within the scene.
The main research contents of this thesis include:
1. Three parallel local reconstruction approaches are used: a vision-based SfM algorithm, a LiDAR-based LOAM algorithm, and an image-laser scanner-based method. They are applied to reconstruct local scenes of the Tsinghua campus, providing a reference for choosing equipment and algorithms for reconstruction tasks in different environments;
2. A coarse-to-fine point cloud registration algorithm is proposed to fuse the local scene point clouds into a global high-precision point cloud. The algorithm combines traditional feature extraction with a neural network and achieves relatively robust registration without relying on an initial transformation matrix.
3. A cross-modal, robust-matching visual localization algorithm is proposed to accomplish localization in a scene given a prior point cloud. By converting the 2D-3D matching problem into a 2D-2D matching problem, the algorithm localizes using only a panoramic camera, without preparing the scene in advance.
Keywords: 3D reconstruction; localization; point cloud registration; neural network; high-precision map
ABSTRACT
In recent years, 3D scene reconstruction and localization has become an important research direction in the field of computer vision. With the continuous development of autonomous driving technology and industrial robot technology, the requirements for the accuracy of reconstruction and localization are also increasing. Therefore, how to use the data collected by various sensors to accurately reconstruct the scene and localize in it is a very valuable and promising research direction.
At present, there are many challenges in this field. In the aspect of reconstruction, the traditional methods produce drift error and low efficiency when reconstructing a large scene in one pass, while the method of sub-regional reconstruction relies on accurate fusion technology. In the aspect of localization, due to the change of illumination environment, the detailed features at localization time differ from those at map reconstruction time, so it is difficult to match them. Furthermore, visual localization technology based on a traditional monocular camera has a small field of view, so sometimes it cannot capture enough features for localization. These challenges together limit the accuracy of mapping and the robustness of localization.
Aiming at the above challenges, a reconstruction and localization process is proposed in this thesis. Firstly, local point clouds are generated from sensor data; then the local point clouds are spliced and fused to synthesize a high-precision point cloud of the whole scene. Finally, a panoramic camera is used to realize localization in the scene.
The main research contents of this thesis include:
1. Three parallel local reconstruction algorithms are adopted: a vision-based SfM algorithm, a LiDAR-based LOAM algorithm and an image-laser scanner-based method, to complete the reconstruction of local scenes of Tsinghua University, which provides a reference for the selection of equipment and algorithms for reconstruction tasks in different environments;
2. A coarse-to-fine point cloud registration algorithm is proposed to integrate the local point clouds and generate the global high-precision point cloud. This algorithm combines traditional feature extraction and neural network methods, and can achieve robust registration results without relying on the initial transformation matrix.
3. A cross-modal robust-matching visual localization algorithm is proposed to accomplish the localization task based on a prior point cloud of the scene. By transforming the 2D-3D matching problem into a 2D-2D matching problem, the algorithm can realize positioning using only the panoramic camera and without any preparation of the scene in advance.
Keywords: 3D reconstruction; localization; point cloud registration; neural network; high-precision map
Table of Contents
Chapter 1  Introduction
1.1 Research background and significance
1.2 Research status
1.2.1 Visual 3D reconstruction
1.2.2 Point cloud registration
1.2.3 Image feature description
1.2.4 Visual localization
1.3 Work of this thesis
1.4 Structure of this thesis
Chapter 2  Local 3D scene reconstruction
2.1 Introduction
2.2 Vision-based SfM 3D reconstruction
2.2.1 Algorithm overview
2.2.2 Introduction to COLMAP
2.2.3 Experiments and results
2.3 LiDAR-based LOAM 3D reconstruction
2.3.1 Algorithm overview
2.3.2 Experimental equipment
2.3.3 Experiments and results
2.4 3D reconstruction based on laser scanner
2.4.1 Leica BLK360
2.4.2 Scan results
2.5 Chapter summary
Chapter 3  Coarse-to-fine point cloud registration
3.1 Introduction
3.2 Coarse point cloud registration
3.2.1 Local feature descriptors
3.2.2 Global coarse matching based on point features
3.3 Fine point cloud registration based on deep learning
3.3.1 Model structure
3.3.2 Loss function
3.4 Experiments and results
3.4.1 Data collection
3.4.2 Global point cloud registration experiment
3.4.3 Algorithm evaluation
3.5 Chapter summary
Chapter 4  Localization based on 2D-3D image-laser cross-modal matching
4.1 Introduction
4.2 Point-cloud-to-plane mapping
4.2.1 Virtual viewpoint image generation
4.2.2 Experimental results
4.3 Panoramic distortion correction
4.3.1 Panoramic imaging principle
4.3.2 Distortion correction principle
4.3.3 Experimental results
4.4 Image feature extraction and matching based on SOSNet
4.4.1 Network structure and loss function
4.4.2 Feature extraction and matching results
4.5 Localization algorithm based on epipolar constraint
4.5.1 Epipolar constraint
4.5.2 Localization experiment
4.6 Chapter summary
Chapter 5  Summary and outlook
5.1 Summary of work
5.2 Future work
Index of figures
Index of tables
References
In the mapping stage, this thesis adopts a "local first, then global" strategy, mainly to reduce the drift that traditional algorithms tend to produce when mapping a large scene in one pass, and to improve mapping efficiency. Using the SfM algorithm or the LOAM algorithm, a local map of the scene can be recovered from data collected by a monocular camera or a LiDAR. The local maps are then matched pairwise and fused by a coarse-to-fine registration algorithm; the algorithm combines a traditional point cloud feature descriptor with deep learning to screen the coarse matching point pairs, and, without relying on an initial transformation estimate, achieves better results than the traditional ICP algorithm, yielding the 3D point cloud of the global scene.
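The coarse-to-fine idea can be sketched with Open3D's classical registration pipeline: FPFH features plus RANSAC give a coarse transform without any initial guess, and a local refinement then tightens it. The thesis's fine stage is a learned model; in this illustrative sketch the fine stage is replaced by point-to-plane ICP, and the file names and voxel_size are placeholder assumptions.

```python
import open3d as o3d

def preprocess(pcd, voxel_size):
    # Downsample, estimate normals, and compute FPFH descriptors.
    down = pcd.voxel_down_sample(voxel_size)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
    return down, fpfh

def coarse_to_fine(src, tgt, voxel_size=0.5):
    src_down, src_fpfh = preprocess(src, voxel_size)
    tgt_down, tgt_fpfh = preprocess(tgt, voxel_size)
    dist = voxel_size * 1.5
    # Coarse stage: global registration from feature correspondences (no initial transform needed).
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, dist,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Fine stage: local refinement starting from the coarse transform
    # (the thesis uses a learned model here; ICP is a classical stand-in).
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel_size * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation

if __name__ == "__main__":
    # "local_a.pcd" / "local_b.pcd" are placeholder file names for two local maps.
    src = o3d.io.read_point_cloud("local_a.pcd")
    tgt = o3d.io.read_point_cloud("local_b.pcd")
    print(coarse_to_fine(src, tgt))
```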
For localization, to avoid more time-consuming matching in 3D space, this thesis unifies the point cloud and the panoramic image in 2D space before matching, which turns the task into cross-modal matching between image and laser data. This raises two issues: first, how to project the point cloud into an image while retaining as many features as possible; second, the panoramic image brings a wider field of view but also larger distortion, which degrades feature description. To address these two issues, this thesis designs two modules: point-cloud-to-plane mapping and panoramic distortion correction. A sketch of the projection step follows below.
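As an illustration of point-cloud-to-plane mapping, the sketch below projects map points into a depth image seen from a virtual pinhole viewpoint using a simple z-buffer; the function name, the choice of a depth image (rather than, say, an intensity image), and the virtual intrinsics are assumptions for illustration, not the thesis's exact implementation.

```python
import numpy as np

def render_virtual_view(points, K, R, t, width, height):
    """Project a point cloud into a depth image seen from a virtual pinhole camera.

    points: (N, 3) array of 3D points in the map frame.
    K: 3x3 intrinsic matrix of the virtual camera (chosen freely; an assumption here).
    R, t: rotation and translation taking map coordinates into camera coordinates.
    """
    cam = (R @ points.T + t.reshape(3, 1)).T          # map frame -> camera frame
    cam = cam[cam[:, 2] > 0]                          # keep only points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                       # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    depth = np.full((height, width), np.inf)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, z in zip(u[inside], v[inside], cam[inside, 2]):
        depth[vi, ui] = min(depth[vi, ui], z)         # z-buffer: keep the nearest point per pixel
    return depth
```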

Meanwhile, to cope with the change of ambient lighting between mapping time and localization time, this thesis uses deep learning to learn the feature descriptors, which is more robust than traditional feature matching methods. Finally, the position of the query image in the scene is solved from the epipolar constraint.
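A minimal sketch of this final pose-solving step with OpenCV is given below: given 2D-2D correspondences between the rectified panoramic query view and a virtual view rendered from the point cloud, the essential matrix is estimated and decomposed into a relative pose. The variable names are hypothetical, and the use of findEssentialMat/recoverPose is an assumption about how the epipolar constraint could be applied, not the thesis's exact solver.

```python
import cv2
import numpy as np

def relative_pose_from_matches(pts_query, pts_virtual, K):
    """Recover the relative rotation and (unit-scale) translation between the
    query view and the virtual view from matched 2D keypoints.

    pts_query, pts_virtual: (N, 2) float arrays of matched pixel coordinates
    (in practice obtained from SOSNet descriptor matching, as in Chapter 4).
    K: 3x3 intrinsic matrix shared by both rectified views (an assumption here).
    """
    # Estimate the essential matrix with RANSAC to reject outlier matches.
    E, inliers = cv2.findEssentialMat(
        pts_query, pts_virtual, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Decompose E and keep the (R, t) whose triangulated points lie in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, pts_query, pts_virtual, K, mask=inliers)
    return R, t  # t is recovered only up to scale; metric scale comes from the point cloud map

# Hypothetical usage:
# R, t = relative_pose_from_matches(query_kpts, virtual_kpts, K_virtual)
```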

1.4 Structure of this thesis

This thesis consists of five chapters, organized as follows:
1. Chapter 1 introduces the background and content of 3D reconstruction and localization technology, analyzes the significance and difficulties of research in this field, and briefly reviews recent SLAM techniques and point cloud fusion algorithms.
2. Chapter 2 introduces the local 3D reconstruction methods used in this thesis, presents local reconstruction experiments on the auditorium scene using SfM, LOAM and an image-laser scanner, and compares the results.
3. Chapter 3 introduces the coarse-to-fine point cloud registration pipeline proposed in this thesis, which fuses local point clouds into a global point cloud map of the scene; experiments and analysis are carried out on data collected both indoors and outdoors.
4. Chapter 4 introduces a visual localization method that determines the absolute position using a prior point cloud map; its feasibility is verified by experiments.
5. Chapter 5 summarizes the results of this study and proposes possible directions for future improvement.


Copyright notice
This article was written by [biyezuopinvip]. Please include a link to the original when reposting: https://yzsam.com/2022/188/202207071652141537.html