Open3D learning notes 3 [sampling and voxelization]
2022-07-02 07:54:00 【Silent clouds】
Open3D learning notes: sampling and voxelization
I. A few small tips first
1. Reading a ply file as a mesh
import open3d as o3d
mesh = o3d.io.read_triangle_mesh("mode/Fantasy Dragon.ply")
mesh.compute_vertex_normals()

2. The rotation matrix
A 3D model is transformed with two parameters, R (rotation) and T (translation). The viewer's coordinate system is set up as: up is the z axis, right is the y axis, and the x axis points toward the front of the screen. Coordinates are transformed with the transform method; its argument is a 4×4 matrix with the block layout [[R, T], [0, 1]].
First, read a ply file normally:
import open3d as o3d
pcd = o3d.io.read_point_cloud("mode/Fantasy Dragon.ply")
o3d.visualization.draw_geometries([pcd], width=1280, height=720)
The result is shown in the figure:
Now use the transform function to lay the model horizontally with its head facing the screen. That amounts to mapping the original z axis to y, y to x, and x to z, so the code is:
import open3d as o3d
mesh = o3d.io.read_triangle_mesh("mode/Fantasy Dragon.ply")
mesh.compute_vertex_normals()
mesh.transform([[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]])
o3d.visualization.draw_geometries([mesh], width=1280, height=720)
The result:
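For a general pose change you can also build the 4×4 matrix yourself from a rotation R and a translation T, matching the [[R, T], [0, 1]] block layout described above. A minimal sketch (the rotation angles and the translation values here are made-up illustration values, not from the original example):

import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mode/Fantasy Dragon.ply")
mesh.compute_vertex_normals()

R = o3d.geometry.get_rotation_matrix_from_xyz((np.pi / 2, 0, 0))  # 3x3 rotation from Euler angles (radians)
T = np.array([0.0, 0.0, 0.1])                                     # example translation vector

M = np.eye(4)          # assemble the 4x4 transform [[R, T], [0, 1]]
M[:3, :3] = R
M[:3, 3] = T
mesh.transform(M)
o3d.visualization.draw_geometries([mesh], width=1280, height=720)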
II. Ways to convert a mesh to a point cloud
1. Convert the vertices to a NumPy array and rebuild a point cloud
import open3d as o3d
import numpy as np
mesh = o3d.io.read_triangle_mesh("mode/Fantasy Dragon.ply")
mesh.compute_vertex_normals()
v_mesh = np.asarray(mesh.vertices)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(v_mesh)
o3d.visualization.draw_geometries([pcd], width=1280, height=720)

This works fine for ply files, but for a triangle mesh such as an stl file the result is somewhat unsatisfactory, since only the mesh vertices are used as points and their number and distribution depend entirely on the triangulation.
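A quick way to see this (a sketch, using the same ganyu.STL example file as in the next snippet): the vertex-only conversion simply reuses the mesh vertices, so the point count and spacing follow the triangulation rather than the surface.

import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mode/ganyu.STL")
pcd_from_vertices = o3d.geometry.PointCloud()
pcd_from_vertices.points = o3d.utility.Vector3dVector(np.asarray(mesh.vertices))
print("points taken from the mesh vertices:", len(pcd_from_vertices.points))  # count and density follow the triangulation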
2. Sampling
Open3D provides sampling methods; you can choose the number of sample points and so simplify the model.
import open3d as o3d
mesh = o3d.io.read_triangle_mesh("mode/ganyu.STL")
mesh.compute_vertex_normals()
pcd = mesh.sample_points_uniformly(number_of_points=10000)  # uniformly sample the mesh surface into a point cloud
o3d.visualization.draw_geometries([pcd], width=1280, height=720)

III. Voxelization
Voxelization simplifies the model and produces a uniform grid of voxels.
Converting a triangle mesh to a voxel grid:
import open3d as o3d
import numpy as np
print("Load a triangle mesh and convert it to a voxel grid")
mesh = o3d.io.read_triangle_mesh("mode/ganyu.STL")
mesh.compute_vertex_normals()
mesh.scale(1 / np.max(mesh.get_max_bound() - mesh.get_min_bound()), center=mesh.get_center())  # normalize the mesh to unit extent so voxel_size=0.05 is meaningful
voxel_grid = o3d.geometry.VoxelGrid.create_from_triangle_mesh(mesh, voxel_size=0.05)
o3d.visualization.draw_geometries([voxel_grid], width=1280, height=720)

Generating a voxel grid from a point cloud:
import open3d as o3d
import numpy as np
print("Load a ply point cloud, print it, and render it")
pcd = o3d.io.read_point_cloud("mode/Fantasy Dragon.ply")
pcd.scale(1 / np.max(pcd.get_max_bound() - pcd.get_min_bound()), center=pcd.get_center())  # normalize the point cloud to unit extent
pcd.colors = o3d.utility.Vector3dVector(np.random.uniform(0, 1, size=(len(pcd.points), 3)))  # one random color per point
print('voxelization')
voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size=0.05)
o3d.visualization.draw_geometries([voxel_grid], width=1280, height=720)
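Once a VoxelGrid has been created it can also be inspected programmatically. A small sketch, assuming the voxel_grid and pcd from the snippet above; get_voxels() and check_if_included() are part of Open3D's VoxelGrid API:

voxels = voxel_grid.get_voxels()                         # list of Voxel objects
print("number of occupied voxels:", len(voxels))
print("grid index of the first voxel:", voxels[0].grid_index)

# test whether the first few points of the cloud fall inside an occupied voxel
queries = o3d.utility.Vector3dVector(np.asarray(pcd.points)[:5])
print(voxel_grid.check_if_included(queries))             # list of booleans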

IV. Vertex normal estimation
import open3d as o3d
import numpy as np

pcd = o3d.io.read_point_cloud("mode/Fantasy Dragon.ply")
voxel_down_pcd = pcd.voxel_down_sample(voxel_size=0.05)  # downsample first to keep the normal estimation fast
voxel_down_pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
o3d.visualization.draw_geometries([voxel_down_pcd], point_show_normal=True, width=1280, height=720)
estimate_normals computes a normal for every point. The function finds the neighboring points of each point and estimates the principal axes of that neighborhood using covariance analysis.
It takes an instance of the KDTreeSearchParamHybrid class as its search parameter. The two key arguments are the search radius and the maximum number of nearest neighbors: radius=0.1, max_nn=30 means searching within a 10 cm radius and considering at most 30 neighbors, to keep the computation time down.
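KDTreeSearchParamHybrid combines both criteria. If only one criterion is wanted, Open3D also provides pure-KNN and pure-radius search parameters; a short sketch, reusing the voxel_down_pcd from above:

# fixed number of neighbours, no radius limit
voxel_down_pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))

# all neighbours within a radius of 0.1, no count limit
voxel_down_pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamRadius(radius=0.1))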

Reading the normal vectors:
print("Print the first normal vector:")
print(voxel_down_pcd.normals[0])
# Output: [ 0.51941952  0.82116269 -0.23642166]
# Print the first ten normal vectors
print(np.asarray(voxel_down_pcd.normals)[:10, :])
V. Things to note
- A pcd file is purely a point cloud type, while a ply file can be read either as a point cloud or as a mesh: read with read_triangle_mesh it is handled as a triangle mesh, and read with read_point_cloud it can be processed directly as point cloud data.
- When a triangle mesh is sampled directly and normals are then estimated, the normal orientations can come out wrong, i.e. all normals point in the same direction.
- To avoid this, sample with the sample_points_poisson_disk() method instead (see the sketch after this list).
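A minimal sketch of Poisson-disk sampling, reusing the same example file as above; sample_points_poisson_disk spaces the samples more evenly than uniform sampling:

import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mode/ganyu.STL")
mesh.compute_vertex_normals()

# Poisson-disk sampling: init_factor controls how many uniformly sampled
# candidate points are generated before the elimination step
pcd = mesh.sample_points_poisson_disk(number_of_points=10000, init_factor=5)
o3d.visualization.draw_geometries([pcd], width=1280, height=720)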