Point cloud processing - KD tree
2022-07-28 17:50:00 【Zhang Feifei~】
A KD-tree is built on top of the binary tree idea, but splits in multiple dimensions: the split dimension rotates through the axes, x-y-z-x-y-z…
1. Constructing the kd-tree
(1) Node definition
As I understand it, each node represents a spatial region split along a single dimension. A node stores the split dimension (axis), the coordinate of the split plane (value), the indices of the points inside the node's region (point_indices), and the two sub-regions on either side of the split plane as the left and right child nodes.
```python
from result_set import KNNResultsets, RadiusNNResultSet
import math

import numpy as np


# define the node
class Node:
    def __init__(self, axis, value, left, right, point_indices):
        '''
        param:
            axis: the dimension along which this node splits
            value: the split coordinate (None for leaf nodes)
            left: left child node
            right: right child node
            point_indices: indices of the node's points in the point set
        '''
        self.axis = axis
        self.value = value
        self.left = left
        self.right = right
        self.point_indices = point_indices

    def is_leaf(self):
        # A leaf node no longer splits its region, so its value is None
        return self.value is None

    def __str__(self):
        output = ''
        output += 'axis %d, ' % self.axis
        if self.value is None:
            output += 'split value: leaf, '
        else:
            output += 'split value: %.2f, ' % self.value
        output += 'point_indices: '
        output += str(self.point_indices.tolist())
        return output
```
(2) kd-tree construction
kdtree_construction calls the recursive builder kdtree_recursive_builder to construct the kd-tree nodes:
```python
def kdtree_construction(data, leaf_size):
    '''
    input:
        data: NxD numpy array
        leaf_size: maximum number of points stored in a leaf node
    '''
    N, dim = data.shape[0], data.shape[1]
    # build the kd-tree
    root = None
    root = kdtree_recursive_builder(
        root, data, np.arange(N), axis=0, leaf_size=leaf_size)
    return root
```
The recursive builder kdtree_recursive_builder:
```python
def kdtree_recursive_builder(root, db, point_indices, axis, leaf_size):
    '''
    param:
        root: current (sub)tree root, or None
        db: NxD point set
        point_indices: indices of the points belonging to this node
        axis: current split dimension
        leaf_size: maximum number of points stored in a leaf node
    '''
    # If the root does not exist yet, create it first
    if root is None:
        root = Node(axis, None, None, None, point_indices)

    # Split further only if the node holds more points than leaf_size;
    # otherwise the node stays a leaf (recognizable by value == None)
    if len(point_indices) > leaf_size:
        # --- get the split position ---
        # Sort the point indices by their coordinate along the split axis,
        # then pick the two middle points of the current dimension.
        # The split value is their mean, so the splitting plane does not
        # pass through any data point.
        point_indices_sorted, _ = sort_key_by_value(
            point_indices, db[point_indices, axis])
        # Position of the left-of-middle point in the sorted order
        middle_left_idx = math.ceil(point_indices_sorted.shape[0] / 2) - 1
        # Index of that point in the original point set
        middle_left_point_idx = point_indices_sorted[middle_left_idx]
        # Its coordinate along the split axis
        middle_left_point_value = db[middle_left_point_idx, axis]
        # The same for the point just after the middle
        middle_right_idx = middle_left_idx + 1
        middle_right_point_idx = point_indices_sorted[middle_right_idx]
        middle_right_point_value = db[middle_right_point_idx, axis]
        # Mean of the two middle coordinates; avoids crossing a data point
        root.value = (middle_left_point_value + middle_right_point_value) * 0.5
        # === get the split position ===
        root.left = kdtree_recursive_builder(root.left,
                                             db,
                                             point_indices_sorted[0:middle_right_idx],
                                             axis_round_robin(axis, dim=db.shape[1]),
                                             leaf_size)
        root.right = kdtree_recursive_builder(root.right,
                                              db,
                                              point_indices_sorted[middle_right_idx:],
                                              axis_round_robin(axis, dim=db.shape[1]),
                                              leaf_size)
    return root
```
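To make the split-value computation concrete, here is a small self-contained sketch (with made-up coordinates, not data from the post) that reproduces the middle-point logic on a single axis:

```python
import math

import numpy as np

# Coordinates of 6 points along the current split axis (illustrative data)
values = np.array([2.0, 9.0, 4.0, 7.0, 8.0, 1.0])
point_indices = np.arange(len(values))

# Sort the indices by coordinate, as sort_key_by_value does
sorted_indices = point_indices[np.argsort(values)]

# The two middle points in the sorted order
middle_left_idx = math.ceil(len(sorted_indices) / 2) - 1  # position 2
middle_right_idx = middle_left_idx + 1                    # position 3

left_value = values[sorted_indices[middle_left_idx]]      # 4.0
right_value = values[sorted_indices[middle_right_idx]]    # 7.0

# Split plane halfway between the two middle coordinates
split_value = (left_value + right_value) * 0.5
print(split_value)  # 5.5
```

The sorted coordinates are 1, 2, 4, 7, 8, 9, so the plane sits at 5.5, between the third and fourth points rather than on either of them.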
The sort_key_by_value sorting helper:

```python
def sort_key_by_value(point_indices, value):
    # The point indices and their values must be 1-D arrays of equal length
    assert point_indices.shape == value.shape
    assert len(point_indices.shape) == 1
    sort_idx = np.argsort(value)  # value is a 1-D numpy array
    point_indices_sort = point_indices[sort_idx]
    value_sort = value[sort_idx]
    return point_indices_sort, value_sort
```
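np.argsort returns the indices that would sort the array, which is what lets the helper reorder the point indices rather than the points themselves. A quick illustration with made-up values:

```python
import numpy as np

point_indices = np.array([10, 11, 12, 13])
value = np.array([0.7, 0.1, 0.9, 0.4])

sort_idx = np.argsort(value)    # [1, 3, 0, 2]
print(point_indices[sort_idx])  # [11 13 10 12]
print(value[sort_idx])          # [0.1 0.4 0.7 0.9]
```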
The axis_round_robin function rotates the split dimension: 0 -> 1, 1 -> 2, 2 -> 0.

```python
def axis_round_robin(axis, dim):
    # Cycle through the dimensions: 0 -> 1 -> ... -> dim-1 -> 0
    if axis == dim - 1:
        return 0
    else:
        return axis + 1
```
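Starting from axis 0 in three dimensions, the rotation visits 0 -> 1 -> 2 -> 0 -> … The snippet below redefines the function so it runs standalone:

```python
def axis_round_robin(axis, dim):
    # Cycle through the dimensions: 0 -> 1 -> ... -> dim-1 -> 0
    return 0 if axis == dim - 1 else axis + 1

axis = 0
sequence = []
for _ in range(6):
    sequence.append(axis)
    axis = axis_round_robin(axis, dim=3)
print(sequence)  # [0, 1, 2, 0, 1, 2]
```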
2. KNN search in the kd-tree
This closely resembles search in a binary tree. First check whether the current node is a leaf; if so, compute the distance from every point in the leaf to the query point and insert each point into the result set.
When the node is not a leaf: compare the query point's coordinate with the node's split value and descend into the left or right subtree accordingly.
The key to deciding whether the other subtree must also be visited is to compare the current worst distance in the result set with the difference between the query point's coordinate and the node's split coordinate.
```python
def kdtree_knn_search(root: Node, data: np.ndarray, result_set: KNNResultsets, query_point: np.ndarray):
    if root is None:
        return False
    # When the current node is a leaf we can no longer distinguish left and
    # right subspaces, so brute-force the distance from every leaf point to
    # the query point and insert each into the result set
    if root.is_leaf():
        # root.point_indices holds the indices of the leaf's points in the
        # original data set
        leaf_points = data[root.point_indices, :]
        # print("leaf index:", root.point_indices)
        diff = np.linalg.norm(np.expand_dims(
            query_point, 0) - leaf_points, axis=1)
        for i in range(diff.shape[0]):
            result_set.add_point(diff[i], root.point_indices[i])
        return False
    # Query coordinate not greater than the split value: search the left side first
    if query_point[root.axis] <= root.value:
        kdtree_knn_search(root.left, data, result_set, query_point)
        # If the splitting plane is closer than the current worst distance,
        # the right subtree may still contain closer points
        if math.fabs(query_point[root.axis] - root.value) < result_set.worst_Dist():
            kdtree_knn_search(root.right, data, result_set, query_point)
    else:
        # Otherwise search the right side first
        kdtree_knn_search(root.right, data, result_set, query_point)
        if math.fabs(query_point[root.axis] - root.value) < result_set.worst_Dist():
            kdtree_knn_search(root.left, data, result_set, query_point)
    return False
```
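The pruning condition can be checked in isolation: the far subtree only needs to be visited when the splitting plane is closer to the query point than the current worst distance. A minimal illustration with made-up numbers:

```python
import math

split_value = 5.0  # node's split coordinate on its axis
query_coord = 3.0  # query point's coordinate on the same axis
worst_dist = 1.5   # current worst distance in the result set

# Distance from the query point to the splitting plane along the axis
plane_dist = math.fabs(query_coord - split_value)  # 2.0

# 2.0 >= 1.5: no point on the far side can beat the current worst distance,
# so the other subtree is skipped
must_visit_other_side = plane_dist < worst_dist
print(must_visit_other_side)  # False
```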
3. Radius search in the kd-tree
```python
def kdtree_radius_search(root: Node, data: np.ndarray, result_set: RadiusNNResultSet, query_point: np.ndarray):
    if root is None:
        return False
    # For a leaf node, brute-force the distances as in the KNN search
    if root.is_leaf():
        leaf_points = data[root.point_indices, :]
        # print("leaf index:", root.point_indices)
        diff = np.linalg.norm(np.expand_dims(
            query_point, 0) - leaf_points, axis=1)
        for i in range(diff.shape[0]):
            result_set.add_point(diff[i], root.point_indices[i])
        return False
    if query_point[root.axis] <= root.value:
        kdtree_radius_search(root.left, data, result_set, query_point)
        # For a radius search the worst distance is simply the radius
        if math.fabs(query_point[root.axis] - root.value) < result_set.worst_Dist():
            kdtree_radius_search(root.right, data, result_set, query_point)
    else:
        kdtree_radius_search(root.right, data, result_set, query_point)
        if math.fabs(query_point[root.axis] - root.value) < result_set.worst_Dist():
            kdtree_radius_search(root.left, data, result_set, query_point)
    return False
```
The main function:
```python
def main():
    # generate random data
    N = 64
    dim = 3
    leaf_size = 4
    k = 8
    print('running')
    db_np = np.random.rand(N, dim)
    root = kdtree_construction(data=db_np, leaf_size=leaf_size)
    query = np.asarray([0, 0, 0])

    # result_set = KNNResultsets(capacity=k)
    # kdtree_knn_search(root, data=db_np,
    #                   result_set=result_set, query_point=query)
    # print(result_set)

    result_set = RadiusNNResultSet(radius=1)
    kdtree_radius_search(
        root, data=db_np, result_set=result_set, query_point=query)
    print(result_set)


if __name__ == '__main__':
    main()
```
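The result_set module imported at the top is not shown in this post. For completeness, here is a minimal sketch of what a RadiusNNResultSet compatible with the calls above (add_point, worst_Dist) might look like; the internal field names are my own assumption, not the original implementation:

```python
class RadiusNNResultSet:
    """Minimal sketch (assumed interface): collects all points within a fixed radius."""

    def __init__(self, radius):
        self.radius = radius
        self.dist_index_list = []  # (distance, point index) pairs

    def worst_Dist(self):
        # For a radius search, the worst acceptable distance is the radius itself
        return self.radius

    def add_point(self, dist, index):
        # Keep only points inside the radius
        if dist <= self.radius:
            self.dist_index_list.append((dist, index))

    def __str__(self):
        return '\n'.join('index %d, dist %.4f' % (i, d)
                         for d, i in self.dist_index_list)


rs = RadiusNNResultSet(radius=1.0)
rs.add_point(0.5, 3)
rs.add_point(2.0, 7)  # outside the radius, ignored
print(len(rs.dist_index_list))  # 1
```

A KNN result set would instead keep a bounded container of the k best candidates and shrink worst_Dist as better points arrive, typically with a heap or a sorted insert.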
A Python note on the distance computation used in both searches:

```python
diff = np.linalg.norm(np.expand_dims(
    query_point, 0) - leaf_points, axis=1)
# Computes the L2 norm along each row, i.e. the Euclidean distance
# between every leaf point and the query point
```
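For example, with three leaf points (made-up coordinates) and a query at the origin:

```python
import numpy as np

leaf_points = np.array([[3.0, 4.0, 0.0],
                        [0.0, 0.0, 1.0],
                        [1.0, 2.0, 2.0]])
query_point = np.array([0.0, 0.0, 0.0])

# Broadcasting: (1, 3) - (3, 3) -> (3, 3); the norm along axis=1 yields
# one Euclidean distance per leaf point
diff = np.linalg.norm(np.expand_dims(query_point, 0) - leaf_points, axis=1)
print(diff)  # [5. 1. 3.]
```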