OpenMV (IV) -- Implementing Feature Detection on STM32
2022-07-28 17:57:00 【A bone loving cat】
Feature Detection on STM32
Series
OpenMV (I) -- Basic introduction and hardware architecture
OpenMV (II) -- IDE installation and firmware download
OpenMV (III) -- Getting camera images in real time
Preface
This column develops machine-vision applications on the OpenMV-H7 base board, which uses an STM32H743 MCU, together with an OV7725 rolling-shutter camera. Feature detection is the foundation of machine vision; the topics covered include edge detection, recognition of various shapes, feature-point recognition, and more. All feature detection operates on the images captured by the camera, and before detecting features we need to know how to draw markers on a captured image.
OpenMV wraps its image-processing routines into modules that are ready to use:
image.draw_line(x0, y0, x1, y1, color, thickness=1)
Draws a line segment.
- (x0, y0): start coordinates
- (x1, y1): end coordinates
- color: color; (255, 0, 0) means red
- thickness: line thickness, default 1
image.draw_rectangle(x, y, w, h, color, thickness=1, fill=False)
Draws a rectangle.
- (x, y): top-left coordinates
- w: width
- h: height
- color: color; (255, 0, 0) means red
- thickness: line thickness, default 1
- fill: whether to fill the rectangle
image.draw_circle(x, y, radius, color, thickness=1, fill=False)
Draws a circle.
- (x, y): center of the circle
- radius: radius
- color: color; (255, 0, 0) means red
- thickness: line thickness, default 1
- fill: whether to fill the circle
image.draw_cross(x, y, color, size=5, thickness=1)
Draws a cross.
- (x, y): coordinates of the cross center
- color: color; (255, 0, 0) means red
- size: size of the cross arms, default 5
- thickness: line thickness, default 1
image.draw_string(x, y, text, color, scale=1, mono_space=True, ...)
Draws text.
- (x, y): top-left coordinates of the text
- text: the string to draw
- color: color
- scale: font scale
- mono_space: force fixed character spacing, default True
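To make the coordinate conventions above concrete, here is a minimal desktop Python sketch (not the OpenMV firmware code; the helper names mirror the API but are written from scratch for illustration) that draws a cross and a rectangle outline into a plain 2-D list used as a grayscale canvas:

```python
def draw_cross(img, x, y, color=255, size=5):
    """Draw a '+' centered at (x, y), clipped to the image bounds."""
    h, w = len(img), len(img[0])
    for d in range(-size, size + 1):
        if 0 <= x + d < w:
            img[y][x + d] = color          # horizontal arm
        if 0 <= y + d < h:
            img[y + d][x] = color          # vertical arm

def draw_rectangle(img, x, y, rw, rh, color=255):
    """Draw the outline of a rectangle whose top-left corner is (x, y)."""
    for i in range(x, x + rw):
        img[y][i] = color                  # top edge
        img[y + rh - 1][i] = color         # bottom edge
    for j in range(y, y + rh):
        img[j][x] = color                  # left edge
        img[j][x + rw - 1] = color         # right edge

canvas = [[0] * 16 for _ in range(12)]
draw_cross(canvas, 8, 6, size=3)
draw_rectangle(canvas, 2, 2, 12, 8)
print(sum(v == 255 for row in canvas for v in row))  # number of marked pixels
```

On the real board these operations are methods on the img object returned by sensor.snapshot(), so no helper definitions are needed there.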
1. Edge Detection
Edge detection is essentially contour detection. This section uses the official OpenMV example edges.py to perform real-time contour extraction.
1.1 Constructors
The OpenMV library is highly integrated: a single function call is enough to detect the edges in an image.
- image.find_edges(edge_type, threshold)
Contour detection: converts the image to black and white, keeping the edge pixels white. Only grayscale images are supported.
- edge_type: processing mode
- image.EDGE_SIMPLE: simple thresholded high-pass filtering algorithm
- image.EDGE_CANNY: Canny edge-detection algorithm
- threshold: a two-element tuple of (low, high) thresholds, default (100, 200)
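As a rough desktop illustration of the EDGE_SIMPLE idea (a sketch of my own, not the firmware algorithm; the function name and the single scalar threshold are simplifications), the code below high-pass filters a tiny grayscale image with central differences and keeps pixels whose gradient magnitude exceeds a threshold:

```python
def simple_edges(img, threshold=50):
    """Return a binary edge map: 255 where the local gradient is strong."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]    # horizontal difference
            gy = img[y + 1][x] - img[y - 1][x]    # vertical difference
            if abs(gx) + abs(gy) > threshold:     # cheap gradient magnitude
                out[y][x] = 255
    return out

# A 6x6 image: dark left half, bright right half -> one vertical edge.
img = [[0, 0, 0, 200, 200, 200] for _ in range(6)]
edges = simple_edges(img, threshold=50)
print([row[2:4] for row in edges[1:-1]])  # every interior row marks columns 2-3
```

The firmware version uses separate low/high thresholds (the tuple above), but the principle is the same: strong intensity changes survive, flat regions are zeroed.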
1.2 Source code analysis
""" Real time feature detection routines : Use canny Feature detection algorithm """
# Import the corresponding library
import sensor, image, time
# Initialize the camera
sensor.reset()
# Set the format of the collected photos : Gray image
sensor.set_pixformat(sensor.GRAYSCALE)
# Set the size of the collected photos : 320 * 240
sensor.set_framesize(sensor.QVGA)
# Wait for a while 2s, Wait until the camera is set
sensor.skip_frames(time = 2000)
# Set the upper limit of camera gain
sensor.set_gainceiling(8)
# Create a clock to calculate the number of frames captured by the camera per second FPS
clock = time.clock()
# Real time display of photos taken by the camera
while(True):
# to update FPS The clock
clock.tick()
# Take a picture and return img
img = sensor.snapshot()
# Use canny edge detection
img.find_edges(image.EDGE_CANNY, threshold = (50, 80))
# Serial printing FPS Parameters
print(clock.fps())
Connect the board to the OpenMV IDE, create a new file, paste in the code above, and click the green run button in the lower-left corner. The window on the right of the IDE then shows the extracted edge features in real time:
Line-segment recognition works on the same principle as edge recognition; it just calls a different function. For the specific steps, refer to the official example source code.
2. Circle Recognition
2.1 Constructors
The goal of this section is to recognize the circles in the images captured by the camera and draw them. The constructor is as follows:
- image.find_circles(roi, x_stride=2, y_stride=1, threshold=2000, x_margin=10, y_margin=10, r_margin=10, r_min=2, r_max, r_step=2)
Circle-finding function. Returns a list of image.circle objects; each object has four values: the center coordinates x and y, the radius r, and the magnitude. The greater the magnitude, the more reliable the detection. Most parameters can be left at their defaults.
- roi: region of interest (x, y, w, h); defaults to the whole image if not specified
- threshold: returns only circles whose magnitude is greater than or equal to threshold, which adjusts the detection confidence
- x_stride, y_stride: the number of x/y pixels to skip during the Hough transform. The Hough transform is a feature-detection method used to identify shapes such as lines and circles: given the type of shape to find, points in the image vote in a parameter space, and the shape is determined by the local maxima in the accumulator.
- x_margin, y_margin, r_margin: control the merging of nearby detected circles
- r_min, r_max: the radius range of circles to detect
- r_step: the step size used when scanning the radius
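To make the voting idea concrete, here is a toy desktop sketch (my own simplification, not the firmware code): assuming the radius is already known, every edge point votes for all candidate centers that would place it on a circle of that radius, and the accumulator cell with the most votes is taken as the center:

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius):
    """Each edge point votes for every center that would put it on a
    circle of the given radius; the most-voted cell wins."""
    votes = Counter()
    for (x, y) in edge_points:
        for deg in range(0, 360, 5):                  # sweep candidate angles
            a = math.radians(deg)
            cx = round(x - radius * math.cos(a))      # candidate center
            cy = round(y - radius * math.sin(a))
            votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]

# Synthetic edge points lying on a circle of radius 5 centered at (10, 10).
points = {(10 + round(5 * math.cos(math.radians(d))),
           10 + round(5 * math.sin(math.radians(d)))) for d in range(0, 360, 10)}
print(hough_circle_center(points, radius=5))
```

find_circles additionally sweeps the radius from r_min to r_max in r_step increments and merges nearby maxima, which is what the margin parameters control.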
2.2 Source code analysis
""" Real time circular feature detection routine : Use Hough transform algorithm """
# Import the corresponding library
import sensor, image, time
# Initialize the camera
sensor.reset()
# Set the format of the collected photos : colour RGB, Using gray images will speed up detection
sensor.set_pixformat(sensor.RGB565)
# Set the size of the collected photos : 320 * 240
sensor.set_framesize(sensor.QQVGA)
# Wait for a while 2s, Wait until the camera is set
sensor.skip_frames(time = 2000)
# Create a clock to calculate the number of frames captured by the camera per second FPS
clock = time.clock()
# Real time display of photos taken by the camera
while(True):
# to update FPS The clock
clock.tick()
# Take a picture and return img, lens_corr() The purpose is to remove distortion ,1.8 Is the default value
img = sensor.snapshot().lens_corr(1.8)
# Detect circular , And draw . parms When detecting the object parameters returned by the circle , Including the center of the circle (x,y), radius ( r), Magnitude (magnitude)
for params in img.find_circles(threshold = 2000,
x_margin = 10, y_margin = 10, r_margin = 10,
r_min = 2, r_max = 100,
r_step = 2):
# Draw the circle
img.draw_circle(params.x(), params.y(), params.r(), color = (255, 0, 0))
# Print out the parameter information of the circle
print(params)
# Serial printing FPS Parameters
print(clock.fps())
Connect the board to the OpenMV IDE, create a new file, paste in the code above, and click the green run button in the lower-left corner. The window on the right of the IDE then shows the detected circles in real time:
Rectangle recognition works on the same principle as circle recognition; it just calls a different function. For the specific steps, refer to the official example source code.
3. Feature Point Recognition
This section implements feature-point recognition: first, the feature points of a captured pattern or object are learned; then, whenever the camera captures a picture containing the same feature points as the learned target, it is marked.
3.1 Constructors
image.find_keypoints(roi, threshold=20, normalized=False, scale_factor=1.5, max_keypoints=100, corner_detector=image.CORNER_AGAST)
Feature-point extraction function. Returns a keypoint descriptor object, or None if no keypoints were found.
- roi: region of interest (x, y, w, h); defaults to the whole image if not specified
- threshold: a number (0-255) controlling how many keypoints are extracted. For the default AGAST corner detector this value should be around 20; for the FAST corner detector it should be around 60-80. The lower the threshold, the more corners are extracted.
- normalized: boolean. If True, keypoint extraction at multiple resolutions is turned off. Set it to True if you do not care about handling scale changes and want the algorithm to run faster.
- scale_factor: a floating-point number greater than 1.0. The larger the number, the faster the algorithm runs, but the worse the image matching. Ideal values are between 1.35 and 1.5.
- max_keypoints: the maximum number of keypoints the object can hold. Reduce this value if a large object causes memory problems.
- corner_detector: the corner-detection method, either the AGAST or the FAST corner-detection algorithm
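The FAST/AGAST family decides cornerness with a simple brightness test on a ring of pixels around the candidate. Below is a much-simplified desktop sketch of that idea (my own toy version, not the firmware detector: it uses an 8-pixel ring and a 5-pixel contiguity requirement instead of FAST's 16-pixel ring):

```python
# Offsets of an 8-pixel ring around the candidate (real FAST uses a 16-pixel ring).
RING = [(0, -2), (1, -1), (2, 0), (1, 1), (0, 2), (-1, 1), (-2, 0), (-1, -1)]

def is_corner(img, x, y, threshold=20, contiguous=5):
    """A pixel is a corner if 'contiguous' consecutive ring pixels are all
    brighter, or all darker, than the center by more than 'threshold'."""
    c = img[y][x]
    # Label each ring pixel: +1 brighter, -1 darker, 0 similar to the center.
    labels = [1 if img[y + dy][x + dx] > c + threshold
              else -1 if img[y + dy][x + dx] < c - threshold
              else 0
              for dx, dy in RING]
    # Find the longest circular run of identical nonzero labels.
    run = best = prev = 0
    for v in labels + labels:          # doubled list handles wrap-around
        run = run + 1 if (v != 0 and v == prev) else (1 if v != 0 else 0)
        best = max(best, run)
        prev = v
    return best >= contiguous

# A bright square on a dark background: its corner passes, flat areas fail.
img = [[200 if x >= 4 and y >= 4 else 10 for x in range(9)] for y in range(9)]
print(is_corner(img, 4, 4), is_corner(img, 2, 2))  # → True False
```

This brightness test is why the threshold parameter of find_keypoints directly controls how many corners survive: a lower threshold lets more ring pixels count as "brighter/darker".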
image.match_descriptor(descriptor0, descriptor1, threshold=70, filter_outliers=False)
Feature-point comparison function, used to compare the similarity of two sets of feature points. Returns a kptmatch object.
- descriptor0, descriptor1: the two feature-point descriptors to compare
- threshold: controls the required degree of matching
- The returned kptmatch object supports the following methods:
- kptmatch.rect(): returns the bounding box of the matched feature points as a rectangle tuple (x, y, w, h)
- kptmatch.cx(): returns the x coordinate of the feature points' center; also available via index [0]
- kptmatch.cy(): returns the y coordinate of the feature points' center; also available via index [1]
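The descriptors produced by find_keypoints are binary (ORB-style), so similarity between two keypoints is measured by Hamming distance, i.e. the number of differing bits. The following desktop sketch (my own illustration with made-up 16-bit descriptors, not the OpenMV implementation) shows the matching idea:

```python
def hamming(a, b):
    """Number of differing bits between two integer descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(desc0, desc1, max_dist=4):
    """Greedy nearest-neighbor matching: for each descriptor in desc0, find
    the closest one in desc1 and keep the pair if it is close enough."""
    matches = []
    for i, d0 in enumerate(desc0):
        j, dist = min(((j, hamming(d0, d1)) for j, d1 in enumerate(desc1)),
                      key=lambda t: t[1])
        if dist <= max_dist:
            matches.append((i, j))
    return matches

# Two descriptor sets; some entries of set1 are set0 entries with bits flipped.
set0 = [0b1010101010101010, 0b1111000011110000, 0b0000111100001111]
set1 = [0b1010101010101110,  # 1 bit away from set0[0]
        0b0101010101010101,  # unrelated to any descriptor in set0
        0b1111000011111111]  # 4 bits away from set0[1]
print(match_descriptors(set0, set1))  # → [(0, 0), (1, 2)]
```

On the board, the analogous confidence measure is match.count() on the kptmatch object returned by image.match_descriptor(): the more keypoint pairs agree, the more trustworthy the match.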
3.2 Source code analysis
The feature-point recognition flow is as follows:
Import the modules --> initialize the camera --> capture an image and learn its feature points K1 --> capture the current feature points K2 in real time --> compare K1 with K2 --> if they match, mark the position of the feature points
""" Real time feature point detection routine """
# Import the corresponding library
import sensor, image, time
# Initialize the camera
sensor.reset()
# Set the camera contrast
sensor.set_contrast(3)
# Set the upper limit of camera gain
sensor.set_gainceiling(16)
# Set the size of the collected photos
sensor.set_framesize(sensor.VGA)
# Set camera resolution
sensor.set_windowing((320, 240))
# Set the format of the collected photos : Gray image
sensor.set_pixformat(sensor.GRAYSCALE)
# Wait for a while 2s, Wait until the camera is set
sensor.skip_frames(time = 2000)
# Turn off the automatic gain of the camera and set the fixed gain value to 100
sensor.set_auto_gain(False, value = 100)
""" Define the function of drawing characteristic points """
def draw_keypoints(img, kpts):
if kpts:
print(kpts)
img.draw_keypoints(kpts)
img = sensor.snapshot()
time.sleep(1000)
# Initialize feature points
kpts1 = None
kpts2 = None
""" We can also import feature points from the file kpts1 = image.load_descriptor("/desc.orb") img = sensor.snapshot() draw_keypoints(img, kpts1) """
while(True):
# Take a picture and return img
img = sensor.snapshot()
# If the characteristic point K1 non-existent , Just learn the feature points of the pictures taken by the camera
if(kpts1 == None):
kpts1 = img.find_keypoints(max_keypoints=150, threshold=10, scale_factor=1.2)
# Draw feature points
draw_keypoints(img,kpts1)
# If the characteristic point K1 There is , Just detect feature points K2, Then compare whether the two are consistent
else:
kpts2 = img.find_keypoints(max_keypoints=150, threshold=10, normalized=True)
if (kpts2):
# If the characteristic point K2 There is , Match two features
match = image.match_descriptor(kpts1, kpts2, threshold=85)
# Greater than 10 It can be judged that the two feature points are consistent , This value can be adjusted
if(match.count()>10):
# Circle the consistent features with crosses and rectangles
img.draw_rectangle(match.rect())
img.draw_cross(match.cx(), match.cy(), size = 10)
Connect the board to the OpenMV IDE, create a new file, paste in the code above, and click the green run button in the lower-left corner. The camera first learns the feature points and marks them:
Then, during subsequent detection, the matching feature-point locations are marked in the same way: