2022-07-06 07:32:00 【Cloudy_to_sunny】
OpenCV Learning Notes 9 -- Background Modeling + Optical Flow Estimation
Background modeling
Frame difference method
Because targets in the scene are moving, a target's image occupies different positions in different frames. This class of algorithms performs a difference operation on two temporally consecutive frames: the pixels at corresponding positions are subtracted and the absolute value of the grayscale difference is examined. When that absolute value exceeds a threshold, the pixel is judged to belong to a moving target, which realizes detection of the target.

The frame difference method is very simple, but it introduces noise and leaves holes inside moving objects.
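A minimal sketch of frame differencing with cv2.absdiff and a fixed threshold (the video path 'test.avi' and the threshold value 25 are assumptions for illustration):

import cv2
# Read the first frame as the reference
cap = cv2.VideoCapture('test.avi')
ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Absolute difference between consecutive frames
    diff = cv2.absdiff(gray, prev_gray)
    # Pixels whose grayscale difference exceeds the threshold are marked as moving
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow('motion', motion_mask)
    prev_gray = gray
    if cv2.waitKey(30) & 0xff == 27:
        break
cap.release()
cv2.destroyAllWindows()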
Gaussian mixture model
Before foreground detection, the background is trained first: a Gaussian mixture model (GMM) is fitted to each background pixel in the image, and the number of Gaussian components per pixel can adapt. In the test phase, each new pixel value is matched against its GMM: if the value matches one of the Gaussians it is considered background, otherwise it is considered foreground. Because the GMM keeps being updated and learning throughout the process, it has some robustness to dynamic backgrounds. Foreground detection on a dynamic background with swaying branches, for example, gives good results.
In a video, the variation of a pixel's value over time should follow a Gaussian distribution.

The actual distribution of the background is a mixture of several Gaussian distributions, and each Gaussian component also carries its own weight.
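In symbols (standard GMM notation, not from the original post), the background model of one pixel is the weighted sum of K Gaussian components:

p(x_t) = \sum_{k=1}^{K} w_k \, \mathcal{N}(x_t \mid \mu_k, \sigma_k^2), \qquad \sum_{k=1}^{K} w_k = 1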

Gaussian mixture model learning method
1. First, initialize the parameter matrices of each Gaussian model.
2. Take T frames of the video to train the Gaussian mixture model. The first pixel value obtained is used as the first Gaussian distribution.
3. For each later pixel value, compare it with the existing Gaussian means. If the difference between the pixel value and a model's mean is within 3 times that model's standard deviation, the value belongs to that distribution, and the distribution's parameters are updated.
4. If the next pixel value does not match any current Gaussian distribution, use it to create a new Gaussian distribution.
Gaussian mixture model test method
In the test phase, the value of each new pixel is compared with every mean in the Gaussian mixture model. If the difference is within 2 times the standard deviation, the pixel is considered background, otherwise foreground. Foreground pixels are assigned 255 and background pixels 0, which produces a binary foreground mask.
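Putting the training and test rules above into code, here is a toy per-pixel sketch. The class name, learning rate, component limit and update rule are simplified illustrative assumptions, not the exact scheme used by OpenCV's MOG2 (which handles all of this internally in the code that follows):

import numpy as np

class PixelGMM:
    def __init__(self, max_k=4, alpha=0.05, init_var=100.0):
        self.max_k = max_k        # maximum number of Gaussian components
        self.alpha = alpha        # learning rate
        self.init_var = init_var  # variance given to a newly created component
        self.means, self.vars, self.weights = [], [], []

    def update(self, x):
        # Training step 3: a value within 3 standard deviations of a component
        # belongs to it, so update that component's parameters.
        for k in range(len(self.means)):
            if abs(x - self.means[k]) <= 3 * np.sqrt(self.vars[k]):
                self.means[k] += self.alpha * (x - self.means[k])
                self.vars[k] += self.alpha * ((x - self.means[k]) ** 2 - self.vars[k])
                self.weights[k] += self.alpha * (1 - self.weights[k])
                self._normalize()
                return
        # Training step 4: no component matched, create a new Gaussian
        # (replacing the weakest component if the limit is reached).
        if len(self.means) >= self.max_k:
            weakest = int(np.argmin(self.weights))
            for lst in (self.means, self.vars, self.weights):
                lst.pop(weakest)
        self.means.append(float(x))
        self.vars.append(self.init_var)
        self.weights.append(self.alpha)
        self._normalize()

    def is_background(self, x):
        # Test step: a value within 2 standard deviations of any component is background.
        return any(abs(x - m) <= 2 * np.sqrt(v) for m, v in zip(self.means, self.vars))

    def _normalize(self):
        s = sum(self.weights)
        self.weights = [w / s for w in self.weights]

# Usage sketch: train on T frames of one pixel, then classify new values.
gmm = PixelGMM()
for value in [100, 102, 98, 101, 99, 100]:
    gmm.update(value)
print(gmm.is_background(101))  # True  -> background
print(gmm.is_background(180))  # False -> foreground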

import numpy as np
import cv2

# Classic test video
cap = cv2.VideoCapture('test.avi')
# Kernel for morphological operations, used to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
# Create a Gaussian mixture model for background modeling
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = fgbg.apply(frame)
    # Morphological opening to remove noise
    fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)
    # Find contours in the foreground mask
    # (findContours returns 3 values in OpenCV 3.x and 2 in OpenCV 4.x;
    #  taking the last two keeps this working on both)
    contours, hierarchy = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
    for c in contours:
        # Compute the perimeter of each contour
        perimeter = cv2.arcLength(c, True)
        if perimeter > 188:
            # Get the upright bounding rectangle (it will not rotate)
            x, y, w, h = cv2.boundingRect(c)
            # Draw the rectangle
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('frame', frame)
    cv2.imshow('fgmask', fgmask)
    k = cv2.waitKey(150) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()

Optical flow estimation
Optical flow is the "instantaneous velocity" of the pixel motion of a moving object in space projected onto the imaging plane. Based on the velocity vector of each pixel, the image can be analyzed dynamically, for example for target tracking. The classical optical flow method rests on three assumptions:
Constant brightness: as the same point moves over time, its brightness does not change.
Small motion: positions do not change drastically over time. Only under small motion can the grayscale change caused by a unit change in position between consecutive frames be used to approximate the partial derivative of grayscale with respect to position.
Spatial consistency: points that are adjacent in the scene project to adjacent points in the image, and those adjacent points move with the same velocity. Since the basic optical flow equation provides only one constraint while the velocities in the x and y directions are two unknowns, n equations must be set up together and solved jointly.
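Written out (this is the standard derivation, not specific to this post): brightness constancy plus a first-order Taylor expansion under the small-motion assumption gives the optical flow constraint equation, a single equation in the two unknown velocity components u and v:

I(x, y, t) = I(x + dx,\; y + dy,\; t + dt)
\;\Rightarrow\;
I_x u + I_y v + I_t = 0, \qquad u = \frac{dx}{dt},\; v = \frac{dy}{dt}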


Lucas-Kanade Algorithm

How do we solve this system of equations? One pixel alone is clearly not enough. What other properties can be exploited while the object moves? (At a corner the gradient matrix A^T A is invertible, so corner points are used as the feature points to track.)
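With the spatial-consistency assumption, the constraints of the n pixels in a small window are stacked and solved by least squares (the standard Lucas-Kanade formulation). A^T A is invertible and well conditioned precisely at corners, which is why corner points are chosen for tracking:

A =
\begin{bmatrix}
I_{x_1} & I_{y_1} \\
\vdots  & \vdots  \\
I_{x_n} & I_{y_n}
\end{bmatrix},
\quad
b = -
\begin{bmatrix}
I_{t_1} \\ \vdots \\ I_{t_n}
\end{bmatrix},
\quad
\begin{bmatrix} u \\ v \end{bmatrix}
= (A^{T} A)^{-1} A^{T} b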

cv2.calcOpticalFlowPyrLK():
Parameters:
    prevImage: the previous frame
    nextImage: the current frame
    prevPts: the vector of feature points to track
    winSize: the size of the search window
    maxLevel: the maximum number of pyramid levels
Returns:
    nextPts: the output vector of tracked feature points
    status: whether each feature point was found; 1 if found, 0 if not
import numpy as np
import cv2

cap = cv2.VideoCapture('test.avi')

# Parameters for corner detection
feature_params = dict(maxCorners=100,
                      qualityLevel=0.3,
                      minDistance=7)
# Parameters for the Lucas-Kanade tracker
lk_params = dict(winSize=(15, 15),
                 maxLevel=2)
# Random colors for the tracks
color = np.random.randint(0, 255, (100, 3))

# Get the first frame
ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
# Detect feature points: takes the image, the maximum number of corners (for efficiency),
# and a quality factor (corners with larger eigenvalues are better and are kept).
# minDistance: if a stronger corner exists within this distance, the weaker one is discarded.
p0 = cv2.goodFeaturesToTrack(old_gray, mask=None, **feature_params)
# Create a mask image for drawing the tracks
mask = np.zeros_like(old_frame)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Needs the previous frame, the current frame, and the corners detected in the previous frame
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
    # st == 1 means the point was tracked successfully
    good_new = p1[st == 1]
    good_old = p0[st == 1]
    # Draw the tracks
    for i, (new, old) in enumerate(zip(good_new, good_old)):
        a, b = new.ravel()
        c, d = old.ravel()
        # Drawing functions expect integer coordinates
        a, b, c, d = int(a), int(b), int(c), int(d)
        mask = cv2.line(mask, (a, b), (c, d), color[i].tolist(), 2)
        frame = cv2.circle(frame, (a, b), 5, color[i].tolist(), -1)
    img = cv2.add(frame, mask)
    cv2.imshow('frame', img)
    k = cv2.waitKey(150) & 0xff
    if k == 27:
        break
    # Update the previous frame and the previous points
    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1, 1, 2)

cv2.destroyAllWindows()
cap.release()
