Yolov5 face video stream
2022-06-13 09:08:00 【Glass lamp】
I am not using the original yoloface code here. The original code has one class and five key points; mine has 4 classes and 4 key points. For the specific changes and related background, see my previous blog post:
yoloV5-face learning notes - m0_58348465's blog - CSDN Blog
The idea
yoloV5-face strips out many features, such as video streaming and deployment support, so I wanted to modify detect.py in the yolov5 source directly and make it run on video streams.
Debug the two code bases side by side and modify yolov5's detect.py.
First, change detect.py's
pred = non_max_suppression(...)
to yoloface's
pred = non_max_suppression_face(...)
and remember to import the corresponding function at the top.
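For reference, a minimal sketch of what the changed call could look like, assuming non_max_suppression_face has been copied from yoloface's utils/general.py into yolov5's utils/general.py (adjust the import path to wherever you actually put it):

from utils.general import non_max_suppression_face  # copied over from yoloface

# inside run(), replacing the original NMS call
pred = non_max_suppression_face(pred, conf_thres, iou_thres)

The face variant carries the 8 landmark columns through NMS, whereas the stock non_max_suppression assumes the class scores start right after the objectness column and would misread the landmarks as class scores.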
Find the problem
After the change, yolov5 outputs bounding boxes normally, but the key point positions are way off (yoloface's key point output is normal).
First attempt
Debugging showed that the two use different input sizes: face's input is 800*640, while yolo's is 640*512; in other words, their img_size differs. Set yoloface's img_size default to 640, as follows:
def detect_one(model, image_path, device):
    # Load model
    img_size = 640
    conf_thres = 0.3
    iou_thres = 0.5
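The differing shapes come from the letterbox preprocessing: the long side is scaled to img_size and the short side is padded up to a multiple of the stride. A minimal sketch, assuming yolov5's letterbox helper (utils/augmentations.py in recent versions, utils/datasets.py in older ones) and a dummy 1080p frame:

import numpy as np
from utils.augmentations import letterbox  # adjust the import to your yolov5 version

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # dummy 1080x1920 frame
for size in (800, 640):
    padded, ratio, (dw, dh) = letterbox(frame, new_shape=size, stride=32, auto=True)
    print(size, padded.shape)  # e.g. img_size 640 -> (384, 640, 3)

So a different img_size (or a different source resolution) directly changes the tensor shape the model sees.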
Surprisingly, the key point positions improved as well.
Out of curiosity I kept varying img_size and found the results were best around 600. Why?
Reason: the training weights were at fault. I had picked weights that had not fully converged, so even subtle changes had a huge impact on the output.
Second attempt
After this modification both use the same input dimensions, but the key points are still wildly off. The strangest part: the non-maximum suppression function was copied over verbatim, and during debugging the pred outputs looked identical. Just as I was about to despair, it occurred to me that the tensors might simply be too large: only the displayed portions were identical, while the hidden values actually differed.
The following code removes the tensor's ellipsis so the full values are printed:
import numpy as np
import torch

np.set_printoptions(threshold=1e3)        # threshold for how many array elements numpy prints
torch.set_printoptions(threshold=np.inf)  # print full tensors instead of an ellipsis
With this code added, debugging showed that the previously hidden values of the two tensors are indeed different.
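Instead of squinting at full console printouts, another option is to dump pred to a text file in both code bases and compare the files; a rough sketch (the file name is just a placeholder), assuming it runs right after the NMS call so pred is a list of per-image 2-D tensors:

import numpy as np

for i, det in enumerate(pred):  # pred: list of (n, no) tensors after NMS
    np.savetxt(f'pred_{i}_yolov5.txt', det.cpu().numpy(), fmt='%.6f')  # use another suffix in the yoloface run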
I then used a file comparison tool on the parsed models, but the two architectures are exactly the same. The only remaining possibility was that the two decode their outputs differently. Since I had already modified yoloface's Detect in yolo.py while debugging, the most likely culprit was a difference in the Detect module. So I revised yolov5's Detect module to match yoloface (my modified 4-point version). The result is as follows:
class Detect(nn.Module):
    stride = None  # strides computed during build
    onnx_dynamic = False  # ONNX export parameter

    def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detection layer
        super().__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5 + 8  # number of outputs per anchor
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        self.anchor_grid = [torch.zeros(1)] * self.nl  # init anchor grid
        self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2))  # shape(nl,na,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
        self.inplace = inplace  # use in-place ops (e.g. slice assignment)

    def forward(self, x):
        z = []  # inference output
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

            if not self.training:  # inference
                if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                    self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)

                y = torch.full_like(x[i], 0)
                class_range = list(range(5)) + list(range(13, 13 + self.nc))
                y[..., class_range] = x[i][..., class_range].sigmoid()  # sigmoid everything except the landmark channels
                y[..., 5:13] = x[i][..., 5:13]
                if self.inplace:
                    y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy
                    y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                    y[..., 5:7] = y[..., 5:7] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x1 y1
                    y[..., 7:9] = y[..., 7:9] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x2 y2
                    y[..., 9:11] = y[..., 9:11] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x3 y3
                    y[..., 11:13] = y[..., 11:13] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x4 y4
                else:  # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
                    xy = (y[..., 0:2] * 2 - 0.5 + self.grid[i]) * self.stride[i]  # xy
                    wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                    # y[..., 5:15] = y[..., 5:15] * 8 - 4
                    y[..., 5:7] = y[..., 5:7] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x1 y1
                    y[..., 7:9] = y[..., 7:9] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x2 y2
                    y[..., 9:11] = y[..., 9:11] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x3 y3
                    y[..., 11:13] = y[..., 11:13] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]  # landmark x4 y4
                    y = torch.cat((xy, wh, y[..., 4:]), -1)
                z.append(y.view(bs, -1, self.no))

        return x if self.training else (torch.cat(z, 1), x)

    def _make_grid(self, nx=20, ny=20, i=0):
        d = self.anchors[i].device
        if check_version(torch.__version__, '1.10.0'):  # torch>=1.10.0 meshgrid requires the indexing argument
            yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)], indexing='ij')
        else:
            yv, xv = torch.meshgrid([torch.arange(ny, device=d), torch.arange(nx, device=d)])
        grid = torch.stack((xv, yv), 2).expand((1, self.na, ny, nx, 2)).float()
        anchor_grid = (self.anchors[i].clone() * self.stride[i]) \
            .view((1, self.na, 1, 1, 2)).expand((1, self.na, ny, nx, 2)).float()
        return grid, anchor_grid
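To make the index arithmetic in class_range and the 5:13 slices easier to follow, this is the per-anchor channel layout the modified head produces in my 4-class, 4-landmark case; it follows directly from self.no = nc + 5 + 8:

# per-anchor output vector, self.no = 4 + 5 + 8 = 17
#   0:4   -> box xywh       (sigmoid, then decoded against grid / anchor_grid)
#   4     -> objectness     (sigmoid)
#   5:13  -> 4 landmarks x1 y1 ... x4 y4 (no sigmoid, decoded against anchor_grid and grid)
#   13:17 -> 4 class scores (sigmoid)
class_range = list(range(5)) + list(range(13, 13 + 4))  # -> [0, 1, 2, 3, 4, 13, 14, 15, 16]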
Debug again; the output is now correct, so the next step is to visualize the results.
Results output
To output the results, the key points must first be scaled back to the actual image size. Around line 170 of detect.py, right after if len(det):, add a scale_coords_landmarks line. After the addition it looks like this:
if len(det):
    # Rescale boxes from img_size to im0 size
    det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
    det[:, 5:13] = scale_coords_landmarks(im.shape[2:], det[:, 5:13], im0.shape).round()

    # Print results
    for c in det[:, -1].unique():
        n = (det[:, -1] == c).sum()  # detections per class
        # s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string change
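Note that after non_max_suppression_face each row of det no longer has the raw head layout; for the 4-landmark model the columns work out to the indices used above (this is just a reference breakdown, not extra code to add):

# one row of det after NMS (14 columns for the 4-landmark model)
#   det[:, 0:4]  -> box x1 y1 x2 y2 (pixels in the letterboxed input)
#   det[:, 4]    -> confidence
#   det[:, 5:13] -> 4 landmarks x1 y1 ... x4 y4
#   det[:, 13]   -> class index (same as det[:, -1])
boxes, confs, landmarks, classes = det[:, :4], det[:, 4], det[:, 5:13], det[:, 13]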
At the same time, define the scale_coords_landmarks function above it in detect.py. The code is as follows:
def scale_coords_landmarks(img1_shape, coords, img0_shape, ratio_pad=None):
    # Rescale coords (xyxy) from img1_shape to img0_shape
    if ratio_pad is None:  # calculate from img0_shape
        gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain = old / new
        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
    else:
        gain = ratio_pad[0][0]
        pad = ratio_pad[1]

    coords[:, [0, 2, 4, 6]] -= pad[0]  # x padding
    coords[:, [1, 3, 5, 7]] -= pad[1]  # y padding
    coords[:, :8] /= gain
    # clip_coords(coords, img0_shape)
    coords[:, 0].clamp_(0, img0_shape[1])  # x1
    coords[:, 1].clamp_(0, img0_shape[0])  # y1
    coords[:, 2].clamp_(0, img0_shape[1])  # x2
    coords[:, 3].clamp_(0, img0_shape[0])  # y2
    coords[:, 4].clamp_(0, img0_shape[1])  # x3
    coords[:, 5].clamp_(0, img0_shape[0])  # y3
    coords[:, 6].clamp_(0, img0_shape[1])  # x4
    coords[:, 7].clamp_(0, img0_shape[0])  # y4
    # coords[:, 8].clamp_(0, img0_shape[1])  # x5
    # coords[:, 9].clamp_(0, img0_shape[0])  # y5
    return coords
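A quick sanity check (made-up shapes, not from the original run): a landmark at the centre of a 384x640 letterboxed input should map back to the centre of a 1080x1920 frame.

import torch

coords = torch.tensor([[320., 192., 320., 192., 320., 192., 320., 192.]])  # 4 points at the letterboxed centre
out = scale_coords_landmarks((384, 640), coords.clone(), (1080, 1920, 3))
print(out)  # expected roughly [[960., 540., 960., 540., ...]], i.e. the centre of the original frame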
Visualizing the results (beta version, you can skip this)
The last step is to visualize the results. First comment out the entire Write results block in detect.py, as follows:
# Write results
# for *xyxy, conf, cls in reversed(det):
# if save_txt: # Write to file
# xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
# line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
# with open(txt_path + '.txt', 'a') as f:
# f.write(('%g ' * len(line)).rstrip() % line + '\n')
#
# if save_img or save_crop or view_img: # Add bbox to image
# c = int(cls) # integer class
# label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
# # annotator.box_label(xyxy, label, color=colors(c, True))
# if save_crop:
# save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
Then replace it with this code:
for j in range(det.size()[0]):
    xywh = (xyxy2xywh(det[j, :4].view(1, 4)) / gn).view(-1).tolist()
    conf = det[j, 4].cpu().numpy()
    landmarks = (det[j, 5:13].view(1, 8) / gn_lks).view(-1).tolist()
    class_num = det[j, 13].cpu().numpy()
    im0 = show_results(im0, xywh, conf, landmarks, class_num)
Similarly, define the show_results function above it in detect.py, as follows:
def show_results(img, xywh, conf, landmarks, class_num):
    h, w, c = img.shape
    tl = 1 or round(0.002 * (h + w) / 2) + 1  # line/font thickness
    x1 = int(xywh[0] * w - 0.5 * xywh[2] * w)
    y1 = int(xywh[1] * h - 0.5 * xywh[3] * h)
    x2 = int(xywh[0] * w + 0.5 * xywh[2] * w)
    y2 = int(xywh[1] * h + 0.5 * xywh[3] * h)
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), thickness=tl, lineType=cv2.LINE_AA)

    clors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
    for i in range(4):
        point_x = int(landmarks[2 * i] * w)
        point_y = int(landmarks[2 * i + 1] * h)
        cv2.circle(img, (point_x, point_y), tl + 1, clors[i], -1)

    tf = max(tl - 1, 1)  # font thickness
    label = str(conf)[:5]
    cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA)
    return img
In addition, this code uses gn_lks, which has not been defined yet. So around line 210 of detect.py, change
gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]
to
gn = torch.tensor(im0.shape)[[1, 0, 1, 0]].to(device) # normalization gain whwh
gn_lks = torch.tensor(im0.shape)[[1, 0, 1, 0, 1, 0, 1, 0, ]].to(device) # normalization gain landmarks
After this change, the results can be visualized normally.
Visualizing the results (final version)
Although the changes above also make the results visible, they only draw boxes and points onto photos or videos; the existing features such as saving results to txt are no longer used. To keep the code robust, it is better to modify detect.py's own result-writing logic directly.
I found something very interesting: in the run function (below), the data argument effectively defaults to coco128.yaml, but it does not really matter what data is. Even a nonexistent file has no impact on the output, and the run still behaves as if data were coco128.yaml.
@torch.no_grad()
def run(weights=ROOT / 'yolov5s.pt',  # model.pt path(s)
        source=ROOT / 'data/images',  # file/dir/URL/glob, 0 for webcam
        data=ROOT / 'data/coco.yaml',  # dataset.yaml path
        imgsz=(640, 640),  # inference size (height, width)
After analysis, we found that , This is because the code of the following parameters is passed in by default coco128.yaml, in other words , about run function , It is equivalent to the introduction of coco128.yaml, Therefore, the default coco.yaml; Want to change the default parameters , You need to change the code for the following incoming parameters , stay detect.py Bottom of
def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)')
    parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam')
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
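For the 4-class face model the defaults I would change look roughly like this; the weight and yaml names below are placeholders for your own files, and setting --source to 0 makes the webcam video stream the default:

parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'best_face4.pt', help='model path(s)')  # placeholder weight file
parser.add_argument('--source', type=str, default='0', help='file/dir/URL/glob, 0 for webcam')  # 0 = webcam stream
parser.add_argument('--data', type=str, default=ROOT / 'data/face4.yaml', help='(optional) dataset.yaml path')  # placeholder yaml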
There is another interesting thing. Although I added a yaml file containing the class names to detect, the output class names are still 0, 1, 2, 3. Analysis showed that the class names are stored in the .pt file, and the class names in my training set were 0, 1, 2, 3. The yaml file added in detect is read before the class names in the .pt file, which means my new class names get overwritten by the ones stored in the .pt file.
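If you want readable labels anyway, one workaround (a sketch of my own, not something the .pt requires) is to overwrite model.names right after the model is loaded in detect.py, since that attribute is where the names stored in the .pt end up; the class names below are placeholders, and the exact load line depends on your yolov5 version:

# right after the model is loaded, e.g. model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data)
model.names = ['face_a', 'face_b', 'face_c', 'face_d']  # placeholder names for the 4 classes
names = model.names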