
DNN + YOLO + Flask Inference (Real-Time Streaming on a Raspberry Pi, Covering the Whole YOLO Family)

2022-07-08 02:18:00 pogg_

YOLO-Streaming

This repository records the process of push-streaming video through several ultra-lightweight networks. The general procedure: OpenCV reads the camera of a board (such as a Raspberry Pi) and feeds the live video to an ultra-lightweight detector such as YOLO-Fastest, NanoDet, or GhostNet; the lightweight Flask framework then pushes the processed frames to the network, and real-time performance can basically be guaranteed. The repository also records the performance of several edge-side inference frameworks; interested readers are welcome to get in touch.
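To make the pipeline concrete, here is a minimal sketch of the detection step, assuming a Darknet-format YOLO-Fastest model loaded through OpenCV's DNN module. The file names, thresholds, and the omitted NMS step are illustrative placeholders, not the repository's exact code.

```python
# Hedged sketch of the OpenCV-DNN detection step; file names are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolo-fastest.cfg", "yolo-fastest.weights")

def detect(frame, input_size=320, conf_thres=0.5):
    """Run one forward pass and draw boxes above conf_thres (NMS omitted)."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (input_size, input_size),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = frame.shape[:2]
    for out in outs:                       # rows: cx, cy, bw, bh, obj, class scores
        for det in out:
            conf = det[4] * det[5:].max()  # objectness x best class score
            if conf > conf_thres:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                x, y = int(cx - bw / 2), int(cy - bh / 2)
                cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)),
                              (0, 255, 0), 2)
    return frame
```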

Repository: https://github.com/pengtougu/DNN-Lightweight-Streaming

Stars and PRs are welcome!
QQ group for discussion: 696654483

Requirements

Please install the dependencies first (for the DNN backend):

  • Linux / macOS / Windows
  • python >= 3.6.0
  • opencv-python >= 4.2.x
  • flask >= 1.0.0

Please install the dependencies first (for the ncnn backend):

  • Linux / macOS / Windows
  • Visual Studio 2019
  • cmake 3.16.5
  • protobuf 3.4.0
  • opencv 3.4.0
  • vulkan 1.14.8

Inference

| Equipment | Computing backend | System | Framework | input_size | Run time |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi 3B | 4×Cortex-A53 | Linux (arm64) | dnn | 320 | 89 ms |
| Intel | Core i5-4210 | Windows 10 (x64) | dnn | 320 | 21 ms |

| Equipment | Computing backend | System | Framework | input_size | Run time |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi 3B | 4×Cortex-A53 | Linux (arm64) | dnn | 320 | 315 ms |
| Intel | Core i5-4210 | Windows 10 (x64) | dnn | 320 | 41 ms |

| Equipment | Computing backend | System | Framework | input_size | Run time |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi 3B | 4×Cortex-A53 | Linux (arm64) | dnn | 320 | 673 ms |
| Intel | Core i5-4210 | Windows 10 (x64) | dnn | 320 | 131 ms |
| Raspberry Pi 3B | 4×Cortex-A53 | Linux (arm64) | ncnn | 160 | 716 ms |
| Intel | Core i5-4210 | Windows 10 (x64) | ncnn | 160 | 197 ms |

| Equipment | Computing backend | System | Framework | input_size | Run time |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi 3B | 4×Cortex-A53 | Linux (arm64) | dnn | 320 | 113 ms |
| Intel | Core i5-4210 | Windows 10 (x64) | dnn | 320 | 23 ms |
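The tables above report per-inference latency. One way to reproduce such numbers (a sketch, not necessarily how these figures were measured) is to average the forward pass over many runs:

```python
# Hedged benchmark sketch: average DNN forward-pass latency over `runs` passes.
# `net` and `blob` are prepared as in the detection sketch above.
import cv2

def benchmark(net, blob, runs=50):
    net.setInput(blob)
    net.forward(net.getUnconnectedOutLayersNames())          # warm-up pass
    t0 = cv2.getTickCount()
    for _ in range(runs):
        net.setInput(blob)
        net.forward(net.getUnconnectedOutLayersNames())
    secs = (cv2.getTickCount() - t0) / cv2.getTickFrequency()
    print(f"average latency: {secs / runs * 1000:.1f} ms")
```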

Updating…

Demo

The demo has already been tested on Windows, macOS, and Linux. It runs on all three platforms without any code changes; use it as is.

After downloading, the repository contains the following files; the main entry scripts are the ones covered below.

Run v3_fastest.py (all four detector scripts share the flag interface sketched after this list)

  • Inference on an image: `python v3_fastest.py --image dog.jpg`
  • Inference on a video: `python v3_fastest.py --video test.mp4`
  • Inference on a webcam: `python v3_fastest.py --fourcc 0`
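A sketch of that shared interface, assuming argparse (the actual scripts may parse arguments differently):

```python
# Sketch of the shared --image / --video / --fourcc interface (assumed argparse).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--image", type=str, help="path to an input image")
parser.add_argument("--video", type=str, help="path to an input video file")
parser.add_argument("--fourcc", type=int, help="webcam index, e.g. 0")
args = parser.parse_args()

if args.image:                 # single-image inference
    source = args.image
elif args.video:               # video-file inference
    source = args.video
else:                          # live webcam inference
    source = args.fourcc if args.fourcc is not None else 0
# `source` is then handed to cv2.imread or cv2.VideoCapture downstream.
```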

Run v4_tiny.py

  • Inference on an image: `python v4_tiny.py --image person.jpg`
  • Inference on a video: `python v4_tiny.py --video test.mp4`
  • Inference on a webcam: `python v4_tiny.py --fourcc 0`

Run v5_dnn.py

  • Inference on an image: `python v5_dnn.py --image person.jpg`
  • Inference on a video: `python v5_dnn.py --video test.mp4`
  • Inference on a webcam: `python v5_dnn.py --fourcc 0`

Run NanoDet.py

  • Inference on an image: `python NanoDet.py --image person.jpg`
  • Inference on a video: `python NanoDet.py --video test.mp4`
  • Inference on a webcam: `python NanoDet.py --fourcc 0`

Run app.py (online push-streaming)

  • Inference with v3-fastest: `python app.py --model v3_fastest`
  • Inference with v4-tiny: `python app.py --model v4_tiny`
  • Inference with v5-dnn: `python app.py --model v5_dnn`
  • Inference with NanoDet: `python app.py --model NanoDet`

Please note: the streaming device and the viewer must be on the same LAN!
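For reference, the push-streaming pattern behind app.py can be sketched as a Flask route that yields JPEG-encoded frames as a multipart HTTP response. The route, port, and pass-through detect() below are illustrative assumptions, not the repository's exact app.py.

```python
# Hedged sketch of Flask MJPEG push-streaming (illustrative, not the repo's app.py).
import cv2
from flask import Flask, Response

app = Flask(__name__)

def detect(frame):
    return frame                          # plug in the DNN detection step here

def gen_frames():
    cap = cv2.VideoCapture(0)             # webcam index 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", detect(frame))
        if not ok:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n")

@app.route("/video_feed")
def video_feed():
    return Response(gen_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)    # bind to all interfaces on the LAN
```

A browser on the same LAN can then open http://<board-ip>:5000/video_feed to watch the stream.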

Demo Effects

Run v3_fastest.py

  • image→video→capture→push stream

Run v4_tiny.py

  • image→video→capture→push stream
    Still needs optimization; a quantized version will follow. To be updated…

Run v5_dnn.py


  • image (473 ms per inference, Core i5-4210) → video → capture (213 ms per inference, Core i5-4210) → push stream

Noted on 2021-04-26: Interestingly, when the v5s model is called via ONNX + OpenCV DNN, inferring a still image takes twice as long as processing a frame from the camera. I stared at this for a long time and still couldn't find the problem. I'd be grateful if anyone could look at the code and point it out. Thanks!

Update on 2021-05-01: I found the problem today. v5_dnn.py contained a call that draws the inference time onto each frame (cv2.putText(frame, "TIME: " + str(localtime), (8, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2)), and that call alone accounted for about two thirds of the per-frame post-processing time (roughly 50-80 ms per frame). All later versions remove it and print to the terminal instead, which cuts per-frame inference time from 190 ms to 130 ms. Astonishing.
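The fix is easy to sketch: time the frame loop and report on the terminal instead of rasterizing text onto the frame with cv2.putText. An illustrative version (not the repository's exact code; detect() is the detection step sketched earlier):

```python
# Hedged sketch of the 2021-05-01 fix: print per-frame timing to the terminal
# instead of drawing it on the frame, which cost ~2/3 of post-processing time.
import time
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    frame = detect(frame)                 # inference + post-processing
    print(f"TIME: {(time.perf_counter() - t0) * 1000:.1f} ms")  # no putText
```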

Supplement

This repository integrates current detection algorithms behind OpenCV's DNN module. You may ask: why call the models through DNN rather than just git-cloning each whole framework? In practice, when working with models it is wise to separate training from inference. Moreover, when you deploy models on a customer's production line, the customer's platform usually only needs you to run inference, with no training required; if you package the training code and its training-dependent environment with every set of models, you will be dead after a few sets. As an example, compare Docker images for the same version of YOLOv5: the complete code and dependencies come to about 6.06 GB, while the inference code and dependencies come to about 0.4 GB, so the storage of one full image could hold roughly fifteen inference-only images.



Other

Welcome to join the deep learning exchange group: 696654483
Many graduate students and industry veterans from all walks of life are in the group~

Original article: https://yzsam.com/2022/02/202202130540226088.html

Copyright notice: this article was written by [pogg_]; when reposting, please include a link to the original. Thanks.