Build your own video object recognition application based on Google's open source TensorFlow Object Detection API (IV)
2022-07-06 20:28:00 【gmHappy】
The main content of this chapter is to use MQTT, multiprocessing, and a queue so that the model is loaded only once per worker process, and images are then recognized and classified in batches.
The project layout is as follows: an entry script handling the MQTT connection and queue management, images_detect.py for image recognition, and MyUtil.py for a timestamp helper.
MQTT connection and multiprocess queue management
# -*- coding:utf8 -*-
import paho.mqtt.client as mqtt
from multiprocessing import Process, Queue
import images_detect

MQTTHOST = "192.168.3.202"
MQTTPORT = 1883
mqttClient = mqtt.Client()
q = Queue()


# Connect to the MQTT broker
def on_mqtt_connect():
    mqttClient.connect(MQTTHOST, MQTTPORT, 60)
    mqttClient.loop_start()


# Message arrival callback
def on_message_come(mqttClient, userdata, msg):
    q.put(msg.payload.decode("utf-8"))  # put the message into the queue
    print("message received", msg.payload.decode("utf-8"))


def consumer(q, pid):
    print("starting consumer process", pid)
    # To publish messages from inside a worker process, mqttClient must be re-initialized in that process
    ImagesDetect = images_detect.ImagesDetect()
    ImagesDetect.detect(q)


# subscribe
def on_subscribe():
    mqttClient.subscribe("test", 1)  # the topic is "test"
    mqttClient.on_message = on_message_come  # message arrival handler


# publish
def on_publish(topic, msg, qos):
    mqttClient.publish(topic, msg, qos)


def main():
    on_mqtt_connect()
    on_subscribe()
    for i in range(1, 3):
        c1 = Process(target=consumer, args=(q, i))
        c1.start()
    while True:
        pass  # keep the main process alive; the MQTT network loop runs in a background thread


if __name__ == '__main__':
    main()
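Nothing in this script publishes messages itself, so to exercise the pipeline you need a producer that sends image paths to the "test" topic. Below is a minimal sketch of such a publisher; the broker address is the one configured above, while the image paths are hypothetical placeholders you would replace with real files.

# -*- coding:utf8 -*-
# Minimal test publisher (sketch): sends image paths to the "test" topic so that
# on_message_come() above pushes them into the queue for the consumer processes.
import time
import paho.mqtt.client as mqtt

MQTTHOST = "192.168.3.202"
MQTTPORT = 1883

client = mqtt.Client()
client.connect(MQTTHOST, MQTTPORT, 60)
client.loop_start()

image_paths = ["test_images/image1.jpg", "test_images/image2.jpg"]  # hypothetical paths
for path in image_paths:
    client.publish("test", path, qos=1)
    print("published", path)

time.sleep(1)  # give the network loop time to flush before disconnecting
client.loop_stop()
client.disconnect()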
Image recognition
images_detect.py
# coding: utf-8
import numpy as np
import os
import sys
import tarfile
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
import cv2
import decimal
import MyUtil

context = decimal.getcontext()
context.rounding = decimal.ROUND_05UP


class ImagesDetect():
    def __init__(self):
        sys.path.append("..")
        MODEL_NAME = 'faster_rcnn_inception_v2_coco_2018_01_28'
        MODEL_FILE = MODEL_NAME + '.tar.gz'
        # Path to the frozen detection graph. This is the actual model used for object detection.
        PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
        # List of the strings used to add the correct label to each box.
        PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
        NUM_CLASSES = 90

        tar_file = tarfile.open(MODEL_FILE)
        for file in tar_file.getmembers():
            file_name = os.path.basename(file.name)
            if 'frozen_inference_graph.pb' in file_name:
                tar_file.extract(file, os.getcwd())

        # Load the (frozen) TensorFlow model into memory.
        self.detection_graph = tf.Graph()
        with self.detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.import_graph_def(od_graph_def, name='')

        # Load the label map.
        # Label maps map indices to category names, so that when the network predicts `5`,
        # we know this corresponds to `airplane`. Anything that returns a dictionary mapping
        # integers to appropriate string labels would work here.
        label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
        categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
        self.category_index = label_map_util.create_category_index(categories)

        self.image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
        # Each box represents a detected object
        self.boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
        # Each score represents the confidence of the detected object.
        self.scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
        self.classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
        self.num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')

    def detect(self, q):
        with self.detection_graph.as_default():
            config = tf.ConfigProto()
            # config.gpu_options.allow_growth = True
            config.gpu_options.per_process_gpu_memory_fraction = 0.2
            with tf.Session(graph=self.detection_graph, config=config) as sess:
                while True:
                    img_src = q.get()
                    print('------------start------------' + MyUtil.get_time_stamp())
                    image_np = cv2.imread(img_src)
                    # Expand dimensions; the model expects shape [1, None, None, 3]
                    image_np_expanded = np.expand_dims(image_np, axis=0)
                    # Run detection.
                    (boxes, scores, classes, num_detections) = sess.run(
                        [self.boxes, self.scores, self.classes, self.num_detections],
                        feed_dict={self.image_tensor: image_np_expanded})
                    # Visualize the detection results.
                    vis_util.visualize_boxes_and_labels_on_image_array(
                        image_np,
                        np.squeeze(boxes),
                        np.squeeze(classes).astype(np.int32),
                        np.squeeze(scores),
                        self.category_index,
                        use_normalized_coordinates=True,
                        line_thickness=8)
                    print('------------end------------' + MyUtil.get_time_stamp())
                    # cv2.imshow('object detection', cv2.resize(image_np, (800, 600)))
                    if cv2.waitKey(25) & 0xFF == ord('q'):
                        cv2.destroyAllWindows()
                        break
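The loop above only draws boxes onto the image in memory; nothing is saved or reported per image. If you also want textual classification results from the batch job, the class names and scores can be read directly from the session output. The following sketch is an addition, not part of the original code: it assumes a 0.5 confidence threshold and a hypothetical output filename, and it is meant to sit inside the while loop, right after the call to visualize_boxes_and_labels_on_image_array.

# Sketch (assumption, not in the original code): collect per-image results
# above a confidence threshold and save the annotated frame.
min_score = 0.5  # assumed threshold
results = []
for score, cls in zip(np.squeeze(scores), np.squeeze(classes).astype(np.int32)):
    if score >= min_score and cls in self.category_index:
        # category_index maps class ids to {'id': ..., 'name': ...}
        results.append((self.category_index[cls]['name'], float(score)))
print('detections for %s: %s' % (img_src, results))
# write the annotated image next to the source file (hypothetical naming scheme)
cv2.imwrite(img_src + '.detected.jpg', image_np)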
MyUtil.py
import time


def get_time_stamp():
    ct = time.time()
    local_time = time.localtime(ct)
    data_head = time.strftime("%Y-%m-%d %H:%M:%S", local_time)
    data_secs = (ct - int(ct)) * 1000
    time_stamp = "%s.%03d" % (data_head, data_secs)
    return time_stamp
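For reference, the helper returns the local date and time with a millisecond suffix; a quick check (the exact value obviously depends on when it is called):

import MyUtil

print(MyUtil.get_time_stamp())  # e.g. "2022-07-06 20:28:00.123"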
Effect: