This project deploys a YOLO-Fastest model as a TFLite file on a Raspberry Pi 3B+. The model comes from another repository of mine, -Trash-Classification-Car.

Overview

Deploy-yolo-fastest-tflite-on-raspberry

If you find this useful, please leave a star.

This project ports the TFLite model from the trash-classification car onto a Raspberry Pi 3B+.
Its main purpose is to document the workflow of deploying a YOLO-Fastest TFLite model on a Raspberry Pi.

(A C++ deployment may follow later to improve performance.)

Some questions

1. How do I run a tflite file?

The official documentation explains how to run a TFLite model on Linux very clearly; see the tflite API docs. A minimal usage sketch is shown below.
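For reference, this sketch shows the call sequence of the Python interpreter API; the model path is a placeholder and the input is a dummy tensor:

import numpy as np
from tflite_runtime.interpreter import Interpreter   # with full TensorFlow, use tf.lite.Interpreter instead

interpreter = Interpreter(model_path='Sample_TFlite_model/model.tflite')   # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one dummy image of the shape the model expects
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

# Raw network outputs; turning them into boxes is what yolo_layer.py does
outputs = [interpreter.get_tensor(d['index']) for d in output_details]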

2. How is the YOLO-Fastest output decoded?

The output format of YOLO-Fastest differs from other YOLO versions, so its outputs are decoded differently as well, which is worth paying attention to. If the model you want to deploy is another YOLO rather than YOLO-Fastest TFLite, this project may not apply directly, but it can be adapted with some modification. A rough sketch of the general decoding idea is given below.
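As a rough illustration only (the tensor layout, grid size, and anchors below are assumptions, not necessarily what YOLO-Fastest uses; the real decoding lives in yolo_layer.py), a generic YOLO-style decode looks roughly like this:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_output(pred, anchors, img_size):
    # pred: raw output with assumed shape (grid_h, grid_w, num_anchors, 5 + num_classes)
    # anchors: assumed list of (anchor_w, anchor_h) in pixels
    grid_h, grid_w = pred.shape[:2]
    boxes = []
    for gy in range(grid_h):
        for gx in range(grid_w):
            for a, (aw, ah) in enumerate(anchors):
                tx, ty, tw, th, tobj = pred[gy, gx, a, :5]
                cx = (gx + sigmoid(tx)) / grid_w * img_size     # box centre: cell offset + sigmoid
                cy = (gy + sigmoid(ty)) / grid_h * img_size
                bw = aw * np.exp(tw)                            # box size: anchor scaled exponentially
                bh = ah * np.exp(th)
                score = sigmoid(tobj) * sigmoid(pred[gy, gx, a, 5:]).max()
                boxes.append((cx, cy, bw, bh, score))
    return boxes                                                # NMS would normally follow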

3. YOLO-Fastest source code: Yolo-Fastest

The author of YOLO-Fastest has since released a V2 with better performance; this project uses V1.

4. For how to train a YOLO-Fastest model on Windows, see my other repository: Yolo-Fastest-on-Windows

Model performance on the device

image
On a Raspberry Pi 3B+ this project averages about 25 frames per second.
image

Car demo

image

Project contents

This project contains two folders, detect-camera-stream and detect-single-img.
Both folders share the same structure: the model files live in tflite/Sample_TFlite_model inside each folder, the main programs are TFLite_detection_stream.py and TFLite_detection_img.py, and the YOLO-related functions are in yolo_layer.py.

detect-camera-stream runs real-time object detection on the video stream when a USB camera is connected to the Raspberry Pi 3B+.
detect-single-img runs detection on a single image, 4.jpg under tflite/.

The commands for running them are stored in instruction.txt.

How to run this project directly:

  1. Make sure a Python 3.7 interpreter is available on the Raspberry Pi.

  2. Create the virtual environment:

python3 -m venv tflite-env 

  3. Download all files of this project.
  4. Enter the tflite folder and activate the virtual environment:

source tflite-env/bin/activate
bash get_pi_requirements.sh                  :run this if the previous step reports missing packages

  5. In the tflite folder, run the commands from instruction.txt:

python3 TFLite_detection_image.py
python3 TFLite_detection_stream.py

Deployment from scratch

  Using detect-camera-stream as the example.

  1. Create a Python virtual environment:

  Create a tflite folder and create the virtual environment in it:

cd tflite                                     :enter tflite
sudo pip3 install virtualenv                  :install the tool needed to create virtual environments
python3 -m venv tflite-env                    :create the virtual environment, stored in tflite/tflite-env
source tflite-env/bin/activate                :activate the virtual environment; run this again every time you reopen the terminal

  2. Install packages and dependencies:

  With the virtual environment activated, copy get_pi_requirements.sh from this project into the tflite folder:

bash get_pi_requirements.sh                   :download packages and dependencies

  You can then check whether the cv2 module installed correctly (the opencv-python module often misbehaves) with:

python3
import cv2

  If no error appears, opencv-python is installed correctly, but the following error is common:

ImportError: libjasper.so.1: cannot open shared object file: No such file or directory

  This error means a dependency is missing; the following command fixes it (this worked for me; if it doesn't, search for the error message online):

sudo apt-get install libjasper-dev

  3. Create a Sample_TFlite_model folder under the tflite folder and put the trained tflite model in it.

  4. Run the model. In the tflite folder, run:

python3 TFLite_detection_stream.py

  and you should see the detection results.

Note: if you use your own trained model instead of the one in this project, you need to adjust parameters such as the input image resolution in TFLite_detection_stream.py.
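For example, these are the kinds of values to check (variable names below are hypothetical; find the corresponding ones in TFLite_detection_stream.py):

# Hypothetical names -- adjust the matching variables in TFLite_detection_stream.py
MODEL_INPUT_W = 320             # width the tflite model expects
MODEL_INPUT_H = 320             # height the tflite model expects
LABELS = ['class0', 'class1']   # the classes your own model was trained on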

Because the Raspberry Pi needs to communicate with the car, here is also a record of setting up serial (UART) communication on the Pi using AMA0.

First install the gedit editor, which is a bit easier to use than vim:

sudo apt-get install gedit

Then disable the serial login shell and enable the serial port hardware:

sudo raspi-config
interfacing options --> would you like a login shell to be accessible over serial? --> No
                    --> would you like the serial port hardware to be enabled? --> Yes

Because Bluetooth occupies ttyAMA0 by default, swap the mapping of ttyAMA0 and ttyS0 so the GPIO serial pins use ttyAMA0:

sudo gedit /boot/config.txt
add on the last line: dtoverlay=pi3-miniuart-bt
sudo reboot

Because the serial console and a serial communication link cannot both use the port, disable the console to free the serial port:

sudo systemctl stop [email protected]
sudo systemctl disable [email protected]

Then remove the serial0 console entry:

sudo gedit /boot/cmdline.txt
delete console=serial0,115200 (skip this if it is not there)
sudo reboot

That completes the serial setup. The Raspberry Pi's system python3 interpreter ships with a serial library, but the virtual environment we created earlier does not, so install it again for the virtual environment:

sudo pip3 install pyserial
sudo pip3 install serial

The serial port can then be driven with code like this:

import serial

ser = serial.Serial('/dev/ttyAMA0', 115200)     # open the serial port
if ser.is_open:                                 # isOpen() is deprecated; is_open is the current attribute
    ser.write(b'123')                           # if you hit encoding problems, try 'text'.encode() instead of a bytes literal
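For completeness, a minimal sketch of reading data back over the same port (assuming the car replies on /dev/ttyAMA0; this is not part of the original scripts):

import serial

ser = serial.Serial('/dev/ttyAMA0', 115200, timeout=1)   # 1 s read timeout
if ser.is_open:
    data = ser.readline()                                 # read until newline or timeout
    if data:
        print(data.decode(errors='ignore'))
ser.close()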

Starting and stopping detection with GPIO

The files for this part are in the GPIO folder of this repository.

Connecting to the Raspberry Pi over SSH is cumbersome and requires having a computer at hand every time the program is run, so the most convenient option is to start and stop the program from the Pi itself.
I therefore use a push-button switch for control.
image
This is a double-pole double-throw switch; only two of its pins are used here.
  1. Write a script that launches the py file:
  Create charlie.sh under /home/pi:

cd /home/pi/Desktop/demo1/tflite
source tflite-env/bin/activate
python3 TFLite_detection_stream.py

  Now running bash /home/pi/charlie.sh from the command line starts the py file.
  2. Wiring:
  Connect pins 3 and 5 of the Raspberry Pi to one side of the switch and GND to the other side.
  Both pins are pulled high at initialization. When the switch is pressed, pin 3 is pulled low, which serves as the signal to start the program.
  When the switch is released, pin 5 goes high again, which serves as the signal to exit the program.
  3. Write GPIO.py:
  First install the RPi.GPIO library in the virtual environment:

pip3 install RPi.GPIO

  Create GPIO.py in /home/pi/Desktop/demo1/tflite so that charlie.sh is executed when pin 3 is pulled low:

import time
import os
import RPi.GPIO as GPIO

run_yolo_cmd = 'bash /home/pi/charlie.sh'

GPIO.setmode(GPIO.BOARD)                             # use physical pin numbering
GPIO.setup(3, GPIO.IN, pull_up_down=GPIO.PUD_UP)     # pin 3 idles high, pressed = low

while True:
    # Wait until the button is pressed (pin 3 pulled low)
    while True:
        if GPIO.input(3) == 0:
            break

    print('pressed')
    time.sleep(1)                                    # simple debounce delay
    os.system(run_yolo_cmd)                          # runs charlie.sh, which starts TFLite_detection_stream.py

    # Wait until the button is released (pin 3 back high)
    while True:
        if GPIO.input(3) == 1:
            break
    print('not pressed')
    time.sleep(1)                                    # debounce on release as well

  Note: the time.sleep(1) calls are necessary because the voltage is unstable while the button is being pressed or released; the delay debounces the switch (an edge-detection alternative is sketched below).
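An alternative to polling plus sleep is RPi.GPIO's built-in edge detection with a bouncetime, a minimal sketch of which is shown here (this is not what GPIO.py uses; it is just another way to debounce):

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(3, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Block until pin 3 falls (button pressed); bouncetime ignores glitches for 200 ms
GPIO.wait_for_edge(3, GPIO.FALLING, bouncetime=200)
print('pressed')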
  4. Modify TFLite_detection_stream.py:
  Modify TFLite_detection_stream.py so that it exits automatically once it detects pin 5 going high:

Add at the top of TFLite_detection_stream.py:
import sys
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(5, GPIO.IN, pull_up_down=GPIO.PUD_UP)

Add inside the main inference loop:
x = GPIO.input(5)
if x == 1:
    print('camera exit')
    sys.exit(0)

  5. Run GPIO.py automatically at boot

sudo gedit /etc/rc.local
add the following line just above exit 0:
python3 /home/pi/Desktop/demo1/tflite/GPIO.py &
(the & keeps the script running in the background)

At this point GPIO.py runs automatically at boot and keeps polling pin 3. When the button is pressed, GPIO.py calls charlie.sh to run TFLite_detection_stream.py. TFLite_detection_stream.py monitors pin 5; when the button is released, it exits automatically. This forms a loop: press the button again and detection starts again...
