Step by step: how to create a vision recognition model using LOBE.ai, export the model, and run it in an Azure Function

Overview

Vision recognition using LOBE AI and Azure Functions

License: MIT Twitter: elbruno GitHub: elbruno

During the last couple of months, I've been having fun with my new friends at home: 🐿️ 🐿️ 🐿️ . These little ones are extremely funny, and they literally don't care about the cold 🥶 ❄️ ☃️ .

So, I decided to help them and build an Automatic Feeder using Azure IoT, a Wio Terminal, and maybe some more devices. You can check out the Azure IoT project here: Azure IoT - Squirrel Feeder.

Once the feeder was ready, I decided to add a new feature to the scenario: detecting when a squirrel 🐿️ is near the feeder. In this repository I'll share:

  • How to create an image recognition model using LOBE.
  • How to export the model to a TensorFlow image format.
  • How to run the model in an Azure Function.

LOBE AI

LOBE is a free, easy-to-use Microsoft desktop application that allows you to build, manage, and use custom machine learning models. With Lobe, you can create an image classification model to categorize images into labels that represent their content.

Here's a summary of how to prepare a model in Lobe:

  • Import and label images.
  • Train your model.
  • Evaluate training results.
  • Play with your model to experiment with different scenarios.
  • Export and use your model in an app.

The Overview of image classification model by Lobe section contains step-by-step instructions that let you make calls to the service and get results in a short period of time.

You can use the images in the "LOBE/Train/" directory in this repository to train your model.

Here is the model performing live recognition in action:

Exporting the model to TensorFlow

Once the project is trained, you can export it to several formats. We will use the TensorFlow format for the Azure Function.

The exported model has several files. The following list shows the files that we use in our Azure Function:

  • labels.txt: The labels that the model recognizes
  • saved_model.pb: The model definition
  • signature.json: The model signature
  • example/tf_example.py: Sample Python code that uses the exported model.

You can check the exported model in the "Lobe/ExportedModel" directory in this repository.
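
Before wiring these files into the function, it can help to inspect them. Below is a minimal sketch that reads labels.txt and signature.json from the "Lobe/ExportedModel" directory; the exact keys inside signature.json depend on the Lobe version, so they are read defensively.

# Minimal sketch: inspect the exported model assets (paths assume this repository layout)
import json
import os

EXPORT_DIR = "Lobe/ExportedModel"

# labels.txt contains one label per line
with open(os.path.join(EXPORT_DIR, "labels.txt")) as f:
    labels = [line.strip() for line in f if line.strip()]

# signature.json describes the inputs and outputs of saved_model.pb
with open(os.path.join(EXPORT_DIR, "signature.json")) as f:
    signature = json.load(f)

print("Labels:", labels)
print("Model inputs:", signature.get("inputs"))
print("Model outputs:", signature.get("outputs"))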

Azure Function

Time to code! Let's create a new Azure Function using Visual Studio Code and the Azure Functions for Visual Studio Code extension.

Changes to __init__.py

The following code is the final code for the __init__.py file in the Azure Function.

A couple of notes:

  • The function will receive a POST request with the file bytes in the body.
  • In order to use the model helper, we import the TFModel class from the tf_model_helper.py file using the relative import ".tf_model_helper" (a sketch of what this helper looks like is shown after the function code below).
  • ASSETS_PATH and TF_MODEL are the variables that we will use to access the exported model. We will use os.path to resolve the current path to the exported model.
  • The result of the function will be a JSON string with the prediction. Flask's jsonify() converts the TF_MODEL image prediction to a JSON string.
import logging
import azure.functions as func

# Imports for image processing
import io
import os
from PIL import Image
from flask import Flask, jsonify

# Imports for prediction
from .tf_model_helper import TFModel

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    results = "{}"
    try:
        # get and load image from POST
        image_bytes = req.get_body()    
        image = Image.open(io.BytesIO(image_bytes))

        # Load and initialize the model and the app context
        app = Flask(__name__)  

        # load LOBE Model using the current directory
        scriptpath = os.path.abspath(__file__)
        scriptdir  = os.path.dirname(scriptpath)
        ASSETS_PATH = os.path.join(scriptdir, "model")
        TF_MODEL = TFModel(ASSETS_PATH)

        with app.app_context():        
            # predict image and process results in JSON string format
            results = TF_MODEL.predict(image)            
            jsonresult = jsonify(results)
            jsonStr = jsonresult.get_data(as_text=True)
            results = jsonStr

    except Exception as e:
        logging.info(f'exception: {e}')
        pass 

    # return results
    logging.info('Image processed. Results: ' + results)
    return func.HttpResponse(results, status_code=200)
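
For reference, the tf_model_helper.py file comes from the example code that Lobe exports alongside the model. The sketch below illustrates roughly what such a helper looks like; it is a hypothetical illustration only (the input size, signature keys, and output format are assumptions), and the actual file in "Lobe/ExportedModel" is the one the function uses.

# Hypothetical sketch of the TFModel helper; the real tf_model_helper.py
# exported by Lobe may differ in input size, signature keys, and output format.
import json
import os

import numpy as np
import tensorflow as tf
from PIL import Image


class TFModel:
    def __init__(self, model_dir: str):
        # signature.json describes the model inputs, outputs, and labels
        with open(os.path.join(model_dir, "signature.json")) as f:
            self.signature = json.load(f)
        # labels.txt contains one label per line
        with open(os.path.join(model_dir, "labels.txt")) as f:
            self.labels = [line.strip() for line in f if line.strip()]
        # Load the SavedModel (saved_model.pb plus its variables)
        loaded = tf.saved_model.load(model_dir)
        self.infer = loaded.signatures["serving_default"]  # key name is an assumption

    def predict(self, image: Image.Image) -> dict:
        # Resize and normalize the image; the 224x224 input size is an assumption
        image = image.convert("RGB").resize((224, 224))
        batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)
        outputs = self.infer(tf.constant(batch))
        # Assume a single output tensor holding one confidence per label
        scores = list(outputs.values())[0].numpy()[0].tolist()
        predictions = [
            {"label": label, "confidence": score}
            for label, score in zip(self.labels, scores)
        ]
        predictions.sort(key=lambda p: p["confidence"], reverse=True)
        return {"predictions": predictions}

The important part is the contract used by __init__.py above: TFModel(path) loads the exported assets, and predict(image) returns a structure that jsonify can serialize.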

Changes to requirements.txt

The requirements.txt file will define the necessary libraries for the Azure Function. We will use the following libraries:

# DO NOT include azure-functions-worker in this file
# The Python Worker is managed by Azure Functions platform
# Manually managing azure-functions-worker may cause unexpected issues

azure-functions
requests
Pillow
numpy
flask
tensorflow
opencv-python

Sample Code

You can view the completed sample function code in the "AzureFunction/LobeSquirrelDetectorFunction/" directory in this repository.

Testing the sample

Once our code is complete, we can test the sample locally or in Azure Functions after we deploy the function. In both scenarios we can use any tool or language that can perform an HTTP POST request to test our function.

Test using Curl

curl is a simple command line tool for sending HTTP requests to a server. We can test the local function with the following command:

❯ curl -X POST http://localhost:7071/api/LobeSquirrelDetectorFunction --data-binary "@01.jpg"

In PowerShell, where curl is an alias for Invoke-WebRequest, the equivalent command is:

❯ curl http://localhost:7071/api/LobeSquirrelDetectorFunction -Method 'Post' -InFile 01.jpg
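
If you prefer Python, the same test can be scripted with the requests library (already listed in requirements.txt). This is a minimal sketch; 01.jpg is just an example file name:

# Minimal sketch: POST a local image to the function and print the prediction
import requests

url = "http://localhost:7071/api/LobeSquirrelDetectorFunction"
with open("01.jpg", "rb") as f:
    response = requests.post(url, data=f.read())

print(response.status_code)
print(response.text)  # JSON string with the predicted labels and confidences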

Test using Postman

Postman is a great tool to test our function, both locally and once it has been deployed to Azure Functions. You can download Postman here.

In order to test our function, we need to know the function URL. In Visual Studio Code, we can get the URL by expanding the Functions section in the Azure extension, then right-clicking the function and selecting "Copy Function URL".

Now we can go to Postman and create a new POST request using our function URL, adding the image we want to test as the request body. Here is a live demo, with the function running locally in Debug mode in Visual Studio Code:

We are now ready to test our function in Azure Functions. To do so, we deploy the function to Azure and then use the new Azure Function URL with the same test steps.

Additional Resources

You can check a session recording about this topic in English and Spanish.

These links will help you understand specific implementations of the sample code:

In my personal blog "ElBruno.com", I wrote about several scenarios on how to work and code with LOBE.

Author

👤 Bruno Capuano

🤝 Contributing

Contributions, issues and feature requests are welcome!

Feel free to check the issues page.

Show your support

Give a ⭐️ if this project helped you!

📝 License

Copyright © 2021 Bruno Capuano.

This project is MIT licensed.

