Potato Disease Classification - Training, REST APIs, and a Frontend for Testing.

Overview

This project covers training a potato disease classification model, serving it through REST APIs (plain FastAPI, or FastAPI with TensorFlow Serving), and testing it from a ReactJS frontend and a React Native mobile app.

Setup for Python:

  1. Install Python (Setup instructions)

  2. Install Python packages

pip3 install -r training/requirements.txt
pip3 install -r api/requirements.txt
  3. Install TensorFlow Serving (Setup instructions)

Setup for ReactJS

  1. Install Nodejs (Setup instructions)
  2. Install NPM (Setup instructions)
  3. Install dependencies
cd frontend
npm ci
npm audit fix
  4. Copy .env.example as .env.

  5. Change the API URL in .env.

Setup for React-Native app

  1. Go to the React Native environment setup, then select the React Native CLI Quickstart tab.

  2. Install dependencies

cd mobile-app
yarn install
  • 2.1 Only for macOS users
cd ios && pod install && cd ../
  3. Copy .env.example as .env.

  4. Change the API URL in .env.

Training the Model

  1. Download the data from Kaggle.
  2. Keep only the folders related to potatoes.
  3. Run Jupyter Notebook in the browser.
jupyter notebook
  4. Open training/potato-disease-training.ipynb in Jupyter Notebook.
  5. In cell #2, update the path to the dataset.
  6. Run all the cells one by one.
  7. Copy the generated model and save it with a version number in the models folder (see the sketch below).
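
The notebook handles the actual training; purely as an illustration of steps 5 and 7, the hedged sketch below loads the dataset from a path and saves the trained model under an auto-incremented version number. The dataset path, image size, batch size, and the tiny placeholder network are assumptions, not the notebook's real architecture.

# Hedged sketch of dataset loading and versioned saving (not the notebook itself).
# DATASET_DIR, IMAGE_SIZE, BATCH_SIZE and the placeholder model are assumptions.
import pathlib
import tensorflow as tf

DATASET_DIR = "PlantVillage"   # assumed path to the potato-only dataset folders
IMAGE_SIZE = (256, 256)        # assumed input size
BATCH_SIZE = 32

dataset = tf.keras.preprocessing.image_dataset_from_directory(
    DATASET_DIR, image_size=IMAGE_SIZE, batch_size=BATCH_SIZE)

# Placeholder model so the sketch runs end to end; the real CNN is defined in the notebook.
num_classes = len(dataset.class_names)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=IMAGE_SIZE + (3,)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(dataset, epochs=1)

# TF Serving expects each model version in its own numbered folder (models/1, models/2, ...),
# so save under the next free version number as a SavedModel (TF 2.x default format).
models_dir = pathlib.Path("../models")
models_dir.mkdir(exist_ok=True)
version = max([int(p.name) for p in models_dir.iterdir() if p.name.isdigit()], default=0) + 1
model.save(str(models_dir / str(version)))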

Running the API

Using FastAPI

  1. Go inside the api folder
cd api
  2. Run the FastAPI server using uvicorn
uvicorn main:app --reload --host 0.0.0.0
  3. Your API is now running at 0.0.0.0:8000 (a quick way to test it is shown below).
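
Once uvicorn is running, you can sanity-check the API by posting a leaf image to it. The /predict route and the file field name below are assumptions about how main.py defines its FastAPI endpoint; adjust them to match the actual code.

# Hedged client-side test of the running API (route and field name are assumptions).
import requests

url = "http://localhost:8000/predict"            # assumed route in api/main.py
with open("test_potato_leaf.jpg", "rb") as f:    # any local potato leaf image
    response = requests.post(url, files={"file": f})

print(response.status_code)
print(response.json())  # expected to contain the predicted class and confidence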

Using FastAPI & TF Serving

  1. Go inside the api folder
cd api
  2. Copy models.config.example as models.config and update the paths in the file.
  3. Run TF Serving (update the config file path in the command below)
docker run -t --rm -p 8501:8501 -v C:/Code/potato-disease-classification:/potato-disease-classification tensorflow/serving --rest_api_port=8501 --model_config_file=/potato-disease-classification/models.config
  4. Run the FastAPI server using uvicorn. You can run it directly from main.py or main-tf-serving.py using the PyCharm run option (as shown in the video tutorial), or from the command prompt as shown below:
uvicorn main-tf-serving:app --reload --host 0.0.0.0
  5. Your API is now running at 0.0.0.0:8000 and forwards prediction requests to TF Serving (a sketch of that call is shown below).
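
In this setup FastAPI only preprocesses the image and forwards a JSON request to TensorFlow Serving's REST API; the real logic lives in main-tf-serving.py. The sketch below assumes the model is registered in models.config under the name potatoes_model and takes 256x256 RGB input; both are assumptions.

# Hedged sketch of calling TF Serving's REST predict endpoint.
# The model name "potatoes_model" and the input shape are assumptions; the name
# must match the entry in models.config.
import numpy as np
import requests

TF_SERVING_URL = "http://localhost:8501/v1/models/potatoes_model:predict"

def predict_with_tf_serving(image_batch: np.ndarray) -> np.ndarray:
    """image_batch: float array of shape (N, height, width, 3)."""
    payload = {"instances": image_batch.tolist()}
    response = requests.post(TF_SERVING_URL, json=payload)
    response.raise_for_status()
    return np.array(response.json()["predictions"])

# Exercise the call with a random batch just to check connectivity and shapes.
dummy_batch = np.random.rand(1, 256, 256, 3).astype("float32")
print(predict_with_tf_serving(dummy_batch).shape)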

Running the Frontend

  1. Go inside the frontend folder
cd frontend
  2. Copy .env.example as .env and update REACT_APP_API_URL to the API URL if needed (see the example below).
  3. Run the frontend
npm run start
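
For reference, the frontend only needs the API URL that Create React App exposes to the browser. A typical local value, assuming the FastAPI /predict route from the API section, would be:

REACT_APP_API_URL=http://localhost:8000/predict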

Running the app

  1. Go inside the mobile-app folder
cd mobile-app
  2. Copy .env.example as .env and update the URL to the API URL if needed.

  3. Run the app (Android/iOS)

npm run android

or

npm run ios
  4. Create a public (signed) APK by following the React Native guide for generating a signed release build.

Creating the TF Lite Model

  1. Run Jupyter Notebook in the browser.
jupyter notebook
  2. Open training/tf-lite-converter.ipynb in Jupyter Notebook.
  3. In cell #2, update the path to the dataset.
  4. Run all the cells one by one.
  5. The model will be saved in the tf-lite-models folder (the core conversion step is sketched below).
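
The conversion itself boils down to TensorFlow's TFLiteConverter. A minimal sketch, assuming a SavedModel at ../models/1 and an output folder ../tf-lite-models (the notebook is the authoritative version, including any quantization it applies):

# Hedged sketch of the TF Lite conversion; paths are assumptions.
import pathlib
import tensorflow as tf

SAVED_MODEL_DIR = "../models/1"                  # assumed path to a saved model version
OUTPUT_DIR = pathlib.Path("../tf-lite-models")
OUTPUT_DIR.mkdir(exist_ok=True)

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

(OUTPUT_DIR / "potatoes.tflite").write_bytes(tflite_model)
print("Wrote", OUTPUT_DIR / "potatoes.tflite")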

Deploying the TF Lite Model on GCP

  1. Create a GCP account.
  2. Create a Project on GCP (Keep note of the project id).
  3. Create a GCP bucket.
  4. Upload the TF Lite model generated in the previous step to the bucket, e.g. at the path models/potatoes.tflite.
  5. Install Google Cloud SDK (Setup instructions).
  6. Authenticate with Google Cloud SDK.
gcloud auth login
  7. Run the deployment command (replace project_id with your project id).
cd gcp
gcloud functions deploy predict_lite --runtime python38 --trigger-http --memory 512 --project project_id
  8. Your model is now deployed.
  9. Use Postman to test the GCF using the trigger URL (a rough sketch of such a Cloud Function is shown below).
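
For orientation, a predict_lite-style HTTP Cloud Function typically downloads the .tflite file from the bucket once and then runs it with a TF Lite interpreter. The sketch below is an assumption of that flow, not the repo's actual gcp/main.py: the bucket name, blob path, class names, input size, and field name are all placeholders.

# Hedged sketch of a predict_lite-style HTTP Cloud Function (placeholders throughout).
import numpy as np
import tensorflow as tf            # tflite_runtime's Interpreter can be swapped in to save memory
from google.cloud import storage
from PIL import Image

BUCKET_NAME = "your-bucket-name"        # placeholder
BLOB_PATH = "models/potatoes.tflite"    # placeholder; must match what you uploaded
LOCAL_MODEL = "/tmp/potatoes.tflite"    # Cloud Functions can only write under /tmp
CLASS_NAMES = ["Early Blight", "Late Blight", "Healthy"]  # placeholder ordering

interpreter = None

def predict_lite(request):
    """HTTP Cloud Function; expects a multipart image upload under the 'file' field."""
    global interpreter
    if interpreter is None:  # download and load the model on the first (cold-start) request
        storage.Client().bucket(BUCKET_NAME).blob(BLOB_PATH).download_to_filename(LOCAL_MODEL)
        interpreter = tf.lite.Interpreter(model_path=LOCAL_MODEL)
        interpreter.allocate_tensors()

    # Preprocessing must mirror training (size and any scaling); 256x256 float32 is assumed here.
    image = Image.open(request.files["file"]).convert("RGB").resize((256, 256))
    batch = np.expand_dims(np.array(image, dtype=np.float32), axis=0)

    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]
    interpreter.set_tensor(input_index, batch)
    interpreter.invoke()
    probabilities = interpreter.get_tensor(output_index)[0]

    return {"class": CLASS_NAMES[int(np.argmax(probabilities))],
            "confidence": float(np.max(probabilities))}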

Inspiration: https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions

Deploying the TF Model (.h5) on GCP

  1. Create a GCP account.
  2. Create a Project on GCP (Keep note of the project id).
  3. Create a GCP bucket.
  4. Upload the generated TF .h5 model to the bucket at the path models/potato-model.h5.
  5. Install Google Cloud SDK (Setup instructions).
  6. Authenticate with Google Cloud SDK.
gcloud auth login
  7. Run the deployment command (replace project_id with your project id).
cd gcp
gcloud functions deploy predict --runtime python38 --trigger-http --memory 512 --project project_id
  8. Your model is now deployed.
  9. Use Postman to test the GCF using the trigger URL, or use the script shown below.
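
If you prefer a script over Postman, the same check can be done with requests. The trigger URL below is a placeholder (copy the real one from the gcloud output or the GCP console), and the file field name is an assumption about the function's handler.

# Hedged example of calling the deployed Cloud Function.
import requests

TRIGGER_URL = "https://REGION-PROJECT_ID.cloudfunctions.net/predict"  # placeholder

with open("test_potato_leaf.jpg", "rb") as f:
    response = requests.post(TRIGGER_URL, files={"file": f})

print(response.status_code, response.json())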

Inspiration: https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions
