This project shows how to serve a TensorFlow-based image classification model as a web service with TFServing, Docker, and Kubernetes (GKE).

Overview

Deploying ML models with CPU-based TFServing, Docker, and Kubernetes

By: Chansung Park and Sayak Paul

This project shows how to serve a TensorFlow image classification model as a RESTful and gRPC-based service with TFServing, Docker, and Kubernetes. The idea is to first create a custom TFServing Docker image with a TensorFlow model, and then deploy it on a k8s cluster running on Google Kubernetes Engine (GKE). We also use GitHub Actions to automate the whole procedure whenever a new TensorFlow model is released.
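
For reference, a model is servable by TFServing once it is exported as a SavedModel under a numbered version directory. The snippet below is a minimal sketch of such an export; the ResNet50 from tf.keras.applications and the resnet/1 directory are illustrative assumptions, not necessarily what the released model in this repository uses.

    import tensorflow as tf

    # Any TensorFlow model works; ResNet50 is used here only as an example.
    model = tf.keras.applications.ResNet50(weights="imagenet")

    # TFServing loads SavedModels from a numbered version sub-directory,
    # e.g. <model_base_path>/resnet/1.
    tf.saved_model.save(model, "resnet/1")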

👋 NOTE

  • Even though this project uses an image classification model, its structure and techniques can be used to serve other models as well.
  • There is a counterpart project that uses FastAPI instead of TFServing. If you want to know how to convert a TensorFlow model into an ONNX-optimized model and deploy it on a k8s cluster, check out this repo.

Deploying the model as a service with k8s

  • Prerequisites: Before doing anything else, you have to create a GKE cluster and service accounts with the appropriate roles. You also need GCP credentials so that GitHub Actions can access GCP resources. Please find more detailed information here
flowchart LR
    A[First: Environmental Setup]-->B;
    B[Second: Build TFServing Image]-->C[Third: Deploy on GKE];
  • To deploy a custom TFServing Docker image, we define the deployment.yml workflow file, which is only triggered when there is a new release for the current repository. It is subdivided into three parts that carry out the following tasks:
    • First subtask handles the environmental setup.
      • GCP Authentication (the GCP credential has to be provided as a GitHub Secret)
      • Install gcloud CLI toolkit
      • Authenticate Docker to push images to GCR (Google Container Registry)
      • Connect to the designated GKE cluster
    • Second subtask handles building a custom TFServing image (a rough sketch of these steps appears after this list).
      • Download and extract the latest released model from the current repository
      • Run the CPU-optimized TFServing image, which is compiled from the source code (FYI, the image tag is gcr.io/gcp-ml-172005/tfs-resnet-cpu-opt, and it is publicly available)
      • Copy the extracted model into the running container
      • Commit the changes to the running container and give it a new image name
      • Push the committed image
    • Third subtask handles deploying the custom TFServing image to GKE cluster.
      • Pick one of the scenarios from the various experiments
      • Download Kustomize toolkit to handle overlay configurations.
      • Update the image tag to the currently built one with Kustomize
      • By provisioning Deployment, Service, and ConfigMap, the custom TFServing image gets deployed.
        • NOTE: ConfigMap is only used for batching-enabled scenarios to inject batching configurations dynamically into the Deployment.
    • In order to use this repo for your own purposes, please read this document to find out which environment variables have to be set.
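
For illustration, the image-building subtask above (run the base image, copy the model in, commit, push) could be sketched in Python with the Docker SDK. The actual workflow performs these steps with docker CLI commands inside GitHub Actions; the target image name below is a hypothetical placeholder.

    import io
    import tarfile

    import docker  # pip install docker

    BASE_IMAGE = "gcr.io/gcp-ml-172005/tfs-resnet-cpu-opt"  # CPU-optimized TFServing base image
    TARGET_REPO = "gcr.io/<your-project>/tfs-resnet"        # hypothetical target image name

    client = docker.from_env()

    # 1. Run the base TFServing container.
    container = client.containers.run(BASE_IMAGE, detach=True)

    # 2. Copy the extracted SavedModel into the container
    #    (put_archive is the SDK equivalent of `docker cp`, so pack the model directory as a tar first).
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add("resnet", arcname="resnet")  # local SavedModel directory, e.g. resnet/1/...
    buf.seek(0)
    container.put_archive("/models", buf.getvalue())

    # 3. Commit the changes as a new image and push it to GCR.
    container.commit(repository=TARGET_REPO, tag="latest")
    client.images.push(TARGET_REPO, tag="latest")
    container.stop()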

If the entire workflow runs without any errors, you will see something similar to the text below. As you can see, two external ports are exposed (8500 for gRPC, 8501 for RESTful). You can check out the complete logs in the past runs.

NAME             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                          AGE
tfs-server       LoadBalancer   xxxxxxxxxx     xxxxxxxxxx      8500:30869/TCP,8501:31469/TCP    23m
kubernetes       ClusterIP      xxxxxxxxxx     <none>          443/TCP                         160m
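
Once the EXTERNAL-IP is assigned, the service can be queried over gRPC on port 8500 (or over REST on port 8501). Below is a minimal gRPC client sketch using the tensorflow-serving-api package; the model name resnet, the signature name serving_default, and the input key input_1 are assumptions for illustration and depend on how the released SavedModel was exported.

    import grpc
    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("<EXTERNAL-IP>:8500")  # gRPC port of the tfs-server Service
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "resnet"                     # assumed model name
    request.model_spec.signature_name = "serving_default"  # assumed signature name

    # A single 224x224 RGB image; replace the zeros with real pre-processed pixels.
    image = np.zeros((1, 224, 224, 3), dtype=np.float32)
    request.inputs["input_1"].CopyFrom(tf.make_tensor_proto(image))  # assumed input key

    response = stub.Predict(request, timeout=10.0)
    print(response.outputs)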

Load testing

We used Locust to conduct load tests for both TFServing and FastAPI. Below are the results for TFServing (gRPC) on various setups; you can find the results for FastAPI (RESTful) in a separate repo. For specific instructions on how to install Locust and run a load test, follow this separate document.

Hypothesis

  • This is a follow-up project to the ONNX-optimized FastAPI deployment, so we wanted to know how the CPU-optimized TensorFlow runtime compares to the ONNX-based one.
  • TFServing's objective is to maximize throughput while keeping tail latency below certain bounds. We wanted to see whether this holds, how reliably it provides good throughput, and how much throughput is sacrificed to maintain that reliability.
  • According to TFServing's official documentation, TFServing achieves the best performance when it is deployed on fewer, larger (in terms of CPU and RAM) machines. We wanted to estimate how large a machine and how many nodes are enough. For this, we prepared a set of different setups combining (# of nodes + # of CPU cores + RAM capacity).
  • TFServing has a number of configurable options to tune performance. In particular, we wanted to find out how different values of the --tensorflow_inter_op_parallelism, --tensorflow_intra_op_parallelism, and --enable_batching options affect the results.

Conclusion

From the results above,

  • TFServing focuses more on reliability than performance (in terms of throughput). In all cases, no failures were observed, and the response time was consistent.
  • Req/s is lower than for the ONNX-optimized FastAPI deployment, so TFServing sacrifices some performance to achieve reliability. However, note that TFServing comes with lots of built-in features required in most ML serving scenarios, such as multi-model serving, dynamic batching, model versioning, and so on. These features likely make TFServing heavier than a simple FastAPI server.
    • NOTE: We spawned requests every second to clearly see how TFServing behaves with an increasing number of clients, so the Req/s figures do not reflect a real-world situation where clients may send requests at any time.
  • 8vCPU + 16GB RAM seems to be a large enough machine; at the very least, a bigger amount of RAM does not help much. We might achieve better performance by increasing the number of CPU cores, but going beyond 8 cores is somewhat costly.
  • In all cases, the optimal value of --tensorflow_inter_op_parallelism appears to be 4. The value of --tensorflow_intra_op_parallelism is fixed to the number of CPU cores, since it specifies the number of threads used to parallelize the execution of an individual op.
  • --enable_batching could give you better performance. However, since TFServing does not respond to each request immediately, there is a trade-off.
  • Considering the cost trade-off, our recommendation from these experiments is the 2n-8c-16r-interop4 configuration, unless you care about dynamic batching capabilities. Alternatively, you can write a similar setup by referencing 2n-8c-16r-interop2-batch, but for smaller machines as well.

👋 NOTE

  • Locust doesn't have built-in support for writing a gRPC-based client, so we wrote one ourselves. If you are curious about the implementation, check out this locustfile.py (a simplified sketch appears after this list).
  • For the legend in the plot, n means the number of nodes (pods), c means the number of CPU cores, r means the RAM capacity, interop means the value of --tensorflow_inter_op_parallelism, and batch means that batching is enabled with this config.
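
The sketch below illustrates what a custom gRPC user in Locust roughly looks like; it is a simplified, hypothetical version rather than the exact locustfile.py used in this repo, and the model name, input key, and endpoint are placeholders.

    import time

    import grpc
    import numpy as np
    import tensorflow as tf
    from locust import User, between, task
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    class TFServingGrpcUser(User):
        wait_time = between(1, 1)
        host = "<EXTERNAL-IP>:8500"  # placeholder gRPC endpoint

        def on_start(self):
            self.channel = grpc.insecure_channel(self.host)
            self.stub = prediction_service_pb2_grpc.PredictionServiceStub(self.channel)

        @task
        def predict(self):
            request = predict_pb2.PredictRequest()
            request.model_spec.name = "resnet"                     # assumed model name
            request.model_spec.signature_name = "serving_default"  # assumed signature name
            image = np.zeros((1, 224, 224, 3), dtype=np.float32)   # dummy pre-processed image
            request.inputs["input_1"].CopyFrom(tf.make_tensor_proto(image))

            start = time.perf_counter()
            exception = None
            try:
                self.stub.Predict(request, timeout=10.0)
            except grpc.RpcError as e:
                exception = e
            # Report the call to Locust so it shows up in the statistics.
            self.environment.events.request.fire(
                request_type="grpc",
                name="predict",
                response_time=(time.perf_counter() - start) * 1000,
                response_length=0,
                exception=exception,
                context={},
            )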

Future works

  • More load-test comparisons with other ML inference frameworks such as NVIDIA's Triton Inference Server, KServe, and RedisAI.

  • Advancing this repo by providing semi-automatic model deployment. To be more specific, when new code implementing a new ML model is submitted as a pull request, maintainers could trigger a model performance evaluation on GCP's Vertex AI Training via comments. The experiment results could be exposed through TensorBoard.dev or W&B. If approved, the code would be merged, the trained model would be released, and it would then be deployed on GKE.

Acknowledgements

ML-GDE program for providing GCP credit support.
