Management Dashboard for Torchserve

Overview

Torchserve Dashboard using Streamlit

Related blog post

Demo

Usage

Additional Requirement: torchserve (recommended: v0.5.2)

Simply run:

pip3 install torchserve-dashboard --user
# torchserve-dashboard [streamlit_options(optional)] -- [config_path(optional)] [model_store(optional)] [log_location(optional)] [metrics_location(optional)]
torchserve-dashboard
# OR change the port
torchserve-dashboard --server.port 8105 -- --config_path ./torchserve.properties
# OR provide a custom configuration and model store
torchserve-dashboard -- --config_path ./torchserve.properties --model_store ./model_store

Keep in mind that if you change any of the --config_path, --model_store, --metrics_location, or --log_location options while a torchserve instance is already running before you start torchserve-dashboard, they won't take effect until you stop and restart torchserve. These options are used in place of their respective environment variables TS_CONFIG_FILE, METRICS_LOCATION, and LOG_LOCATION.

OR

git clone https://github.com/cceyda/torchserve-dashboard.git
streamlit run torchserve_dashboard/dash.py
# OR
streamlit run torchserve_dashboard/dash.py --server.port 8105 -- --config_path ./torchserve.properties

Example torchserve config:

inference_address=http://127.0.0.1:8443
management_address=http://127.0.0.1:8444
metrics_address=http://127.0.0.1:8445
grpc_inference_port=7070
grpc_management_port=7071
number_of_gpu=0
batch_size=1
model_store=./model_store

If the server doesn't start for some reason, check whether your ports are already in use!
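
A quick way to do that check from Python, using only the standard library (the port numbers are just the ones from the example config below plus the dashboard's own port, so adjust them to your setup):

# port_check.py - sketch of a port-availability check
import socket

PORTS = [8443, 8444, 8445, 8105]  # inference, management, metrics, dashboard

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 if something is already listening on the port
        in_use = s.connect_ex(("127.0.0.1", port)) == 0
        print(f"port {port}: {'IN USE' if in_use else 'free'}")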

Updates

[15-oct-2020] Add scale workers tab

[16-feb-2021] (functionality) Make log path configurable, (functionality) remove model_name requirement, (UI) add cosmetic error messages

[10-may-2021] Update config & make it optional. Update streamlit. Auto-create folders

[31-may-2021] Update to v0.4 (add workflow API). Refactor streamlit out of api.py

[30-nov-2021] Update to v0.5, adding support for encrypted model serving (not tested). Update streamlit to v1+

FAQs

  • Does torchserve keep running in the background?

    Torchserve is spawned using Popen and keeps running in the background even if you stop the dashboard.
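
    A minimal sketch of that pattern (not the dashboard's exact code; the config path, model store, and torchserve flags below are assumptions):

    # Rough sketch: spawn torchserve in the background with Popen.
    import os
    import subprocess

    env = os.environ.copy()
    env["TS_CONFIG_FILE"] = "./torchserve.properties"  # config passed via env var

    # Popen returns immediately; torchserve keeps running even after the
    # parent process (the dashboard) exits, so it has to be stopped
    # explicitly, e.g. with `torchserve --stop`.
    proc = subprocess.Popen(
        ["torchserve", "--start", "--ncs", "--model-store", "./model_store"],
        env=env,
    )
    print("spawned torchserve with pid", proc.pid)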

  • What about environment variables?

    These environment variables are passed to the torchserve command:

    ENVIRON_WHITELIST=["LD_LIBRARY_PATH","LC_CTYPE","LC_ALL","PATH","JAVA_HOME","PYTHONPATH","TS_CONFIG_FILE","LOG_LOCATION","METRICS_LOCATION","AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION"]
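
    Roughly, the environment handed to the spawned torchserve process is filtered down to that whitelist; a small sketch of the idea (not the exact implementation):

    import os

    ENVIRON_WHITELIST = [
        "LD_LIBRARY_PATH", "LC_CTYPE", "LC_ALL", "PATH", "JAVA_HOME", "PYTHONPATH",
        "TS_CONFIG_FILE", "LOG_LOCATION", "METRICS_LOCATION",
        "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION",
    ]

    # keep only the whitelisted variables that are actually set
    env = {k: v for k, v in os.environ.items() if k in ENVIRON_WHITELIST}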

  • How do I change the logging format of torchserve?

    You can set the location of your custom log4j2 config in your torchserve configuration file, for example:

    vmargs=-Dlog4j.configurationFile=file:///path/to/custom/log4j2.xml

  • What is the meaning behind the weird versioning?

    The minor version follows the compatible torchserve version; the patch version reflects the dashboard's own releases.

Help & Questions & Feedback

Open an issue

TODOs

  • Async?
  • Better logging
  • Remote only mode
Comments
  • Update to serve 0.4

    I love your project and was hoping we could feature it more prominently in the main torchserve repo - I was wondering if you'd be OK with and interested in this. And if so, I was wondering if you could give me some feedback on the points below.

    Installation instructions

    I tried torchserve-dashboard --server.port 8105 -- --config_path ./torchserve.properties --model_store ./model_store, but the page never seems to load regardless of whether I use the network or the external URL that I have.

    I set up a config:

    (torchservedashboard) [email protected]:~/torchserve-dashboard$ cat torchserve.properties 
    inference_address=http://127.0.0.1:8443
    management_address=http://127.0.0.1:8444
    metrics_address=http://127.0.0.1:8445
    grpc_inference_port=7070
    grpc_management_port=7071
    number_of_gpu=0
    batch_size=1
    model_store=model_store
    

    But perhaps it makes the most sense to just add a default one to the repo so things just work. I'm happy to make the PR, just let me know what you suggest. Ideally things just work with zero config and people can come back and change stuff once they feel more comfortable.

    Also, on Ubuntu I had to type export PATH="$HOME/.local/bin:$PATH" so I could call torchserve-dashboard.

    Features

    Also there are some new features we're excited about, like the ones below, which would be very interesting to see:

    1. Model interpretability with Captum https://github.com/pytorch/serve/blob/master/captum/Captum_visualization_for_bert.ipynb
    2. Workflow support coming in 0.4 which will allow much more configurable pipelines https://github.com/pytorch/serve/pull/1024/files

    In all cases please let me know if you think we're on the right track and how we can make torchserve more useful to you. I liked your suggestion on automatic doc generation and it's something I'm looking into, so please keep them coming!

    opened by msaroufim 5
  • Improvements of package setup logic

    This PR is related to #1. It improves the structure of the package setup: all package-related info is moved to torchserve_dashboard/__init__.py.

    Requirement files are added which are split up depending on the usage of the repo/package.

    All functions linked to setup are moved to torchserve_dashboard/setup_tools.py. The function parsing the requirements can handle commented requirements as well as references to GitHub etc. (#egg included in the requirement).

    opened by FlorianMF 3
  • click >=8 possibly not compatible

    Couldn't run the dashboard initially

    Traceback (most recent call last):
      File "/Users/me/Desktop/pytorch-mnist/venv/bin/torchserve-dashboard", line 8, in <module>
        sys.exit(main())
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1137, in __call__
        return self.main(*args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1062, in main
        rv = self.invoke(ctx)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 763, in invoke
        return __callback(*args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
        return f(get_current_context(), *args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/torchserve_dashboard/cli.py", line 16, in main
        ctx.forward(streamlit.cli.main_run, target=filename, args=args, *kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 784, in forward
        return __self.invoke(__cmd, *args, **kwargs)
      File "/Users/me/Desktop/pytorch-mnist/venv/lib/python3.8/site-packages/click/core.py", line 763, in invoke
        return __callback(*args, **kwargs)
    TypeError: main_run() got multiple values for argument 'target'
    

    After a bit of googling, I found this: https://github.com/rytilahti/python-eq3bt/issues/30

    The default install brought in click==8.0.1. I had to downgrade to 7.1.2 to get past the error.

    opened by jsphweid 1
  • better caching, init option, v0.6 update

    • Better caching using @st.experimental_singleton

      • argument parsing and API class initialization should only happen once (across sessions) on initial load.
      • Should be way better compared to before, which ran those functions after each page refresh 😱 Might be optimized further later; need to refactor CLI-param parsing/init logic. (A minimal sketch of this caching pattern follows after this list.)
    • Added an --init option to initialize torchserve on start, as per this issue: https://github.com/cceyda/torchserve-dashboard/issues/16 Use it like torchserve-dashboard --init, although you still have to load the dashboard screen once for it to actually start!

    • Update to match changes in torchserve v0.6

      • there seems to be only one update to ManagementAPI in v0.6 https://github.com/pytorch/serve/pull/1421 which adds ?customized=true option to return custom_metadata in model details. Although the feature seems to be buggy for old .mar files not implementing it. (tested on: https://github.com/pytorch/serve/blob/master/frontend/archive/src/test/resources/models/noop-customized.mar)

      Anyway I added a checkbox (defaulted to False) to return custom_metadata if needed.
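
      A minimal sketch of the singleton-caching pattern mentioned above (ManagementAPI and get_api are placeholders here, not the dashboard's actual names):

      import streamlit as st

      class ManagementAPI:
          def __init__(self, address: str):
              self.address = address  # e.g. the torchserve management address

      @st.experimental_singleton
      def get_api(address: str) -> ManagementAPI:
          # Runs once per process and is shared across sessions/reruns,
          # instead of re-initializing on every page refresh.
          return ManagementAPI(address)

      api = get_api("http://127.0.0.1:8444")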

    opened by cceyda 0
  • update streamlit version to v1.11.1

    Update the streamlit version to include the v1.11.1 security update, although torchserve-dashboard isn't using any custom components and therefore isn't affected by it.

    opened by cceyda 0
  • Update to 0.5.0

    update torchserve:

    • v0.5
    • add aws encrypted model feature
    • add log-config to options

    update streamlit:

    • v1.2.0 -> drop beta_ prefix
    • drop python 3.6

    https://github.com/cceyda/torchserve-dashboard/issues/13

    opened by cceyda 0
  • Update to v0.4

    • Add workflow management endpoints (untested)
    • Add version check
    • Refactor api.py (remove streamlit)

    Closes: https://github.com/cceyda/torchserve-dashboard/issues/2

    opened by cceyda 0
  • Add workflow Management API

    Self-todo, ETA: June 3rd. https://github.com/pytorch/serve/tree/release_0.4.0/examples/Workflows https://github.com/pytorch/serve/blob/release_0.4.0/docs/workflow_management_api.md

    opened by cceyda 0
  • Can't start torchserve-dashboard

    I'm getting this error on start:

    Traceback (most recent call last):
      File "/home/kavan/.local/bin/torchserve-dashboard", line 5, in <module>
        from torchserve_dashboard.cli import main
      File "/home/kavan/.local/lib/python3.8/site-packages/torchserve_dashboard/cli.py", line 2, in <module>
        import streamlit.cli
      File "/home/kavan/.local/lib/python3.8/site-packages/streamlit/__init__.py", line 49, in <module>
        from streamlit.proto.RootContainer_pb2 import RootContainer
      File "/home/kavan/.local/lib/python3.8/site-packages/streamlit/proto/RootContainer_pb2.py", line 22, in <module>
        create_key=_descriptor._internal_create_key,
    AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
    

    Btw thanks for this awesome lib.

    opened by Kavan72 1
  • Explanations API

    Mentioned in https://github.com/cceyda/torchserve-dashboard/issues/1: model interpretability with Captum (https://github.com/pytorch/serve/blob/master/captum/Captum_visualization_for_bert.ipynb).

    This would be good to add if we end up adding an Inference API.

    opened by cceyda 0
  • Inference API

    Moving the discussion from https://github.com/cceyda/torchserve-dashboard/issues/1#issuecomment-863911194 to here

    Current challenges blocking this:

    • There is no way to know the format of the expected request/response, especially for custom handlers.

    (I prefer not to do model_name -> type matching manually.)

    If a request/response schema is added to the returned OpenAPI definitions, I can probably auto-generate something like SwaggerUI.
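
    For reference, torchserve already returns an OpenAPI description (to the best of my knowledge, via an OPTIONS request on the inference/management addresses); a small sketch of fetching it, assuming the example config above and that the requests package is installed:

    import json
    import requests

    # OPTIONS on the inference address returns torchserve's OpenAPI description
    resp = requests.options("http://127.0.0.1:8443")
    resp.raise_for_status()
    spec = resp.json()

    # If per-model request/response schemas ever show up in here, a
    # SwaggerUI-style form could be generated from them.
    print(json.dumps(spec.get("paths", {}), indent=2))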

    opened by cceyda 0
  • Docker container

    I think it would be great for users and for developers to be able to easily share their dashboard or run it in production without deploying via Streamlit. I could add a simple Dockerfile wrapping everything up into a container.

    Torchserve-Dashboard would be to Torchserve what mongo-express is to MongoDB. Thoughts?

    opened by FlorianMF 3
Releases (v0.6.0)
  • v0.6.0(Aug 1, 2022)

    What's Changed

    • update streamlit version to v1.11.1 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/18
    • Better caching using @st.experimental_singleton https://github.com/cceyda/torchserve-dashboard/pull/19
    • Added --init option to initialize torchserve on start https://github.com/cceyda/torchserve-dashboard/pull/19
    • Update to match changes in torchserve v0.6 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/19

    Full Changelog: https://github.com/cceyda/torchserve-dashboard/compare/v0.5.0...v0.6.0

  • v0.5.0(Nov 30, 2021)

    Update to v0.5, adding support for encrypted model serving (not tested). Update streamlit to v1+

    What's Changed

    • Improvements of package setup logic by @FlorianMF in https://github.com/cceyda/torchserve-dashboard/pull/5
    • WIP: Add type annotations by @FlorianMF in https://github.com/cceyda/torchserve-dashboard/pull/7
    • Update to 0.5.0 by @cceyda in https://github.com/cceyda/torchserve-dashboard/pull/15

    New Contributors

    • @FlorianMF made their first contribution in https://github.com/cceyda/torchserve-dashboard/pull/5

    Full Changelog: https://github.com/cceyda/torchserve-dashboard/compare/v0.4.0...v0.5.0

  • v0.3.3(Jun 12, 2021)

  • v0.3.2(May 9, 2021)

  • v0.3.1(May 9, 2021)

  • v0.2.5(Feb 16, 2021)

  • v0.2.4(Feb 16, 2021)

  • v0.2.3(Oct 15, 2020)

  • v0.2.2(Oct 13, 2020)

  • v0.2.0(Oct 13, 2020)
