Aquarius - Enabling Fast, Scalable, Data-Driven Virtual Network Functions


NOTE: We are currently going through the open-source process required by our institution. The content will soon be available. The steps that need to be completed are listed below:

  • PREPARE
  • INCLUSIVELINT
  • UNITTEST
  • LINT
  • BUILD & PUBLISH
  • CORONA
  • BLACKDUCK
  • SONARQUBE
  • HELM
  • DEPLOY
  • DEPLOY-STATIC
  • E2E
  • APIDOCS
  • GOPUBLISH

Introduction

This repository implements Aquarius, a data-collection and data-exploitation mechanism, as a load-balancer plugin in VPP. For the sake of reproducibility, the software and data artifacts used for performance evaluation are maintained in this repository.

Directory Roadmap

- config                    // configuration files in json format        
- sc-author-kit-log         // description of the testbed hardware artifacts, required by the SC21 committee
- src                       // source code
    + client/server         // scripts that run on client/server VMs
    + lb                    // scripts that run on lb VMs
        * dev               // dev version (for offline feature collection)
        * deploy            // deploy version (for online policy evaluation)
    + utils                 // utility scripts that help to run the testbed
    + vpp                   // vpp plugin
        * dev               // dev version (for offline feature collection)
        * deploy            // deploy version (for online policy evaluation)
    + test                  // unit test codes
- data                      
    + trace                 // network traces replayed on the testbed
    + results (omitted)     // datasets dumped by experiments (created automatically once experiments are run)
    + img                   // VM image files (omitted here because of file size; server configurations are documented in the README)
    + vpp_deb               // stores deb files for installing VPP on VMs
        * dev               // dev version (for offline feature collection)
        * deploy            // deploy version (for online policy evaluation)

Get Started

Pre-Configuration

Run python3 setup.py, which does the following (a quick sanity check of the resulting configuration is sketched after this list):

  • update the root directory in config/global_config.json to the directory of the cloned aquarius repository (replacing /home/yzy/aquarius);
  • clone the VPP repository into src/vpp/base;
  • update physical_server_ip in config/global_config.json to the IP addresses of the physical servers in use;
  • update vlan_if to the last network interface on the local machine;
  • update physical_server_ip in config/cluster/unittest-1.json to the local hostname.
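
The snippet below is a minimal sanity-check sketch, not part of the repository: it assumes config/global_config.json exposes the physical_server_ip and vlan_if fields mentioned above, while the root-directory key name (root_dir) is an assumption.

    # check_config.py -- hypothetical sketch to verify the edits made by setup.py
    import json

    with open("config/global_config.json") as f:
        cfg = json.load(f)

    # "root_dir" is an assumed key name; physical_server_ip and vlan_if come from the list above
    print("root dir       :", cfg.get("root_dir"))            # should point at the cloned repo
    print("server IPs     :", cfg.get("physical_server_ip"))  # should list the servers in use
    print("VLAN interface :", cfg.get("vlan_if"))             # should be the last local NIC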

VM images

To prepare the base VM image, refer to the README file in data. To run all the experiments without issues, create an SSH key on the host servers and copy the public key to the VMs, so that commands can be executed from the host using ssh -t -t (a sketch of such a remote call is given below).
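
The snippet below is a hedged illustration of the ssh -t -t remote execution described above; the VM address and the remote command are placeholders, not values taken from the repository.

    # run_remote.py -- hypothetical sketch of host-to-VM command execution
    import subprocess

    VM_ADDR = "user@192.168.122.10"  # placeholder VM user and address

    # -t -t forces pseudo-terminal allocation, matching the usage described above;
    # this only works once the host's public key is in the VM's authorized_keys.
    subprocess.run(["ssh", "-t", "-t", VM_ADDR, "uname -a"], check=True)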

Run example

A simple example is provided using a small network topology (1 client, 1 edge router, 1 load balancer, and 4 application servers) on a single machine. Simply follow the Jupyter notebook in notebook/unittest. Make sure the configurations are adapted to your own host machine, and that the host machine has at least 20 CPUs. Otherwise, modify config/cluster/unittest-1.json: to reduce the number of CPUs required, lower the number of vcpu assigned to each node in the JSON file (a hedged sketch of such an edit follows).
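
The sketch below shows one way the vcpu counts could be lowered programmatically. The exact schema of config/cluster/unittest-1.json is an assumption here (node entries carrying a vcpu field are only inferred from the description above), so treat it as illustrative rather than authoritative.

    # shrink_vcpu.py -- hypothetical sketch; the JSON layout is assumed, not documented here
    import json

    PATH = "config/cluster/unittest-1.json"

    with open(PATH) as f:
        cluster = json.load(f)

    def shrink(obj):
        # walk every nested dict/list and halve any integer "vcpu" field found,
        # keeping at least one vCPU per node
        if isinstance(obj, dict):
            if isinstance(obj.get("vcpu"), int):
                obj["vcpu"] = max(1, obj["vcpu"] // 2)
            for value in obj.values():
                shrink(value)
        elif isinstance(obj, list):
            for value in obj:
                shrink(value)

    shrink(cluster)

    with open(PATH, "w") as f:
        json.dump(cluster, f, indent=4)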

Reproducibility

To reproduce the results in the Aquarius paper, three notebooks are presented in notebook/reproduce. The datasets generated from the experiments are stored in data/reproduce. To run these experiments, 4 physical machines with 12 physical cores (48 CPUs) each are required. The macros in the notebooks should be adapted accordingly; for instance, the VLAN should be configured across the actual interfaces in use (a hedged configuration sketch follows the topology figure). An example of the network topology is depicted below.

Multi-server Topology
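
As a hedged illustration of the VLAN step, the sketch below creates a tagged sub-interface with the standard iproute2 commands; the parent interface name (eno1) and the VLAN ID (100) are placeholders that must match your own macros and cabling, and the commands require root access.

    # setup_vlan.py -- hypothetical sketch; interface name and VLAN ID are placeholders
    import subprocess

    PARENT_IF = "eno1"  # physical interface actually cabled between the machines
    VLAN_ID = 100       # must match the VLAN used by the other machines
    vlan_if = f"{PARENT_IF}.{VLAN_ID}"

    # create the 802.1Q sub-interface and bring it up (requires root)
    subprocess.run(["ip", "link", "add", "link", PARENT_IF, "name", vlan_if,
                    "type", "vlan", "id", str(VLAN_ID)], check=True)
    subprocess.run(["ip", "link", "set", vlan_if, "up"], check=True)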

Notes

Running the scripts, e.g. src/utils/testbed_utils.py, requires root access.

Releases

  • sc22-v1.0-alpha (Jun 11, 2022)

    ALPHA version of the Aquarius release for SC22.

    This release demonstrates the basic workflow of the Aquarius artifacts. Besides the Jupyter notebooks, which document the procedure for producing all of the experimental results in the paper, a unit test is provided to guide you through the basic workflow of the artifact.

    Please refer to the latest main branch of the GitHub repo to reproduce the core results presented in the paper: https://github.com/ZhiyuanYaoJ/Aquarius