A code implementation of AC-GC: Lossy Activation Compression with Guaranteed Convergence (NeurIPS 2021).

Overview

Code For AC-GC: Lossy Activation Compression with Guaranteed Convergence

This code is intended to be used as supplemental material for a submission to NeurIPS 2021.

DO NOT DISTRIBUTE

Setup

This code is tested on Ubuntu 20.04 with Python 3 and CUDA 10.1. Other CUDA versions can be used by modifying the cupy version in requirements.txt (for example, a cupy-cuda101 pin would become cupy-cuda110 for CUDA 11.0), provided that cuDNN is installed.

# Set up environment
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
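
To confirm that CuPy can see your GPU and cuDNN before training, a quick sanity check like the following may help (a minimal sketch, not a script shipped with this repository):

# check_gpu.py - verify the CuPy / CUDA / cuDNN stack (hypothetical helper)
import cupy as cp

print(cp.cuda.runtime.runtimeGetVersion())  # e.g. 10010 for CUDA 10.1
print(cp.cuda.cudnn.getVersion())           # raises if cuDNN is unavailable
x = cp.arange(4) ** 2                       # trivial kernel launch on the GPU
print(x.sum())                              # -> 14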

Training

Configurations are provided for CIFAR10/ResNet50 in the acgc/configs folder.

source venv/bin/activate
cd acgc
./configs/rn50_baseline.sh

To replicate GridQuantZ results from the paper, you additionally need to:

  • Run quantz with bitwidths of 2, 4, 6, 8, 10, 12, 14, and 16 bits, running each configuration 5 times
  • Select the result with the lowest bitwidth whose average accuracy is no more than 0.1% below the baseline (a selection sketch follows this list)
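
The selection step is easy to script. Below is a minimal sketch of that rule in Python, using a hypothetical results dictionary and placeholder accuracies (the repository does not ship this helper):

# gridquantz_select.py - pick the lowest bitwidth whose mean accuracy over
# 5 runs is within 0.1% of the baseline (placeholder numbers throughout)
baseline_acc = 95.16  # rn50_baseline best test accuracy from the Results table

results = {  # bitwidth -> accuracies from 5 runs (placeholders)
    8: [94.85, 94.90, 94.78, 94.88, 94.92],
    # ... one entry per bitwidth in {2, 4, 6, 8, 10, 12, 14, 16}
}

eligible = [(bits, sum(a) / len(a)) for bits, a in results.items()
            if sum(a) / len(a) >= baseline_acc - 0.1]
print(min(eligible))  # (lowest eligible bitwidth, its average accuracy)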

Evaluation

Evaluation on the CIFAR10 test dataset runs during training. The 'validation/main/accuracy' entry in report.txt (or the log) tracks test accuracy throughout training.
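
To extract the accuracy curve programmatically, something like the following should work, assuming the log follows Chainer-style JSON reporting (a list of per-report dicts); the path below is hypothetical, and your run may differ:

# parse_log.py - pull test accuracy out of a Chainer-style JSON log (sketch)
import json

with open('results/rn50_baseline/log') as f:
    entries = json.load(f)

acc = [(e['epoch'], e['validation/main/accuracy'])
       for e in entries if 'validation/main/accuracy' in e]
print(max(acc, key=lambda t: t[1]))  # (epoch, best test accuracy)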

Pre-trained Models

You can download pre-trained snapshots for each config from acgc/configs.

These snapshots can be loaded by resuming training:

python3 train_cifar_act_error.py ... --resume <snapshot_file>

Results

Reports and logs for each configuration are included under acgc/results. Each log corresponds to one of the snapshots above.

A summarized output from these runs is:

Configuration     Best Test Acc  Average Bits  Epochs
rn50_baseline     95.16 %        N/A           300
rn50_quant_8bit   94.90 %        8.000         300
rn50_quantz_8bit  94.82 %        7.426         300
rn50_autoquant    94.73 %        7.305         300
rn50_autoquantz   94.91 %        6.694         300

Code Layout

Argument parsing and model initialization are handled in acgc/cifar.py and acgc/train_cifar_act_error.py.

Modifications to the training loop are in acgc/common/compression/compressed_momentum_sgd.py.

The baseline fixpoint implementation is in acgc/common/compression/quant.py.
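
For intuition, a per-tensor fixpoint quantizer can be sketched in a few lines of NumPy. This only illustrates the general technique; the actual rounding, clipping, and scale selection in quant.py may differ:

import numpy as np

def fixpoint_quantize(x, bits):
    # Per-tensor power-of-two scale chosen so x / scale fits in `bits` bits
    max_abs = float(np.abs(x).max()) or 1.0
    scale = 2.0 ** np.ceil(np.log2(max_abs / (2 ** (bits - 1) - 1)))
    q = np.clip(np.round(x / scale), -2 ** (bits - 1), 2 ** (bits - 1) - 1)
    return q.astype(np.int32), scale  # store q; reconstruct as q * scale

x = np.random.randn(4, 8).astype(np.float32)
q, scale = fixpoint_quantize(x, bits=8)
print(np.abs(x - q * scale).max())  # rounding error is at most scale / 2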

The AutoQuant implementation and error-bound calculation are in acgc/common/compression/autoquant.py.

Gradient and parameter estimation are performed in acgc/common/compression/grad_approx.py.

Owner
Dave Evans
Student at University of British Columbia. Interests: FPGAs, Accelerators, Computer Architecture, Machine Learning