🔪 Elimination-based Lightweight Neural Net with Pretrained Weights

Overview


ELimNet: Eliminating Layers in a Neural Network Pretrained with Large Dataset for Downstream Task

  • Removed top layers from pretrained EfficientNet-B0 and ResNet18 to construct lightweight CNN models with fewer than 1M parameters (see the sketch after this list).
  • Evaluated on the Trash Annotations in Context (TACO) dataset, sampled down to 6 classes with 20,851 images.
  • Compared performance against lightweight models generated by Optuna's Neural Architecture Search (NAS) using the same convolutional blocks.
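
The repo builds its models from the YAML configs under ./model/, but the core idea can be sketched directly with torchvision. The snippet below is a minimal illustration, not the repo's actual builder: it keeps the bottom stages of a pretrained EfficientNet-B0, drops the top ones, and attaches a fresh classifier head. build_elim_model and its arguments are hypothetical names introduced here for illustration.

import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

def build_elim_model(blocks_to_drop: int, num_classes: int = 6) -> nn.Module:
    """Drop the top `blocks_to_drop` stages of a pretrained EfficientNet-B0
    feature extractor and attach a fresh classifier head (hypothetical helper)."""
    backbone = efficientnet_b0(weights="IMAGENET1K_V1").features
    # keep everything except the top stages (blocks_to_drop >= 1)
    kept = nn.Sequential(*list(backbone.children())[:-blocks_to_drop])
    # infer the channel width at the new top with a dummy forward pass
    with torch.no_grad():
        channels = kept(torch.zeros(1, 3, 224, 224)).shape[1]
    return nn.Sequential(
        kept,
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(channels, num_classes),
    )

# a rough analogue of "EfficientNet B0 Elim 3" for the 6-class TACO task
model = build_elim_model(blocks_to_drop=3)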

Quickstart

Installation

# clone the repository
git clone https://github.com/snoop2head/elimnet

# fetch the image dataset and unzip it
wget -cq https://aistages-prod-server-public.s3.amazonaws.com/app/Competitions/000081/data/data.zip
unzip ./data.zip -d ./

Train

# fine-tune the pretrained model on the dataset
python train.py --model ./model/efficientnet_b0.yaml

# fine-tune an ELimNet variant on the dataset
python train.py --model ./model/efficientnet_b0_elim_3.yaml

Inference

# run inference with the most recently trained model
python inference.py --model_dir ./exp/latest/

Performance

Performance is compared against (1) the original pretrained models and (2) models constructed by Optuna NAS without pretrained weights.

  • Indicates that pretrained CNN models with their top convolutional layers eliminated outperform "empty" Optuna NAS models generated from the same convolutional blocks.
  • Suggests that eliminating top convolutional layers yields a lightweight model whose classification performance is similar to (or better than) that of the original pretrained model.
  • Reduces parameters to 7% (or less) of the original while maintaining (or improving) performance, and cuts inference time by 20% or more; a quick way to sanity-check these numbers is sketched below.
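
The parameter and inference-time claims are easy to sanity-check. Below is a minimal sketch using torchvision's EfficientNet-B0, comparing only the feature extractors and assuming that dropping the top 3 stages approximates "Elim 3"; absolute timings will vary with hardware.

import time
import torch
from torchvision.models import efficientnet_b0

def count_params(model: torch.nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

full = efficientnet_b0(weights="IMAGENET1K_V1")
# keep all but the top 3 stages of the feature extractor, as in "Elim 3"
elim = torch.nn.Sequential(*list(full.features.children())[:-3])

print(f"full: {count_params(full) / 1e6:.2f}M params, "
      f"elim backbone: {count_params(elim) / 1e6:.2f}M params")

# crude CPU latency comparison on a single 224x224 input
x = torch.zeros(1, 3, 224, 224)
for name, net in [("full", full.features), ("elim", elim)]:
    net.eval()
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(20):
            net(x)
        per_image = (time.perf_counter() - start) / 20
    print(f"{name}: {per_image * 1000:.1f} ms/image")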

ELimNet vs Pretrained Models (Train)

| Model (100 epochs) | # of Parameters | # of Layers | Train (Loss / Acc / F1) | Validation (Loss / Acc / F1) | Test F1 |
| --- | --- | --- | --- | --- | --- |
| Pretrained EfficientNet B0 | 4.0M | 352 | 0.43 / 81.23% / 0.84 | 0.469 / 82.17% / 0.76 | 0.7493 |
| EfficientNet B0 Elim 2 | 0.9M | 245 | 0.652 / 87.22% / 0.84 | 0.622 / 87.22% / 0.77 | 0.7603 |
| EfficientNet B0 Elim 3 | 0.30M | 181 | 0.602 / 78.17% / 0.74 | 0.661 / 77.41% / 0.74 | 0.7349 |
| Resnet18 | 11.17M | 69 | 0.578 / 78.90% / 0.76 | 0.700 / 76.17% / 0.719 | - |
| Resnet18 Elim 2 | 0.68M | 37 | 0.447 / 83.73% / 0.71 | 0.712 / 75.42% / 0.71 | - |

ELimNet vs Pretrained Models (Inference)

| Model | # of Parameters | # of Layers | CPU time (sec) | CUDA time (sec) | Test Inference Time (sec) |
| --- | --- | --- | --- | --- | --- |
| Pretrained EfficientNet B0 | 4.0M | 352 | 3.9 | 4.0 | 105.7 |
| EfficientNet B0 Elim 2 | 0.9M | 245 | 4.1 | 13.0 | 83.4 |
| EfficientNet B0 Elim 3 | 0.30M | 181 | 3.0 | 9.0 | 73.5 |
| Resnet18 | 11.17M | 69 | - | - | - |
| Resnet18 Elim 2 | 0.68M | 37 | - | - | - |

ELimNet vs Empty Optuna NAS Models (Train)

| Model (100 epochs) | # of Parameters | # of Layers | Train (Loss / Acc / F1) | Validation (Loss / Acc / F1) | Test F1 |
| --- | --- | --- | --- | --- | --- |
| Empty MobileNet V3 | 4.2M | 227 | 0.925 / 65.18% / 0.58 | 0.993 / 62.83% / 0.56 | - |
| Empty EfficientNet B0 | 1.3M | 352 | 0.867 / 67.28% / 0.61 | 0.898 / 66.80% / 0.61 | 0.6337 |
| Empty DWConv & InvertedResidualv3 NAS | 0.08M | 66 | - | 0.766 / 71.71% / 0.68 | 0.6740 |
| Empty MBConv NAS | 0.33M | 141 | 0.786 / 70.72% / 0.66 | 0.866 / 68.09% / 0.62 | 0.6245 |
| Resnet18 Elim 2 | 0.68M | 37 | 0.447 / 83.73% / 0.71 | 0.712 / 75.42% / 0.71 | - |
| EfficientNet B0 Elim 3 | 0.30M | 181 | 0.602 / 78.17% / 0.74 | 0.661 / 77.41% / 0.74 | 0.7603 |

ELimNet vs Empty Optuna NAS Models (Inference)

| Model | # of Parameters | # of Layers | CPU time (sec) | CUDA time (sec) | Test Inference Time (sec) |
| --- | --- | --- | --- | --- | --- |
| Empty MobileNet V3 | 4.2M | 227 | 4 | 13 | - |
| Empty EfficientNet B0 | 1.3M | 352 | 3.780 | 3.782 | 68.4 |
| Empty DWConv & InvertedResidualv3 NAS | 0.08M | 66 | 1 | 3.5 | 61.1 |
| Empty MBConv NAS | 0.33M | 141 | 2.14 | 7.201 | 67.1 |
| Resnet18 Elim 2 | 0.68M | 37 | - | - | - |
| EfficientNet B0 Elim 3 | 0.30M | 181 | 3.0 | 9.0 | 73.5 |

Background & WiP

Background

Work in Progress

  • Will test the effect of replacing pretrained convolutional blocks with a single convolutional layer that has no pretrained weights.
  • Will add ResNet18 inference-time data and compare it against Optuna's NAS-constructed lightweight models.
  • Will test elimination-based lightweight architecture search on pretrained MobileNetV3 and MnasNet from torchvision.
  • Will apply the method to other small datasets such as Fashion-MNIST and PlantVillage.

Others

  • "Empty" stands for model with no pretrained weights.
  • "EfficientNet B0 Elim 2" means 2 convolutional blocks have been eliminated from pretrained EfficientNet B0. Number next to "Elim" annotates how many convolutional blocks have been removed.
  • Table's performance illustrates best performance out of 100 epochs of finetuning on TACO Dataset.

Authors

snoop2head (owner)