PyTorch implementation of PNASNet-5 on ImageNet

PNASNet.pytorch

PyTorch implementation of PNASNet-5. Specifically, PyTorch code from this repository is adapted to completely match both my implementation and the official implementation of PNASNet-5, both written in TensorFlow. This complete match allows the pretrained TF model to be converted exactly to PyTorch: see convert.py.

If you use the code, please cite:

@inproceedings{liu2018progressive,
  author    = {Chenxi Liu and
               Barret Zoph and
               Maxim Neumann and
               Jonathon Shlens and
               Wei Hua and
               Li{-}Jia Li and
               Li Fei{-}Fei and
               Alan L. Yuille and
               Jonathan Huang and
               Kevin Murphy},
  title     = {Progressive Neural Architecture Search},
  booktitle = {European Conference on Computer Vision},
  year      = {2018}
}

Requirements

  • TensorFlow 1.8.0 (for image preprocessing)
  • PyTorch 0.4.0
  • torchvision 0.2.1

Data and Model Preparation

  • Download the ImageNet validation set and move images to labeled subfolders. To do the latter, you can use this script. Make sure the folder val is under data/.
  • Download PNASNet.TF and follow its README to download the PNASNet-5_Large_331 pretrained model.
  • Convert TensorFlow model to PyTorch model:
python convert.py
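
Conceptually, the conversion reads each tensor out of the TF 1.x checkpoint and copies it into the matching PyTorch parameter, transposing convolution kernels from TF's HWIO layout to PyTorch's OIHW. A minimal sketch of that idea; the name-mapping helper here is hypothetical, and the real correspondence lives in convert.py:

```python
import tensorflow as tf  # TF 1.x
import torch

def tf_name_for(pt_name):
    # Hypothetical mapping from PyTorch parameter names to TF variable names;
    # see convert.py for the actual correspondence.
    return pt_name.replace('.', '/')

def convert(ckpt_path, pt_model):
    reader = tf.train.NewCheckpointReader(ckpt_path)  # TF 1.x checkpoint reader
    state_dict = pt_model.state_dict()
    for pt_name in state_dict:
        tensor = reader.get_tensor(tf_name_for(pt_name))
        if tensor.ndim == 4:
            # TF conv kernels are HWIO; PyTorch expects OIHW.
            tensor = tensor.transpose(3, 2, 0, 1)
        state_dict[pt_name] = torch.from_numpy(tensor)
    pt_model.load_state_dict(state_dict)
```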

Notes on Model Conversion

  • In both TensorFlow implementations, net[0] means prev and net[1] means prev_prev. However, in the PyTorch implementation, states[0] means prev_prev and states[1] means prev. I followed the PyTorch implementation in this repository, which is why the 0 and 1 in the PNASCell specification are reversed; see the state-indexing sketch after the table below.
  • The default value of eps in BatchNorm layers is 1e-3 in TensorFlow but 1e-5 in PyTorch. I changed all BatchNorm eps values to 1e-3 (see operations.py) to exactly match the TensorFlow pretrained model; a one-line example follows the table below.
  • The TensorFlow pretrained model uses tf.image.resize_bilinear to resize the input image (see utils.py). I could not find a Python function that exactly matches this function's behavior (see also this thread and this post on the topic), so main.py currently calls TensorFlow to do the image preprocessing, guaranteeing that both models receive identical input; a preprocessing sketch follows the table below.
  • When converting the model from TensorFlow to PyTorch (i.e. in convert.py), I use an input image size of 323 instead of 331. This is because TensorFlow's 'SAME' padding may differ from PyTorch's padding in some layers (see this link): TF may pad only 1 pixel on the right and bottom, whereas PyTorch always pads 1 pixel on all four sides. The two behave identically when the image size is 323: conv0 has no padding, so the feature size becomes 161, then 81, 41, etc.; a worked size check follows the table below.
  • The exactness of the conversion at image size 323 is also corroborated by the following table (each entry is top-1 / top-5 accuracy):

Image Size    Official TensorFlow Model    Converted PyTorch Model
(331, 331)    (0.829, 0.962)               (0.828, 0.961)
(323, 323)    (0.827, 0.961)               (0.827, 0.961)
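
On the state-indexing note: a minimal sketch of the convention difference, with illustrative names (the real cell lives in the model code):

```python
def cell_forward(states):
    # PyTorch convention in this repo: states = [prev_prev, prev].
    prev_prev, prev = states[0], states[1]
    # The TF implementations use the opposite order: net[0] is prev and
    # net[1] is prev_prev, hence the reversed 0/1 in the PNASCell spec here.
    return prev_prev, prev
```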
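
On the BatchNorm note: the change amounts to constructing every BatchNorm layer with TensorFlow's default epsilon instead of PyTorch's (the channel count below is illustrative):

```python
import torch.nn as nn

# PyTorch's default is eps=1e-5; the TF pretrained weights assume eps=1e-3,
# so all BatchNorm layers in operations.py are built with the TF value.
bn = nn.BatchNorm2d(96, eps=0.001)
```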
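
On the preprocessing note: a standalone sketch of the TF-side bilinear resize, assuming a TF 1.x session and a decoded uint8 image; main.py wires this into its own pipeline differently:

```python
import numpy as np
import tensorflow as tf  # TF 1.x

image_ph = tf.placeholder(tf.uint8, shape=[None, None, 3])
resized = tf.image.resize_bilinear(
    tf.expand_dims(tf.to_float(image_ph), 0), [331, 331])[0]

with tf.Session() as sess:
    img = np.zeros((500, 375, 3), dtype=np.uint8)  # stand-in for a decoded JPEG
    out = sess.run(resized, feed_dict={image_ph: img})  # (331, 331, 3) float32
```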
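
On the padding note, a worked size check: for a 3x3 stride-2 convolution, TF's 'SAME' padding is symmetric (1, 1) when the input size is odd but asymmetric (0, 1) when it is even, and PyTorch's padding=1 is always symmetric. With a 323 input the stride-2 feature sizes stay odd (161, 81, 41, ...), so the two frameworks agree at every layer:

```python
import math

def tf_same_pads(size, k=3, s=2):
    # Total padding TF 'SAME' adds for kernel k, stride s, split (before, after).
    out = math.ceil(size / s)
    total = max((out - 1) * s + k - size, 0)
    return total // 2, total - total // 2

print(tf_same_pads(161))  # (1, 1): odd size, matches PyTorch's padding=1
print(tf_same_pads(42))   # (0, 1): even size, asymmetric, PyTorch cannot express
```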

Usage

python main.py

The last printed line should read:

Test: [50000/50000]	Prec@1 0.828	Prec@5 0.961
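
Here Prec@1 and Prec@5 are top-1 and top-5 accuracy over the 50,000 validation images. For reference, a minimal sketch of how such numbers can be computed from model outputs (main.py's actual loop may differ):

```python
import torch

def topk_correct(output, target, ks=(1, 5)):
    # output: (batch, num_classes) logits; target: (batch,) integer labels.
    _, pred = output.topk(max(ks), dim=1)   # (batch, max_k) predicted classes
    hits = pred.eq(target.unsqueeze(1))     # (batch, max_k) boolean matches
    return [hits[:, :k].float().sum().item() for k in ks]
```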