# PNASNet.pytorch
PyTorch implementation of PNASNet-5. Specifically, PyTorch code from this repository is adapted to completely match both my implementation and the official implementation of PNASNet-5, both written in TensorFlow. This complete match allows the pretrained TF model to be exactly converted to PyTorch: see `convert.py`.
If you use the code, please cite:
```
@inproceedings{liu2018progressive,
  author    = {Chenxi Liu and
               Barret Zoph and
               Maxim Neumann and
               Jonathon Shlens and
               Wei Hua and
               Li{-}Jia Li and
               Li Fei{-}Fei and
               Alan L. Yuille and
               Jonathan Huang and
               Kevin Murphy},
  title     = {Progressive Neural Architecture Search},
  booktitle = {European Conference on Computer Vision},
  year      = {2018}
}
```
## Requirements
- TensorFlow 1.8.0 (for image preprocessing)
- PyTorch 0.4.0
- torchvision 0.2.1
## Data and Model Preparation
- Download the ImageNet validation set and move images to labeled subfolders. To do the latter, you can use this script. Make sure the folder `val` is under `data/`.
- Download PNASNet.TF and follow its README to download the `PNASNet-5_Large_331` pretrained model.
- Convert the TensorFlow model to a PyTorch model:
```
python convert.py
```
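The details live in `convert.py`; the snippet below is only a rough sketch of the general recipe for porting TF checkpoint variables to PyTorch tensors, not the repository's actual code. The checkpoint path and variable names are illustrative assumptions.

```python
# Illustrative sketch only -- convert.py implements the real layer-by-layer mapping.
import numpy as np
import tensorflow as tf
import torch

reader = tf.train.NewCheckpointReader('data/model.ckpt')  # checkpoint path is an assumption

def to_torch(tf_name):
    w = reader.get_tensor(tf_name)
    if w.ndim == 4:                     # regular conv kernel: HWIO (TF) -> OIHW (PyTorch)
        w = np.transpose(w, (3, 2, 0, 1))
    elif w.ndim == 2:                   # fully-connected: (in, out) in TF -> (out, in) in PyTorch
        w = w.T
    return torch.from_numpy(w.copy())

# Depthwise/separable kernels need a different permutation (omitted here), and the PyTorch
# model must use BatchNorm eps=1e-3 (see the notes below) for the weights to line up.
# Hypothetical example: state_dict['conv0.weight'] = to_torch('conv0/weights')
```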
## Notes on Model Conversion
- In both TensorFlow implementations, `net[0]` means `prev` and `net[1]` means `prev_prev`. However, in the PyTorch implementation, `states[0]` means `prev_prev` and `states[1]` means `prev`. I followed the PyTorch implementation in this repository. This is why the 0 and 1 in the PNASCell specification are reversed.
- The default value of `eps` in BatchNorm layers is `1e-3` in TensorFlow and `1e-5` in PyTorch. I changed all BatchNorm `eps` values to `1e-3` (see `operations.py`) to exactly match the TensorFlow pretrained model.
- The TensorFlow pretrained model uses `tf.image.resize_bilinear` to resize the image (see `utils.py`). I cannot find a Python function that exactly matches this function's behavior (also see this thread and this post on the topic), so currently in `main.py` I call TensorFlow to do the image preprocessing, in order to guarantee that both models receive identical input. A rough sketch of this preprocessing appears at the end of this README.
- When converting the model from TensorFlow to PyTorch (i.e. `convert.py`), I use an input image size of 323 instead of 331. This is because the 'SAME' padding in TensorFlow may differ from the padding in PyTorch in some layers (see this link; basically TF may pad only 1 pixel on the right and bottom, whereas PyTorch always pads 1 pixel on all four sides). However, the two behave exactly the same when the image size is 323: `conv0` has no padding, so the feature size becomes 161, then 81, 41, etc. The sketch after the table below illustrates this arithmetic.
- The exact conversion when the image size is 323 is also corroborated by the following table:
| Image Size | Official TensorFlow Model (top-1, top-5) | Converted PyTorch Model (top-1, top-5) |
|---|---|---|
| (331, 331) | (0.829, 0.962) | (0.828, 0.961) |
| (323, 323) | (0.827, 0.961) | (0.827, 0.961) |
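As a rough illustration of the arithmetic in the padding note above (not code from this repository): with kernel 3 and stride 2, TF 'SAME' padding is symmetric exactly when the input feature size is odd, and a 323 input keeps every intermediate size odd, whereas a 331 input eventually produces an even one. The number of stride-2 reductions below is an assumption for illustration.

```python
# Sketch: compare TF 'SAME' padding with PyTorch's symmetric padding for a
# kernel-3, stride-2 reduction, following the feature-size chain from conv0.
def reduce_once(size, kernel=3, stride=2):
    out = -(-size // stride)                            # TF 'SAME': ceil(size / stride)
    pad = max((out - 1) * stride + kernel - size, 0)    # total padding TF needs
    return out, pad % 2 == 0                            # symmetric iff pad is even

for image_size in (331, 323):
    size = (image_size - 3) // 2 + 1                    # conv0: stride 2, no padding
    print(image_size, '-> conv0:', size)
    for _ in range(4):                                  # later stride-2 reductions (count assumed)
        size, symmetric = reduce_once(size)
        print('   ', size, 'symmetric' if symmetric else 'ASYMMETRIC in TF, differs from PyTorch')
```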
## Usage
```
python main.py
```
The last printed line should read:
```
Test: [50000/50000] Prec@1 0.828 Prec@5 0.961
```
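For reference, here is a minimal sketch of the preprocessing idea mentioned in the notes above: the resize goes through TensorFlow's `tf.image.resize_bilinear` so that both models see identical input. The image size, the [-1, 1] scaling, and the stand-in image are assumptions for illustration; see `utils.py` and `main.py` for the actual steps.

```python
# Sketch: TF does the bilinear resize, PyTorch consumes the result.
import numpy as np
import tensorflow as tf   # TF 1.8 is needed only for this preprocessing step
import torch

image_ph = tf.placeholder(tf.float32, [None, None, 3])
resized = tf.image.resize_bilinear(tf.expand_dims(image_ph, 0), [331, 331])

with tf.Session() as sess:
    img = np.random.randint(0, 256, (500, 375, 3)).astype(np.float32)  # stand-in for a decoded JPEG
    out = sess.run(resized, {image_ph: img})

# Inception-style scaling to [-1, 1] (assumed), then NHWC -> NCHW for the PyTorch model.
x = torch.from_numpy(out / 127.5 - 1.0).permute(0, 3, 1, 2).float()
print(x.shape)  # torch.Size([1, 3, 331, 331])
```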