Naszilla is a Python library for neural architecture search (NAS).

Overview


A repository for comparing many popular NAS algorithms seamlessly across three benchmarks (NAS-Bench-101, 201, and 301). You can implement your own NAS algorithm and then easily compare it with eleven algorithms across all three benchmarks.

This repository contains the official code for the following three papers:

Paper | README | Blog Post
A Study on Encodings for Neural Architecture Search | encodings.md | Blog Post
BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search | bananas.md | Blog Post
Exploring the Loss Landscape in Neural Architecture Search | local_search.md | Blog Post

Installation

Clone this repository and install its requirements (which include nasbench, nas-bench-201, and nasbench301). It may take a few minutes.

git clone https://github.com/naszilla/naszilla
cd naszilla
cat requirements.txt | xargs -n 1 -L 1 pip install
pip install -e .

You might need to replace line 32 of src/nasbench301/surrogate_models/surrogate_models.py with a new path to the configspace file:

self.config_loader = utils.ConfigLoader(os.path.expanduser('~/naszilla/src/nasbench301/configspace.json'))

Next, download the NAS benchmark datasets, either with the terminal commands below or from their respective websites (nasbench, nas-bench-201, and nasbench301). The versions recommended for use with naszilla are nasbench_only108.tfrecord, NAS-Bench-201-v1_0-e61699.pth, and nasbench301_models_v0.9.zip. If you use a different version, you might need to edit some of the naszilla code.

# these files are 0.5GB, 2.1GB, and 1.6GB, respectively
wget https://storage.googleapis.com/nasbench/nasbench_only108.tfrecord
wget https://ndownloader.figshare.com/files/25506206?private_link=7d47bf57803227af4909 -O NAS-Bench-201-v1_0-e61699.pth
wget https://ndownloader.figshare.com/files/24693026 -O nasbench301_models_v0.9.zip
unzip nasbench301_models_v0.9.zip

Place the three downloaded benchmark data files in ~/nas_benchmark_datasets (or choose another directory and edit line 15 of naszilla/nas_benchmarks.py accordingly).
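If you want to confirm the files are in place before running full experiments, the benchmark packages installed above can load them directly. The snippet below is a minimal sanity check (not part of naszilla), assuming the default ~/nas_benchmark_datasets directory and the file versions listed above:

import os

from nasbench import api               # NAS-Bench-101 API
from nas_201_api import NASBench201API  # NAS-Bench-201 API

# Assumes the default data directory used by naszilla; adjust if you chose
# another location. Loading the full NAS-Bench-101 tfrecord takes a few minutes.
data_dir = os.path.expanduser('~/nas_benchmark_datasets')

nb101 = api.NASBench(os.path.join(data_dir, 'nasbench_only108.tfrecord'))
print('NAS-Bench-101 loaded')

nb201 = NASBench201API(os.path.join(data_dir, 'NAS-Bench-201-v1_0-e61699.pth'))
print('NAS-Bench-201 loaded')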

Now you have successfully installed all of the requirements to run eleven NAS algorithms on three benchmark search spaces!

Test Installation

You can test the installation by running these commands:

cd naszilla
python naszilla/run_experiments.py --search_space nasbench_101 --algo_params all_algos --queries 30 --trials 1
python naszilla/run_experiments.py --search_space nasbench_201 --algo_params all_algos --queries 30 --trials 1
python naszilla/run_experiments.py --search_space nasbench_301 --algo_params all_algos --queries 30 --trials 1

These experiments should finish running within a few minutes.

Run NAS experiments on NASBench-101/201/301 search spaces

cd naszilla
python naszilla/run_experiments.py --search_space nasbench_201 --dataset cifar100 --queries 100 --trials 100

This will test several NAS algorithms against each other on the NASBench-201 search space. Note that NASBench-201 allows you to specify one of three datasets: cifar10, cifar100, or imagenet. To customize your experiment, open naszilla/params.py. Here, you can change the algorithms and their hyperparameters. For details on running specific methods, see these docs.
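As an illustration, a configuration in naszilla/params.py is essentially a named list of per-algorithm settings. The sketch below is hypothetical: the function name, the dictionary keys ('algo_name', 'total_queries'), and the algorithm names are assumptions, so mirror the existing entries in params.py rather than copying this verbatim.

# Hypothetical sketch of a custom algorithm configuration in the style of
# naszilla/params.py: a list of per-algorithm dictionaries. The keys and
# algorithm names below are assumptions -- check the real entries in
# params.py for the exact format your version expects.
def my_algo_subset(queries=100):
    """Compare three algorithms with the same query budget."""
    return [
        {'algo_name': 'random', 'total_queries': queries},
        {'algo_name': 'local_search', 'total_queries': queries},
        {'algo_name': 'bananas', 'total_queries': queries},
    ]

if __name__ == '__main__':
    for p in my_algo_subset():
        print(p)

Once registered under a name in params.py, a configuration like this would be selected with the --algo_params flag, the same way all_algos is used in the test commands above.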

Contributions

Contributions are welcome!

Reproducibility

If you have any questions about reproducing an experiment, please open an issue or email [email protected].

Citation

Please cite our papers if you use code from this repo:

@inproceedings{white2020study,
  title={A Study on Encodings for Neural Architecture Search},
  author={White, Colin and Neiswanger, Willie and Nolen, Sam and Savani, Yash},
  booktitle={Advances in Neural Information Processing Systems},
  year={2020}
}

@inproceedings{white2021bananas,
  title={BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search},
  author={White, Colin and Neiswanger, Willie and Savani, Yash},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2021}
}

@inproceedings{white2021exploring,
  title={Exploring the Loss Landscape in Neural Architecture Search},
  author={White, Colin and Nolen, Sam and Savani, Yash},
  booktitle={Uncertainty in Artificial Intelligence},
  organization={PMLR},
  year={2021}
}

Contents

This repo contains encodings for neural architecture search, a variety of NAS methods (including BANANAS, a neural predictor Bayesian optimization method, and local search for NAS), and an easy interface for using multiple NAS benchmarks.

Encodings:

[figure: encodings results]

BANANAS:

[figures: adj_train, adj_test, path_train, path_test results]

Local search:

[figure: local_search results]
