
Overview

What is being transferred in transfer learning?

This repo contains the code for the following paper:

Behnam Neyshabur*, Hanie Sedghi*, Chiyuan Zhang*. What is being transferred in transfer learning?. *equal contribution. Advances in Neural Information Processing Systems (NeurIPS), 2020.

Disclaimer: this is not an officially supported Google product.

Setup

Library dependencies

This code has the following dependencies:

  • pytorch (tested with 1.4.0)
  • gin-config
  • tqdm
  • wget (the python package)

GPUs are needed to run most of the experiments.

Data

CheXpert data (the train and valid folders) needs to be placed in /mnt/data/CheXpert-v1.0-img224. If your data is in a different place, you can specify the data.image_path parameter (see configs/p100_chexpert.py). To reduce the data pre-processing burden, we pre-resized all the CheXpert images with the following script:

'" ../$NEWDIR/{} cd .. ">
#!/bin/bash

NEWDIR=CheXpert-v1.0-img224
mkdir -p $NEWDIR/{train,valid}

cd CheXpert-v1.0

echo "Prepare directory structure..."
find . -type d | parallel mkdir -p ../$NEWDIR/{}

echo "Resize all images to have at least 224 pixels on each side..."
find . -name "*.jpg" | parallel convert {} -resize "'224^>'" ../$NEWDIR/{}

cd ..

The DomainNet data will be automatically downloaded from the Internet upon first run. By default, it will download to /mnt/data, which can be changed with the data_dir config (see configs/p100_domain_net.py).
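
Both data locations can also be overridden in the config files themselves, using the spec['gin'] convention described under General Documentation below. A minimal sketch; the binding keys here are assumptions based on the parameter names above, so check the referenced config files for the exact names:

# Hypothetical overrides, e.g. inside configs/p100_chexpert.py / configs/p100_domain_net.py,
# where spec is the job's spec dict.
spec['gin']['data.image_path'] = '/my/data/CheXpert-v1.0-img224'  # CheXpert image folder
spec['gin']['data_dir'] = '/my/data'  # DomainNet download directory (exact key may differ)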

Common Experiments

Training jobs

CheXpert training from random init. We use 2 Nvidia V100 GPUs for CheXpert training. If you run into out-of-memory errors, try reducing the batch size.

CUDA_VISIBLE_DEVICES=0,1 python chexpert_train.py -k train/chexpert/fixup_resnet50_nzfc/randinit-lr0.1-bs256

CheXpert finetuning from an ImageNet pre-trained checkpoint. The code tries to load the ImageNet pre-trained checkpoint from /mnt/data/logs/imagenet-lr01/ckpt-E090.pth.tar. Alternatively, you can customize the checkpoint path (see configs/p100_chexpert.py).

CUDA_VISIBLE_DEVICES=0,1 python chexpert_train.py -k train/chexpert/fixup_resnet50_nzfc/finetune-lr0.02-bs256

Similarly, DomainNet training can be run with the script imagenet_train.py (replace real with clipart or quickdraw to train on different domains).

# randinit
CUDA_VISIBLE_DEVICES=0 python imagenet_train.py -k train/DomainNet_real/fixup_resnet50_nzfc/randinit-lr0.1-MstepLR

# finetune
CUDA_VISIBLE_DEVICES=0 python imagenet_train.py -k train/DomainNet_real/fixup_resnet50_nzfc/finetune-lr0.02-MstepLR
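
To sweep all three domains in one go, a small launcher loop works well. A minimal sketch, assuming the job keys follow the pattern in the commands above (the script name and key format are taken from those commands):

import os
import subprocess

# Run one randinit DomainNet job per domain, sequentially on GPU 0.
for domain in ("real", "clipart", "quickdraw"):
    key = f"train/DomainNet_{domain}/fixup_resnet50_nzfc/randinit-lr0.1-MstepLR"
    env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
    subprocess.run(["python", "imagenet_train.py", "-k", key], env=env, check=True)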

Training with shuffled blocks

The training jobs with block-shuffled images are defined in configs/p200_pix_shuffle.py. Run

python -m configs pix_shuffle

to see the keys of all the training jobs with pixel shuffling. Similarly,

python -m configs blk7_shuffle

lists all the jobs with 7x7 block-shuffled images. You can run any of these jobs using the -k command-line argument. For example:

CUDA_VISIBLE_DEVICES=0 python imagenet_train.py \
    -k blk7_shuffle/DomainNet_quickdraw/fixup_resnet50_nzfc_noaug/randinit-lr0.1-MstepLR/seed0

Finetuning from different pre-training checkpoints

The config file configs/p200_finetune_ckpt.py defines training jobs that finetune from different ImageNet pre-training checkpoints along the pre-training optimization trajectory.

Linear interpolation between checkpoints (performance barrier)

The script ckpt_interpolation.py performs the experiment of linearly interpolating between different solutions. The file is self-contained; edit it directly to specify which combinations of checkpoints to use. The command-line arguments -a compute and -a plot switch between running the computation and plotting the computed results.
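
For reference, the interpolation step itself is an element-wise convex combination of two checkpoints. A minimal sketch in PyTorch, assuming the checkpoints are saved as plain state dicts (the paths are placeholders; the actual script also evaluates each interpolated model to trace the performance barrier):

import torch

def interpolate_state_dicts(sd_a, sd_b, alpha):
    # Element-wise linear interpolation: (1 - alpha) * A + alpha * B.
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

sd_a = torch.load('ckpt_a.pth.tar', map_location='cpu')  # placeholder paths
sd_b = torch.load('ckpt_b.pth.tar', map_location='cpu')
interpolated = [interpolate_state_dicts(sd_a, sd_b, a) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]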

General Documentation

This codebase uses gin-config to customize the behavior of the program, and allows us to easily generate a large number of similar configurations with Python loops. This is especially useful for hyper-parameter sweeps.
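
For example, a learning-rate sweep is just a Python loop that emits one spec per value. A minimal sketch; the spec['gin'] layout follows the convention shown later in this section, while the binding name opt.lr and the key format are assumptions:

# Hypothetical sweep: build one config spec per learning rate.
def make_lr_sweep_specs():
    specs = {}
    for lr in (0.1, 0.05, 0.02):
        spec = {'gin': {}}
        spec['gin']['opt.lr'] = lr  # assumed gin binding name
        specs[f'train/cifar10/fixup_resnet50/randinit-lr{lr}-MstepLR'] = spec
    return specs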

Running a job

Each script takes a config key on the command line and pulls the detailed configuration for that key from the pre-defined configs. For example:

python3 imagenet_train.py -k train/cifar10/fixup_resnet50/finetune-lr0.02-MstepLR

Query pre-defined configs

You can list all the pre-defined config keys matching a given regex with the following command:

python3 -m configs <regex>

For example:

$ python3 -m configs cifar10
2 configs found ====== with regex: cifar10
    0) train/cifar10/fixup_resnet50/randinit-lr0.1-MstepLR
    1) train/cifar10/fixup_resnet50/finetune-lr0.02-MstepLR

Defining new configs

All the configs are in the directory configs, with the naming convention pXXX_YYY.py. Here XXX is a sequence of digits that fixes the load order, so later configs can reference and extend previously defined ones.

To add a new config file:

  1. Create a pXXX_YYY.py file.
  2. Edit __init__.py to import this file.
  3. In the newly added file, define functions that register new configs. All functions whose names start with register_ (e.g. register_blah) are called automatically (see the sketch after this list).
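
A minimal skeleton for such a file; the file name is hypothetical and the registration call is left abstract, so check an existing pXXX_YYY.py for the real helper:

# configs/p300_my_experiments.py (hypothetical name following the convention)

def register_my_experiments():
    # Picked up automatically because the name starts with "register_".
    spec = {'gin': {}}
    spec['gin']['config_name.arg1'] = 'some value'  # see "Customizing new functions" below
    # Register spec under a new config key here, following the pattern used
    # by the existing register_* functions in this directory.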

Customizing new functions

To customize the behavior of a function, make it gin configurable:

import gin

@gin.configurable('config_name')
def my_func(arg1=gin.REQUIRED, arg2=0):
  # arg1 and arg2 can now be set from the config files
  ...

Then in the pre-defined config files, you can specify the values:

spec['gin']['config_name.arg1'] = ...  # any Python object
spec['gin']['config_name.arg2'] = 2

See gin-config for more details.
