Beyond ImageNet Attack (accepted by ICLR 2022): towards crafting adversarial examples for black-box domains.

Overview

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains (ICLR'2022)

This is the PyTorch code for our paper Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains. In this paper, using only knowledge of the ImageNet domain, we propose the Beyond ImageNet Attack (BIA) to investigate transferability towards black-box domains (unknown classification tasks).

Requirements

  • Python 3.7
  • PyTorch 1.8.0
  • torchvision 0.9.0
  • numpy 1.20.2
  • scipy 1.7.0
  • pandas 1.3.0
  • opencv-python 4.5.2.54
  • joblib 0.14.1
  • Pillow 6.1
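
One possible way to install these dependencies with pip (the pip package names are assumed from the list above; PyTorch may require a platform-specific command from pytorch.org):

python -m pip install torch==1.8.0 torchvision==0.9.0
python -m pip install numpy==1.20.2 scipy==1.7.0 pandas==1.3.0 opencv-python==4.5.2.54 joblib==0.14.1 Pillow==6.1.0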

Dataset


  • Download the ImageNet training dataset.

  • Download the testing dataset.

Note: After downloading CUB-200-2011, Stanford Cars, and FGVC Aircraft, set "self.rawdata_root" (DCL_finegrained/config.py, lines 59-75) to the path where you saved each dataset (an illustrative sketch follows).
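
A sketch of what that edit might look like; the surrounding control flow and the dataset keys ('CUB', 'STCAR', 'AIR') are assumptions for illustration, not copied from the repository:

    # DCL_finegrained/config.py (illustrative only; actual structure and keys may differ)
    if dataset == 'CUB':
        self.rawdata_root = '/your/path/to/CUB_200_2011'    # CUB-200-2011
    elif dataset == 'STCAR':
        self.rawdata_root = '/your/path/to/Stanford_Cars'   # Stanford Cars
    elif dataset == 'AIR':
        self.rawdata_root = '/your/path/to/FGVC_Aircraft'   # FGVC Aircraft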

Target models

The checkpoints of the target models should be placed in the model folder.

  • CUB-200-2011, Stanford Cars and FGVC Aircraft can be downloaded from here.
  • CIFAR-10, CIFAR-100, STL-10 and SVHN can be automatically downloaded.
  • ImageNet pre-trained models are available in torchvision (see the loading example below).
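
For reference, the ImageNet pre-trained source models can be loaded through the standard torchvision API (with torchvision 0.9.0, the pretrained=True argument downloads the ImageNet weights):

    import torchvision.models as models

    # ImageNet pre-trained source models used as white-box models in this work
    vgg16 = models.vgg16(pretrained=True)
    vgg19 = models.vgg19(pretrained=True)
    resnet152 = models.resnet152(pretrained=True)
    densenet169 = models.densenet169(pretrained=True)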

Pretrained-Generators

Adversarial generators are trained against the following four ImageNet pre-trained models.

  • VGG19
  • VGG16
  • ResNet152
  • DenseNet169

After training finishes, the resulting generator is saved to the saved_models folder. You can also download our pretrained generators from here.
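
A minimal sketch of how a downloaded generator checkpoint could be loaded; the module name, generator class, and checkpoint filename below are assumptions and may differ from the actual code in this repository:

    import torch
    from generators import GeneratorResnet  # assumed module/class name

    netG = GeneratorResnet()
    state_dict = torch.load('saved_models/netG_BIA_vgg16.pth', map_location='cpu')  # hypothetical filename
    netG.load_state_dict(state_dict)
    netG.eval()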

Train

Train the generator using vanilla BIA (RN: False, DA: False)

python train.py --model_type vgg16 --train_dir your_imagenet_path --RN False --DA False

your_imagenet_path is the path where you downloaded the ImageNet training set.
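
To train the RN or DA variants, or to train against a different source model, toggle the corresponding flags (the boolean flag values and the lowercase --model_type strings for the other models are assumed to mirror the usage above), e.g.:

python train.py --model_type resnet152 --train_dir your_imagenet_path --RN True --DA False
python train.py --model_type resnet152 --train_dir your_imagenet_path --RN False --DA True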

Evaluation

Evaluate the performance of vanilla BIA (RN: False, DA: False)

python eval.py --model_type vgg16 --RN False --DA False
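
To evaluate generators trained against the other source models, the same command applies (the --model_type strings are assumed to follow the same lowercase convention as vgg16):

python eval.py --model_type vgg19 --RN False --DA False
python eval.py --model_type resnet152 --RN False --DA False
python eval.py --model_type densenet169 --RN False --DA False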

Citing this work

If you find this work useful in your research, please consider citing:

@inproceedings{Zhang2022BIA,
  author    = {Qilong Zhang and
               Xiaodan Li and
               Yuefeng Chen and
               Jingkuan Song and
               Lianli Gao and
               Yuan He and
               Hui Xue},
  title     = {Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains},
  booktitle = {International Conference on Learning Representations},
  year      = {2022}
}

Acknowledgements

Thanks to @aaron-xichen and @Muzammal-Naseer for sharing their code.
