Security evaluation module with ONNX, PyTorch, and SecML.

Overview

🚀 🐼 🔥 PandaVision

Integrate and automate security evaluations with ONNX, PyTorch, and SecML!

Installation

Starting the server without Docker

If you want to run the server with Docker, skip to the next section.

This project uses RQ (Redis Queue) for handling the queue of requested jobs. Please install Redis if you plan to run this Flask server without Docker.

Then install the Python requirements by running the following command in your shell:

pip install -r requirements.txt

Make sure your Redis server is running on your local machine. Test the Redis connection with the following command:

redis-cli ping

The response PONG should appear in the shell.

If the database server is down, check the linked docs to find out how to restart it on your system.

Note: the code expects to connect to the database through its default port (6379 for Redis).

Now we are ready to start the server. Don't forget that this system uses external workers to process the long-running tasks, so we need to start the workers along with the server. Run the following command from the app folder to start a worker:

python app/worker.py

Now open another shell and run the server:

python app/runserver.py
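
For context, each worker in this setup is essentially an RQ worker listening on the Redis queue. A minimal sketch of such a worker, assuming the python-rq package, is shown below; the actual app/worker.py may differ:

from redis import Redis
from rq import Queue, Worker

# Connect to the local Redis instance on its default port.
redis_conn = Redis(host="localhost", port=6379)

# Listen on the default queue and process jobs as they arrive (FIFO).
queue = Queue(connection=redis_conn)
worker = Worker([queue], connection=redis_conn)
worker.work()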

Starting the server with Docker

If you already started the server locally, you can skip to the next section.

If you started the server locally but want to switch to Docker, stop the running services first. On Linux, press CTRL + C to stop the server and the worker, then stop the Redis service on the machine.

sudo service redis stop

In order to use the docker-compose file provided, install Docker and start the Docker service.

Since this project uses different interconnected containers, it is recommended to install and use Docker Compose.

Once set up, Docker Compose will take care of the rest of the setup automatically. Just type the following command in your shell, from the app path:

docker build . -t pandavision && docker-compose build && docker-compose up

If you want to use more workers, run the following command (replace 2 with the number of workers you want to set up):

docker-compose up --scale worker=2

Usage

Quick start

For a demo, you can download a sample containing a few images from the ImageNet dataset and a pretrained ResNet-50 model from the ONNX Model Zoo.

Download the files and place them in a known directory.

Supported models

You can export your own pretrained ONNX model from the library of your choice and pass it to the module. This project uses onnx2pytorch as a dependency to load ONNX models. Check out the supported operations if you encounter problems when importing a model. A list of pretrained models is also available on the main page.
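
For reference, the conversion performed by onnx2pytorch looks roughly like the sketch below (the module handles this internally; the model path is a placeholder):

import onnx
from onnx2pytorch import ConvertModel

# Load the ONNX graph and convert it to an equivalent PyTorch module.
onnx_model = onnx.load("<model_path>.onnx")  # placeholder path
pytorch_model = ConvertModel(onnx_model)
pytorch_model.eval()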

Data preparation

The module accepts HDF5 files as data sources. The file should contain the samples in NCHW format (batch, channels, height, width).

Note that, while standardization can be performed through the APIs themselves (preferred), preprocessing such as resizing, reshaping, rotation, and normalization should be applied at this step.

An example that creates a subset of the ImageNet dataset can be found in this gist.
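
As a rough illustration, a compatible HDF5 file can be created with h5py along the following lines. This is a minimal sketch: the dataset key names, shapes, and random data are placeholders and assumptions; refer to the gist for the exact layout expected by the module.

import h5py
import numpy as np

# Hypothetical pre-resized images and labels, already in NCHW layout.
images = np.random.rand(100, 3, 224, 224).astype(np.float32)  # N x C x H x W
labels = np.random.randint(0, 1000, size=100)

with h5py.File("<dataset-path>.hdf5", "w") as f:
    f.create_dataset("data", data=images)    # dataset key names are assumptions;
    f.create_dataset("labels", data=labels)  # check the gist for the expected layout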

How to start a security evaluation job

The easy way

You can access the APIs through the web interface by connecting to http://localhost:8080. You will land on the home page of the service. Then click on the "Try it out!" button, and you will see a form to configure the security evaluation. Upload the model and the dataset of your choice, then select the parameters. Finally, click "Submit" and wait for the evaluation to finish. As soon as the worker finishes processing the data, you will see the security evaluation curve on the interface.

You can follow this video tutorial (click for YouTube video) for configuring the security evaluation:

Demo PandaVision

Coming soon ➡️ download data in CSV format.

The nerdy way

A security evaluation job can be enqueued with a POST request to /security_evaluations. The API returns the job's unique ID, which can be used to access the job status and results. Running workers wait for new jobs in the queue and consume them in FIFO order.

The request should specify the following parameters in its body:

  • dataset (string): the path of the dataset to be loaded (a validation dataset should be used; otherwise, check out the "indexes" input parameter).
  • trained-model (string): the path of the trained ONNX model.
  • performance-metric (string): the performance metric used to evaluate the adversarial robustness of the system. Currently, only the classification-accuracy metric is implemented.
  • evaluation-mode (string): one of 'fast', 'complete'. A fast evaluation will perform the experiment with a subset of the whole dataset (100 samples). For more info on the fast evaluation, see this paper.
  • task (string): type of task that the model is supposed to perform. This determines the attack scenario. (available: "classification" - support for more use cases will be provided in the future).
  • perturbation-type (string): type of perturbation to apply (available: "max-norm" or "random").
  • perturbation-values (Array of floats): array of values used to craft the adversarial examples. These are specified as fractions of the input range, in [0, 1] (e.g., a value of 0.05 will apply a perturbation of at most 5% of the input scale).
  • indexes (Array of ints): if a list of indexes is specified, it will be used to create a specific subset of the dataset.
  • preprocessing (dict): dictionary with keys "mean" and "std" for defining custom preprocessing. The values should be expressed as lists. If not set, standard ImageNet preprocessing will be applied; to disable preprocessing entirely, pass an empty dict.
{
  "dataset": "<dataset-path>.hdf5",
  "trained-model": "<model_path>.onnx",
  "performance-metric": "classification-accuracy",
  "evaluation-mode": "fast",
  "task": "classification",
  "perturbation-type": "max-norm",
  "perturbation-values": [
    0, 0.01, 0.02, 0.03, 0.04, 0.05
  ]
}
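
For example, a job can be submitted from Python with the requests library as sketched below. This assumes the server is reachable at http://localhost:8080; the exact response format may differ.

import requests

body = {
    "dataset": "<dataset-path>.hdf5",
    "trained-model": "<model_path>.onnx",
    "performance-metric": "classification-accuracy",
    "evaluation-mode": "fast",
    "task": "classification",
    "perturbation-type": "max-norm",
    "perturbation-values": [0, 0.01, 0.02, 0.03, 0.04, 0.05],
}

# Enqueue the security evaluation job; the response carries the job ID.
response = requests.post("http://localhost:8080/security_evaluations", json=body)
print(response.status_code, response.text)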

The API can also be tested with Postman (it is already configured to get the job ID and use it for fetching the results):

Run in Postman

Job status API

Job status can be retrieved by sending a GET request to /security_evaluations/{id}, where {id} is the job ID returned at the previous step. A GET to /security_evaluations returns the status of all jobs found in the queues and in the finished-job registries.

Job results API

Job results can be retrieved, once the job has entered the finished state, with a GET request to /security_evaluations/{id}/output. A request to this path with a job ID that is not yet in the finished status will redirect to the job status API.
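
Putting the two endpoints together, a client can poll the job status and then fetch the output roughly as follows. This is a sketch under the same assumptions as above; the status field name and payload shape are placeholders and may differ from the actual API.

import time
import requests

BASE_URL = "http://localhost:8080/security_evaluations"
job_id = "<job-id>"  # the ID returned when the job was enqueued

# Poll the job status until the worker reports it as finished.
while True:
    status = requests.get(f"{BASE_URL}/{job_id}").json()
    if status.get("status") == "finished":  # field name is an assumption
        break
    time.sleep(5)

# Once finished, fetch the security evaluation results.
results = requests.get(f"{BASE_URL}/{job_id}/output").json()
print(results)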

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

If you don't have time to contribute yourself, feel free to open an issue with your suggestions.

License

This project is licensed under the terms of the MIT license. See LICENSE for more information.

Credits

Based on the Security evaluation module - ALOHA.eu project

Comments
  • Adv examples api (PGD support)

    Changelog

    • [x] Add caching for PGD attack

    • [x] Add curve visualization for PGD attack

    • [x] Add adversarial example visualization for PGD attack

    • [x] Extend to other attacks

    • [x] Fix min-distance attacks and PGD caching

    • [x] Document the changes

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Updates - attack logging, adversarial example inspection, debugging.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) Major changes.

    • Other information: (does the PR fix some issues? Tag them with #)

      Fixes #6.

    opened by maurapintor 0
  • Fix ram problems

    Changelog

    • Fixed CW attack memory problem
    • Efficient computation of adversarial examples in maximum-norm case

    What kind of change does this PR introduce?

    • Clear cache for CW attack (temporary fix until secml is updated to support optional caching).

    • The PGD attack is run, for each perturbation value, only on the samples that were not already found adversarial at smaller norms.

    • Other information:

      Fixes #21

    opened by maurapintor 0
  • Memory problems when running complete evaluation

    The evaluation fails with some particular configurations of parameters. The reason seems to be related to the cached adversarial examples.

    Expected Behavior

    The attack should not exhaust the RAM.

    Current Behavior

    The RAM fills up, then the swap memory, then everything freezes.

    Possible Solution

    Possibly free unused data, such as the attack paths.

    Steps to Reproduce

    The evaluation fails with the following set of parameters:

    • ResNet-50 network
    • ImageNet data from the demo set
    • L2 CW attack

    Context (Environment)

    • OS: Ubuntu 20.04 LTS
    • Python Version: 3.8
    • Pandavision Version: 0.3
    • Browser: Mozilla Firefox
    bug enhancement 
    opened by maurapintor 0
  • fixed conflict for picker

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix

    • What is the new behavior? The GUI now updates the attack selection and the perturbation size choices simultaneously.

    • Other information: Fixes #19

    opened by maurapintor 0
  • Attack selector bug

    Attack choices not shown.

    Expected Behavior

    In the GUI, when a perturbation type is picked, the attack selector should display the attack choices for the specified perturbation model.

    Current Behavior

    The attack choices are not updated.

    Possible Solution

    Possible conflict with the jQuery call that updates the perturbation values.

    bug 
    opened by maurapintor 0
  • Fix docker compose version

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix for docker container. Feature: picker for perturbation size.

    • What is the new behavior?

    • docker-compose should now be at least v1.16, as it supports the yaml file format used in this repo for building the pandavision architecture.
    • The GUI now allows picking the perturbation sizes for the evaluation.
    • Other information: Fixes #14 Fixes #17
    opened by maurapintor 0
  • Docker compose problem with services key

    The Docker Compose file format used in this repo is incompatible with old docker-compose versions.

    Expected Behavior

    The command:

    docker build . -t pandavision && docker-compose build && docker-compose up
    

    should build the container and run smoothly.

    Current Behavior

    The command produces, with some Docker-compose versions, the following output:

    Successfully tagged pandavision:latest
    ERROR: The Compose file './docker-compose.yml' is invalid because:
    Unsupported config option for services: 'worker'

    Possible Solution

    The problem seems related to the docker-compose versions that have incompatible specifications for the expected yaml: https://docs.docker.com/compose/compose-file/compose-versioning/#versioning

    A suggested solution, from this StackOverflow question, is to upgrade the docker-compose version and to specify the version number at the top of the yaml file.

    Possible Implementation

    1. Add a line stating version: "3" at the top of the yaml file.
    2. Suggest the minimum required docker-compose version (at least 1.6) in the readme file.
    opened by maurapintor 0
  • Chart x-axis based on eps values rather than order

    The sec-eval curve currently presents results with evenly spaced ("linspace") x-axis positions, regardless of the actual eps values. Support for scatter values should be added, so that the list of eps values can be dynamically adjusted to arbitrary ranges.

    bug enhancement 
    opened by maurapintor 0
  • GUI for security evaluations

    Add visual interface for testing APIs. It should display at least the model and data selection, plus the results of the security evaluation when completed.

    enhancement 
    opened by maurapintor 0
  • Sequential attacks

    I'm submitting a ...

    • feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    A multi-attack interface should be used. The interface should allow specifying a sequence of attacks for testing the robustness of a model. The sequence will run the first attack on the whole dataset, then run each subsequent attack only on the points for which the previous attacks failed under the given perturbation model.

    enhancement 
    opened by maurapintor 0
  • RobustBench models

    I'm submitting a ...

    • feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    Models from RobustBench should be available through the interface. The choice should be available next to the upload model button, where a dropdown menu should be displayed.

    enhancement 
    opened by maurapintor 0
  • Dataset samples

    I'm submitting a ...

    [x] feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    The interface should allow for selecting subsamples of commonly-used datasets without uploading them to the server. At least a sample from the following datasets should be included:

    • [ ] MNIST
    • [ ] CIFAR10
    • [ ] CIFAR100
    • [ ] ImageNet
    enhancement 
    opened by maurapintor 0
  • Feature request: other tasks

    I'm submitting a ...

    • Feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    More use cases could be supported, as in https://gitlab.com/aloha.eu/security_evaluation. Possible use cases are:

    • detection
    • segmentation
    enhancement 
    opened by maurapintor 0
  • GPU support for container

    The GPU can currently be used by running the server and worker locally. A container that also works with the GPU would be beneficial for speedups and would ease installation.

    enhancement help wanted 
    opened by maurapintor 0
Releases: v0.5

Owner: Maura Pintor
🐼 Fighting evil adversarial pandas.