Reproducible research and reusable acyclic workflows in Python. Execute code on HPC systems as if you were running it on your personal computer!

Overview

Reproducible research and reusable acyclic workflows in Python. Execute code on HPC systems as if you were running it on your machine!

Motivation

Would you like fully reproducible research or reusable workflows that seamlessly run on HPC clusters? Tired of writing and managing large Slurm submission scripts? Do you have to comment out large parts of your pipeline whenever their results have already been generated? Don't waste your precious time! awflow allows you to directly describe complex pipelines in Python that run on both your personal computer and large HPC clusters.

import awflow as aw
import glob
import numpy as np

n = 100000
tasks = 10

@aw.cpus(4)  # Request 4 CPU cores
@aw.memory("4GB")  # Request 4 GB of RAM
@aw.postcondition(aw.num_files('pi-*.npy', 10))  # Skip execution if the 10 output files already exist.
@aw.tasks(tasks)  # Request 10 parallel tasks
def estimate(task_index):
    print("Executing task {} / {}.".format(task_index + 1, tasks))
    x = np.random.random(n)
    y = np.random.random(n)
    pi_estimate = (x**2 + y**2 <= 1)  # Boolean hits: samples that fall inside the quarter unit circle.
    np.save('pi-' + str(task_index) + '.npy', pi_estimate)

@aw.dependency(estimate)
def merge():
    files = glob.glob('pi-*.npy')
    stack = np.vstack([np.load(f) for f in files])
    np.save('pi.npy', stack.sum() / (n * tasks) * 4)  # The fraction of hits times 4 approximates pi.

@aw.dependency(merge)
@aw.postcondition(aw.exists('pi.npy'))  # Prevent execution if postcondition is satisfied.
def show_result():
    print("Pi:", np.load('pi.npy'))

aw.execute()

Executing this Python program (python examples/pi.py) on a Slurm HPC cluster will launch the following jobs:

           1803299       all    merge username PD       0:00      1 (Dependency)
           1803300       all show_res username PD       0:00      1 (Dependency)
     1803298_[6-9]       all estimate username PD       0:00      1 (Resources)
         1803298_3       all estimate username  R       0:01      1 compute-xx
         1803298_4       all estimate username  R       0:01      1 compute-xx
         1803298_5       all estimate username  R       0:01      1 compute-xx

Check the examples directory and guide to explore the functionality.

Installation

The awflow package is available on PyPI and can be installed via pip.

$ pip install awflow

If you would like the latest features, you can install the package directly from this Git repository.

$ pip install git+https://github.com/JoeriHermans/awflow

If you would like to run the examples as well, be sure to install the optional example dependencies.

$ pip install 'awflow[examples]'

Usage

The core concept in awflow is the notion of a task. Essentially, a task is a method that will be executed as part of your workflow. Tasks are represented as nodes in a directed graph, which makes it easy to specify dependencies between them. In addition, properties can be attributed to tasks through decorators defined by awflow, allowing you to specify things like CPU cores, memory, GPUs and even postconditions. Follow the guide for additional examples and descriptions.
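
For instance, a minimal workflow with a single dependency edge could look as follows. This is only a sketch based on the decorators used in the example above; the function names are illustrative.

import awflow as aw

@aw.cpus(1)  # Request a single CPU core for this task.
def simulate():
    print("Simulating data.")

@aw.dependency(simulate)  # 'analyze' only runs after 'simulate' has completed.
def analyze():
    print("Analyzing the simulated data.")

aw.execute()  # Build the dependency graph and execute it on the available backend.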

Decorators

TODO
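
Until this section is completed, the decorators used in the example above can serve as a quick reference (this recap only restates what is shown there; the arguments are generic placeholders):

  • @aw.cpus(n) — request n CPU cores for the task.
  • @aw.memory("4GB") — request a given amount of RAM.
  • @aw.tasks(n) — run the task as n parallel (array) tasks.
  • @aw.postcondition(aw.num_files(pattern, n)) — skip the task if n files matching the pattern exist.
  • @aw.postcondition(aw.exists(path)) — skip the task if the given file exists.
  • @aw.dependency(other_task) — execute the task only after other_task has completed.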

Workflow storage

By default, workflows will be stored in the current working directory within the .workflows folder. If desired, a central storage directory can be used by specifying the AWFLOW_STORAGE environment variable.
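
For example, to keep all workflows in a shared directory on a cluster's scratch filesystem (the path below is purely illustrative), export the variable before submitting the pipeline:

$ export AWFLOW_STORAGE=/scratch/$USER/awflow-storage
$ python pipeline.py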

The awflow utility

This package comes with a utility program to manage submitted, failed, and pending workflows. Its functionality can be inspected by executing awflow -h. In addition, to streamline the management of workflows, we recommend giving every workflow a specific name so it can be identified easily. This name does not have to be unique for every distinct workflow execution.

aw.execute(name=r'Some name')

Executing awflow list after submitting the pipeline with python pipeline.py [args] will yield:

$ awflow list
  Postconditions      Status      Backend     Name          Location
 ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  50%                 Running     Slurm       Some name     /home/jhermans/awflow/examples/.workflows/tmpntmc712a

Modules

$ awflow cancel [workflow] TODO

$ awflow clear TODO

$ awflow list TODO

$ awflow inspect [workflow] TODO

Contributing

See CONTRIBUTING.md.

Roadmap

  • Documentation
  • README

License

As described in the LICENSE file.


Comments
  • [BUG] conda activation crashes standalone execution

    Issue description

    In the standalone backend on Unix systems, the os.system(command) used here

    https://github.com/JoeriHermans/awflow/blob/1fcf255debfbc18d39a6b2baa387bbc85050209d/awflow/backends/standalone/executor.py#L53-L60

    actually calls /bin/sh. On some operating systems, such as Ubuntu, sh links to dash, which does not support the scripting features required by conda activation. This results in runtime errors like:

    sh: 5: /home/username/miniconda3/envs/envname/etc/conda/activate.d/activate-binutils_linux-64.sh: Syntax error: "(" unexpected
    

    Proposed solution

    A solution would be to change the shell with which the commands are invoked, which is possible with the subprocess package. A good default would be bash, as almost all Unix systems provide it.

        # Note: this requires 'import subprocess' at the top of executor.py.
        # Invoke the command through bash explicitly instead of the /bin/sh used by os.system.
        if node.tasks > 1:
            for task_index in range(node.tasks):
                task_command = command + ' ' + str(task_index)
                return_code = subprocess.call(task_command, shell=True, executable='/bin/bash')
        else:
            return_code = subprocess.call(command, shell=True, executable='/bin/bash')
    

    One could also add a way to change this default. Additionally, wouldn't it be better to launch the tasks as background jobs for the standalone backend (simply add & at the end of the command) ?

    bug 
    opened by francois-rozet 1
  • [BUG] pip install fails for version 0.0.4

    $ pip install awflow==0.0.4
    Collecting awflow==0.0.4
      Using cached awflow-0.0.4.tar.gz (19 kB)
        ERROR: Command errored out with exit status 1:
         command: /home/francois/awf/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ou4rxs3q/awflow/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ou4rxs3q/awflow/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-ou4rxs3q/awflow/pip-egg-info
             cwd: /tmp/pip-install-ou4rxs3q/awflow/
        Complete output (7 lines):
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-install-ou4rxs3q/awflow/setup.py", line 54, in <module>
            'examples': _load_requirements('requirements_examples.txt')
          File "/tmp/pip-install-ou4rxs3q/awflow/setup.py", line 17, in _load_requirements
            with open(file_name, 'r') as file:
        FileNotFoundError: [Errno 2] No such file or directory: 'requirements_examples.txt'
        ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    
    bug high priority 
    opened by francois-rozet 1
  • Jobs submitted with awflow don't work with multiprocessing.Pool

    Hi,

    I tried submitting a few jobs with awflow, but each time I run it with the Slurm backend it never executes the pool.starmap and the process simply times out on the cluster. A snippet of the process listing on the compute node:

        0 0 8196756 5.1g 85664 S 0.0 1.0 2:12.27 python
        790517 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.66 python
        790518 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.45 python
        790519 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.76 python
        790520 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:02.02 python
        790521 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.99 python

    The processes are spawned, but each of them uses 0% of the CPU until the job hits its time limit:

        slurmstepd: error: *** JOB 1933332 ON compute-04 CANCELLED AT 2022-04-08T19:33:26 DUE TO TIME LIMIT ***

    opened by digirak 0