CodeFlare - Scale complex AI/ML pipelines anywhere

Overview


CodeFlare is a framework to simplify the integration, scaling and acceleration of complex multi-step analytics and machine learning pipelines on the cloud.

Its main features are:

  • Pipeline execution and scaling: CodeFlare Pipelines facilitates the definition and parallel execution of pipelines. It unifies pipeline workflows across multiple frameworks while providing nearly optimal scale-out parallelism on pipelined computations (a minimal sketch follows this list).
  • Deploy and integrate anywhere: CodeFlare simplifies deployment and integration by enabling a serverless user experience through integration with Red Hat OpenShift and IBM Cloud Code Engine, and provides adapters and connectors that make it simple to load data and connect to data services.
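
As a taste of the API, here is a minimal sketch assembled from calls that appear in the examples and issues later in this README (dm.Pipeline, dm.EstimatorNode, rt.execute_pipeline); treat it as illustrative rather than canonical:

import ray
import codeflare.pipelines.Datamodel as dm
import codeflare.pipelines.Runtime as rt
from codeflare.pipelines.Runtime import ExecutionType
from sklearn.datasets import make_classification
from sklearn.preprocessing import MinMaxScaler, StandardScaler

ray.init()
X, y = make_classification(n_samples=100, n_features=4)

# Nodes wrap ordinary sklearn estimators; edges define the dataflow.
pipeline = dm.Pipeline()
node_a = dm.EstimatorNode('a', MinMaxScaler())
node_b = dm.EstimatorNode('b', StandardScaler())
pipeline.add_edge(node_a, node_b)

pipeline_input = dm.PipelineInput()
pipeline_input.add_xy_arg(node_a, dm.Xy(X, y))

# FIT executes the graph; independent branches run as parallel Ray tasks.
pipeline_output = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)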

Release status

This project is under active development. See the Documentation for design descriptions and the latest version of the APIs.

Quick start

Run on your laptop

Installing locally

CodeFlare can be installed from PyPI.

Prerequisites:

We recommend installing Python 3.8.6 using pyenv. Recommended steps for setting up the Python environment can be found in the Documentation.
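
For example, one way to do this with pyenv (assuming pyenv itself is already installed):

pyenv install 3.8.6
pyenv local 3.8.6    # pin the current directory to 3.8.6
python --version     # should report Python 3.8.6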

Install from PyPI:

pip3 install --upgrade pip          # CodeFlare requires pip >21.0
pip3 install --upgrade codeflare

Alternatively, you can also build locally with:

git clone https://github.com/project-codeflare/codeflare.git
cd codeflare
pip3 install --upgrade pip
pip3 install .

Using Docker

You can try CodeFlare by running the docker image from Docker Hub:

  • projectcodeflare/codeflare:latest has the latest released version installed.

The command below starts the image in a clean environment:

docker run --rm -it -p 8888:8888 projectcodeflare/codeflare:latest

It should produce output similar to the example below, where you can find the URL to open CodeFlare in a Jupyter notebook in your local browser.

[I ServerApp] Jupyter Server is running at:
...
[I ServerApp]     http://127.0.0.1:8888/lab

Using the Binder service

You can try out some of CodeFlare's features using the My Binder service.

Click on the link below to try CodeFlare in a sandbox environment, without having to install anything.

Binder

Pipeline execution and scaling

CodeFlare Pipelines reimagines pipelines, providing a more intuitive API for data scientists to create AI/ML pipelines, data workflows, pre-processing and post-processing tasks, and more, all of which can scale seamlessly from a laptop to a cluster.

See the API documentation here, and reference use case documentation in the Examples section.

A set of reference examples is provided as executable notebooks.

To run examples, if you haven't done so yet, clone the CodeFlare project with:

git clone https://github.com/project-codeflare/codeflare.git

Example notebooks require JupyterLab, which can be installed with:

pip3 install --upgrade jupyterlab

Use the command below to run locally:

jupyter-lab codeflare/notebooks/<example_notebook>

The step above should automatically open a browser window and connect to a running Jupyter server.

If you are using one of the recommended cloud-based deployments (see below), the examples are found in the codeflare/notebooks directory of the container image. They can be executed directly from the Jupyter environment.

As a first example of the API usage, see the sample pipeline.

For an example of how CodeFlare Pipelines can be used to scale out common machine learning problems, see the grid search example. It shows how hyperparameter optimization for a reference pipeline can be scaled and accelerated with both task and data parallelism.
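
The gist of the scaling approach, as a hedged sketch (the node names and fan-out wiring are illustrative; the actual notebook differs in detail): each hyperparameter setting becomes its own estimator node, so all candidate fits are dispatched as parallel Ray tasks.

import codeflare.pipelines.Datamodel as dm
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier

pipeline = dm.Pipeline()
scaler_node = dm.EstimatorNode('scaler', MinMaxScaler())

# Fan out: one node per candidate setting, all fitted in parallel at FIT time.
for k in (1, 3, 5, 7):
    knn_node = dm.EstimatorNode(f'knn_{k}', KNeighborsClassifier(n_neighbors=k))
    pipeline.add_edge(scaler_node, knn_node)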

Deploy and integrate anywhere

Unleash the power of pipelines by seamlessly scaling on the cloud. CodeFlare can be deployed on any Kubernetes-based platform, including IBM Cloud Code Engine and Red Hat OpenShift Container Platform.

  • IBM Cloud Code Engine for detailed instructions on how to run CodeFlare on a serverless platform.
  • Red Hat OpenShift for detailed instructions on how to run CodeFlare on OpenShift Container Platform.

Contributing

Join us in making CodeFlare better! We encourage you to take a look at our Contributing page.

Blog

CodeFlare related blogs are published on our Medium publication.

License

CodeFlare is an open-source project with an Apache 2.0 license.

Comments
  • Error running notebook "RaySystemError: System error: buffer source array is read-only"

    Describe the bug I'm trying to run the example notebooks (in codeflare/notebooks) and came across this error. The error persisted through attempts to restart my kernel, restart the entire machine, and re-clone the repo. Any help, or an explanation of the root cause, is much appreciated!

    To Reproduce Steps to reproduce the behavior:

    1. Go to notebooks/plot_nca_classification.ipynb
    2. Run the 2nd code block. It uses Ray and CodeFlare.
    3. This line produces the error knn_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_knn)[0])
    4. See error: RaySystemError: System error: buffer source array is read-only

    Full stack trace:

    RaySystemError: System error: buffer source array is read-only
    traceback: Traceback (most recent call last):
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 268, in deserialize_objects
        obj = self._deserialize_object(data, metadata, object_ref)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 191, in _deserialize_object
        return self._deserialize_msgpack_data(data, metadata_fields)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 169, in _deserialize_msgpack_data
        python_objects = self._deserialize_pickle5_data(pickle5_data)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 157, in _deserialize_pickle5_data
        obj = pickle.loads(in_band, buffers=buffers)
      File "sklearn/neighbors/_dist_metrics.pyx", line 223, in sklearn.neighbors._dist_metrics.DistanceMetric.__setstate__
      File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
      File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
    ValueError: buffer source array is read-only
    
    
    ---------------------------------------------------------------------------
    RaySystemError                            Traceback (most recent call last)
    /tmp/ipykernel_1251/3313313255.py in <module>
          9 test_input.add_xy_arg(node_scalar, dm.Xy(X_test, y_test))
         10 
    ---> 11 knn_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_knn)[0])
         12 knn_score = ray.get(rt.execute_pipeline(knn_pipeline, ExecutionType.SCORE, test_input)
         13                     .get_xyrefs(node_knn)[0].get_yref())
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/codeflare/pipelines/Runtime.py in select_pipeline(pipeline_output, chosen_xyref)
        381         curr_xyref = xyref_queue.get()
        382         curr_node_state_ptr = curr_xyref.get_curr_node_state_ref()
    --> 383         curr_node = ray.get(curr_node_state_ptr)
        384         prev_xyrefs = curr_xyref.get_prev_xyrefs()
        385 
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
         87             if func.__name__ != "init" or is_client_mode_enabled_by_default:
         88                 return getattr(ray, func.__name__)(*args, **kwargs)
    ---> 89         return func(*args, **kwargs)
         90 
         91     return wrapper
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/worker.py in get(object_refs, timeout)
       1621                     raise value.as_instanceof_cause()
       1622                 else:
    -> 1623                     raise value
       1624 
       1625         if is_individual_id:
    
    

    Expected behavior Selecting the pipeline and evaluating its score via a SCORE pipeline.

    Desktop

    • OS: Ubuntu 20.04 via WSL2 on Windows.
    • Python 3.8.6

    Thank you for any help! I am a University of Illinois at Urbana-Champaign grad student trying to make the most of your work!
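
    A plausible root cause, offered as an editorial note rather than a confirmed diagnosis: ray.get returns NumPy-backed objects as zero-copy, read-only views of Ray's object store, while scikit-learn's Cython DistanceMetric.__setstate__ requires a writable buffer, which raises exactly this "buffer source array is read-only" ValueError. The snippet below only demonstrates the read-only view; copying the offending array before use is a common workaround.

    import numpy as np
    import ray

    ray.init()
    view = ray.get(ray.put(np.ones(4)))
    print(view.flags.writeable)  # False: Ray hands back a zero-copy, read-only view
    writable = view.copy()       # an explicit copy restores writability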

    opened by KastanDay 6
  • Replace SimpleQueue

    Overview

    Currently, lineage uses SimpleQueue to realize pipelines, but it is available only in Python >=3.8, which limits adoption. Moving to Queue will give us broader Python version coverage.

    Acceptance Criteria

    • [x] Replace SimpleQueue with Queue
    • [x] Ensure tests pass

    Questions

    • What are the drawbacks of using Queue vs SimpleQueue? (A sketch follows below.)
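
    A minimal illustration of the swap (a sketch; the lineage internals are not shown): queue.Queue covers the same FIFO put/get usage, at the cost of extra locking and task-tracking machinery that SimpleQueue omits.

    from queue import Queue  # available on older Pythons where SimpleQueue is not

    q = Queue()
    q.put("node_a")
    q.put("node_b")
    print(q.get())    # node_a: FIFO order is preserved
    print(q.empty())  # False: node_b is still queued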

    Assumptions

    Reference

    • https://towardsdatascience.com/dive-into-queue-module-in-python-its-more-than-fifo-ce86c40944ef
    enhancement 
    opened by raghukiran1224 5
  • CodeFlare resiliency tool: initial commit

    What does this PR do? This is a first step toward improving resiliency and performance in Ray without modifying its source code. This PR includes a new tool that helps configure a Ray cluster conveniently. The tool helps in fetching and parsing Ray configurations and generating resiliency profiles (e.g., strict, relaxed, recommended). Currently, we are deciding configuration options for each resiliency profile manually by evaluating them on various Ray workloads. We'll update this PR accordingly.

    Description of Changes The changes in this PR are currently independent of the main CodeFlare code. We intend to put this tool in a new folder called utils in the CodeFlare root directory.

    opened by JainTwinkle 2
  • Predicted output is not properly assigned to get_yref(); instead it is assigned to get_Xref()

    Describe the bug After running a PREDICT, y_pred cannot be obtained via get_yref(); instead it must be obtained via get_Xref(). Semantically, this seems wrong.

    To Reproduce Steps to reproduce the behavior:

    1. Go to https://github.ibm.com/codeflare/ray-pipeline/blob/complex-example-1/notebooks/plot_feature_selection_pipeline.ipynb
    2. Scroll down to y_pred = ray.get(predict_clf_output[0].get_yref())
    3. If you change that statement to y_pred = ray.get(predict_clf_output[0].get_Xref()), the output matches the original sklearn pipeline at the top.

    Expected behavior The predicted output should be obtained from calling get_yref().


    bug good first issue cfp-runtime cfp-datamodel 
    opened by raghukiran1224 2
  • Sample pipeline Jupyter notebook on Binder errors out

    Describe the bug The sample pipeline Jupyter notebook errors out due to an undefined variable.

    To Reproduce Steps to reproduce the behavior:

    1. Go to binder
    2. Click on sample pipeline jupyter notebook
    3. Run

    Expected behavior The Jupyter notebook on Binder should run without an exception.

    Additional context The error occurs while executing this cell:

    pipeline_output = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)
    node_0_output = pipeline_output.get_xyrefs(node_0)

    In [74]: outputs[0]

    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-74-a45df8d4a457> in <module>
    ----> 1 outputs[0]

    NameError: name 'outputs' is not defined

    opened by asm582 2
  • Jupyter notebook plot_scalable_poly_kernels dies when run on Binder

    Describe the bug Jupyter notebook kernel dies

    To Reproduce Steps to reproduce the behavior:

    1. Go to Binder
    2. Click on plot_scalable_poly_kernels
    3. Run the notebook

    Expected behavior The Jupyter notebook should run without error

    opened by asm582 2
  • Grid search Jupyter notebook on Binder missing the Graphviz library

    Describe the bug Graphviz is missing from the Binder service

    To Reproduce Steps to reproduce the behavior:

    1. Go to the Binder service
    2. Run the grid search notebook

    Additional context

    The error below is caused by executing this cell:

    non_param_graph = cf_utils.pipeline_to_graph(pipeline)
    non_param_graph
    

    ExecutableNotFound: failed to execute ['dot', '-Kdot', '-Tsvg'], make sure the Graphviz executables are on your systems' PATH
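
    A likely remedy, offered as an assumption since the issue was closed as wontfix: rendering needs the system Graphviz dot executable on the PATH, which the Python graphviz bindings do not bundle.

    # Debian/Ubuntu (on Binder, listing "graphviz" in an apt.txt achieves the same):
    sudo apt-get install graphviz
    # macOS:
    brew install graphviz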

    bug wontfix 
    opened by asm582 2
  • Refactor configuration utility tool; added support for latest Ray version

    Related PRs Extending #37

    What does this PR do?

    This PR extends the Ray resiliency config tool. The PR does the following:

    1. The Ray config utility script now supports configurations from Ray v1.6, 1.7, and 1.8.

    2. The tool now saves config files into their respective version directories. This is more organized than saving files from all Ray versions into a single folder. For example, the tool now saves output config files in the following manner by default:

    ├── configs
    │   ├── 1.0.0
    │   │   ├── Ray 1.0.0 related config files
    │   ├── 1.1.0
    │   │   └── Ray 1.1.0 related config files

    3. The configuration parsing code is more generalized than before. Removed some hard-coded conditions and added functions to make the code less cluttered.

    4. Added a new field called config_string to the output config file. This field stores the original string from which the default value of the configuration was parsed, whenever the default value is not a simple value but a conditional statement. It helps explain how the associated environment variable's value determines the default value. For example, for the raylet_start_wait_time_s configuration, the signature/input is the following:

    RAY_CONFIG(uint32_t, raylet_start_wait_time_s,
               std::getenv("RAY_preallocate_plasma_memory") != nullptr &&
                       std::getenv("RAY_preallocate_plasma_memory") == std::string("1")
                   ? 120
                   : 10)
    

    The script then dumps the following YAML entry in the .conf file:

    raylet_start_wait_time_s:
      config_string: 'std::getenv("RAY_preallocate_plasma_memory") != nullptr && std::getenv("RAY_preallocate_plasma_memory") == std::string("1") ? 120 : 10'
      default: '10'
      env: RAY_preallocate_plasma_memory
      type: uint32_t
      value_for_this_mode: '10'
    

    The new config_string field is informative and gives an idea of how the associated environment variable will be processed.

    5. The config tool now uses a YAML format variable instead of a hardcoded string for the system-config map YAML (system_cm.yaml).
    opened by JainTwinkle 1
  • Fix corner case with a singleton node

    Related Issue

    Supports #27

    Related PRs

    Reopens PR 31 after PR 27 was merged into develop.

    What does this PR do?

    Description of Changes

    • Checked that the node exists in the pipeline post_graph
    • Added ExecutionType.TRANSFORM
    • Added a unit test

    bug 
    opened by yuanchi2807 1
  • Fix yref assignment for pipeline PREDICT and SCORE

    Related Issue

    Supports #22

    Related PRs

    This PR is not dependent on any other PR

    What does this PR do?

    Description of Changes

    Assign PREDICT and SCORE results to yref as appropriate in Runtime.py. Updated unit tests and notebook examples.


    bug 
    opened by yuanchi2807 1
  • Pipeline with a single dangling estimator node triggers an exception

    Describe the bug Possibly a corner case?

    ray-pipeline/codeflare/pipelines/Datamodel.py in get_pre_edges(self, node)
        640         """
        641         pre_edges = []
    --> 642         pre_nodes = self.pre_graph[node]
        643         # Empty pre
        644         if not pre_nodes:

    KeyError: <codeflare.pipelines.Datamodel.EstimatorNode object at 0x7fa2d8920f10>

    To Reproduce

    ## initialize codeflare pipeline by first creating the nodes
    pipeline = dm.Pipeline()
    node_a = dm.EstimatorNode('a', MinMaxScaler())
    node_b = dm.EstimatorNode('b', StandardScaler())
    node_c = dm.EstimatorNode('c', MaxAbsScaler())
    node_d = dm.EstimatorNode('d', RobustScaler())
    
    node_e = dm.AndNode('e', FeatureUnion())
    node_f = dm.AndNode('f', FeatureUnion())
    node_g = dm.AndNode('g', FeatureUnion())
    
    ## codeflare nodes are then connected by edges
    pipeline.add_edge(node_a, node_e)
    pipeline.add_edge(node_b, node_e)
    pipeline.add_edge(node_c, node_f)
    ## node_d does not have a downstream node
    # pipeline.add_edge(node_d, node_f)
    pipeline.add_edge(node_e, node_g)
    pipeline.add_edge(node_f, node_g)
    
    pipeline_input = dm.PipelineInput()
    xy = dm.Xy(X,y)
    pipeline_input.add_xy_arg(node_a, xy)
    pipeline_input.add_xy_arg(node_b, xy)
    pipeline_input.add_xy_arg(node_c, xy)
    pipeline_input.add_xy_arg(node_d, xy)
    
    ## execute the codeflare pipeline
    pipeline_output = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)
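
    A minimal defensive sketch of the failing lookup (hypothetical, standalone code; the project's actual fix is in the "Fix corner case with a singleton node" PR above): guarding the pre_graph access lets a singleton node yield an empty edge list instead of a KeyError.

    # Hypothetical guard, not the project's actual code: a node absent from
    # pre_graph is treated as having no upstream edges.
    def get_pre_edges(pre_graph, node):
        pre_nodes = pre_graph.get(node, [])  # .get avoids the KeyError
        return [(pre_node, node) for pre_node in pre_nodes]

    print(get_pre_edges({}, "node_d"))  # [] instead of a crash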
    
    


    bug cfp-runtime 
    opened by raghukiran1224 1
  • Ray cluster on OpenShift fails due to missing file

    Describe the bug Cannot bring up Ray cluster as defined in the OCP tutorial

    To Reproduce Steps to reproduce the behavior:

    1. Go to https://codeflare.readthedocs.io/en/latest/getting_started/starting.html#Openshift-Ray-Cluster-Operator
    2. Run pip3 install --upgrade codeflare
    3. Create namespace oc create namespace codeflare
    4. Running ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml fails:
    $ ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml
    Provided cluster configuration file (ray/python/ray/autoscaler/kubernetes/example-full.yaml) does not exist
    

    Expected behavior Bring up Ray cluster on OCP

    Desktop (please complete the following information):

    • OS: MacOS

    Additional context OCP Cluster running on IBM Cloud.

    $ oc cluster-info
    Kubernetes master is running at https://c100-e.jp-tok.containers.cloud.ibm.com:31129
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

    CodeFlare commit hash commit a2b290a115b0cc1317270cef6059d5281215842e
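
    A plausible workaround, stated as an assumption rather than a confirmed fix: the path in step 4 is relative to a checkout of the Ray repository, so cloning it first makes the configuration file exist.

    git clone https://github.com/ray-project/ray.git
    ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml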

    opened by cmisale 0
  • Data splitter

    Overview

    As a CFP user, I would like to split a dataset (e.g., an np array or pandas dataframe) into smaller objects that can then be fed into other nodes/pipelines. This is especially useful when we have compute-intensive tasks and would like to parallelize them easily.

    Acceptance Criteria

    • [x] Design for splitter, should be simple and intuitive
    • [ ] Implementation as an extension to the Node construct
    • [x] Tests

    Questions

    • What type of semantics does the splitter node define?

    Assumptions

    Reference

    good first issue help wanted cfp-datamodel user-story Prio1 
    opened by raghukiran1224 1
  • Support better integration between Ray and Spark in passing ObjectRef without actually moving data

    Overview

    As a CodeFlare user, I want to use Ray and Spark alternately to execute my end-to-end ML jobs. Some steps might be executed more efficiently using Ray, others using Spark. The plasma store in Ray seems to provide an efficient way to share ObjectRefs between Ray and Spark. Currently, the RayDP project supports going from Spark to Ray in a limited way, by running Spark as a Ray actor. However, ObjectRefs cannot be shared easily in both directions, Spark-to-Ray and Ray-to-Spark.
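
    For context, a minimal Spark-on-Ray sketch using RayDP's init_spark entry point (raydp.init_spark and raydp.stop_spark are RayDP's documented API; the bidirectional ObjectRef hand-off described above is exactly what is not yet supported):

    import ray
    import raydp

    ray.init()
    # Spark executors run as Ray actors inside the Ray cluster.
    spark = raydp.init_spark(app_name="raydp_demo",
                             num_executors=2,
                             executor_cores=2,
                             executor_memory="2GB")
    print(spark.range(0, 100).count())  # 100
    raydp.stop_spark()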

    Acceptance Criteria

    • A Pandas dataframe created by remote tasks in local Ray plasma stores can be passed by ObjectRef to the Spark driver to create a Spark dataframe containing a list of ObjectRefs.
    • Once that is done, the Spark executors can then access the original Pandas dataframe locally.
    • From Spark to Ray: Spark preserves groupby() partition semantics and writes these partitions to the plasma store, instead of using hashPartition().

    Questions

    • In RayDP, only the driver node knows about and can access Ray. The PySpark executors don't have access to Ray, which prevents them from accessing the Ray plasma store. As a result, it is not possible to seamlessly pass ObjectRefs between Ray workers and Spark executors.

    Assumptions

    • Ray and Spark can share data seamlessly by exchanging ObjectRefs among Ray workers and Spark executors.

    Reference

    [Reference] I have opened an issue on the RayDP repo: https://github.com/oap-project/raydp/issues/164

    ray-related 
    opened by klwuibm 3
  • Nested pipelines

    Overview

    As a CF pipelines user, I would like support for nested pipelines, where a node of a pipeline can itself be a pipeline.

    Acceptance Criteria

    • [ ] Nested pipeline API
    • [ ] Nested pipeline implementation
    • [ ] ADR for supporting nested pipelines
    • [ ] Tests

    Questions

    • Given that pipelines are not estimators by themselves, how can we support nesting easily?

    Assumptions

    Reference

    cfp-runtime cfp-datamodel user-story Prio1 
    opened by raghukiran1224 0
  • Investigate and measure zero copy for pipelines

    Overview

    As a CF pipelines user, I would like to understand the memory consumption when pipelines are executed. Given that pipelines accept NumPy arrays, will Ray's zero-copy sharing help?
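
    One quick way to probe this, as a sketch under the assumption that pipeline inputs flow through Ray's object store (not the benchmark this story asks for):

    import numpy as np
    import ray

    ray.init()
    arr = np.zeros(10_000_000)    # ~80 MB of float64
    view = ray.get(ray.put(arr))
    print(view.flags.writeable)   # False: a zero-copy, read-only view of the object store
    print(view.base is not None)  # True: backed by shared memory, not a fresh copy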

    Acceptance Criteria

    • [ ] Memory growth as pipelines are executed
    • [ ] Clear documentation on this
    • [ ] A potential story explaining this in more detail

    Questions

    Assumptions

    Reference

    help wanted cfp-runtime Prio1 benchmark 
    opened by raghukiran1224 0
  • Select best/k-best pipelines

    Overview

    As a CF pipelines user, I would like the ability to select the best or k-best pipelines from a parameter grid search output.
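
    A hypothetical sketch of what this could look like on top of runtime calls shown elsewhere in this README (rt.execute_pipeline, rt.select_pipeline, get_xyrefs, get_yref); the k_best helper and the iteration over fitted xyrefs are assumptions, not existing API:

    import ray
    import codeflare.pipelines.Runtime as rt
    from codeflare.pipelines.Runtime import ExecutionType

    def k_best(pipeline_fitted, estimator_node, test_input, k):
        """Score every fitted candidate at estimator_node and keep the top k."""
        scored = []
        for xyref in pipeline_fitted.get_xyrefs(estimator_node):
            candidate = rt.select_pipeline(pipeline_fitted, xyref)
            out = rt.execute_pipeline(candidate, ExecutionType.SCORE, test_input)
            score = ray.get(out.get_xyrefs(estimator_node)[0].get_yref())
            scored.append((score, candidate))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored[:k]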

    Acceptance Criteria

    • [ ] Best pipeline selection
    • [ ] K-best pipeline selection
    • [ ] Tests and compatibility with sklearn outputs

    Questions

    Assumptions

    Reference

    enhancement good first issue help wanted cfp-runtime 
    opened by raghukiran1224 0
Releases (0.1.2.dev0)
  • 0.1.2.dev0 (Jul 9, 2021)

    To address the Python version needs of IBM Cloud Watson Studio, we removed the dependency on SimpleQueue and used Queue instead. This removes CodeFlare Pipelines' dependency on Python >=3.8; Python >=3.7 now works.

    Shout out to @aviolante for helping with this fix!

    Installation can now be done from PyPI using pip3 install codeflare; the default version has been updated to 0.1.2.
