🐦 Opytimizer is a Python library consisting of meta-heuristic optimization techniques.

Overview

Opytimizer: A Nature-Inspired Python Optimizer


Welcome to Opytimizer.

Did you ever reach a bottleneck in your computational experiments? Are you tired of selecting suitable parameters for a chosen technique? If yes, Opytimizer is the real deal! This package provides an easy-to-use implementation of meta-heuristic optimization techniques. From agents to search spaces, from internal functions to external communication, we aim to foster all research related to optimizing stuff.

Use Opytimizer if you need a library or wish to:

  • Create your own optimization algorithms;
  • Design or use pre-loaded optimization tasks;
  • Mix and match different strategies to solve your problem;
  • Have fun optimizing things.

Read the docs at opytimizer.readthedocs.io.

Opytimizer is compatible with: Python 3.6+.


Package guidelines

  1. The very first information you need is in the very next section.
  2. Installing is also easy: if you wish to read the code and dive into it yourself, follow along.
  3. Note that there might be some additional steps required to use our solutions.
  4. If there is a problem, please do not hesitate to contact us.
  5. Finally, we focus on minimization. Keep that in mind when designing your problem.
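Point 5 deserves emphasis: because Opytimizer minimizes, a maximization task is handled by negating its objective. A minimal plain-Python sketch (`evaluate_accuracy` is a hypothetical metric, not part of the library):

```python
# Opytimizer minimizes, so a maximization problem is handled by negating
# the objective. `evaluate_accuracy` is a hypothetical stand-in metric.
def evaluate_accuracy(x):
    # Toy metric peaking at x = 0.5.
    return 1.0 - (x[0] - 0.5) ** 2

def objective(x):
    # Minimizing the negative maximizes the original metric.
    return -evaluate_accuracy(x)

print(objective([0.5]))  # -1.0, the minimum, i.e. the best accuracy
```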

Citation

If you use Opytimizer to fulfill any of your needs, please cite us:

@misc{rosa2019opytimizer,
    title={Opytimizer: A Nature-Inspired Python Optimizer},
    author={Gustavo H. de Rosa and Douglas Rodrigues and João P. Papa},
    year={2019},
    eprint={1912.13002},
    archivePrefix={arXiv},
    primaryClass={cs.NE}
}

Getting started: 60 seconds with Opytimizer

First of all, we have examples. Yes, they are commented. Just browse to examples/, choose your subpackage, and follow the example. We have high-level examples for most tasks we could think of, as well as amazing integrations (Learnergy, NALP, OPFython, PyTorch, Scikit-Learn, TensorFlow).

Alternatively, if you wish to learn even more, please take a minute:

Opytimizer is based on the following structure, and you should pay attention to its tree:

- opytimizer
    - core
        - agent
        - function
        - node
        - optimizer
        - space
    - functions
        - weighted
    - math
        - distribution
        - general
        - hyper
        - random
    - optimizers
        - boolean
        - evolutionary
        - misc
        - population
        - science
        - social
        - swarm
    - spaces
        - boolean
        - grid
        - hyper_complex
        - search
        - tree
    - utils
        - constants
        - decorator
        - exception
        - history
        - logging
    - visualization
        - convergence
        - surface
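The tree above maps onto a simple flow: build a space (agents within bounds), pick an optimizer, wrap a function, and start. As a framework-free illustration only (a plain-Python random search, not the actual Opytimizer API):

```python
import random

def sphere(x):
    # Classic benchmark: global minimum of 0 at the origin.
    return sum(v * v for v in x)

def random_search(fn, lower, upper, n_agents=10, n_iterations=100, seed=0):
    # Stand-in for the space/optimizer/function flow: sample agents
    # inside the bounds each iteration and keep the best fitness seen.
    rng = random.Random(seed)
    best_pos, best_fit = None, float("inf")
    for _ in range(n_iterations):
        for _ in range(n_agents):
            pos = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
            fit = fn(pos)
            if fit < best_fit:
                best_pos, best_fit = pos, fit
    return best_pos, best_fit

pos, fit = random_search(sphere, lower=[-10, -10], upper=[10, 10])
print(fit)  # best fitness found, close to 0
```

Swapping random sampling for a smarter update rule is precisely what the optimizers package provides.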

Core

Core is the core. Essentially, it is the parent of everything. Here you will find the parent classes that define the basis of our structure, providing the variables and methods that help construct the other modules.

Functions

Instead of using raw, straightforward functions, why not try this module? Compose high-level abstract functions or even new function-based ideas to solve your problems. Note that, for now, we only support multi-objective function strategies.
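The idea behind weighted multi-objective composition can be sketched in plain Python (this mirrors the concept only, not the library's actual class):

```python
# A weighted-sum composition in the spirit of the `weighted` module:
# several objectives are collapsed into a single scalar to minimize.
# Plain-Python sketch, not the library's actual class.
def weighted(functions, weights):
    def combined(x):
        return sum(w * f(x) for f, w in zip(functions, weights))
    return combined

f1 = lambda x: x[0] ** 2            # distance from 0
f2 = lambda x: (x[0] - 1.0) ** 2    # distance from 1
g = weighted([f1, f2], [0.5, 0.5])
print(g([0.5]))  # 0.25: the trade-off point between both objectives
```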

Math

Just because we are computing stuff does not mean that we do not need math. Math is the mathematical package, containing low-level math implementations. From random numbers to distribution generation, you can find what you need in this module.

Optimizers

This is why we are called Opytimizer. This is the heart of the heuristics, where you can find a large number of meta-heuristics, optimization techniques, and anything else that can be called an optimizer. Please take a look at the available optimizers.
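To make "meta-heuristic" concrete, here is a deliberately simplified, framework-free sketch of one Particle Swarm Optimization update step (not the library's implementation):

```python
import random

def pso_step(positions, velocities, personal_bests, global_best,
             w=0.7, c1=1.5, c2=1.5, rng=None):
    # One simplified PSO update: inertia (w) plus stochastic pulls toward
    # each agent's personal best (c1) and the swarm's global best (c2).
    # Updates the lists in place.
    rng = rng or random.Random(0)
    for x, v, p in zip(positions, velocities, personal_bests):
        for d in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (p[d] - x[d])
                    + c2 * r2 * (global_best[d] - x[d]))
            x[d] += v[d]
    return positions, velocities

# Two resting agents pulled toward a global best at (1, 1).
positions = [[0.0, 0.0], [2.0, 2.0]]
velocities = [[0.0, 0.0], [0.0, 0.0]]
personal_bests = [[0.0, 0.0], [2.0, 2.0]]
pso_step(positions, velocities, personal_bests, [1.0, 1.0])
```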

Spaces

One can see a space as the place where agents update their positions and evaluate a fitness function. However, the newest approaches may consider different types of spaces. With that in mind, we are glad to support diverse space implementations.

Utils

This is a utility package. Common things shared across the application should be implemented here. It is better to implement something once and reuse it than to re-implement the same thing over and over again.

Visualization

Everyone needs images and plots to help visualize what is happening, correct? This package provides every visualization-related method you need. Check a specific variable's convergence, your fitness function's convergence, plot benchmark function surfaces, and much more!
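Under the hood, a convergence plot is just the best fitness per iteration. A minimal framework-free sketch with matplotlib (the history values here are made up):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# Hypothetical best-fitness-per-iteration history from an optimization run.
history = [12.0, 7.5, 4.1, 2.2, 1.3, 0.8, 0.6, 0.5]

plt.figure()
plt.plot(range(1, len(history) + 1), history, marker="o")
plt.xlabel("Iteration")
plt.ylabel("Best fitness")
plt.title("Convergence")
plt.savefig("convergence.png")
```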


Installation

We believe that everything has to be easy. Not tricky or daunting, Opytimizer will be the go-to package that you need, from the very first installation to daily implementation tasks. Just run the following under your preferred Python environment (raw, conda, virtualenv, whatever):

pip install opytimizer

Alternatively, if you prefer to install the bleeding-edge version, please clone this repository and use:

pip install -e .

Environment configuration

Note that, sometimes, additional setup is needed. If so, the sections below contain the details.

Ubuntu

No specific additional commands needed.

Windows

No specific additional commands needed.

MacOS

No specific additional commands needed.


Support

We know that we do our best, but it is inevitable to acknowledge that we make mistakes. If you ever need to report a bug or a problem, or just want to talk to us, please do so! We will be available at this repository or [email protected].


Comments
  • [BUG] AttributeError: 'History' object has no attribute 'show'



    It looks like there is no show() method for the returned opytimizer history.

    To Reproduce Steps to reproduce the behavior:

    1. Follow the steps from the wiki Tutorial: Your first optimization
      1. Run the optimizer with o.start(): o = Opytimizer(space=s, optimizer=p, function=f); history = o.start()
      2. Show the history: history.show()

    Expected behavior Not sure what I expected, just curious :)

    Screenshots

    2020-01-02 13:26:28,270 - opytimizer.optimizers.fa — INFO — Iteration 1000/1000
    2020-01-02 13:26:28,278 - opytimizer.optimizers.fa — INFO — Fitness: 4.077875713322641e-14
    2020-01-02 13:26:28,279 - opytimizer.optimizers.fa — INFO — Position: [[-1.42791381e-07]
     [-1.42791381e-07]]
    2020-01-02 13:26:28,279 - opytimizer.opytimizer — INFO — Optimization task ended.
    2020-01-02 13:26:28,279 - opytimizer.opytimizer — INFO — It took 7.672951936721802 seconds.
    >>> history.show()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'History' object has no attribute 'show'
    >>> history
    <opytimizer.utils.history.History object at 0x113b75be0>
    >>> history.show()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'History' object has no attribute 'show'
    

    Desktop (please complete the following information):

    • OS: macOS Mojave 10.14.6
    • Virtual Environment: conda base
    • Python Version:
    (base) justinmai$ python3
    Python 3.7.3 (default, Mar 27 2019, 16:54:48) 
    [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
    


    bug 
    opened by justinTM 5
  • [REG] How to suppress DEBUG log message in opytimizer.core.space


    Hello, after initializing a SearchSpace, there is a debug message printed to stdout. How can we turn it off/on? This is the message: opytimizer.core.space — DEBUG — Agents: 25 |....

    I believe it is printed because of line #223 in opytimizer/core/space.py.

    For large dimensions it prints all the lower and upper bounds which we may not always require.

    Thanks.

    general 
    opened by amir1m 4
  • [NEW] Dump optimization progress


    Is your feature request related to a problem? Please describe. If the server running the task fails, we lose the elapsed running time.

    Describe the solution you'd like Dump the optimization object from time to time.

    Describe alternatives you've considered Maybe dump the agents?

    enhancement 
    opened by gcoimbra 4
  • [NEW] Constrained optimization


    Hi, thanks for the work.

    Any plans to add functionality to define constraints for the optimization? For instance, inequalities or any arbitrary non-linear constraints on the inputs and/or on the outputs?

    enhancement 
    opened by ogencoglu 4
  • [NEW] Using population data for population-based algorithms?


    Hello, first of all, thank you for sharing such a fantastic repo that can be used as an off-the-shelf collection of meta-heuristic optimization algorithms!

    I have a question regarding how to use my own population data for optimizers in Opytimizer. Rather than using a SearchSpace with predetermined upper/lower bounds, is there any way I can start the optimization from my own population samples?

    Thank you, hope you have a wonderful day!

    enhancement 
    opened by jimmykimmy68 3
  • [REG] What is the difference between grid space, and discrete space example?


    Greetings,

    I have a mixed search space problem of five dimensions, four discrete and one continuous. How can I implement a discrete search space with these dimensions, where the increment won't be the same for each? I found that grid space offers the flexibility I need, yet I noticed you used a different implementation in the discrete space example, so which one should I use?

    My search space:

    step = [1, 1, 1, 1, 0.1]
    lower_bound = [16, 3, 2, 0, 0]
    upper_bound = [400, 20, 20, 1, 0.33]
    

    Thanks in advance.

    general 
    opened by TamerAbdelmigid 3
  • [NEW] Different number of step for each variable


    Is your feature request related to a problem? Please describe. It's not.

    Describe the solution you'd like I'd like a different step size for each variable.

    Additional context Sometimes variables are less sensitive than others. Some are integers. Is there any way to use a different step for each one?

    enhancement 
    opened by gcoimbra 3
  • [REG] How to get a detailed print out during optimization?


    Greetings,

    My function takes time to evaluate, and I want to closely monitor what happens during the optimization process. When using a grid search space and printing the fitness from my function, I could see the movement along the grid. But when using a normal search space, my processor utilization is 100% and it takes hours with no printout. So, how do I get more details? Is there something like a degree of verbosity?

    Thanks in advance.

    general 
    opened by TamerAbdelmigid 2
  • [NEW] Define objective function for regression problem


    Hi there,

    I attempted to define an objective function (using a CatBoost model for the data) to solve a minimization problem in a regression task, but failed to create the new objective function. Does your package offer a solution for regression, and can we define such an objective function in this case?

    My desired objective function is something like this, to minimize the MSE:

    from catboost import CatBoostRegressor as cbr
    from sklearn.metrics import mean_squared_error

    cbr_model = cbr()

    def objective_function(cbr_model, X_train3, y_train3, X_test3, y_test3):
        cbr_model.fit(X_train3, y_train3)
        mse = mean_squared_error(y_test3, cbr_model.predict(X_test3))
        return mse
    

    Many thanks, Thang

    enhancement 
    opened by hanamthang 2
  • [REG] How to plot a convergence diagram?


    Hello, I was looking for a convergence diagram. I found an example of using the convergence function in opytimizer/examples/visualization/convergence_plotting.py. However, it uses a few constant values of agent positions. Is there a convergence example that shows how to use this function with an actual optimization problem, such as after carrying out PSO?

    Thanks,

    general 
    opened by amir1m 2
  • [REG] Is there a way to provide initial values before starting optimization?


    Hello, hope you are keeping well.

    I am looking to provide an initial value to the optimization loop. Is there a way, through SearchSpace or otherwise, to provide an initial/starting solution?

    Thanks.

    general 
    opened by amir1m 2
Releases(v3.1.2)
  • v3.1.2(Sep 9, 2022)

    Changelog

    Description

    Welcome to v3.1.2 release.

    In this release, we added variable name mapping to search spaces.

    Includes (or changes)

    • core.search_space
    Source code(tar.gz)
    Source code(zip)
  • v3.1.1(May 4, 2022)

    Changelog

    Description

    Welcome to v3.1.1 release.

    In this release, we added pre-commit hooks and annotated typing.

    Includes (or changes)

    • opytimizer
    Source code(tar.gz)
    Source code(zip)
  • v3.1.0(Jan 7, 2022)

    Changelog

    Description

    Welcome to v3.1.0 release.

    In this release, we implemented the initial parts for holding graph-related searches, which will support Neural Architecture Search (NAS) in the future.

    Additionally, we have added the first algorithm for calculating the Pareto frontier of pre-defined points (Non-Dominated Sorting).

    Includes (or changes)

    • core
    • optimizers
    • spaces
    Source code(tar.gz)
    Source code(zip)
  • v3.0.2(Jun 28, 2021)

    Changelog

    Description

    Welcome to v3.0.2 release.

    In this release, we implemented the remaining meta-heuristics that were on hold. Please note that they are supposed to be 100% working, yet we still need to evaluate their performance experimentally.

    Additionally, an important hot-fix regarding the calculation of the Euclidean distance has been applied.

    Includes (or changes)

    • optimizers
    • math.general
    Source code(tar.gz)
    Source code(zip)
  • v3.0.1(May 31, 2021)

    Changelog

    Description

    Welcome to v3.0.1 release.

    In this release, we have added a bunch of meta-heuristics that were supposed to be implemented earlier.

    Includes (or changes)

    • optimizers
    Source code(tar.gz)
    Source code(zip)
  • v3.0.0(May 12, 2021)

    Changelog

    Description

    Welcome to v3.0.0 release.

    In this release, we have revamped the whole library, rewriting base packages, such as agent, function, node and space, as well as implementing new features, such as callbacks and a better Opytimizer bundler.

    Additionally, we have rewritten every optimizer and their tests, removing more than 2.5k lines that were tagged as "repeatable". Furthermore, we have removed excessive commentaries to provide a cleaner reading and have rewritten every example to include the newest features.

    Please take a while to check our most important advancements and read the docs at: opytimizer.readthedocs.io

    Note that this is a major release and we expect everyone to update their corresponding packages, as this update will not work with v2.x.x versions.

    Includes (or changes)

    • opytimizer
    Source code(tar.gz)
    Source code(zip)
  • v2.1.4(Apr 28, 2021)

    Changelog

    Description

    Welcome to v2.1.4 release.

    In this release, we have added the following optimizers: AOA and AO. Additionally, we have implemented a step size for each variable in the grid, as pointed out by @gcoimbra.

    Please read the docs at: opytimizer.readthedocs.io

    Note that this is the latest update of v2 branch. The following update will feature a major rework on Opytimizer classes and will not be retroactive with past versions.

    Includes (or changes)

    • optimizers.misc.aoa
    • optimizers.population.ao
    • optimizers.spaces.grid
    Source code(tar.gz)
    Source code(zip)
  • v2.1.3(Mar 10, 2021)

    Changelog

    Description

    Welcome to v2.1.3 release.

    In this release, we have added the following optimizers: COA, JS and NBJS. Additionally, we have fixed the roulette selection for minimization problems, as pointed out by @Martsks.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers.evolutionary.ga
    • optimizers.population.coa
    • optimizers.swarm.js
    Source code(tar.gz)
    Source code(zip)
  • v2.1.2(Dec 3, 2020)

    Changelog

    Description

    Welcome to v2.1.2 release.

    In this release, we have added a set of new optimizers, as follows: ABO, ASO, BOA, BSA, CSA, DOA, EHO, EPO, GWO, HGSO, HHO, MFO, PIO, QSA, SFO, SOS, SSA, SSO, TWO, WOA and WWO.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers
    Source code(tar.gz)
    Source code(zip)
  • v2.1.1(Nov 25, 2020)

    Changelog

    Description

    Welcome to v2.1.1 release.

    In this release, we have added some HS-based optimizers and fixed an issue regarding the store_best_only flag.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers
    • utils.history
    Source code(tar.gz)
    Source code(zip)
  • v2.1.0(Jul 29, 2020)

    Changelog

    Description

    Welcome to v2.1.0 release.

    In this release, we have fixed the UMDA nomenclature, corrected some optimizer docstrings, and added a soft-penalization to constrained optimization, which we believe will enable users to design more appropriate constrained objectives. Furthermore, we added a deterministic trait to the optimizers' unit tests so they are able to converge.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • core.function
    • optimizers
    • tests
    Source code(tar.gz)
    Source code(zip)
  • v2.0.2(Jul 10, 2020)

    Changelog

    Description

    Welcome to v2.0.2 release.

    In this release, we have added BMRFO, SAVPSO, UDMA and VPSO optimizers.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers.boolean.bmrfo
    • optimizers.boolean.udma
    • optimizers.swarm.pso
    Source code(tar.gz)
    Source code(zip)
  • v2.0.1(Jun 26, 2020)

  • v2.0.0(May 7, 2020)

    Changelog

    Description

    Welcome to v2.0.0 release.

    In this release, we have reworked some inner structures, added the usability of decorators and revamped some optimizers. Additionally, we added a bunch of new optimizers and their tests.

    Note that this is a major release; therefore, we suggest updating the library as soon as possible, as it will not be compatible with older versions.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • opytimizer
    Source code(tar.gz)
    Source code(zip)
  • v1.1.3(Mar 31, 2020)

    Changelog

    Description

    Welcome to v1.1.3 release.

    In this release, we have improved code readability and merged some child optimizers into their parents' modules.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • opytimizer
    Source code(tar.gz)
    Source code(zip)
  • v1.1.2(Feb 20, 2020)

    Changelog

    Description

    Welcome to v1.1.2 release.

    In this release, we have fixed some nasty bugs that may cause some particular issues. Additionally, we added a method for handling constrained optimization.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • core.function
    Source code(tar.gz)
    Source code(zip)
  • v1.1.1(Jan 15, 2020)

    Changelog

    Description

    Welcome to v1.1.1 release.

    In this release, we have fixed some nasty bugs that may cause some particular issues. Additionally, we added several new optimizers to the library and a convergence module (belongs to the visualization package).

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers.gsa
    • optimizers.hc
    • optimizers.rpso
    • optimizers.sa
    • optimizers.sca
    • visualization.convergence
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Oct 3, 2019)

    Changelog

    Description

    Welcome to v1.1.0 release.

    This is a minor release that will probably break retro-compatibility. Some base structures were changed in order to enhance the library's performance.

    Essentially, new structures were created to provide the building tools for tree-based evolutionary algorithms, such as Genetic Programming.

    Additionally, we added a constants module to the utilities package and an exception module. This will help guide users when they input wrong information.

    MultiFunction was renamed to WeightedFunction, which is more suitable according to its usage.

    History has been reworked as well, making it possible to dynamically create properties through the dump() method.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • examples/applications
    • core.node
    • function.weighted
    • math.general
    • optimizers.gp
    • opytimizer
    • spaces.tree
    • tests
    • utils.constants
    • utils.exceptions
    • utils.history
    Source code(tar.gz)
    Source code(zip)
  • v1.0.7(Sep 12, 2019)

    Changelog

    Description

    Welcome to v1.0.7 release. We added Opytimizer to the pip repository, fixed the optimization task timing, reworked some tests, added new integrations (Recogners library), changed our license for further publication, and fixed an issue regarding agents being out of bounds.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • examples/integrations/recogners
    • opytimizer
    • optimizers
    • tests
    Source code(tar.gz)
    Source code(zip)
  • v1.0.6(Jun 21, 2019)

    Changelog

    Description

    Welcome to v1.0.6 release. We added new interesting things, such as a new optimizer, more benchmarking functions, bug fixes (mostly agents being out of bounds), a reworked history object and some adjusted tests.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes

    • math.benchmark
    • optimizers.wca
    • utils.history
    • tests
    Source code(tar.gz)
    Source code(zip)
  • v1.0.5(Apr 18, 2019)

    Changelog

    Description

    Welcome to v1.0.5 release. We added new interesting things, such as new optimizers, some reworked tests for better use cases, a multi-objective strategy for handling problems with more than one objective function, and much more!

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes

    • examples.integrations.sklearn
    • functions.multi
    • optimizers.abc
    • optimizers.bha
    • optimizers.hs
    • tests
    • optimizers.ihs
    Source code(tar.gz)
    Source code(zip)
  • v1.0.4(Mar 22, 2019)

  • v1.0.3(Mar 22, 2019)

    Changelog

    Description

    Welcome to v1.0.3 release. We have added methods for supporting new optimizers. Additionally, we have fixed some previous implementations and improved their convergence. Everything should be appropriate now.

    Some examples integrating PyTorch with Opytimizer were created as well. Ranging from linear regression to long short-term memory networks, we hope to continue improving our library to serve you well.

    Again, every test is implemented, giving a 100% coverage score. Please refer to the wiki in order to run them.

    Please stay tuned for our next updates and our newest integrations (Sklearn and Tensorflow)!

    Includes

    • optimizers.aiwpso
    • optimizers.ba
    • optimizers.cs
    • optimizers.fa
    • examples.integrations.pytorch
    Source code(tar.gz)
    Source code(zip)
  • v1.0.2(Mar 7, 2019)

    Changelog

    Description

    Welcome to v1.0.2 release. We have added methods for supporting hypercomplex representations. From math modules to new spaces, we support any hypercomplex approach, ranging from complex numbers to octonions.

    A History class has been added as well. It will serve as the one to hold vital information from the optimization task. In the future, we will support visualization and plotting.

    Also, the internal class has been removed. All of its contents were moved to the core.function module. For now, this will be our new structure (there is a slight chance it will be modified in the future to accommodate multi-objective functions).

    Finally, we have tests. Every test is implemented, giving a 100% coverage score. Please refer to the wiki in order to run them.

    Please stay tuned for our next updates!

    Includes

    • math.hypercomplex
    • spaces
    • utils.history
    • tests (100% coverage)

    Excludes

    • functions
    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Feb 26, 2019)

    Changelog

    Description

    Welcome to v1.0.1 release. Essentially, we have reworked some basic structures, added a new math distribution module, and added a new optimizer (Flower Pollination Algorithm). Please stay tuned for our next updates!

    Includes

    • math.distribution
    • optimizers.fpa
    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Feb 26, 2019)

    Changelog

    Description

    This is the initial release of Opytimizer. It includes all the basic modules needed to work with it. One can create an internal optimization function and apply a Particle Swarm Optimization optimizer to it. Please check the examples folder or read the docs to learn how to use this library.

    Includes

    • core
    • functions
    • math
    • optimizers
    • utils
    Source code(tar.gz)
    Source code(zip)
Owner
Gustavo Rosa