💡 Learnergy is a Python library for energy-based machine learning models.

Overview

Learnergy: Energy-based Machine Learners


Welcome to Learnergy.

Did you ever reach a bottleneck in your computational experiments? Are you tired of implementing your own techniques? If yes, Learnergy is the real deal! This package provides an easy-to-use implementation of energy-based machine learning algorithms. From datasets to fully-customizable models, from internal functions to external communications, we will foster all research related to energy-based machine learning.

Use Learnergy if you need a library or wish to:

  • Create your energy-based machine learning algorithm;
  • Design or use pre-loaded learners;
  • Mix-and-match different strategies to solve your problem;
  • Have fun, because it is incredible to learn things.

Read the docs at learnergy.readthedocs.io.

Learnergy is compatible with: Python 3.6+.


Package guidelines

  1. The very first information you need is in the very next section.
  2. Installing is also easy: if you wish to read the code and dive into it yourself, follow along.
  3. Note that there might be some additional steps in order to use our solutions.
  4. If there is a problem, please do not hesitate to call us.

Citation

If you use Learnergy to fulfill any of your needs, please cite us:

@misc{roder2020learnergy,
    title={Learnergy: Energy-based Machine Learners},
    author={Mateus Roder and Gustavo Henrique de Rosa and João Paulo Papa},
    year={2020},
    eprint={2003.07443},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Getting started: 60 seconds with Learnergy

First of all, we have examples. Yes, they are commented. Just browse to examples/, choose your subpackage, and follow the example. We have high-level examples for most of the tasks we could think of.
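If you just want a taste, here is a minimal sketch of the usual workflow (it assumes torchvision is installed for the MNIST dataset and mirrors the RBM example script; double-check the exact signatures against the docs):

    import torchvision

    from learnergy.models.bernoulli import RBM

    # Any torch.utils.data.Dataset works here; MNIST is just convenient
    train = torchvision.datasets.MNIST(
        root="./data", train=True, download=True,
        transform=torchvision.transforms.ToTensor(),
    )

    # A Bernoulli RBM over 28 x 28 = 784 visible units
    model = RBM(n_visible=784, n_hidden=128, steps=1,
                learning_rate=0.1, momentum=0, decay=0, temperature=1)

    # Fits the model, returning the reconstruction error (MSE)
    # and the pseudo-likelihood
    mse, pl = model.fit(train, batch_size=128, epochs=5)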

Alternatively, if you wish to learn even more, please take a minute:

Learnergy is based on the following structure, and you should pay attention to its tree:

- learnergy
    - core
        - dataset
        - model
    - math
        - metrics
        - scale
    - models
        - bernoulli
            - conv_rbm
            - discriminative_rbm
            - dropout_rbm
            - e_dropout_rbm
            - rbm
        - deep
            - conv_dbn
            - dbn
            - residual_dbn
        - extra
            - sigmoid_rbm
        - gaussian
            - gaussian_conv_rbm        
            - gaussian_rbm
    - utils
        - constants
        - exception
        - logging
    - visual
        - convergence
        - image
        - tensor

Core

Core is the core. Essentially, it is the parent of everything. You should find parent classes defining the basis of our structure. They should provide variables and methods that will help to construct other modules.

Math

Just because we are computing stuff, it does not mean that we do not need math. Math is the mathematical package, containing low-level math implementations. From random numbers to distribution generation, you can find what you need in this module.

Models

This is the heart. All models are declared and implemented here. We will offer you the most fantastic implementation of everything we are working with. Please take a closer look into this package.
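The subpackage layout above maps directly to imports. As a quick sketch (constructor arguments follow the bundled examples; for deep models, each hyperparameter is a tuple with one entry per stacked layer):

    from learnergy.models.bernoulli import RBM
    from learnergy.models.deep import DBN

    # A single Bernoulli RBM
    rbm = RBM(n_visible=784, n_hidden=128)

    # A two-layer DBN: per-layer hyperparameters are given as tuples
    dbn = DBN(model="bernoulli", n_visible=784, n_hidden=(256, 128),
              steps=(1, 1), learning_rate=(0.1, 0.1), momentum=(0, 0),
              decay=(0, 0), temperature=(1, 1))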

Utils

This is a utility package. Common things shared across the application should be implemented here. It is better to implement something once and reuse it as you wish than to re-implement the same thing over and over again.

Visual

Everyone needs images and plots to help visualize what is happening, correct? This package provides every visual-related method you need. Check a specific image, your model's convergence, plot reconstructions, weights, and much more.
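As a sketch of what that looks like in practice, reusing the model fitted in the quickstart above (function names are taken from the example scripts; treat them as assumptions and check the docs):

    from learnergy.visual.convergence import plot
    from learnergy.visual.tensor import show_tensor

    # Plots the metrics tracked in model.history during fit()
    plot(model.history["mse"], model.history["pl"], labels=["MSE", "log-PL"])

    # Shows a single learned filter as a 28 x 28 image
    show_tensor(model.W[:, 0].reshape(28, 28))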


Installation

We believe that everything has to be easy. Not tricky or daunting, Learnergy will be the go-to package that you will need, from the very first installation to your daily research needs. Just run the following under your preferred Python environment (raw, conda, virtualenv, whatever):

pip install learnergy

Alternatively, if you prefer to install the bleeding-edge version, please clone this repository and use:

pip install -e .

Environment configuration

Note that sometimes additional setup is needed. If so, the platform notes below cover the details.

Ubuntu

No specific additional commands needed.

Windows

No specific additional commands needed.

MacOS

No specific additional commands needed.


Support

We know that we do our best, but it is inevitable to acknowledge that we make mistakes. If you ever need to report a bug or a problem, or just want to talk to us, please do so! We will be available at our best at this repository, or via [email protected] and [email protected].


Comments
  • Naming of models' class attributes

    The variable names used in the getters and setters of each model class differ from what is initialized in the constructor.

    Note that the name 'W' is used in __init__, but '_W' is used as the backing variable inside the getter and setter, which are themselves exposed under the same name 'W' that is initialized in __init__. This issue is the same for all member variables. I have tested the DBN and GaussianVarianceRBM subclasses as well, and the issue persists across all other subclasses.

    • OS: CentOS 7.8
    • Virtual Environment conda 4.10.1
    • Python Version 3.8

    This causes issues when trying to make copies of models for things like multi-GPU training via torch.nn.parallel.DistributedDataParallel.
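
    For reference, a minimal reconstruction of the pattern being described (our own sketch, not the library's exact code):

        import torch

        class TinyRBM(torch.nn.Module):
            def __init__(self, n_visible, n_hidden):
                super().__init__()
                # Assigning through the property below stores the parameter
                # under the backing name '_W', not 'W'
                self.W = torch.nn.Parameter(torch.randn(n_visible, n_hidden) * 0.01)

            @property
            def W(self):
                return self._W

            @W.setter
            def W(self, w):
                self._W = w

        model = TinyRBM(4, 2)
        print([name for name, _ in model.named_parameters()])  # ['_W'], not ['W']

    Because the registered parameter name ('_W') differs from the attribute name assigned in __init__ ('W'), utilities that rebuild or copy modules by attribute name can get confused.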

    bug 
    opened by nlahaye 14
  • Probabilistic Max Pooling

    Hi Gustavo!

    I hope you are doing well!

    I was wondering if you had looked into / planned to add probabilistic max-pooling, as described in: H. Lee et al., "Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations," Proceedings of the 26th Annual International Conference on Machine Learning (2009).

    I am happy to help, but figured it would be best to consult you first.

    Thanks! Nick

    enhancement 
    opened by nlahaye 7
  • probability bug

    When I use GaussianRBM to run rbm_classification.py on GPU, these errors appear.

        11%|█ | 51/469 [00:01<00:07, 58.24it/s]
        C:/cb/pytorch_1000000000000/work/aten/src\ATen/native/cuda/DistributionTemplates.h:591: block: [6,0,0], thread: [416,0,0] Assertion `0 <= p4 && p4 <= 1` failed.
        [... the same assertion repeats for threads [417,0,0] through [447,0,0] ...]

    RuntimeError: CUDA error: device-side assert triggered

    But if I run it on CPU, the error looks like this: RuntimeError: Expected p_in >= 0 && p_in <= 1 to be true, but got false.

    It seems like the function hidden_sampling doesn't run correctly.
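
    Until the root cause is found, one defensive workaround is to sanitize the probabilities before sampling; a sketch, not the library's own fix:

        import torch

        def safe_bernoulli(probs: torch.Tensor) -> torch.Tensor:
            # NaNs and values outside [0, 1] are exactly what trigger
            # the device-side assertions above
            probs = torch.nan_to_num(probs, nan=0.0)
            probs = torch.clamp(probs, min=0.0, max=1.0)
            return torch.bernoulli(probs)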

    bug question 
    opened by kyyongh 6
  • nan in probs

    Hello,

    Thank you very much for developing this superb library!!

    I believe that I found an issue, but I am not able to find the root cause.

    From time to time I receive this error:

        Traceback (most recent call last):
          File "/Users/supermario/LNIssue/run_rbm.py", line 26, in <module>
            epochs=1000000)
          File "/Users/supermario/LNIssue/venv/lib/python3.7/site-packages/learnergy/models/bernoulli/rbm.py", line 495, in fit
            _, _, _, _, visible_states = self.gibbs_sampling(samples)
          File "/Users/supermario/LNIssue/venv/lib/python3.7/site-packages/learnergy/models/bernoulli/rbm.py", line 379, in gibbs_sampling
            neg_hidden_states, True)
          File "/Users/supermario/LNIssue/venv/lib/python3.7/site-packages/learnergy/models/bernoulli/rbm.py", line 352, in visible_sampling
            states = torch.bernoulli(probs)
        RuntimeError: Expected p_in >= 0 && p_in <= 1 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)

    Here is the code to reproduce the error:

        import numpy as np
        import torch
        from torch.utils.data import Dataset

        from learnergy.models.bernoulli.rbm import RBM

        np.random.seed(16)
        torch.manual_seed(16)


        class CustomDataset(Dataset):
            def __init__(self, custom_data):
                self.custom_data = torch.from_numpy(custom_data)
                self.dummy_y = torch.from_numpy(np.array([0.0]))

            def __getitem__(self, index):
                return self.custom_data[index], self.dummy_y[0]

            def __len__(self):
                return self.custom_data.shape[0]


        rbm = RBM(n_visible=32, n_hidden=64, steps=1, learning_rate=0.0001,
                  momentum=0.9, decay=0.00001, temperature=0, use_gpu=False)

        data = np.random.choice(a=[False, True], size=(5755, 32)).astype(np.float32)
        data_train = CustomDataset(data)

        rbm.fit(dataset=data_train, batch_size=20, epochs=1000000)

    And requirements:

        astroid==2.8.4
        attrs==21.2.0
        coverage==6.1.1
        cycler==0.11.0
        imageio==2.10.3
        importlib-metadata==4.8.2
        iniconfig==1.1.1
        isort==5.10.1
        kiwisolver==1.3.2
        lazy-object-proxy==1.6.0
        learnergy==1.1.1
        matplotlib==3.4.3
        mccabe==0.6.1
        networkx==2.6.3
        numpy==1.21.4
        packaging==21.2
        Pillow==8.4.0
        platformdirs==2.4.0
        pluggy==1.0.0
        py==1.11.0
        pylint==2.11.1
        pyparsing==2.4.7
        pytest==6.2.5
        python-dateutil==2.8.2
        PyWavelets==1.2.0
        scikit-image==0.18.3
        scipy==1.7.2
        six==1.16.0
        tifffile==2021.11.2
        toml==0.10.2
        torch==1.9.0
        tqdm==4.62.3
        typed-ast==1.4.3
        typing-extensions==3.10.0.2
        wrapt==1.13.3
        zipp==3.6.0

    The error should appear at epoch 221/1000000.

    If I add the following torch.nan_to_num call in the visible_sampling() method, then the error is not present anymore.

        probs = torch.nan_to_num(probs, nan=0.0)

        # Sampling current states
        states = torch.bernoulli(probs)

    macOS, Python 3.7.0

    Thank you! Please let me know if you need additional info.

    bug 
    opened by citizenn19 4
  • [REG] Basis for classification architecture

    Regarding the basic Bernoulli RBM for classification, could I check what the basis of your implementation is? From what I can tell, the training is broken up into two stages:

    1. RBM will perform unsupervised learning to minimize energy (RBM::fit), basically like a feature extractor.
    2. The whole model is then considered as a 2-layer neural network with a sigmoid layer (RBM::forward) and a linear transformation layer.

    So, it seems like the RBM is just used as an initializer for the neural network.
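
    For concreteness, a sketch of that two-stage scheme (dataset and loader names are placeholders):

        import torch

        from learnergy.models.bernoulli import RBM

        rbm = RBM(n_visible=784, n_hidden=128)
        rbm.fit(train_dataset, batch_size=128, epochs=5)  # stage 1: unsupervised

        # Stage 2: fine-tune a linear head on top of the sigmoid hidden layer
        head = torch.nn.Linear(128, 10)
        optimizer = torch.optim.Adam(
            list(rbm.parameters()) + list(head.parameters()), lr=1e-3)
        criterion = torch.nn.CrossEntropyLoss()

        for x, y in train_loader:
            features = rbm(x.reshape(len(x), -1))  # hidden probabilities
            loss = criterion(head(features), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()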

    In addition, I noticed that RBM::fit's epochs parameter defaults to 1. Increasing the number of epochs does not seem to improve performance. Does this mean that there is no use in training the RBM?

    Lastly, was there any prior work that implemented RBM in this way, or is this your original idea? Are there any published results or studies of this architecture? Thank you!

    general 
    opened by shotnothing 3
  • [REG] The purpose of applying Softplus function in RBM

    In the comments of learnergy/learnergy/models/bernoulli/rbm.py (line 401), you say

    Creating a Softplus function for numerical stability

    The Softplus function has the form f(x) = log(1 + exp(x)), which matches the log term of the marginal energy of an RBM:

    E(v) = \sum_i a_i v_i + \sum_j \log\left(1 + \exp\left(\sum_i v_i \lambda_{ij} + b_j\right)\right).

    So I suppose the Softplus function is not used to ensure numerical stability, but to calculate the marginal energy. I am a beginner with RBMs; if I have any wrong perceptions, please point them out.
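
    As a side note, both readings are compatible: Softplus is exactly that log term, and F.softplus is also the numerically stable way to evaluate it, since a naive log(1 + exp(x)) overflows for large x. A sketch with generic names, where W is the weight matrix and a, b are the visible and hidden biases:

        import torch
        import torch.nn.functional as F

        def free_energy(v, W, a, b):
            # F(v) = -v.a - sum_j softplus(b_j + sum_i v_i W_ij)
            wx_b = torch.matmul(v, W) + b  # shape: (batch, n_hidden)
            return -torch.matmul(v, a) - F.softplus(wx_b).sum(dim=1)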

    Thank you!

    general 
    opened by YemingMeng 3
  • Bump sphinx from 5.2.3 to 5.3.0

    Bumps sphinx from 5.2.3 to 5.3.0.

    Release notes

    Sourced from sphinx's releases.

    v5.3.0

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    Changelog

    Sourced from sphinx's changelog.

    Release 5.3.0 (released Oct 16, 2022)

    • #10759: LaTeX: add :confval:latex_table_style and support the 'booktabs', 'borderless', and 'colorrows' styles. (thanks to Stefan Wiehler for initial pull requests #6666, #6671)
    • #10840: One can cross-reference including an option value, like ``:option:`--module=foobar```, ``:option:`--module[=foobar]``` or ``:option:`--module foobar```. Patch by Martin Liska.
    • #10881: autosectionlabel: Record the generated section label to the debug log.
    • #10268: Correctly URI-escape image filenames.
    • #10887: domains: Allow sections in all the content of all object description directives (e.g. :rst:dir:py:function). Patch by Adam Turner
    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    opened by dependabot[bot] 2
  • Bump sphinx from 5.2.3 to 5.3.0 in /docs

    Bumps sphinx from 5.2.3 to 5.3.0.

    (Release notes, changelog excerpt, and Dependabot command reference are identical to the previous entry.)
    opened by dependabot[bot] 2
  • Gaussian RBM with hidden layer SELU activation

    Hi Gustavo, I hope you are having a great week. Over the last few months, I have made a couple of additions to my fork, and I figured I would share them to see if you found them useful for the base repo. No worries if not!

    The first one was adding Gaussian RBM with hidden layer SELU activation. I have had some success using this model for my research, and figured it may be useful for others.

    In addition to the SELU model, I added boolean flags to the parent GaussianRBM class to indicate whether or not to batch-normalize and to normalize the input.

    The batch normalization is removed for SELU RBM, as it self-normalizes. Also, I use a scaling functionality outside of learnergy for the input, so it would be useful to have the normalization internal to learnergy as an option.

    By default, input and batch normalization are turned on, so as not to cause issues for other users.
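
    As a sketch of the idea (not the fork's actual code), the hidden pass swaps the sigmoid activation for SELU:

        import torch
        import torch.nn.functional as F

        def selu_hidden_pass(v, W, b):
            # SELU instead of sigmoid; its self-normalizing property is
            # why batch normalization is dropped for this variant
            activations = torch.matmul(v, W) + b
            return F.selu(activations)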

    Thanks! Nick

    opened by nlahaye 2
  • Bump sphinx from 5.2.3 to 6.0.0

    Bumps sphinx from 5.2.3 to 6.0.0.

    Release notes

    Sourced from sphinx's releases.

    v6.0.0

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b2

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v6.0.0b1

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    v5.3.0

    Changelog: https://www.sphinx-doc.org/en/master/changes.html

    Changelog

    Sourced from sphinx's changelog.

    Release 6.0.0 (released Dec 29, 2022)

    Dependencies

    • #10468: Drop Python 3.6 support
    • #10470: Drop Python 3.7, Docutils 0.14, Docutils 0.15, Docutils 0.16, and Docutils 0.17 support. Patch by Adam Turner

    Incompatible changes

    • #7405: Removed the jQuery and underscore.js JavaScript frameworks.

      These frameworks are no longer automatically injected into themes from Sphinx 6.0 onwards. If you develop a theme or extension that uses the jQuery, $, or $u global objects, you need to update your JavaScript to modern standards, or use the mitigation below.

      The first option is to use the sphinxcontrib.jquery_ extension, which has been developed by the Sphinx team and contributors. To use this, add sphinxcontrib.jquery to the extensions list in conf.py, or call app.setup_extension("sphinxcontrib.jquery") if you develop a Sphinx theme or extension.

      The second option is to manually ensure that the frameworks are present. To re-add jQuery and underscore.js, you will need to copy jquery.js and underscore.js from the Sphinx repository_ to your static directory, and add the following to your layout.html:

      .. code-block:: html+jinja

         {%- block scripts %}
             <script src="{{ pathto('_static/jquery.js', resource=True) }}"></script>
             <script src="{{ pathto('_static/underscore.js', resource=True) }}"></script>
             {{ super() }}
         {%- endblock %}

      .. _sphinxcontrib.jquery: https://github.com/sphinx-contrib/jquery/

      Patch by Adam Turner.

    • #10471, #10565: Removed deprecated APIs scheduled for removal in Sphinx 6.0. See :ref:dev-deprecated-apis for details. Patch by Adam Turner.

    • #10901: C Domain: Remove support for parsing pre-v3 style type directives and roles. Also remove associated configuration variables c_allow_pre_v3 and c_warn_on_allowed_pre_v3. Patch by Adam Turner.

    Features added

    ... (truncated)

    Commits

    (Dependabot compatibility score and command reference are identical to the first Dependabot entry above.)
    opened by dependabot[bot] 1
  • Bump sphinx from 5.2.3 to 6.0.0 in /docs

    Bumps sphinx from 5.2.3 to 6.0.0.

    (Release notes, changelog excerpt, and Dependabot command reference are identical to the previous entry.)
    opened by dependabot[bot] 1
Releases (v1.1.3)
  • v1.1.3(Oct 21, 2022)

    Changelog

    Description

    Welcome to v1.1.3 release.

    In this release, we have added MaxPooling2D support for convolutional models and optimized the training procedure for deeper models, i.e., DBNs. This optimization enables a DBN to train without loading the entire dataset into memory for each hidden layer during this phase.

    Please read the docs at: learnergy.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • learnergy
    Source code(tar.gz)
    Source code(zip)
  • v1.1.2(Apr 26, 2022)

  • v1.1.1(Dec 23, 2020)

    Changelog

    Description

    Welcome to v1.1.1 release.

    In this release, we added the DropConnectRBM and fixed some nasty bugs, as well as improved some unit tests.

    Please read the docs at: learnergy.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • models.bernoulli.dropout_rbm
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Dec 2, 2020)

    Changelog

    Description

    Welcome to v1.1.0 release.

    In this release, we renamed some packages for a clearer description. Additionally, we have added a GaussianConvRBM class and made some minor adjustments throughout the package.

    Note that this release might cause incompatibility with previous versions due to some packages being renamed.

    Please read the docs at: learnergy.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • models
    Source code(tar.gz)
    Source code(zip)
  • v1.0.7(Nov 25, 2020)

  • v1.0.6(Jul 10, 2020)

    Changelog

    Description

    Welcome to v1.0.6 release.

    In this release, we have added a new model, known as Convolutional RBM. Additionally, we have reworked our modules to provide a cleaner environment.

    Please read the docs at: learnergy.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • learnergy
    Source code(tar.gz)
    Source code(zip)
  • v1.0.5(May 28, 2020)

    Changelog

    Description

    Welcome to v1.0.5 release.

    In this release, we have added a new model, known as Gaussian ReLU RBM. Additionally, we have added progress bars in an attempt to construct a clearer visualization of the training process.

    Please read the docs at: learnergy.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • learnergy.models.gaussian_rbm
    Source code(tar.gz)
    Source code(zip)
  • v1.0.4(May 7, 2020)

    Changelog

    Description

    Welcome to v1.0.4 release.

    In this release, we have facilitated some classes imports, added the Residual DBN, and corrected some nasty bugs.

    Please read the docs at: learnergy.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • learnergy
    • learnergy.models.residual_dbn
    Source code(tar.gz)
    Source code(zip)
  • v1.0.3(Mar 31, 2020)

  • v1.0.2(Feb 18, 2020)

    Changelog

    Description

    Welcome to v1.0.2 release.

    In this release, we have rebranded the package to Learnergy. We have also improved all of our packages, including GPU-based versions of our models.

    Please read the docs at: learnergy.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • learnergy
    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Sep 12, 2019)

    Changelog

    Description

    Welcome to Recogners v1.0.1. We fixed some issues related to the auto-docstring API, added the Dropout-based RBM, and added a method to dump important information to the model's history.

    Note that the Gaussian RBM is not yet guaranteed to work. Please wait for our next release.

    Includes (or changes)

    • models/dropout_rbm
    • tests
    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Jul 4, 2019)

    Changelog

    Description

    This is the initial release of Recogners. It includes all the basic modules needed to work with the library. One can create and use a Restricted Boltzmann Machine, along with some extra functionalities. Please check the examples folder or read the docs to learn how to use this library.

    Includes

    • core
    • datasets
    • math
    • models
    • utils
    • visual
    Source code(tar.gz)
    Source code(zip)
Owner
Gustavo Rosa
There are no programming languages that can match up to programming logic. Machine learning researcher on work time and software engineer on free time.