Bayesian Optimization using GPflow

Overview

Note: This package is for use with GPflow 1.

For Bayesian optimization using GPflow 2, please see Trieste, a joint effort with Secondmind.

GPflowOpt

GPflowOpt is a Python package for Bayesian optimization using GPflow, built on TensorFlow. It was initiated and is currently maintained by Joachim van der Herten and Ivo Couckuyt. The full list of contributors (in alphabetical order) is Ivo Couckuyt, Tom Dhaene, James Hensman, Nicolas Knudde, Alexander G. de G. Matthews and Joachim van der Herten. Special thanks also to all GPflow contributors, as this package would not exist without their effort.


Install

The easiest way to install GPflowOpt involves cloning this repository and running

pip install . --process-dependency-links

in the source directory. This also installs all required dependencies (including TensorFlow, if needed). For more detailed installation instructions, see the documentation.

Contributing

If you are interested in contributing to this open source project, contact us through an issue on this repository. For more information, see the notes for contributors.

Citing GPflowOpt

To cite GPflowOpt, please reference the preliminary arXiv paper. Sample BibTeX is given below:

@ARTICLE{GPflowOpt2017,
   author = {Knudde, Nicolas and {van der Herten}, Joachim and Dhaene, Tom and Couckuyt, Ivo},
    title = "{{GP}flow{O}pt: {A} {B}ayesian {O}ptimization {L}ibrary using Tensor{F}low}",
  journal = {arXiv preprint -- arXiv:1711.03845},
  year    = {2017},
  url     = {https://arxiv.org/abs/1711.03845}
}
Comments
  • GPflow 1.0

    GPflow 1.0

    Following up on #86, this is the development of a GPflow 1.0-compatible version of GPflowOpt. It is far from done; the biggest difficulty (Acquisition) unfortunately still lies ahead.

    At the same time, lots of tests are affected: I'm reworking them to use some of the cool pytest features. When this work is over, I hope to open another PR to improve testing further (split out computationally demanding tests into system tests, and use mocks to test things that currently trigger BO runs and model optimizations).
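
    As a hedged illustration of the mocking idea (a sketch only: the `domain` and `acquisition` fixtures are hypothetical, and patching gpflow.model.Model.optimize assumes the GPflow 0.x/1.x module layout; only result.nfev is taken from the optimizer result shown elsewhere on this page):

    import unittest.mock as mock

    import numpy as np
    import gpflowopt

    def test_bo_loop_without_model_optimization(domain, acquisition):
        # Hypothetical pytest-style test: `domain` and `acquisition` are assumed
        # fixtures. Patching Model.optimize makes the model hyperparameter
        # optimization a no-op, so only the BO bookkeeping is exercised.
        with mock.patch("gpflow.model.Model.optimize", return_value=None):
            optimizer = gpflowopt.BayesianOptimizer(domain, acquisition)
            result = optimizer.optimize(lambda X: np.sum(X**2, axis=1)[:, None],
                                        n_iter=2)
        assert result.nfev == 2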

    do not merge yet Discussion 
    opened by javdrher 17
  • Cholesky failures due to inappropriate initial hyperparameters

    Cholesky failures due to inappropriate initial hyperparameters

    As mentioned in #4, tests often failed (mostly on Python 2.7) due to Cholesky decomposition errors. At first I thought this was mostly caused by updating the data and calling optimize() again, but resetting the hyperparameters wasn't working all the time. Increasing the likelihood variance sometimes helps slightly, but isn't very robust either.

    Right now the tests specify lengthscales for the initial model and apply a hyperprior on the kernel variance. Each BO iteration, the hyperparameters supplied with the initial model are applied as a starting point. In addition, restarts are applied by randomizing the Params. This approach made things a lot more stable, but it isn't perfect yet. Especially in longer BO runs, reverting to the supplied lengthscales each time ultimately causes crashes.

    Some things we may consider:

    • Normalizing the input/output data. Tested this a bit; it didn't solve the issue. Additionally, the model hyperparameters lose some interpretability. Note that I think we will ultimately need this for PES anyway.
    • Add a callback for re-configuring hyperparameters. Instead of reverting to the initially supplied hypers each iteration, this function is called and configures the initial state. I think this is ultimately required for more complex modeling approaches, but for simple scenarios with GPR this has to work automatically.
    • Applying hyperpriors is going to be important.

    I'd love to hear thoughts on how to improve this.
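
    A minimal sketch of the hyperprior-plus-randomized-restarts idea discussed above, assuming the GPflow 0.x-era API (gpflow.priors, randomize(), get_free_state()/set_state() are taken from that API and may differ between versions):

    import numpy as np
    import gpflow

    # Toy data; in BO this would be the current design and evaluations.
    X = np.random.rand(20, 2)
    Y = np.sum(np.square(X), axis=1, keepdims=True)
    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    model.kern.variance.prior = gpflow.priors.Gamma(2., 3.)      # hyperprior on the variance
    model.kern.lengthscales.prior = gpflow.priors.Gamma(3., 1.)

    # Randomized restarts: re-draw hyperparameters instead of reverting to the
    # supplied lengthscales, and keep the best solution found.
    best = None
    for _ in range(5):
        model.randomize()
        model.optimize()
        ll = model.compute_log_likelihood()
        if best is None or ll > best[0]:
            best = (ll, model.get_free_state().copy())
    model.set_state(best[1])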

    help wanted 
    opened by javdrher 14
  • Question regarding getting started example from documentation

    Question regarding getting started example from documentation

    Good morning guys,

    I am currently trying to get gpflowopt up and running for an optimization problem. Naturally, I first tried the example provided in the documentation. Now, while I do get the same result as in the documentation, I am a bit puzzled by the function evaluations the optimizer chooses. To be more specific, the optimizer always evaluates the function at the point [0.0, 0.5] in all 15 iterations. I am probably overlooking something, as this does not seem to be the desired behavior, right? The optimizer does not seem to be really optimizing. Can anyone point out the mistake I made while setting up the problem? I am pretty sure I followed the instructions of the example in the documentation to the letter.

    This is the code, that I am running:

    import numpy as np
    from gpflowopt.domain import ContinuousParameter
    import gpflow
    from gpflowopt.bo import BayesianOptimizer
    from gpflowopt.design import LatinHyperCube
    from gpflowopt.acquisition import ExpectedImprovement
    #from gpflowopt.optim import SciPyOptimizer
    
    def fx(X):
        X = np.atleast_2d(X)
        result = np.sum(np.square(X), axis=1)[:, None]
        print("X: {}".format(X))
        print("fx: {}".format(result))
        return result
    
    
    domain = ContinuousParameter('x1', -2, 2) + ContinuousParameter('x2', -1, 2)
    
    # Use standard Gaussian process Regression
    lhd = LatinHyperCube(21, domain)
    X = lhd.generate()
    Y = fx(X)
    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3)
    
    # Now create the Bayesian Optimizer
    alpha = ExpectedImprovement(model)
    optimizer = BayesianOptimizer(domain, alpha)
    
    # Run the Bayesian optimization
    #with optimizer.silent():
    r = optimizer.optimize(fx, n_iter=15)
    print(r)
    

    And this is the output I am seeing:

    python try_out_gpflowopt.py 
    2017-12-11 09:28:09.790044: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    X: [[ 0.2   0.65]
     [ 0.   -0.1 ]
     [-0.8   0.5 ]
     [ 0.4   1.4 ]
     [-0.6   1.25]
     [ 1.    0.05]
     [ 1.2   0.8 ]
     [-1.   -0.25]
     [ 0.8  -0.7 ]
     [ 1.4   1.55]
     [-1.6   1.1 ]
     [-1.8   0.35]
     [-0.2  -0.85]
     [ 1.8   0.2 ]
     [-0.4   1.85]
     [ 0.6   2.  ]
     [ 2.    0.95]
     [ 1.6  -0.55]
     [-1.4   1.7 ]
     [-1.2  -1.  ]
     [-2.   -0.4 ]]
    fx: [[ 0.4625]
     [ 0.01  ]
     [ 0.89  ]
     [ 2.12  ]
     [ 1.9225]
     [ 1.0025]
     [ 2.08  ]
     [ 1.0625]
     [ 1.13  ]
     [ 4.3625]
     [ 3.77  ]
     [ 3.3625]
     [ 0.7625]
     [ 3.28  ]
     [ 3.5825]
     [ 4.36  ]
     [ 4.9025]
     [ 2.8625]
     [ 4.85  ]
     [ 2.44  ]
     [ 4.16  ]]
    Warning: optimization restart 4/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ -3.76012797e-10   4.99988509e-01]]
    fx: [[ 0.24998851]]
    Warning: optimization restart 1/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 4/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 3/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
         fun: array([ 0.01])
     message: 'OK'
        nfev: 15
     success: True
           x: array([[ 0. , -0.1]])
    

    For the sake of completeness, these are the steps I took to setup GPFlowOpt:

    1. Created new conda environment based on Python 3.5
    2. Cloned the gpflowopt repo
    3. Ran pip install . --process-dependency-links

    BTW, this is the first issue I've ever created on GitHub, so please forgive me if I am violating any conventions, and please let me know if I left out crucial information.
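
    For reference, a hedged sketch of one thing to try: configure the acquisition optimizer explicitly, combining a Monte Carlo pre-search with gradient-based refinement. The class names follow the GPflowOpt API shown elsewhere on this page; whether this changes the behaviour above is not verified here. It reuses domain, alpha and fx from the snippet above:

    from gpflowopt.optim import StagedOptimizer, MCOptimizer, SciPyOptimizer

    # Search the acquisition globally with random candidates first,
    # then refine the best candidate with SciPy.
    acq_opt = StagedOptimizer([MCOptimizer(domain, 200),
                               SciPyOptimizer(domain)])
    optimizer = BayesianOptimizer(domain, alpha, optimizer=acq_opt)
    r = optimizer.optimize(fx, n_iter=15)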

    opened by jbi35 9
  • Overflow warnings

    Overflow warnings

    During calls to optimize(), warnings sometimes pop up:

    /home/javdrher/.virtualenvs/gpflowopt/lib/python3.5/site-packages/GPflow/transforms.py:129: RuntimeWarning: overflow encountered in exp result = np.log(1. + np.exp(x)) + self._lower

    Typically this is quite harmless; if it's really causing trouble, it's usually followed by a Cholesky decomposition exception. However, the warnings mess up output, specifically in documentation notebooks. I was thinking of silencing the warnings; any reason not to?
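
    For what it's worth, a minimal way to silence just this warning with the standard library (the message pattern is taken from the output shown above):

    import warnings

    # Silence only the overflow warning raised inside GPflow's softplus transform.
    warnings.filterwarnings("ignore", message="overflow encountered in exp",
                            category=RuntimeWarning)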

    question 
    opened by javdrher 9
  • Bug fix in pareto.py, Pareto::divide_conquer_nd

    Bug fix in pareto.py, Pareto::divide_conquer_nd

    The algorithm in Pareto::divide_conquer_nd fails when two points in the Pareto set have the same value in a certain dimension. An example is included in the modified pareto.py: the Pareto set d21 contains three points, two of which have a value of 2.0 in the first dimension. Ordering the three points in different ways results in different values of the hypervolume (28 and 32), both of which are wrong (it should be 29).

    The issue is in the dominance test associated with the _is_test_required method and pseudo_pf. The array pseudo_pf assigns different ranks to the same values. Therefore, reordering the Pareto set in the test case generates different pseudo Pareto sets for the same Pareto set.

    I figured out two ways to fix this. One is to fix pseudo_pf by sorting the Pareto set such that the same values are assigned the same rank (e.g. using scipy.stats.rankdata). The other is to fix the dominance test by checking the actual Pareto set. The first one leads to more iterations in the algorithm, so I implemented the second approach in pareto.py with a minimal modification, although some simplifications of the code are possible.
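
    To illustrate the first option, a small example of how scipy.stats.rankdata assigns tied values the same rank (toy data, not the d21 set from the test case):

    import numpy as np
    from scipy.stats import rankdata

    # Two points share the value 2.0 in the first dimension, mirroring the
    # situation described above.
    pf = np.array([[2.0, 1.0],
                   [2.0, 3.0],
                   [1.0, 4.0]])
    # Rank each objective dimension; method="min" gives ties the same rank.
    pseudo_pf = np.vstack([rankdata(pf[:, i], method="min")
                           for i in range(pf.shape[1])]).T
    print(pseudo_pf)
    # [[2 1]
    #  [2 2]
    #  [1 3]]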

    opened by smanist 5
  • When installing GPflowOpt, it downgrades GPflow from 1.2 to 0.4

    When installing GPflowOpt, it downgrades GPflow from 1.2 to 0.4

    I have installed the latest GPflow version (1.2) using pip. Later, when I tried to install GPflowOpt using the following command:

    pip install git+https://github.com/GPflow/GPflowOpt.git

    it downgraded GPflow from 1.2 to 0.4.

    opened by pullanagari 5
  • Is GPflowOpt compatible anymore with GPflow functions?

    Is GPflowOpt compatible anymore with GPflow functions?

    I just downloaded GPflowOpt, yet nothing can run due to slight changes made in GPflow. For example, you have now made a 'core' subfolder and seem to have changed the AutoFlow function. I have tried to apply some changes on my own (it is not hard to change 'from gpflow.param import DataHolder, AutoFlow' into two separate imports from their correct folders, params and core), but note that @AutoFlow calls now need to be @gpflow.autoflow. I spent several hours changing this for pretty much every function/class.
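
    For instance, the import changes described above look roughly like this (the module paths are as stated in this issue and may differ between GPflow 1.x releases; not verified here):

    # GPflow 0.x-era import used throughout GPflowOpt:
    from gpflow.param import DataHolder, AutoFlow

    # GPflow 1.x equivalents, split across the params and core subfolders:
    from gpflow.params import DataHolder
    from gpflow import autoflow   # @AutoFlow(...) decorators become @gpflow.autoflow(...)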

    Yet now it seems certain classes are also significantly changed: 'Parameterized' no longer has the attribute 'highest_parent', which SciPyOptimizer needs.

    At this point, not a single thing can be called from GPflowOpt without error.

    opened by grahamski2323 5
  • Max-Value Entropy Search

    Max-Value Entropy Search

    I implemented the recent acquisition function Max-Value Entropy Search from:

    Wang, Z. & Jegelka, S.. (2017). Max-value Entropy Search for Efficient Bayesian Optimization. Proceedings of the 34th International Conference on Machine Learning, in PMLR 70:3627-3635

    We named it Min-Value Entropy Search because the GPflow framework seeks the minimum of the function. There is a notebook evaluating the method on the Shekel function, and it seems to perform well.

    opened by nknudde 5
  • MGP

    MGP

    In this pull request I implemented the Approximatively Marginalised GP, as indicated in issue #39 . It currently supports multi-output GPs. A notebook and some tests are included.

    opened by nknudde 5
  • GPR works, VGP doesn't

    GPR works, VGP doesn't

    To improve the speed of optimisation, I replaced GPR with VGP as follows:

    domain = np.sum([GPflowOpt.domain.ContinuousParameter(f'mux{i}', mm[i], mx[i]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'muy{i}', mm[i+7], mx[i+7]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmax{i}', 1e-7, 1.) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmay{i}', 1e-7, 1.) for i in range(7)])
    domain += GPflowOpt.domain.ContinuousParameter('offset', endo * 0.7, endo * 1.3)
    design = GPflowOpt.design.RandomDesign(500, domain)
    X = design.generate()
    Y = np.vstack([obj(x.reshape(1, -1)) for x in X])
    model = GPflow.vgp.VGP(X, Y, GPflow.kernels.RBF(29, lengthscales=X.std(axis=0)),
                           likelihood=GPflow.likelihoods.Gaussian())
    acquisition = GPflowOpt.acquisition.ExpectedImprovement(model)
    opt = GPflowOpt.optim.StagedOptimizer([GPflowOpt.optim.MCOptimizer(domain, 500),
                                           GPflowOpt.optim.SciPyOptimizer(domain)])
    optimizer = GPflowOpt.BayesianOptimizer(domain, acquisition, optimizer=opt)
    optimizer.optimize(obj, n_iter=500)

    GPR works, but with VGP I receive the following error:

    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape_1)]]
    2017-07-18 23:03:28.798171: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [501,1] vs. [500,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape_1)]]
    Warning: optimization restart 1/5 failed
    2017-07-18 23:03:28.898935: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.898992: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.899066: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.899289: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    Warning: optimization restart 2/5 failed

    I'm using master GPflow and GPflowOpt on TensorFlow 1.2 and Python 3.6.

    Thanks.

    bug 
    opened by mccajm 5
  • Avoid (some) duplicate optimizes

    Avoid (some) duplicate optimizes

    Following #52, here is some code that gets rid of one optimize call (when no initial design is specified). The PR also includes a context manager that can be used to suspend all optimizes. This is mostly useful for the lower-level API.

    Note: in the following release the data mechanism will undergo some changes, and enabling the scaling should be moved to avoid another scaling.

    enhancement do not merge yet 
    opened by javdrher 4
  • Issue: Install package gpflowopt

    Issue: Install package gpflowopt

    Hello, can you help me solve this issue?

    ERROR: Could not find a version that satisfies the requirement GPflow==0.5.0 (from gpflowopt) (from versions: 1.4.1.linux-x86_64, 1.5.0.linux-x86_64, 1.0.0, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.1, 1.5.0, 1.5.1, 2.0.0rc1, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.2.0, 2.2.1, 2.3.0, 2.3.1, 2.4.0, 2.5.1, 2.5.2)
    ERROR: No matching distribution found for GPflow==0.5.0

    I tried to install it in Google Colab and Spyder, but it is not working.

    opened by SamirLamin 2
  • Coupled or decoupled constrained BO?

    Coupled or decoupled constrained BO?

    I am very new to the BO domain; I am working on constrained BO and found this wonderful tool, which supports it. As I understand from the code, the acquisition functions for the objective and the constraint are multiplied. I have a question about that: is this constrained BO then considered coupled?
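
    For reference, a hedged sketch of what that multiplicative setup looks like (GPflowOpt 0.x-era API; ProbabilityOfFeasibility and the `*` composition of acquisitions are assumed from the library's acquisition module):

    import numpy as np
    import gpflow
    import gpflowopt

    # Toy 1D problem: minimize f subject to c(x) <= 0.
    domain = gpflowopt.domain.ContinuousParameter('x', 0., 1.)
    X = gpflowopt.design.LatinHyperCube(11, domain).generate()
    f = lambda X: np.sin(3. * X)          # objective observations
    c = lambda X: X - 0.5                 # constraint observations
    objective_model = gpflow.gpr.GPR(X, f(X), gpflow.kernels.Matern52(1))
    constraint_model = gpflow.gpr.GPR(X, c(X), gpflow.kernels.Matern52(1))

    ei = gpflowopt.acquisition.ExpectedImprovement(objective_model)
    pof = gpflowopt.acquisition.ProbabilityOfFeasibility(constraint_model)
    joint = ei * pof   # both factors are evaluated at every candidate point

    In the usual terminology, this scheme, where objective and constraint are both evaluated (and both models updated) at every chosen point, is indeed called coupled; a decoupled scheme would decide separately which of the two to evaluate next.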

    opened by pallavimitra 0
  • Ask/tell interface

    Ask/tell interface

    Is there an ask/tell interface? If not, is there a workaround?

    I'd like to initialize the optimizer with already known function evaluations. Likewise, I'd like to query the next data point that should be evaluated, according to the acquisition function.
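
    A hedged sketch of a workaround using the lower-level pieces: known evaluations initialize the model directly, "ask" optimizes the acquisition over the domain, and "tell" pushes new data back in. Method names such as acquisition.evaluate, acquisition.set_data and the MCOptimizer call convention are assumed from the GPflowOpt 0.x API and are not verified here; this is not an official ask/tell interface.

    import numpy as np
    import gpflow
    import gpflowopt

    # Known evaluations (stand-in data) initialize the model directly.
    X = np.random.rand(10, 2)
    Y = np.sum(np.square(X), axis=1, keepdims=True)
    domain = np.sum([gpflowopt.domain.ContinuousParameter('x{}'.format(i), 0., 1.)
                     for i in range(2)])
    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    acquisition = gpflowopt.acquisition.ExpectedImprovement(model)

    # "ask": maximize the acquisition over the domain (optimizers minimize,
    # hence the sign flip).
    acq_opt = gpflowopt.optim.MCOptimizer(domain, 1000)
    result = acq_opt.optimize(lambda Xc: -acquisition.evaluate(Xc))
    x_next = result.x

    # "tell": feed the new evaluation back into the acquisition's models.
    y_next = np.sum(np.square(x_next), axis=1, keepdims=True)  # stand-in evaluation
    acquisition.set_data(np.vstack([X, x_next]), np.vstack([Y, y_next]))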

    opened by moi90 2
  • Discrete variables optimization

    Discrete variables optimization

    Hey there, I found your project, which seems promising for a problem I am currently working on. However, as far as I can see, the option to include discrete variables in the optimization is not yet implemented? Is this correct? And if so, is there any development in this direction currently going on?

    opened by HolmKiilerich 2
  • Can I use PyTorch within the objective function?

    Can I use PyTorch within the objective function?

    Hi, before I start writing my code with GPflowOpt, I need to check whether I can use a PyTorch model within the objective function. My objective function needs the decoder of a VAE defined using PyTorch. In addition to other essential parameters, I want to pass this model as one parameter to the objective function. I understand that GPflowOpt is based on TF, but I am not sure whether the objective can be any function independent of TF. Thanks in advance...
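
    As far as the examples on this page show, the objective is just a numpy-in/numpy-out Python callable, so a PyTorch model can run inside it. A hedged sketch (the decoder below is a hypothetical stand-in for a trained VAE decoder, and the scoring is a placeholder):

    import numpy as np
    import torch

    # Hypothetical stand-in for a trained PyTorch VAE decoder.
    decoder = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.ReLU(),
                                  torch.nn.Linear(8, 4))

    def objective(X):
        X = np.atleast_2d(X)
        with torch.no_grad():
            out = decoder(torch.from_numpy(X).float())   # any PyTorch computation
            score = out.pow(2).sum(dim=1)                # placeholder scoring
        return score.numpy()[:, None]                    # (n, 1) array back to the BO loop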

    opened by yifeng-li 0
Releases(v0.1.0)
  • v0.1.0(Sep 11, 2017)

    Initial version of the GPflowOpt framework, including some basic acquisition functions and support for standard Bayesian Optimization strategies.
