library for nonlinear optimization, wrapping many algorithms for global and local, constrained or unconstrained, optimization

Overview


NLopt is a library for nonlinear local and global optimization, for functions with and without gradient information. It is designed as a simple, unified interface and packaging of several free/open-source nonlinear optimization libraries.

The latest release can be downloaded from the NLopt releases page on GitHub, and the NLopt manual is hosted on readthedocs.

NLopt is compiled and installed with the CMake build system (see CMakeLists.txt file for available options):

git clone https://github.com/stevengj/nlopt
cd nlopt
mkdir build
cd build
cmake ..
make
sudo make install

(To build the latest development sources from git, you will need SWIG to generate the Python and Guile bindings.)

Once it is installed, #include <nlopt.h> in your C/C++ programs and link it with -lnlopt -lm. You may need to use a C++ compiler to link in order to include the C++ libraries (which are used internally by NLopt, even though it exports a C API). See the C reference manual.

There are also interfaces for C++, Fortran, Python, Matlab or GNU Octave, OCaml, GNU Guile, GNU R, Lua, Rust, and Julia. Interfaces for other languages may be added in the future.

Comments
  • nlopt compilation failed at "make" step on AIX7.2

    nlopt compilation failed at "make" step on AIX7.2

    I am trying to compile NLopt on AIX 7.2. The first "cmake" step finished successfully. However, the second "make" step failed with the ERROR: Undefined symbol: __tls_get_addr. Can you help me figure out the issue? Thanks.

    [ 66%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/global.cc.o
    [ 68%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/linalg.cc.o
    [ 70%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/local.cc.o
    [ 71%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/stogo.cc.o
    [ 73%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/tools.cc.o
    [ 75%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/evolvent.cc.o
    [ 76%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/solver.cc.o
    [ 78%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/local_optimizer.cc.o
    [ 80%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/ags.cc.o
    [ 81%] Linking CXX shared library libnlopt
    ld: 0711-317 ERROR: Undefined symbol: __tls_get_addr
    ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
    collect2: error: ld returned 8 exit status
    make[2]: *** [CMakeFiles/nlopt.dir/build.make:767: libnlopt.a] Error 1
    make[2]: Leaving directory '/software/thirdparty/nlopt-master/build'
    make[1]: *** [CMakeFiles/Makefile2:179: CMakeFiles/nlopt.dir/all] Error 2
    make[1]: Leaving directory '/software/thirdparty/nlopt-master/build'
    make: *** [Makefile:163: all] Error 2
    
    opened by bergen288 29
  • Prefer target_include_directories in CMake build script

    Prefer target_include_directories in CMake build script

    This PR facilitates the inclusion of nlopt into other CMake projects.

    • The "private" include directories are defined on a per-target basis via target_include_directories instead of include_directories. This has the advantage that it doesn't "pollute" parent projects.
    • Public include directories are set via the INTERFACE argument of target_include_directories. To keep it simple, the original header nlopt.h is simply copied into the ${PROJECT_BINARY_DIR}/api folder, which is used as the interface include directory for nlopt. Unfortunately, absolute paths cannot be given as interface include directories for installed targets, hence the need for the trick with $<BUILD_INTERFACE:...>. However, this generator expression was introduced in CMake 3.0, so the minimum required version is bumped to 3.0; but maybe you don't want that?
    • Similarly, use target_compile_definitions instead of a global add_definitions.
    • I couldn't make per-target include directories work with SWIG, so there is still a include_directories appearing in the swig/CMakeLists.txt. This is not ideal, but if you have any idea to improve this I'm all ears.

    Now, building nlopt as part of other projects is as simple as

    add_subdirectory(ext/nlopt)
    target_link_libraries(my_program nlopt)
    
    opened by jdumas 15
  • Implement C++ style functors as targets for objectives

    Implement C++ style functors as targets for objectives

    This PR implements a wrapper nlopt::functor_wrapper for C++ style functors via std::function, and two new overloads of nlopt::set_min_objective, nlopt::set_max_objective.

    In order to allow that, a new member field in the myfunc_data struct is added: functor_type functor;, where functor_type is defined as

    typedef std::function<double(unsigned, const double*, double*)> functor_type;
    

    This is not introduced as a pointer (like the other function-pointers are) because std::function is already a container that stores a pointer, and abstracts it away.

    Important: note that the signature for the functor does not include void* data unlike all other function-pointers. That is because it is assumed that the functor already has all the data it needs.

    This PR allows now to write the following:

    class UserDefinedObjective {
      private:
        ImportantData data;
      public:
        UserDefinedObjective() = delete;
        UserDefinedObjective(ImportantData data) :
          data(std::move(data)) {}
        double operator()(unsigned n, const double* x, double* grad) const
        {
          // compute objective(x) and ∇objective(x) using this->data
        }
    };
    
    int main()
    {
      ImportantData data;
      UserDefinedObjective objective(std::move(data));
    
      nlopt::opt optimizer;
      // other nlopt settings
      optimizer.set_max_objective(std::move(objective));
    
      optimizer.optimize(...);
    
      return 0;
    }
    

    Same with C++ lambdas, regular functions and even class member functions (check out std::function).

    This PR also introduces a CMake macro NLOPT_add_cpp_test to quickly add cpp tests, and creates a test cpp_functor.cxx to actually test the new functionality.

    Closes #219 .

    opened by dburov190 14
  • Added example of automatic tests

    Added example of automatic tests

    • Added new_test target to generate test executable
    • Updated CMakeLists.txt to add test as subfolder
    • Added two test using executable new_test
    • Now you can run test suite by just typing either "make test" or "ctest" in the build directory
    opened by boris-il-forte 14
  • C++11 idiom

    C++11 idiom

    I found NLopt to be a great library, but using it through the C++ API is really frustrating. You need to provide a void* for passing data to the objective/constraint functions, with all the problems that may cause.

    Also, you need to pass a function pointer, so you can't use lambda functions with captures (which would avoid passing the void*).

    I've written a thin wrapper on top of NLopt which tries to offer a more modern C++ API, enabling the use of lambdas and hiding the use of void* from the API.

    Would you be interested in having this merged?

    opened by jjcasmar 13
  • DIRECT takes impossibly long to reach xtol

    DIRECT takes impossibly long to reach xtol

    Unless I'm mistaken, the XTOL stopping criterion for DIRECT (the cdirect version) can't be used when searching large multi-dimensional spaces, because it requires all hyper-rectangles everywhere to be divided down to below the x-tolerances before stopping. This will take an impossibly long time.

    Wouldn't it make more sense to stop as soon as one (or a few) of the rectangles is small? This could be done by inverting some of the logic for the xtol_reached variable within the cdirect.c function divide_good_rects().

    I can attach some test code here or work towards a pull request if that would be helpful.

    Cheers, Joel

    opened by jcottrell-ellex 11
  • Website & download URLs down

    Website & download URLs down

    I get the following while trying to install:

    configure: Need to download and build NLopt
    trying URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
    Warning in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz",  :
      unable to connect to 'ab-initio.mit.edu' on port 80.
    Error in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz",  : 
      cannot open URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
    Execution halted
    /bin/tar: This does not look like a tar archive
    
    gzip: stdin: unexpected end of file
    /bin/tar: Child returned status 1
    /bin/tar: Error is not recoverable: exiting now
    

    The website, http://ab-initio.mit.edu/nlopt/, also does not display in my browser (ERR_CONNECTION_TIMED_OUT)

    opened by IljaKroonen 11
  • Prevent a conditional jump based on uninitialized value in nlopt_create.

    Prevent a conditional jump based on uninitialized value in nlopt_create.

    nlopt_set_lower_bounds1 reads from opt->ub before it has ever been written.

    We caught this in our nightly memcheck CI build for RobotLocomotion/drake: https://drake-cdash.csail.mit.edu/viewDynamicAnalysisFile.php?id=14355

    Contributes to RobotLocomotion/drake#3873


    This change is Reviewable

    opened by david-german-tri 11
  • New release

    New release

    I'm running into issue https://github.com/stevengj/nlopt/issues/33. Could you make a new release that includes that fix, please? It would fix dozens of R packages on NixOS.

    Looks like the last one was in 2014!

    opened by langston-barrett 11
  • Generate missing nlopt.hpp and nlopt.f (CMake), use GNUInstallDirs (CMake), fix for MSVC 2015

    Generate missing nlopt.hpp and nlopt.f (CMake), use GNUInstallDirs (CMake), fix for MSVC 2015

    • Create missing nlopt.hpp and nlopt.f when building with CMake
    • Use CMake's GNUInstallDirs (e.g., ${CMAKE_INSTALL_LIBDIR} instead of lib) depending on the platform
    • Fix for the MSVC 2015 compiler

    opened by rickertm 11
  • Running tests with CTest is broken

    Running tests with CTest is broken

    If I check out NLopt, build it, and try to run the tests with ctest, it fails horribly because testopt was not built.

    The way this works is a non-standard workflow and prevents running the tests e.g. when NLopt is built as a CMake external project.

    Please, either just build testopt by default (i.e. remove EXCLUDE_FROM_ALL), or else add an option (e.g. NLOPT_ENABLE_TESTS) that controls whether testopt is built by default and whether any add_test are invoked.

    opened by mwoehlke-kitware 10
  • undefined reference to `nlopt_get_errmsg'

    undefined reference to `nlopt_get_errmsg'

    Hi,

    I'm getting the following linker error message :

    in function `nlopt::opt::get_errmsg() const':
    Hamiltonian.cpp:(.text._ZNK5nlopt3opt10get_errmsgEv[_ZNK5nlopt3opt10get_errmsgEv]+0x5b): undefined reference to `nlopt_get_errmsg'
    collect2: error: ld returned 1 exit status

    when trying to compile code that calls NLopt.

    I compiled NLopt 2.7.1 using the make install command and got:

    Install the project...
    -- Install configuration: "Release"
    -- Installing: /usr/local/lib/pkgconfig/nlopt.pc
    -- Installing: /usr/local/include/nlopt.h
    -- Installing: /usr/local/include/nlopt.hpp
    -- Installing: /usr/local/include/nlopt.f
    -- Installing: /usr/local/lib/libnlopt.so.0.11.1
    -- Installing: /usr/local/lib/libnlopt.so.0
    -- Set runtime path of "/usr/local/lib/libnlopt.so.0.11.1" to "/usr/local/lib"
    -- Installing: /usr/local/lib/libnlopt.so
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptLibraryDepends.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptLibraryDepends-release.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptConfig.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptConfigVersion.cmake
    -- Installing: /usr/local/share/man/man3/nlopt.3
    -- Installing: /usr/local/share/man/man3/nlopt_minimize.3
    
    

    and my CMakeLists.txt looks like

    cmake_minimum_required(VERSION 3.13.4)
    project(myProject)
    
    set(CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
    set(CMAKE_CXX_STANDARD 20)
    
    add_executable(myProject main.cpp)
    target_link_libraries(myProject PUBLIC nlopt ${CPLEX_LIBRARIES} ${CMAKE_DL_LIBS})
    

    I found that someone already had a similar issue, but the proposed fix does not seem to apply to my setup

    opened by Griset 0
  • Reduce number of gradient calculations in LD-MMA

    Reduce number of gradient calculations in LD-MMA

    The Svanberg MMA paper notes that for the CCSA algorithms described, gradients are only required in the outer iterations. "Each new inner iteration requires function values, but no derivatives."

    However, it appears that the implementation of LD-MMA calculates a gradient in the inner as well as the outer iteration. I request that the implementation be updated to reduce gradient calculation.

    I believe this is a two-line change: This line could be changed to something like fcur = f(n, xcur, NULL, f_data);, and then after line 299 in the same file one could add the code if (inner_done) { fcur = f(n, xcur, dfdx_cur, f_data); }.

    This would duplicate objective calls once per outer iteration, but since gradient calculations tend to dominate run-time in objective function calls, there should be overall net savings whenever more than one inner iteration is used.

    I regret not being able to try this out myself. I don't have C set up on my machine and have never coded in C, so I would be extremely slow at running tests (I'm using the Python API). Thanks for considering!

    opened by cpixton 0
  • Simple Academic Use Case with unexpected MMA behavior.

    Simple Academic Use Case with unexpected MMA behavior.

    Hello, while doing some tests with multiple starting guesses and multiple optimization algorithms, I found a case in which your solver behaves unexpectedly. Luckily, this is a simple analytical example that is easily reproducible.

    import nlopt
    from numpy import array

    def f(x, grad):
        if grad.size > 0:
            grad[0] = 2. * (x[0] - 1.)
            grad[1] = 2. * (x[1] - 1.)
        return (x[0] - 1.) ** 2. + (x[1] - 1.) ** 2.

    def g(x, grad):
        if grad.size > 0:
            grad[0] = 1.
            grad[1] = 1.
        return x[0] + x[1] - 1.

    if __name__ == "__main__":
        algorithm = nlopt.LD_MMA
        n = 2
        opt = nlopt.opt(algorithm, n)
        lb = array([0., 0.])
        ub = array([1., 1.])
        x0 = array([0.25, 1])
        opt.set_min_objective(f)
        opt.set_lower_bounds(lb)
        opt.set_upper_bounds(ub)
        opt.add_inequality_constraint(g, 1e-3)
        tol = 1e-6
        maxeval = 50
        opt.set_ftol_rel(tol)
        opt.set_ftol_abs(tol)
        opt.set_xtol_rel(tol)
        opt.set_xtol_rel(tol)
        opt.set_maxeval(maxeval)
        opt.set_param("verbosity", 10000)
        opt.set_param("inner_maxeval", 10)
        xopt = opt.optimize(x0)
        print(xopt)
        opt_val = opt.last_optimum_value()
        print(opt_val)
        result = opt.last_optimize_result()
        print(result)

    The solution to this problem is simply [0.5, 0.5], correctly found by LD_MMA from most initial guesses, but not from the starting guess [0.25, 1].

    In this case the log looks like this:

    MMA dual converged in 6 iterations to g=0.914369:
    MMA y[0]=1e+40, gc[0]=0.116025
    MMA outer iteration: rho -> 0.1
    MMA rhoc[0] -> 0.1
    MMA dual converged in 3 iterations to g=1.34431:
    MMA y[0]=1e+40, gc[0]=-0.269712
    MMA outer iteration: rho -> 0.01
    MMA rhoc[0] -> 0.01
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.6
    MMA dual converged in 3 iterations to g=2.23837:
    MMA y[0]=1e+40, gc[0]=-0.378669
    MMA outer iteration: rho -> 0.001
    MMA rhoc[0] -> 0.001
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.72
    MMA dual converged in 3 iterations to g=2.79213:
    MMA y[0]=1e+40, gc[0]=-0.454075
    MMA outer iteration: rho -> 0.0001
    MMA rhoc[0] -> 0.0001
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.864
    MMA dual converged in 3 iterations to g=3.13745:
    MMA y[0]=1e+40, gc[0]=-0.524222
    MMA outer iteration: rho -> 1e-05
    MMA rhoc[0] -> 1e-05
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 1.0368
    MMA dual converged in 3 iterations to g=2.46249:
    MMA y[0]=1e+40, gc[0]=-0.587037
    MMA outer iteration: rho -> 1e-05
    MMA rhoc[0] -> 1e-05
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 1.24416
    MMA dual converged in 3 iterations to g=1.81718:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.0001
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.81722:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.001
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.81766:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.01
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.82206:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.1
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.86611:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.410949
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=2.01828:
    MMA y[0]=1e+40, gc[0]=-0.625775
    [0.1160254 0.8660254]
    0.7993602791855875
    3

    Your solver stops at the design point [0.1160254 0.8660254], which is neither a local minimum, nor a saddle point of the objective, nor a KKT point. I would like to have your insight on this behavior. BRs, Simone Coniglio

    opened by SimoneConiglio 0
  • How to set discrete values for NLopt

    How to set discrete values for NLopt

    Hi, I have an optimization problem to solve. I have hundreds of input variables, and the values of these variables can only be 1 or 0.
    How can I tell this to the NLopt package? I tried it with an equality constraint, but it does not work. Here is my little code in R:

    opt_test<-function(boundary){
      
      target<-rep(c(0,1),100)
      sum_of_square<-0
      for (i in 1:length(boundary)){
        sum_of_square<-sum_of_square+sum((boundary[i]-target[i])^2)
      }
      #print(boundary)
      #print(sum_of_square)
      return(sum_of_square)
    }
    
    opt_test(rep(c(1,0),100))
    
    eval_g_eq_test<-function(x) {
      
      ret<-1
      for (i in 1:length(x)){
        if (x[i]==1) {ret<-0}
        if (x[i]==0) {ret<-0}
      }
      return(ret)
    }
    
    opts <- list("algorithm"="NLOPT_GN_ISRES"   # NLOPT_GN_ORIG_DIRECT NLOPT_GNL_DIRECT_NOSCAL, NLOPT_GN_DIRECT_L_NOSCAL, and NLOPT_GN_DIRECT_L_RAND_NOSCAL NLOPT_GD_STOGO, or NLOPT_GD_STOGO_RAND
                 #works well: NLOPT_LN_PRAXIS   NLOPT_LN_COBYLA  
                 #NLOPT_LN_NEWUOA !!!!! +bound
                 #NLOPT_LN_BOBYQA   !si only
                 # nloptr.print.options()   all possible options
                 ,xtol_rel=1e-8
                 #stopval=as.numeric(stopval),
                 ,maxeval=2000
                 ,print_level=1
    )
    x0<-rep(0,200)
    lb<-rep(0,200)
    ub<-rep(1,200)
    
    jo<- nloptr(x0=x0
                ,eval_f=opt_test
                ,lb = lb
                ,ub = ub
                #,eval_g_eq=eval_g_eq_test()==0
                ,opts=opts
    )
    
    opened by Axyxo 0
  • Error installing `nloptr` from source on CentOS cluster

    Error installing `nloptr` from source on CentOS cluster

    I am trying to install nloptr from source in R version 4.1.3 on a cluster (CentOS). However I receive the following error:

    /cvmfs/argon.hpc.uiowa.edu/2022.1/prefix/usr/lib/gcc/x86_64-pc-linux-gnu/9.4.0/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lnlopt
    collect2: error: ld returned 1 exit status
    make: *** [/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/r-4.1.3-ljofaul/rlib/R/share/make/shlib.mk:10: nloptr.so] Error 1
    ERROR: compilation failed for package ‘nloptr’
    

    I am on a university cluster and cannot run sudo commands. After encouragement from @eddelbuettel to contact my sys admin, he figured out the issue. Here's what he wrote:


    The build environment for nloptr uses pkg-config to get information about nlopt. It turns out that the pkg-config file has an error. It has

    libdir=${exec_prefix}/lib

    but the library is actually located in

    libdir=${exec_prefix}/lib64

    That does not show up in the packaging environment because LIBRARY_PATH is set for the dependency chain. I will need to fix the pkg-config file in the package recipe, but you can work around it as follows:

    1. Load environment modules:
    module load stack/2022.1
    module load nlopt
    
    2. Set LIBRARY_PATH so the linker can find the library while launching the R session (single line below):
    LIBRARY_PATH=$ROOT_NLOPT/lib64:$LIBRARY_PATH R
    
    3. Install nloptr in the R console (single line below):
    install.packages(verbose=1,'nloptr')
    

    I originally posted this issue in the nloptr repo: https://github.com/astamm/nloptr/issues/123. However, @eddelbuettel encouraged me to post an issue here because we suspect that the issue may be the pkg-config file created by nlopt.

    Here's the output of some of my commands in CentOS:

    [[email protected] ~]$ module load stack/2022.1
    
    The following have been reloaded with a version change:
      1) stack/2020.1 => stack/2022.1
    
    [[email protected] ~]$ module load r/4.1.3_gcc-9.4.0
    [[email protected] ~]$ module load nlopt
    [[email protected] ~]$ R CMD config --all | grep lib64
    LIBnn = lib64
    [[email protected] ~]$ pkg-config --libs nlopt
    -L/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/nlopt-2.7.0-u5x4377/lib -lnlopt
    

    We think we would want that to be (https://github.com/astamm/nloptr/issues/123#issuecomment-1317199965):

    -L/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/nlopt-2.7.0-u5x4377/lib64 -lnlopt
    

    That is, /lib64 in my case instead of /lib. @eddelbuettel, please clarify if I missed anything or got anything wrong!

    opened by isaactpetersen 4
  • Nonlinear constraints get violated in the result.

    Nonlinear constraints get violated in the result.

    Hi there: During my optimization, I applied LN_COBYLA since it supports arbitrary nonlinear constraints. My "constraint function" is actually a collision-avoidance function. The function returns 10.0 when a collision occurs, so that the constraint should be viewed as unsatisfied. However, the result in my experiment shows the constraint is not satisfied, yet the algorithm still finishes and converges. Can anyone give me some advice? Thanks sincerely!

    opened by Lbaron980810 2
Releases(v2.7.1)
Owner
Steven G. Johnson