Shōgun

Overview

The SHOGUN machine learning toolbox


Unified and efficient Machine Learning since 1999.

  • Latest release: see the Releases section below
  • Cite Shogun: via the DOI badge
  • Develop branch build status: Build status and codecov badges
  • Donate to Shogun: via NumFOCUS
  • Buildbot: https://buildbot.shogun.ml

Interfaces


Shogun is implemented in C++ and offers automatically generated, unified interfaces to Python, Octave, Java/Scala, Ruby, C#, R, and Lua. We are currently working on adding more languages, including JavaScript, D, and Matlab.

Interface    Status
----------   ---------------------------------------------------
Python       mature (no known problems)
Octave       mature (no known problems)
Java/Scala   stable (no known problems)
Ruby         stable (no known problems)
C#           stable (no known problems)
R            beta (most examples work, static calls unavailable)
Perl         pre-alpha (work-in-progress quality)
JS           pre-alpha (work-in-progress quality)

See our website for examples in all languages.
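
For orientation, here is a rough sketch of what a Python session might look like. The module and class names below (RealFeatures, BinaryLabels, GaussianKernel, LibSVM) follow the older modular-interface examples and are assumptions that vary between Shogun versions, so check the cookbook for the current API rather than treating this as canonical:

    import numpy as np
    # NOTE: module and class names are assumptions based on the older modular
    # interface; newer releases expose a factory-style API instead.
    from shogun import RealFeatures, BinaryLabels, GaussianKernel, LibSVM

    X_train = np.random.randn(2, 100)                 # examples are column vectors
    y_train = np.where(np.random.randn(100) > 0, 1.0, -1.0)

    feats = RealFeatures(X_train)
    labels = BinaryLabels(y_train)

    kernel = GaussianKernel(feats, feats, 1.0)         # Gaussian kernel, width 1.0
    svm = LibSVM(1.0, kernel, labels)                  # regularisation C = 1.0
    svm.train()

    predictions = svm.apply(RealFeatures(np.random.randn(2, 10)))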

Platforms


Shogun is supported under GNU/Linux, macOS, FreeBSD, and Windows.

Directory Contents


The following directories are found in the source distribution. Note that some folders are submodules that can be checked out with git submodule update --init.

  • src - source code, separated into C++ source and interfaces
  • doc - readmes (doc/readme, submodule), Jupyter notebooks, cookbook (API examples), licenses
  • examples - example files for all interfaces
  • data - data sets (submodule, required for examples)
  • tests - unit tests and continuous integration of interface examples
  • applications - applications of SHOGUN (outdated)
  • benchmarks - speed benchmarks
  • cmake - cmake build scripts

License


Shogun is distributed under the BSD 3-clause license, with optional GPL3 components. See doc/licenses for details.

Comments
  • Implement heterogeneous (GPU+CPU) dot product computation routines (Deep learning project)

    The dot product operation is one of the major building blocks for deep neural network architectures. The routine implemented in this task should be able to handle batch computation of dot products. For references, see Theano, CUDA, OpenCL, and ViennaCL. It is also worth implementing some tests to measure performance and memory-specific behaviour.
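
    As a rough illustration of the kind of batched operation meant here, a plain NumPy sketch on the CPU (not the Shogun routine itself, which would additionally dispatch to GPU backends such as ViennaCL):

    import numpy as np

    # A batch of dot products: dots[i] = A[i] . B[i] for every row i.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((128, 256))
    B = rng.standard_normal((128, 256))

    dots = np.einsum('ij,ij->i', A, B)              # batched dot products in one call
    assert np.allclose(dots, (A * B).sum(axis=1))   # same result via the element-wise route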

    Please join the discussion before starting work on any code. We expect to refine the task through further discussion.

    good first issue 
    opened by lisitsyn 101
  • #2068 Simple Gaussian Process Regression on Movielens.

    How do I commit data to shogun-data? Do I need to open another pull request on shogun-data?

    This is a simple example of using Gaussian Process Regression on Movielens.

    opened by pl8787 61
  • Add kmeans page to cookbook

    • There's no CLabels parameter in class CKMeans, so I can't find a way to use apply_* and eval.evaluate to compare the test and training datasets.
    • That said, I don't see why CKMeans cannot have CLabels - we could just label the clusters 1..N.
    • I thought about evaluating the clustering performance by calculating the Euclidean distances between the centers of the training dataset and the test dataset, but there's no handy method for that at the moment (see the sketch after this list).
    • I don't see the difference between the datasets fm_train_real.dat and classifier_binary__2d_linear_features_train.dat, but I think it doesn't really matter which one to use?
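
    The centre-distance idea mentioned above, as a minimal NumPy sketch (hypothetical helper names, not Shogun API):

    import numpy as np

    def cluster_centres(X, assignments, k):
        """Mean of the points assigned to each of the k clusters (rows of X are
        samples); assumes every cluster has at least one point."""
        return np.vstack([X[assignments == c].mean(axis=0) for c in range(k)])

    def centre_distances(centres_a, centres_b):
        """Pairwise Euclidean distances between two sets of cluster centres."""
        diff = centres_a[:, None, :] - centres_b[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))
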
    opened by OXPHOS 59
  • Add meta example features-char-string

    A simple meta example for CStringFeatures.

    I would like to make changes and add an output file for an integration test, but I am not sure whether the current outputs are enough for that. Currently, it stores "max_string_length", "number_of_strings" and "length_of_first_string". I don't think it is practically possible to check all the values of "strings".

    However, if you don't have a better idea, I could add eight variables that store the value of the first vector before and after the change to "test".

    opened by avramidis 56
  • Refactor laplacian

    @karlnapf take a look at this. I will send the link for the notebook tomorrow.

    Note that the original implementation of LaplacianInferenceMethod in Shogun used log(lu.determinant()) to compute the log-determinant, which is not numerically stable. (In fact, this implementation does not follow the GPML code.)

    Maybe MatrixOperations.h will be merged into Math.h. However, I think in that case the Math.h file would need to include the Eigen3 header.

    Another issue is that I currently use MatrixXd and VectorXd to pass variables in MatrixOperations.h; maybe SGVector and SGMatrix would be better. (Should I use "SGVector &" or "SGVector"?) I do not know whether passing an SGVector to a function copies the elements of the SGVector.
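
    To illustrate the numerical-stability point about log(lu.determinant()): a generic NumPy sketch (not the Shogun or GPML code) where log(det(A)) overflows while the Cholesky-based log-determinant stays finite.

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((500, 500))
    A = B @ B.T + 500.0 * np.eye(500)            # symmetric positive-definite matrix

    naive = np.log(np.linalg.det(A))             # det(A) overflows, so this is inf
    L = np.linalg.cholesky(A)
    stable = 2.0 * np.sum(np.log(np.diag(L)))    # log|A| = 2 * sum(log(diag(L)))

    print(naive, stable)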

    opened by yorkerlin 54
  • Implement an example of variational approximation for binary GP classification

    This task is for the Variational Learning for Recommendations with Big Data project: http://shogun-toolbox.org/page/Events/gsoc2014_ideas#variational_learning

    Our goal is to reproduce a simple example of variational approximation. We will use a GP prior with zero mean and a linear kernel, and generate synthetic data using a logit likelihood. We will then compute an approximate Gaussian posterior N(m,V) with the restriction that the diagonal of V is 1. Our goal is to find m and V. We will use the KL method of Kuss and Rasmussen, 2005.

    I have demo code in MATLAB here, and the hope is to reproduce this using Shogun: https://github.com/emtiyaz/VariationalApproxExample

    You need to do the following two main tasks: (1) write a function similar to ElogLik.m for the logit likelihood; (2) interface the optimization in example.m using Shogun's LBFGS implementation.

    Please let us know if you are working on it, and feel free to ask @karlnapf or me any questions.
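
    A hedged NumPy sketch of the synthetic setup described above (shapes and seed are assumptions; this is not the ElogLik.m / example.m code): sample a latent function from a zero-mean GP prior with a linear kernel, then draw binary labels through a logit likelihood.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 3
    X = rng.standard_normal((n, d))

    K = X @ X.T + 1e-6 * np.eye(n)                 # linear kernel plus jitter
    f = rng.multivariate_normal(np.zeros(n), K)    # latent sample from the GP prior

    p = 1.0 / (1.0 + np.exp(-f))                   # logit (Bernoulli-logistic) likelihood
    y = np.where(rng.random(n) < p, 1, -1)         # binary labels in {-1, +1}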

    Tag: Development Task good first issue 
    opened by emtiyaz 51
  • cv::Mat to CDenseFeature conversion Factory and vice versa.

    I have made a factory which directly converts any cv::Mat object into any (required) type of CDenseFeatures, and CDenseFeatures<float64_t> into the required type of cv::Mat.

    opened by kislayabhi 48
  • Added Documentation regarding issue #1878

    Added a Python notebook named 'pca_notebook.ipynb' in doc/ipython-notebooks/pca. Implemented PCA on toy data for 2D-to-1D and 3D-to-2D projection. Implemented Eigenfaces for data compression and face recognition using the att_face dataset.

    opened by kislayabhi 48
  • WIP Write Generalized Linear Machine class

    #5005 #5000 This is the basic framework for the Generalized Linear Machine class. This class is supposed to implement the following distributions: BINOMIAL, GAMMA, SOFTPLUS, PROBIT, POISSON.

    The code has been written with this reference in mind: the PyGLMNet library. However, I have only written code for the Poisson distribution so far.
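
    For the Poisson case, a hypothetical NumPy sketch of the objective and gradient (assumed names, loosely following the pyglmnet-style formulation; not the code in this PR):

    import numpy as np

    def poisson_glm_nll_grad(X, y, w, b):
        """Negative log-likelihood (up to the log(y!) constant) and gradients for a
        Poisson GLM with exponential inverse link: mu = exp(X @ w + b)."""
        eta = X @ w + b
        mu = np.exp(eta)
        nll = np.sum(mu - y * eta)
        grad_w = X.T @ (mu - y)
        grad_b = np.sum(mu - y)
        return nll, grad_w, grad_b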

    THIS IS A WORK IN PROGRESS

    This PR is opened so that a discussion can be held about the implementation of the GLM and so that some feedback can be obtained on my code. @lgoetz @geektoni

    TODO

    • [x] Write code.
    • [x] Add basic test.
    • [x] Add gradient test.
    • [X] Link github gists for generating data.
    • [X] Check why the SGObject test is failing.
    • [ ] Use FeatureDispatchCRTP.
    opened by Hephaestus12 47
  • Added DEPRECATED versions of statistic and variance in streaming MMD

    DEPRECATED versions are available with

    • statistic type S_UNBIASED_DEPRECATED
    • null variance estimation method NO_PERMUTATION_DEPRECATED
    • null approximation method MMD1_GAUSSIAN_DEPRECATED
    opened by lambday 47
  • One issue about using Shogun's optimizers in target languages

    @karlnapf In CInference class

    virtual void register_minimizer(Minimizer* minimizer);
    

    In Minimizer class

    #ifndef MINIMIZER_H
    #define MINIMIZER_H
    #include <shogun/lib/config.h>
    namespace shogun
    {
    
    /** @brief The minimizer base class.
     *
     */
    class Minimizer
    {
    public: 
            /** Do minimization and get the optimal value 
             * 
             * @return optimal value
             */
            virtual float64_t minimize()=0;
    
            virtual ~Minimizer() {}
    };
    
    }
    #endif
    

    Note that

    • CInference is a sub-class of SGObject
    • LBFGSMinimizer class is a sub-class of Minimizer
    • CSingleLaplaceInferenceMethod is a sub-class of CInference
    • Minimizer is NOT a sub-class of CSGObject

    The following lines of C++ code work.

    CSingleLaplaceInferenceMethod* inf = new CSingleLaplaceInferenceMethod();
    LBFGSMinimizer* opt=new LBFGSMinimizer();
    inf->register_minimizer(opt);
    

    However, the following lines of Python code do not work

    inf=SingleLaplaceInferenceMethod()
    opt = LBFGSMinimizer()
    inf.register_minimizer(opt)
    

    Error output:

    TypeError: in method 'Inference_register_minimizer', argument 2 of type 'Minimizer *'
    
    Type: Bug 
    opened by yorkerlin 44
  • Official Website shogun.ml is unavailable

    nslookup.exe www.shogun.ml
    

    Server: cra-123-dns  Address: 10.20.110.123  Non-authoritative answer:  Name: shogun.ml  Address: 213.239.207.21  Aliases: www.shogun.ml


    ping 213.239.207.21

    Pinging 213.239.207.21 with 32 bytes of data: Request timed out. Request timed out. Request timed out. Request timed out. Ping statistics for 213.239.207.21: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss).

    opened by MIngPAPA 0
  • Frustrated with building shogun on RHEL9

    I'm having all sorts of issues building shogun on RHEL9. Is shogun supported on RHEL9?

    Also, is there a place where you document the supported versions of dependencies? I installed the latest version of Eigen3 just to find out it was not compatible with Shogun. The GPL stuff is also confusing for anyone building your code for the first time.

    opened by omidnabi 0
  • Website Not Secure warning

    Using macOS Monterey and Chrome v104.0.5112.101.

    Website linked from github: https://www.shogun.ml/

    Website linked from NumFocus: https://www.shogun-toolbox.org/

    opened by droumis 0
  • Machine objects should return a reference to themselves

    Machine objects should return a reference to themselves (like in sklearn).

    auto machine = pipeline->over(std::make_shared<NormOne>())
                       ->composite()
                       ->over(std::make_shared<MulticlassLibLinear>())
                       ->over(std::make_shared<MulticlassOCAS>())
                       ->then(std::make_shared<MeanRule>());

    machine->train(train_feats, train_labels);
    auto pred = machine->apply_multiclass(test_feats);
    

    should be simply

    auto pred = pipeline->over(std::make_shared<NormOne>())
                    ->composite()
                    ->over(std::make_shared<MulticlassLibLinear>())
                    ->over(std::make_shared<MulticlassOCAS>())
                    ->then(std::make_shared<MeanRule>())
                    ->train(train_feats, train_labels)
                    ->apply_multiclass(test_feats);
    

    This should be a simple fix to the Machine::train signature, but it might break some code.
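
    A tiny Python analogy of the requested behaviour (hypothetical stand-in classes, not the actual Shogun API): having train() return the machine itself is what makes the sklearn-style chaining possible.

    class Machine:
        """Hypothetical stand-in for a Shogun machine, not the real class."""
        def train(self, features, labels):
            self.model_ = (features, labels)    # placeholder for the actual fitting
            return self                         # returning self enables chaining

        def apply_multiclass(self, features):
            return [0 for _ in features]        # placeholder predictions

    pred = Machine().train([[1.0], [2.0]], [0, 1]).apply_multiclass([[1.5]])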

    good first issue 
    opened by gf712 7
  • Error freeing memory LibSVM when exiting sample application

    I built shogun master on Windows 10 x64 with Visual Studio 2019. I built the sample classifier_minimal_svm; it works, but I get this error when exiting the application:

    Critical error detected c0000374
    classifier_minimal_svm.exe has triggered a breakpoint.
    
    Exception thrown at 0x00007FFC395DB0B9 (ntdll.dll) in classifier_minimal_svm.exe: 0xC0000374: A heap has been corrupted 
    (parameters: 0x00007FFC396427F0).
    Unhandled exception at 0x00007FFC395DB0B9 (ntdll.dll) in classifier_minimal_svm.exe: 0xC0000374: A heap has been corrupted (parameters: 0x00007FFC396427F0).
    

    This is the stack trace:

    ntdll.dll!00007ffc395db0b9()	Unknown
    ntdll.dll!00007ffc395db083()	Unknown
    ntdll.dll!00007ffc395e390e()	Unknown
    ntdll.dll!00007ffc395e3c1a()	Unknown
    ntdll.dll!00007ffc3957ecb1()	Unknown
    ntdll.dll!00007ffc3958ce62()	Unknown
    ucrtbase.dll!00007ffc357ec7eb()	Unknown
    classifier_minimal_svm.exe!shogun::sg_free(void * ptr) Line 186	C++
    classifier_minimal_svm.exe!shogun::sg_generic_free<int,0>(int * ptr) Line 124	C++
    classifier_minimal_svm.exe!shogun::SGVector<int>::free_data() Line 405	C++
    classifier_minimal_svm.exe!shogun::SGReferencedData::unref() Line 102	C++
    classifier_minimal_svm.exe!shogun::SGVector<int>::~SGVector<int>() Line 173	C++
    classifier_minimal_svm.exe!shogun::KernelMachine::~KernelMachine() Line 79	C++
    classifier_minimal_svm.exe!shogun::SVM::~SVM() Line 40	C++
    classifier_minimal_svm.exe!shogun::LibSVM::~LibSVM() Line 37	C++
    classifier_minimal_svm.exe!shogun::LibSVM::`scalar deleting destructor'(unsigned int)	C++
    classifier_minimal_svm.exe!std::_Destroy_in_place<shogun::LibSVM>(shogun::LibSVM & _Obj) Line 269	C++
    classifier_minimal_svm.exe!std::_Ref_count_obj2<shogun::LibSVM>::_Destroy() Line 1446	C++
    classifier_minimal_svm.exe!std::_Ref_count_base::_Decref() Line 542	C++
    classifier_minimal_svm.exe!std::_Ptr_base<shogun::LibSVM>::_Decref() Line 776	C++
    classifier_minimal_svm.exe!std::shared_ptr<shogun::LibSVM>::~shared_ptr<shogun::LibSVM>() Line 1034	C++
    classifier_minimal_svm.exe!main(int argc, char * * argv) Line 41	C++
    [Inline Frame] classifier_minimal_svm.exe!invoke_main() Line 78	C++
    classifier_minimal_svm.exe!__scrt_common_main_seh() Line 288	C++
    

    I see that in a previous release there was this line of code, which has now been removed:

    // free up memory
    SG_UNREF(svm);
    
    Type: Bugfixing Tag: Cleanup 
    opened by spiovesan 15
  • Make Machine class stateless

    @LiuYuHui's main GSoC project. The Machine class becomes stateless with respect to Features and Labels, which means that the user has to provide features and labels when fitting a Machine. This is essentially done by adding the notion of (Non)Parametric Machines.

    Tag: GSoC 
    opened by gf712 2
Releases (latest: shogun_6.1.4)
  • shogun_6.1.4 (Jul 5, 2019)

  • shogun_6.1.3 (Dec 7, 2017)

    Features

    • Drop all <math.h> function calls [Viktor Gal]
    • Use C++11 std::isnan, std::isfinite, std::isinf [Viktor Gal]

    Bugfixes

    • Port ipython notebooks to be python3 compatible [Viktor Gal]
    • Use the shogun-static library on Windows when linking the interface library [Viktor Gal]
    • Fix python typemap when compiling with MSVC [Viktor Gal]
    • Fix ShogunConfig.cmake paths [Viktor Gal]
    • Fix meta example parser bug in parallel builds [Esben Sørig]
  • shogun_6.1.2 (Nov 29, 2017)

  • shogun_6.1.1 (Nov 29, 2017)

    Bugfixes

    • Install headers of GPL models when LICENSE_GPL_SHOGUN is enabled [Viktor Gal]
    • Always turn on LIBSHOGUN_BUILD_STATIC when compiling with MSVC [Viktor Gal]
    • Fix ipython notebook errors [Viktor Gal]
  • shogun_6.1.0 (Nov 28, 2017)

    • This release is dedicated to Heiko's successful PhD defense!

    • Add conda-forge packages, to get prebuilt binaries via the cross-platform conda package manager [Dougal Sutherland]

    • Change interface cmake variables to INTERFACE_*

    • Move GPL code to gpl submodule [Heiko Strathmann]

    Features

    • Enable using BLAS/LAPACK from Eigen by default [Viktor Gal]
    • Add iterators to SGVector and SGMatrix [Viktor Gal]
    • Significantly lower the runtime of KernelPCA (GSoC '17) [Michele Mazzoni]
    • Refactor FisherLDA and LDA solvers (GSoC '17) [Michele Mazzoni]
    • Add automated test for trained model serialization (GSoC '17) [Michele Mazzoni]
    • Enable SWIG director classes by default [Viktor Gal]
    • Vectorize DotFeatures covariance/mean calculation [Michele Mazzoni]
    • Support for premature stopping of model training (GSoC '17) [Giovanni De Toni]
    • Add support for observable variables (GSoC '17) [Giovanni De Toni]
    • Use TFLogger to serialize observed variables for TensorBoard (GSoC '17) [Giovanni De Toni]
    • Drop CMath::dot and SGVector::dot and use linalg::dot [Viktor Gal]
    • Added class probabilities for BaggingMachine (GSoC '17) [Olivier Nguyen]

    Bugfixes

    • Fix transpose bug in Ruby typemap for matrices [Elias Saalmann]
    • Fix MKL detection and linking; use mkl_rt when available [Viktor Gal]
    • Fix Windows static linking [Viktor Gal]
    • Fix SWIG interface compilation on Windows [qcrist]
    • Fix CircularBuffer bug that broke parsing of big CSV and LibSVM files #1991 [Viktor Gal]
    • Fix R interface when using clang to compile the interface [Viktor Gal]
  • shogun_6.0.0 (Apr 23, 2017)

    • Add native MS Windows support [Viktor Gal]
    • Shogun requires the compiler to support C++11 features
    • Shogun cloud online: Jupyter notebook with Shogun from the browser, https://cloud.shogun.ml

    Features

    • LDA now supports 32, 64 and 128 bit floating point numbers [Chris Goldsworthy]
    • Add SHOGUN_NUM_THREADS environment variable to control the number of threads used by the models at runtime [Viktor Gal]
    • Added Scala Interface to the build [Abhinav Rai]
    • Major re-writing and API changes in kernel statistical hypothesis testing framework, significant speed up in permutation test for quadratic time MMD, new kernel selection algorithms for quadratic time MMD [Soumyajit De]

    Bugfixes:

    • Fix build error of R interface for R>=3.3.0, #3460 [Heiko Strathmann]
    • Make the code compatible with Eigen 3.3.0 [Viktor Gal]
    • Fix number of CPUs detected on Linux [Viktor Gal]
    • Fix multi-threading in KMeansBase [Viktor Gal]
    • Make ExponentialARDKernel thread-safe [Viktor Gal]
    • Make PRNG thread-safe [Viktor Gal]
    • Fix python interface when using libshogun compiled with OpenMP [Viktor Gal]
    • Fix CART to work with cross-validation [Fernando Iglesias]

    Cleanup, efficiency updates, and API Changes:

    • Port multi-threading to use OpenMP backend in Kernel [Viktor Gal]
    • Fix false sharing in EuclideanDistance [Viktor Gal]
    • Fix out of source build of the whole project [Viktor Gal]
    • Add LIBSHOGUN cmake flag to turn off libshogun compilation [Viktor Gal]
    • Export the Shogun target with cmake to enable building modular interfaces against a pre-compiled libshogun on the system, without requiring libshogun itself to be recompiled [Viktor Gal]

    Notes

    • Contains major rewrite and clean-up of developer documentation in doc/readme [Heiko Strathmann, Lea Götz]
    • Known issue: Octave multithreaded crashes, currently bindings are initialized single-threaded, https://github.com/shogun-toolbox/shogun/issues/3772 [Heiko Strathmann]
  • shogun_5.0.0 (Nov 4, 2016)

    Features

    • GSoC 2016 project of Saurabh Mahindre: Major efficiency improvements for KMeans, LARS, Random Forests, Bagging, KNN.
    • Add new Shogun cookbook for documentation and testing across all target languages [Heiko Strathmann, Sergey Lisitsyn, Esben Sorig, Viktor Gal].
    • Added option to learn CombinedKernel weights with GP approximate inference [Wu Lin].
    • LARS now supports 32, 64, and 128 bit floating point numbers [Chris Goldsworthy].

    Bugfixes:

    • Fix gTest segfaults with GCC >= 6.0.0 [Björn Esser].
    • Make Java and CSharp install-dir configurable [Björn Esser].
    • Autogenerate modshogun.rb with correct module-suffix [Björn Esser].
    • Fix KMeans++ initialization [Saurabh Mahindre].

    Cleanup, efficiency updates, and API Changes:

    • Make Eigen3 a hard requirement. Bundle if not found on system. [Heiko Strathmann]
    • Drop ALGLIB (GPL) dependency in CStatistics and ship CDFLIB (public domain) instead [Heiko Strathmann]
    • Drop p-value estimation in model-selection [Heiko Strathmann]
    • Static interfaces have been removed [Viktor Gal]
    • New base class ShiftInvariantKernel of which GaussianKernel inherits [Rahul De].

    NOTE

    This version contains a new CMake option USE_GPL_SHOGUN, which, when set to OFF, will exclude all GPL code from Shogun [Heiko Strathmann].

  • shogun_4.1.0 (May 17, 2016)

    This is a new feature and cleanup release.

    Features:

    • Added GEMPLP for approximate inference to the structured output framework [Jiaolong Xu].
    • Efficiency improvements of the FITC framework for GP inference (FITC_Laplace, FITC, VarDTC) [Wu Lin].
    • Added optimisation of inducing variables in sparse GP inference [Wu Lin].
    • Added optimisation methods for GP inference (Newton, Cholesky, LBFGS, ...) [Wu Lin].
    • Added Automatic Relevance Determination (ARD) kernel functionality for variational GP inference [Wu Lin].
    • Updated Notebook for variational GP inference [Wu Lin].
    • New framework for stochastic optimisation (L1/2 loss, mirror descent, proximal gradients, adagrad, SVRG, RMSProp, adadelta, ...) [Wu Lin].
    • New Shogun meta-language for automatically generating code listings in all target languages [Esben Sörig].
    • Added periodic kernel [Esben Sörig].
    • Add gradient output functionality in Neural Nets [Sanuj Sharma].

    Bugfixes:

    • Fixes for java_modular build using OpenJDK [Björn Esser].
    • Catch uncaught exceptions in Neural Net code [Khaled Nasr].
    • Fix build of modular interfaces with SWIG 3.0.5 on MacOSX [Björn Esser].
    • Fix segfaults when calling delete[] twice on SGMatrix-instances [Björn Esser].
    • Fix for building with full-hardening-(CXX|LD)FLAGS [Björn Esser].
    • Patch SWIG to fix a problem with SWIG and Python >= 3.5 [Björn Esser].
    • Add modshogun.rb: make sure narray is loaded before modshogun.so [Björn Esser].
    • set working-dir properly when running R (#2654) [Björn Esser].

    Cleanup, efficiency updates, and API Changes:

    • Added GPU based dot-products to linalg [Rahul De].
    • Added scale methods to linalg [Rahul De].
    • Added element wise products to linalg [Rahul De].
    • Added element-wise unary operators in linalg [Rahul De].
    • Dropped parameter migration framework [Heiko Strathmann].
    • Disabled Python integration tests by default [Sergey Lisitsyn, Heiko Strathmann].
  • shogun_4.0.0 (Jan 18, 2015)

    • This release features the work of our 8 GSoC 2014 students [student; mentors]:
      • OpenCV Integration and Computer Vision Applications [Abhijeet Kislay; Kevin Hughes]
      • Large-Scale Multi-Label Classification [Abinash Panda; Thoralf Klein]
      • Large-scale structured prediction with approximate inference [Jiaolong Xu; Shell Hu]
      • Essential Deep Learning Modules [Khaled Nasr; Sergey Lisitsyn, Theofanis Karaletsos]
      • Fundamental Machine Learning: decision trees, kernel density estimation [Parijat Mazumdar; Fernando Iglesias]
      • Shogun Missionary & Shogun in Education [Saurabh Mahindre; Heiko Strathmann]
      • Testing and Measuring Variable Interactions With Kernels [Soumyajit De; Dino Sejdinovic, Heiko Strathmann]
      • Variational Learning for Gaussian Processes [Wu Lin; Heiko Strathmann, Emtiyaz Khan]
    • This release also contains several cleanups and bugfixes:
      • Features:
        • New Shogun project description [Heiko Strathmann]
        • ID3 algorithm for decision tree learning [Parijat Mazumdar]
        • New modes for PCA matrix factorizations: SVD & EVD, in-place or reallocating [Parijat Mazumdar]
        • Add Neural Networks with linear, logistic and softmax neurons [Khaled Nasr]
        • Add kernel multiclass strategy examples in multiclass notebook [Saurabh Mahindre]
        • Add decision trees notebook containing examples for ID3 algorithm [Parijat Mazumdar]
        • Add sudoku recognizer ipython notebook [Alejandro Hernandez]
        • Add in-place subsets on features, labels, and custom kernels [Heiko Strathmann]
        • Add Principal Component Analysis notebook [Abhijeet Kislay]
        • Add Multiple Kernel Learning notebook [Saurabh Mahindre]
        • Add Multi-Label classes to enable Multi-Label classification [Thoralf Klein]
        • Add rectified linear neurons, dropout and max-norm regularization to neural networks [Khaled Nasr]
        • Add C4.5 algorithm for multiclass classification using decision trees [Parijat Mazumdar]
        • Add support for arbitrary acyclic graph-structured neural networks [Khaled Nasr]
        • Add CART algorithm for classification and regression using decision trees [Parijat Mazumdar]
        • Add CHAID algorithm for multiclass classification and regression using decision trees [Parijat Mazumdar]
        • Add Convolutional Neural Networks [Khaled Nasr]
        • Add Random Forests algorithm for ensemble learning using CART [Parijat Mazumdar]
        • Add Restricted Boltzmann Machines [Khaled Nasr]
        • Add Stochastic Gradient Boosting algorithm for ensemble learning [Parijat Mazumdar]
        • Add Deep contractive and denoising autoencoders [Khaled Nasr]
        • Add Deep belief networks [Khaled Nasr]
      • Bugfixes:
        • Fix reference counting bugs in CList when reference counting is on [Heiko Strathmann, Thoralf Klein, lambday]
        • Fix memory problem in PCA::apply_to_feature_matrix [Parijat Mazumdar]
        • Fix crash in LeastAngleRegression for the case D greater than N [Parijat Mazumdar]
        • Fix memory violations in bundle method solvers [Thoralf Klein]
        • Fix fail in library_mldatahdf5.cpp example when http://mldata.org is not working properly [Parijat Mazumdar]
        • Fix memory leaks in Vowpal Wabbit, LibSVMFile and KernelPCA [Thoralf Klein]
        • Fix memory and control flow issues discovered by Coverity [Thoralf Klein]
        • Fix R modular interface SWIG typemap (Requires SWIG >= 2.0.5) [Matt Huska]
      • Cleanup and API Changes:
        • PCA now depends on Eigen3 instead of LAPACK [Parijat Mazumdar]
        • Removing redundant and fixing implicit imports [Thoralf Klein]
        • Hide many methods from SWIG, reducing compile memory by 500MiB [Heiko Strathmann, Fernando Iglesias, Thoralf Klein]
  • shogun_3.2.0 (Feb 17, 2014)

    We are pleased to announce Shogun 3.2.0!

    This release also contains several cleanups and bugfixes:

    • Features:
      • Fully support Python 3 now
      • Add mini-batch k-means [Parijat Mazumdar]
      • Add k-means++; for more details see the notebook [Parijat Mazumdar]
      • Add sub-sequence string kernel [lambday]
    • Bugfixes:
      • Compile fixes for upcoming swig3.0
      • Speedup for Gaussian process apply()
      • Improve unit / integration test checks
      • libbmrm uninitialized memory reads
      • libocas uninitialized memory reads
      • Octave 3.8 compile fixes [Orion Poplawski]
      • Fix java modular compile error [Bjoern Esser]