Python package for covariance matrix manipulation and biosignal classification with applications in brain-computer interfaces

Overview

pyRiemann


pyRiemann is a Python package for covariance matrix manipulation and classification through Riemannian geometry.

The primary target is the classification of multivariate biosignals, such as EEG, MEG, or EMG.

This is work in progress ... stay tuned.

This code is BSD-licensed (3-clause).

Documentation

The documentation is available at http://pyriemann.readthedocs.io/en/latest/

Install

Using PyPI

pip install pyriemann

or using pip+git for the latest version of the code:

pip install git+https://github.com/pyRiemann/pyRiemann

Anaconda is not currently supported. If you want to use Anaconda, create a virtual environment in Anaconda, activate it, and use the command above to install pyRiemann.
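
For example, an illustrative sequence (the environment name and Python version below are only examples):

conda create -n pyriemann python=3.8
conda activate pyriemann
pip install pyriemann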

From sources

For the latest version, you can install the package from the sources using the setup.py script:

python setup.py install

or in developer mode to be able to modify the sources:

python setup.py develop

How to use it

Most of the functions mimic the scikit-learn API, and therefore can be used directly with sklearn. For example, cross-validated classification of EEG signals using the MDM algorithm described in [4] is as easy as:

import pyriemann
from sklearn.model_selection import cross_val_score

# load your data
X = ...  # your EEG data, of shape (n_trials, n_channels, n_samples)
y = ...  # the labels

# estimate covariance matrices
cov = pyriemann.estimation.Covariances().fit_transform(X)

# cross validation
mdm = pyriemann.classification.MDM()

accuracy = cross_val_score(mdm, cov, y)

print(accuracy.mean())

You can also pipeline methods using the sklearn Pipeline framework. For example, to classify EEG signals using an SVM classifier in the tangent space, as described in [5]:

from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace

from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# load your data
X = ...  # your EEG data, of shape (n_trials, n_channels, n_samples)
y = ...  # the labels

# build your pipeline
covest = Covariances()
ts = TangentSpace()
svc = SVC(kernel='linear')

clf = make_pipeline(covest, ts, svc)
# cross validation
accuracy = cross_val_score(clf, X, y)

print(accuracy.mean())

Check out the examples folder for more examples!

Testing

If you make a modification, run the test suite before submitting a pull request:

pytest

Contribution Guidelines

The package aims to adopt the scikit-learn and MNE-Python conventions as much as possible. See their contribution guidelines before contributing to the repository.

References

[1] A. Barachant and M. Congedo, "A Plug&Play P300 BCI Using Information Geometry", arXiv:1409.0107. link

[2] M. Congedo, A. Barachant and A. Andreev, "A New generation of Brain-Computer Interface Based on Riemannian Geometry", arXiv:1310.8115. link

[3] A. Barachant and S. Bonnet, "Channel selection procedure using riemannian distance for BCI applications", in 2011 5th International IEEE/EMBS Conference on Neural Engineering (NER), 2011, pp. 348-351. pdf

[4] A. Barachant, S. Bonnet, M. Congedo and C. Jutten, "Multiclass Brain-Computer Interface Classification by Riemannian Geometry", in IEEE Transactions on Biomedical Engineering, vol. 59, no. 4, pp. 920-928, 2012. pdf

[5] A. Barachant, S. Bonnet, M. Congedo and C. Jutten, "Classification of covariance matrices using a Riemannian-based kernel for BCI applications", in Neurocomputing, vol. 112, pp. 172-178, 2013. pdf

Comments
  • Adding Riemannian Gaussian to pyRiemann

    Adding Riemannian Gaussian to pyRiemann

    This PR implements the discussion started in Issue #138

    It consists of:

    • A new sampling feature containing all the code relevant for sampling from Riemannian Gaussian distributions as defined in arXiv:1507.01760. I chose to give this rather general name because we could one day end up implementing other kinds of pdfs on the SPD manifold, like mixtures of Gaussians or Riemannian Laplacian distributions. I am, however, open to suggestions!
    • Two examples illustrating why this feature is useful. They reproduce the figures presented in Issue #138

    It took me longer than initially expected because I ended up implementing an MCMC procedure for sampling the Riemannian Gaussian distribution. At least this ensures that we won't depend on other rather heavy packages like pyMC3 and pyro.
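
    A minimal usage sketch of what the sampling feature could look like (the module path and function name below are assumptions based on this description, not necessarily the merged API):

    import numpy as np
    from pyriemann.sampling import sample_gaussian_spd  # assumed module/function name

    mean = np.eye(4)   # center of the Riemannian Gaussian on the SPD manifold
    sigma = 1.0        # dispersion around the mean
    samples = sample_gaussian_spd(n_matrices=100, mean=mean, sigma=sigma)
    print(samples.shape)  # (100, 4, 4): SPD matrices sampled around `mean`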

    @sylvchev had mentioned the idea of also implementing the Bayesian classifiers from arXiv:1507.01760. This could indeed be useful for the package; however, I think it goes beyond the scope of this PR. Furthermore, it does not directly use the code that I'm including here... it depends mostly on an EM algorithm that I haven't touched yet. I suggest we keep this for (near) future work :)

    Cheers, Pedro

    opened by plcrodrigues 23
  • Q: projecting single time point on tangent space?

    Q: projecting single time point on tangent space?

    Hi,

    I'm trying to play with your code.

    I was wondering whether it would in principle be possible to project a single time point (Nchan x 1) onto the tangent space. I was thinking of fitting TangentSpace on the complete epochs, but then assessing a series of SVMs on each time point separately in the tangent space. Although I'm not sure this makes sense at all...

    Thanks!

    opened by kingjr 23
  • Logo for pyRiemann

    Logo for pyRiemann

    Hi, everyone.

    I would like to propose that we think of a logo for our dearest package! :)

    I have no training whatsoever in design and/or logo creation, so I must say that I don't have any strong opinion of what we should or should not include in a logo. With that said, I have a first suggestion and would like to know what you guys think.

    We could also include colors, such as in the two proposals here below.

    Well, that's it. I would be happy to know:

    1. Whether the logo itself seems OK or we should go for something else
    2. What you think of the colors (we can of course play with other combinations, with the help of https://coolors.co/)

    Cheers, Pedro

    opened by plcrodrigues 19
  • (MRG) Transfer Learning in pyRiemann

    (MRG) Transfer Learning in pyRiemann

    Hi, everyone

    This is my first try to integrate some transfer learning methods to pyRiemann. @sylvchev and Emmanuel Kalunga had already started contributing on this topic with their PR #177 on MDWM, but I think it would be nice for us to discuss more broadly the general lines of development for transfer learning in our beloved package.

    @agramfort @qbarthelemy and I had a Zoom meeting recently in which we sketched some aspects of how an API for transfer learning in pyRiemann should look. I tried implementing these ideas and had to make some design choices. The most relevant points are:

    • Our data is always described by a triplet (X, y, meta), where X holds the SPD matrices from all domains and y their corresponding labels. The meta structure is a pandas DataFrame with two columns: domain, indicating to which domain each point belongs, and target, indicating for each sample whether or not it is part of the target domain (a small sketch follows this list).
    • I had to create a TLSplitter object capable of splitting the data into training/validation partitions for cross-validation. We cannot simply use the splitters from sklearn because we need the information from meta.
    • The way I see things, transfer learning procedures should be split into two parts: first we transform the data points, and then we run classification. With this in mind, I created two example transformers, DCT and RCT, and a new object called TLPipeline that allows us to run the transfer learning procedure with a given classifier of choice.
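
    For illustration only, the triplet could be assembled like this (domain names and sizes are made up; this is just a sketch of the data layout, not the final API):

    import numpy as np
    import pandas as pd

    n_source, n_target, n_channels = 100, 20, 8

    # X stacks SPD matrices from all domains (identity matrices keep the sketch simple)
    X = np.stack([np.eye(n_channels)] * (n_source + n_target))
    y = np.random.randint(0, 2, size=n_source + n_target)  # class labels

    # meta carries the domain bookkeeping described above
    meta = pd.DataFrame({
        'domain': ['source_subject_01'] * n_source + ['target_subject_01'] * n_target,
        'target': [False] * n_source + [True] * n_target,
    })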

    I am, of course, open to suggestions and remarks and would like to know what you guys think of this first proposal.

    Cheers, Pedro

    opened by plcrodrigues 14
  • Suggestion: Move repo to NeuroTechX org, or add more maintainers

    Suggestion: Move repo to NeuroTechX org, or add more maintainers

    I notice things are kinda stale here on the maintenance side. Notably, there are a few open PRs that look ready to merge (or at least there hasn't been any indication to the contrary).

    I understand/assume that @alexandrebarachant is rather busy and only sporadically active in this repo, and I wouldn't ask him to be a more active maintainer (his time is probably better spent on other things than maintenance) so I'd like to suggest two possible solutions:

    1. Move the repo to the NeuroTechX org, where there's a community that could actively and happily support it.
    2. Add more maintainers to this repo.
      • From looking at the contributor stats, it looks like there aren't really any clear candidates. But @sylvchev has several open PRs and seems to have a good understanding of the subject matter.

    The reason for asking this is that there are more issues here I'd like to open PRs to fix, but I won't put in the effort if I'm unlikely to get a response/merge anytime soon. An example would be to move from Travis (which has recently dropped their free-forever CI for open source) to GitHub Actions (see my changes in https://github.com/NeuroTechX/eeg-notebooks/pull/24 for an example).

    There are probably more issues at hand, like how to deal with the PyPI upload rights, but I'd just like to throw these two options out there and see what people think.

    opened by ErikBjare 13
  • cannot run MDM example

    cannot run MDM example

    Hi Alexandre,

    I was trying the cross-validation classification example code from the homepage on some toy EEG data (64 channels), but I keep running into the following error:

    [screenshot of the error traceback]

    I tracked the error down to the mean_riemann method: logm(tmp) (line 61) is filled with NaNs.

    My input data is in the following format (Ntrials, Nchannels, Nsamples):

    [screenshot of the input data shape]

    Let me know if you need more information.

    Thanks!

    opened by nbara 13
  • Add example to compare classifiers

    Add example to compare classifiers

    This PR adds an example to compare several Riemannian classifiers on low-dimensional synthetic datasets, adapted to SPD matrices from https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html

    @gabelstein

    opened by qbarthelemy 11
  • Proposal: quantum classifiers

    Proposal: quantum classifiers

    Hello,

    We implemented a scikit-learn wrapper around the Qiskit library to work with EEG data using Riemannian geometry.

    Could this work be considered as inside the scope of pyRiemann?

    opened by gcattan 11
  • Implementation of Block Covariance estimation

    Implementation of Block Covariance estimation

    This PR includes block-diagonal covariance estimation. It is useful for tasks where multiple band-pass filters are applied to the original data and covariances between frequency bands do not hold information.
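
    As a rough illustration of the idea (not the code in this PR), such a block-diagonal covariance could be built as:

    import numpy as np
    from scipy.linalg import block_diag

    n_channels, n_times = 8, 500
    band1 = np.random.randn(n_channels, n_times)  # data filtered in band 1
    band2 = np.random.randn(n_channels, n_times)  # data filtered in band 2

    # per-band sample covariances; cross-band blocks are assumed uninformative
    cov1 = band1 @ band1.T / (n_times - 1)
    cov2 = band2 @ band2.T / (n_times - 1)
    block_cov = block_diag(cov1, cov2)  # (16, 16) block-diagonal SPD matrix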

    enhancement 
    opened by gabelstein 10
  • Make pyRiemann citable

    Make pyRiemann citable

    I'm actually using your implementation of XDawn in one of my studies. I would like to include a proper citation to the source (for now I mention the URL). GitHub supports generating proper DOIs though, which are easier to cite.

    opened by wmvanvliet 10
  • Riemann Support Vector Machine

    Riemann Support Vector Machine

    This PR adds support vector machine classification, making use of the new kernel module.

    Should I extend some example code somewhere to compare e.g. MDM with the RSVC?
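
    A rough sketch of how a Riemannian kernel could be plugged into scikit-learn's SVC with a precomputed Gram matrix (the function kernel_riemann and its location in pyriemann.utils.kernel are assumed from this PR's description, not confirmed here):

    import numpy as np
    from sklearn.svm import SVC
    from pyriemann.utils.kernel import kernel_riemann  # assumed from the new kernel module
    from pyriemann.utils.mean import mean_riemann

    rng = np.random.default_rng(42)

    def random_spd(n):
        A = rng.standard_normal((n, n))
        return A @ A.T + n * np.eye(n)

    X_train = np.stack([random_spd(4) for _ in range(10)])  # SPD training matrices
    X_test = np.stack([random_spd(4) for _ in range(4)])    # SPD test matrices
    y_train = np.array([0, 1] * 5)

    Cref = mean_riemann(X_train)  # reference point for the tangent-space kernel
    K_train = kernel_riemann(X_train, X_train, Cref=Cref)
    K_test = kernel_riemann(X_test, X_train, Cref=Cref)

    y_pred = SVC(kernel='precomputed').fit(K_train, y_train).predict(K_test)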

    opened by gabelstein 9
  • add a new function for frequency band selection

    add a new function for frequency band selection

    Hello again! I created a new function for frequency band selection on the manifold using the class distinctiveness measure.

    This function finds the frequency band with the largest class distinctiveness for the training data, starting from a broad frequency band that the user inputs. For instance, if the user inputs 5-35 Hz, the function adjusts the band and returns 8-12 Hz as the most class-distinctive frequency band. I added example code using a motor imagery dataset, so please take a look.

    Using an optimized frequency band is one of the important aspects of enhancing oscillatory-activity classification, so hopefully this new function is helpful for users :-)
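
    To make the criterion concrete, here is a rough sketch of a two-class class-distinctiveness measure (an illustration of the idea only, not the function added in this PR):

    import numpy as np
    from pyriemann.utils.mean import mean_riemann
    from pyriemann.utils.distance import distance_riemann

    def class_distinctiveness_sketch(X, y):
        """Between-class distance relative to within-class dispersion."""
        c0, c1 = np.unique(y)
        m0, m1 = mean_riemann(X[y == c0]), mean_riemann(X[y == c1])
        between = distance_riemann(m0, m1)
        within = (
            np.mean([distance_riemann(x, m0) for x in X[y == c0]])
            + np.mean([distance_riemann(x, m1) for x in X[y == c1]])
        )
        return between / within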

    opened by MSYamamoto 1
  • Allow processing of HPD matrices

    Allow processing of HPD matrices

    Description

    A Hermitian positive definite (HPD) matrix $M$ is defined as: $M = A + iB$ where $A$ is a symmetric positive definite (SPD) matrix and $B$ is a skew-symmetric matrix.

    Cross-spectral matrices are HPD matrices, where real parts are co-spectral matrices and imaginary parts are quadrature spectra capturing phase information.

    Since they provide richer features, some works have shown the benefit of classifying HPD matrices rather than SPD ones: 2017 - Dehgan - Classification in Riemannian space An application to sleep EEG.pdf; 2019 - Xu - Feature extraction from the Hermitian manifold for Brain-Computer Interfaces.pdf

    pyRiemann should be adapted to process such complex features. The first obvious change is to update pyriemann.utils.base._matrix_operator, adding .conj(): D = (eigvecs * eigvals) @ np.swapaxes(eigvecs.conj(), -2, -1)
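
    A minimal standalone sketch of that change for HPD matrices (illustration only, not the actual pyriemann code):

    import numpy as np

    def matrix_operator_hermitian(C, operator):
        """Apply a scalar function to an HPD matrix via its eigendecomposition."""
        eigvals, eigvecs = np.linalg.eigh(C)  # eigh handles Hermitian matrices
        eigvals = operator(eigvals)
        # reconstruct using the conjugate transpose, as suggested above
        return (eigvecs * eigvals) @ np.swapaxes(eigvecs.conj(), -2, -1)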

    @plcrodrigues @sylvchev @ygerf

    Example

    """HPD matrices classification by MDM"""
    
    import numpy as np
    from sklearn.base import BaseEstimator, TransformerMixin
    from sklearn.model_selection import cross_val_score, KFold
    
    from pyriemann.classification import MDM
    from pyriemann.utils.covariance import cross_spectrum
    
    
    ###############################################################################
    
    
    class CrossSpectra(BaseEstimator, TransformerMixin):
        """Estimation of cross-spectral matrices.
    
        Complex cross-spectral matrices are HPD matrices estimated as the spectrum
        covariance in the frequency domain. It returns a 4-d array with a
        cross-spectral matrix for each input and in each frequency bin of the
        Fourier transform.
    
        Parameters
        ----------
        window : int, default=128
            The length of the FFT window used for spectral estimation.
        overlap : float, default=0.75
            The percentage of overlap between window.
        fmin : float | None, default=None
            The minimal frequency to be returned.
        fmax : float | None, default=None
            The maximal frequency to be returned.
        fs : float | None, default=None
            The sampling frequency of the signal.
    
        Attributes
        ----------
        freqs_ : ndarray, shape (n_freqs,)
            If transformed, the frequencies associated with the cross-spectra.
            None if ``fs`` is None.
    
        See Also
        --------
        Covariances
        Coherences
    
        References
        ----------
        .. [1] https://en.wikipedia.org/wiki/Cross-spectrum
        """
    
        def __init__(self, window=128, overlap=0.75, fmin=None, fmax=None,
                     fs=None):
            """Init."""
            self.window = int(2 ** np.ceil(np.log2(window)))  # next power of two
            self.overlap = overlap
            self.fmin = fmin
            self.fmax = fmax
            self.fs = fs
    
        def fit(self, X, y=None):
            """Fit.
    
            Do nothing. For compatibility purpose.
    
            Parameters
            ----------
            X : ndarray, shape (n_matrices, n_channels, n_times)
                Multi-channel time-series.
            y : None
                Not used, here for compatibility with sklearn API.
    
            Returns
            -------
            self : CrossSpectra instance
                The CrossSpectra instance.
            """
            return self
    
        def transform(self, X):
            """Estimate cross-spectral matrices.
    
            Parameters
            ----------
            X : ndarray, shape (n_matrices, n_channels, n_times)
                Multi-channel time-series.
    
            Returns
            -------
            X_new : ndarray, shape (n_matrices, n_channels, n_channels, n_freqs)
                Cross-spectral matrices for each input and for each frequency bin.
            """
            X_new = []
    
            for i in range(len(X)):
                S, freqs = cross_spectrum(
                    X[i],
                    window=self.window,
                    overlap=self.overlap,
                    fmin=self.fmin,
                    fmax=self.fmax,
                    fs=self.fs)
                X_new.append(S)
            self.freqs_ = freqs
    
            return np.array(X_new)
    
    
    ###############################################################################
    # MDM on HPD matrices
    
    n_matrices, n_channels, n_times = 50, 4, 5000
    data_eeg = np.random.randn(2 * n_matrices, n_channels, n_times)
    
    CrossSp = CrossSpectra(window=128, overlap=0.5, fmin=1, fmax=32, fs=128)
    data_spec = CrossSp.transform(data_eeg)
    
    ftarget = 10
    X = np.squeeze(data_spec[..., CrossSp.freqs_ == ftarget])
    y = [0] * n_matrices + [1] * n_matrices
    
    
    mdm = MDM(metric='riemann')
    cv = KFold(n_splits=10, shuffle=True, random_state=42)
    scores = cross_val_score(mdm, X, y, cv=cv, n_jobs=1)
    

    API change

    Class CospCovariances could be renamed CoSpectra and simply coded as:

    class CoSpectra(CrossSpectra):
        """Estimation of co-spectral matrices.
    
        Co-spectral matrices are SPD matrices estimated as the real part of the
        spectrum covariance in the frequency domain. It returns a 4-d array with a
        co-spectral matrix for each input and in each frequency bin of the
        Fourier transform.
    
        Parameters
        ----------
        window : int, default=128
            The length of the FFT window used for spectral estimation.
        overlap : float, default=0.75
            The percentage of overlap between window.
        fmin : float | None, default=None
            The minimal frequency to be returned.
        fmax : float | None, default=None
            The maximal frequency to be returned.
        fs : float | None, default=None
            The sampling frequency of the signal.
    
        Attributes
        ----------
        freqs_ : ndarray, shape (n_freqs,)
            If transformed, the frequencies associated to cospectra.
            None if ``fs`` is None.
    
        See Also
        --------
        Covariances
        CrossSpectra
        """
    
        def transform(self, X):
            """Estimate co-spectral matrices.
    
            Parameters
            ----------
            X : ndarray, shape (n_matrices, n_channels, n_times)
                Multi-channel time-series.
    
            Returns
            -------
            X_new : ndarray, shape (n_matrices, n_channels, n_channels, n_freqs)
                Co-spectral matrices for each input and for each frequency bin.
            """
            X_new = super().transform(X)
            return X_new.real
    
    opened by qbarthelemy 2
  • Enforce formatting style?

    Enforce formatting style?

    Formatting style across the project is inconsistent. I would suggest fixing this by using Black. It works pretty well, and it is very easy to use.
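
    For example, a typical invocation could be (exact configuration would be up to the maintainers):

    pip install black
    black pyriemann/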

    enhancement 
    opened by mesca 1
  • Questions about robustness

    Questions about robustness

    Thank you very much for such an excellent job! You have pointed out in your article the advantages of robustness based on Riemannian geometry, and you gave a sample example about MI. But when I tried to apply this method to all the data of the datasets, I found that the total accuracy was only close to 63%, and many subjects' accuracies were at 50%. Can you give some suggestions about what I can do now?

    question 
    opened by TanTingyi 3
  • Faster covariance and cospectra calculation with einsum

    Faster covariance and cospectra calculation with einsum

    I recently saw that there have been some improvements to the coherence calculation (https://github.com/alexandrebarachant/pyRiemann/pull/79/commits/9a6dbeef03f61c7658c8902923d0f22d84b57c1c) and I would like to propose some further improvements to the covariance and cospectra calculation.

    In brief, I have observed that using einsum for these two operations gives a speed-up of one order of magnitude. Here is my current code, but let us discuss if and how we could add this to pyriemann:

    from typing import Optional, Tuple

    import numpy as np
    from scipy import signal
    from scipy.signal import get_window


    def covariances(x: np.ndarray) -> np.ndarray:
        """Calculate covariances on epoched data
    
        Input dimensions must be epoch, samples, channels
        """
        n = x.shape[1]
        # TODO: watch for einsum, it does not promote!
        c = np.einsum('aji,ajk->aik', x, x) / (n - 1)
        return c
    
    def cross_spectrum(x: np.ndarray,
                       nperseg=None, noverlap=None, *,
                       fs: float = 1,
                       detrend: Optional[str] = 'constant',
                       window: str = 'boxcar',
                       #return_onesided: bool = True,
                       ) -> Tuple[np.ndarray, np.ndarray]:
    
        # x should be of shape (channel, sample)
        if x.ndim == 1:
            # when x is 1D, assume that is just samples; add a single channel
            x = x[np.newaxis, :]
        elif x.ndim == 2:
            # when x is 2D, no manipulation is needed
            pass
        else:
            raise ValueError('Expected 1D or 2D array')
    
        n_channels, n_samples = x.shape
        nperseg = nperseg or n_samples
        noverlap = noverlap or 0
    
        # create sliding epochs to get (epoch, channel, sample);
        # `epoch` is assumed to be a sliding-window helper defined elsewhere
        x_epoched = epoch(x, nperseg, nperseg - noverlap, axis=1)
        n_epochs = x_epoched.shape[0]
    
        # Handle detrending and window functions
        w = get_window(window, nperseg)
        x_epoched = x_epoched * w[np.newaxis, np.newaxis, :]
        if detrend is not None:
            x_epoched = signal.detrend(x_epoched, type=detrend, axis=2)
    
        # Apply FFT on x last dimension, X will be (epoch, channel, freq)
        freqs = np.fft.fftfreq(nperseg, 1 / fs)
        X = np.fft.fft(x_epoched)  # FFT over the last axis (samples)
    
        # Do a Einstein sum that will be equivalent to the following commented code:
        #
        # ## Verbose implementation ##
        # Apply x multiplied by its complex conjugate for each frequency
        # This gives dimensions epoch, channel, channel, frequency
        # cxx = np.apply_along_axis(_xxh, 1, X)
        #
        # Reorder the axis to epoch, frequency, channel, channel
        # cxx = np.rollaxis(cxx, 3, start=1)
        #
        # Average over epochs, eliminating the epoch dimension to get frequency, channel, channel
        # cxx = cxx.mean(axis=0)
        # ## end of verbose implementation ##
        #
        #
        # Using np.einsum, we get 1 order of magnitude faster (10x faster!),
        # but it is more difficult to understand. First, let us understand what
        # np.einsum('i,j->ij', a, b) does:
        # It multiplies the first axis of the first input over the each
        # element of the second input along its first axis.
        # In other words: it multiplies each element in a with each element in b
        # In other words: it does a vector outer product
        #
        # For example:
        # >>> x = np.arange(0, 3); y = np.arange(10, 13)
        # >>> x, y
        # (array([0, 1, 2]), array([10, 11, 12]))
        # >>> np.einsum('i,j->ij', x, y)
        # array([[ 0,  0,  0],
        #        [10, 11, 12],
        #        [20, 22, 24]])
        #
        # Another example:
        # >>> x = np.arange(3) + 1j
        # >>> x
        # array([0.+1.j, 1.+1.j, 2.+1.j])
        # >>> np.einsum('i,j->ij', x, x.conj())
        # array([[1.+0.j, 1.+1.j, 1.+2.j],
        #        [1.-1.j, 2.+0.j, 3.+1.j],
        #        [1.-2.j, 3.-1.j, 5.+0.j]])
        #
        # Back to our case: This is what we want to do on
        # an (I x J) array with I channels and J frequencies:
        # for each frequency, calculate x @ x.T (vector outer product):
        # np.einsum('ik,jk->kij', X, X.conj())
        #
        # for the 3D case (that is, with epochs)
        # np.einsum('ijl,ikl->iljk', X, X.conj())
        # or, for a more verbose approach, say the indices represent the following:
        # e: epoch
        # c: channel
        # f: frequency
        # h: channel on the conjugate tranpose (this should be the same size as c)
        # Then, the operation can be rewritten as:
        # np.einsum('ecf,ehf->efch', x, x.conj())
        #
        # Finally, since the final step is to do a mean over the epochs, we can
        # sum the "e" axis (by dropping the "e" axis on the output) and divide by
        # the number of epochs:
        cxx = np.einsum('ecf,ehf->fch', X, X.conj()) / n_epochs
    
        return freqs, cxx
    
    enhancement 
    opened by dojeda 4
Releases: v0.3