AntroPy: entropy and complexity of (EEG) time-series in Python

Overview

AntroPy is a Python 3 package providing several time-efficient algorithms for computing the complexity of time-series. It can be used, for example, to extract features from EEG signals.

Documentation

Installation

pip install antropy
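
AntroPy is also available on conda-forge (see the conda-forge package note in the comments below), so the following should work as well:

conda install -c conda-forge antropy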

Dependencies

Functions

Entropy

import numpy as np
import antropy as ant
np.random.seed(1234567)
x = np.random.normal(size=3000)
# Permutation entropy
print(ant.perm_entropy(x, normalize=True))
# 0.9995371694290871
# Spectral entropy
print(ant.spectral_entropy(x, sf=100, method='welch', normalize=True))
# 0.9940882825422431
# Singular value decomposition entropy
print(ant.svd_entropy(x, normalize=True))
# 0.9999110978316078
# Approximate entropy
print(ant.app_entropy(x))
# 2.015221318528564
# Sample entropy
print(ant.sample_entropy(x))
# 2.198595813245399
# Hjorth mobility and complexity
print(ant.hjorth_params(x))
# (1.4313385010057378, 1.215335712274099)
# Number of zero-crossings
print(ant.num_zerocross(x))
# 1531
# Lempel-Ziv complexity
print(ant.lziv_complexity('01111000011001', normalize=True))
# 1.3597696150205727

Fractal dimension

# Petrosian fractal dimension
print(ant.petrosian_fd(x))
# 1.0310643385753608
# Katz fractal dimension
print(ant.katz_fd(x))
# 5.954272156665926
# Higuchi fractal dimension
print(ant.higuchi_fd(x))
# 2.005040632258251
# Detrended fluctuation analysis
print(ant.detrended_fluctuation(x))
# 0.47903505674073327

Execution time

Here are some benchmarks computed on a MacBook Pro (2020).

import numpy as np
import antropy as ant
np.random.seed(1234567)
x = np.random.rand(1000)
# Entropy
%timeit ant.perm_entropy(x)
# 106 µs ± 5.49 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit ant.spectral_entropy(x, sf=100)
# 138 µs ± 3.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit ant.svd_entropy(x)
# 40.7 µs ± 303 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit ant.app_entropy(x)  # Slow
# 2.44 ms ± 134 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit ant.sample_entropy(x)  # Numba
# 2.21 ms ± 35.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# Fractal dimension
%timeit ant.petrosian_fd(x)
# 23.5 µs ± 695 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit ant.katz_fd(x)
# 40.1 µs ± 2.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit ant.higuchi_fd(x)  # Numba
# 13.7 µs ± 251 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit ant.detrended_fluctuation(x)  # Numba
# 315 µs ± 10.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Development

AntroPy was created and is maintained by Raphael Vallat. Contributions are more than welcome, so feel free to contact me, open an issue, or submit a pull request!

To see the code or report a bug, please visit the GitHub repository.

Note that this program is provided with NO WARRANTY OF ANY KIND. Always double check the results.

Acknowledgement

Several functions of AntroPy were adapted from other open-source packages; all the credit goes to the authors of these excellent packages.

Comments
  • Improve performance in `_xlog2x`

    Follow up to #3

    Using np.nan_to_num is advantageous because it makes use of NumPy's vectorization, instead of an `if x == 0` check that applies the test pointwise.
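
    As a rough illustration of the idea (my sketch, not the actual antropy helper; the real `_xlog2x` may differ):

        import numpy as np

        def xlog2x(x):
            # Vectorized x * log2(x) with the convention 0 * log2(0) = 0.
            # Suppress the divide/invalid warnings raised at x == 0, then let
            # np.nan_to_num map the resulting nan back to 0 in one vectorized pass.
            with np.errstate(divide='ignore', invalid='ignore'):
                y = x * np.log2(x)
            return np.nan_to_num(y, nan=0.0)

        xlog2x(np.array([0.0, 0.25, 0.5]))  # -> array([ 0. , -0.5, -0.5])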

    enhancement 
    opened by jftsang 7
  • modify the _embed function to fit the 2d input

    Modify the _embed function so that it can take a 2D array as input: pre-store the sliced signals in a list to speed up concatenation, pre-compute the slice indices to reduce work inside the loop, and vectorize the slicing so that all input signals are embedded in one pass (see the sketch after the timings below).

    Performance (1000 time points per signal, delay = 1):

    • 1e3 signals, order = 3: 0.01 s
    • 1e4 signals, order = 3: 0.1 s
    • 1e4 signals, order = 10: 0.85 s
    • 1e5 signals, order = 3: 1.11 s
    • 1e5 signals, order = 10: 9.82 s
    • 5e5 signals, order = 3: 67 s
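
    A vectorized 2D embedding might look roughly like this (my sketch under assumed shapes, not the actual patch; `embed_2d` is a hypothetical name):

        import numpy as np

        def embed_2d(X, order=3, delay=1):
            # Time-delay embedding for a batch of signals.
            # X: (n_signals, n_times) -> (n_signals, n_times - (order - 1) * delay, order)
            n_signals, n_times = X.shape
            n_vectors = n_times - (order - 1) * delay
            # Pre-computed slice indices: one fancy-indexing call embeds all signals at once.
            idx = np.arange(n_vectors)[:, None] + np.arange(order)[None, :] * delay
            return X[:, idx]

        X = np.random.rand(1000, 1000)       # 1e3 signals, 1000 time points each
        emb = embed_2d(X, order=3, delay=1)  # shape (1000, 998, 3)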

    enhancement 
    opened by cheliu-computation 6
  • Handle the limit of p = 0 in p log2 p

    This patch defines a helper function, _xlog2x(x), that calculates x * log2(x) but handles the case x == 0 by returning 0 rather than nan. This is needed if the power spectrum has any component that is exactly zero: in particular, if the f = 0 component is zero.

    opened by jftsang 6
  • RuntimeWarning in _xlogx when x has zero values

    In the version currently on GitHub, _xlogx uses numpy.where to return 0 where x == 0. However, numpy.where still applies the log function to all values of x before discarding the invalid entries, which produces runtime warnings.

    To avoid those issues, I would suggest changing the code to something like

        xlogx = np.zeros_like(x)
        valid = np.nonzero(x)
        xlogx[valid] = x[valid] * np.log(x[valid]) / np.log(base)
        return xlogx
    

    This strictly applies the function to the nonzero elements of x.

    If this looks good to you I could submit a PR. Let me know.

    enhancement 
    opened by guiweber 4
  • Fixed division by zero in linear regression function (with test)

    Hi,

    Just extending the information provided in the previous PR (https://github.com/raphaelvallat/antropy/pull/20), I provide a series of screenshots of the problem I was facing when computing the detrended fluctuation of my signals.

    The screenshots (omitted here) show one of the segments of my signal where the method fails, the results of the tests with this signal, and the results after the proposed solution.

    I hope these new commits and test help to clarify the issue.

    Thanks, Tino

    enhancement 
    opened by Arritmic 3
  • conda-forge package

    Hello, I've added antropy to conda-forge; please let me know if you'd like to be added as a co-maintainer for the respective feedstock. It could also make sense to amend the installation instructions, WDYT?

    enhancement 
    opened by hoechenberger 3
  • Allow readonly arrays for higuchi_fd

    The current behavior of this method changes the datatype of x, since np.asarray is a wrapper for np.array with copy=False.

    I believe that this is (kind of) unexpected behavior; e.g., a user would not expect the datatype to change when calculating a feature. Therefore, I suggest giving the user the option of not changing the datatype by adding a copy flag to the higuchi_fd function parameters. By default this flag is False, resulting in the same behavior as now (i.e., the datatype of x is changed).

    When benchmarking the speed of the code, I observed no real difference. Perhaps we should even remove the flag and just use np.array instead of np.asarray?

    In [11]: x = np.random.rand(10_000).astype("float32")
    
    In [12]: %timeit ant.higuchi_fd(x)
    246 µs ± 5.24 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
    
    In [13]: x = np.random.rand(10_000).astype("float32")
    
    In [14]: %timeit ant.higuchi_fd(x, copy=True)
    242 µs ± 93.4 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
    
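    For context, a small demonstration of the np.asarray vs np.array distinction being discussed (my example, not from the PR):

        import numpy as np

        a = np.random.rand(5)  # float64
        b = np.asarray(a)      # dtype already matches -> no copy, same object
        assert b is a
        c = np.array(a)        # np.array copies by default -> new, independent array
        assert c is not a
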

    PS: I really like the fast functions in this library :smile:

    enhancement 
    opened by jvdd 3
  • The most "generic" entropy measure

    Hi,

    Is there a review paper available that compares the performance of the different entropy measures implemented in this library on actual electrophysiological data? Also, which measure has the smallest number of non-optional parameters while still being guaranteed to work in most cases?

    Thank you!

    documentation question 
    opened by antelk 3
  • Fixed division by zero in linear regression function

    I have been facing problems when computing the detrended fluctuation analysis (DFA) with the function detrended_fluctuation(x) when the input array is relatively small (subwindows of windows).

    In some cases, len(fluctuations) = 1, causing den = 0 in the linear regression function. This fix solves the issue for me and produces the expected results.
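
    The shape of such a guard might be (my sketch, not the actual patch; names and the fallback value are illustrative):

        import numpy as np

        def linear_regression_safe(x, y):
            # Least-squares slope/intercept with a guard for a zero denominator,
            # which happens when x contains fewer than two distinct values
            # (e.g. len(fluctuations) == 1 in DFA on very short windows).
            n = x.size
            den = n * np.sum(x * x) - np.sum(x) ** 2
            if den == 0:
                return 0.0, 0.0  # illustrative fallback
            slope = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / den
            intercept = (np.sum(y) - slope * np.sum(x)) / n
            return slope, intercept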

    bug enhancement 
    opened by Arritmic 2
  • Zero-crossings

    Hi Raph,

    Was doing some cross-checking and I have a quick question to dispel a doubt in my mind regarding the counting of the number of inversions:

    https://github.com/raphaelvallat/antropy/blob/88fea895dc464fd075f634ac81f2ae4f46b60cac/antropy/entropy.py#L908

    Shouldn't it be np.diff(np.signbit(np.diff(...))) here? I.e., counting the changes in sign of the consecutive differences, rather than the differences of the sign of the consecutive samples 🤔
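
    To make the distinction concrete (my example, not from the thread):

        import numpy as np

        x = np.array([1.0, 2.0, 1.0, -1.0, -2.0, -1.0])
        # Sign changes of the samples themselves (zero-crossings of x): 1
        print(np.count_nonzero(np.diff(np.signbit(x))))
        # Sign changes of the consecutive differences (turning points): 2
        print(np.count_nonzero(np.diff(np.signbit(np.diff(x)))))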

    question 
    opened by DominiqueMakowski 2
  • Error importing with 32-bit windows 7

    Hi there,

    I've been playing with antropy on my main home machine and have come to use the same code on a 32-bit Windows 7 machine, which has incurred an import error.

    Currently using Python 3.8.10 32-bit. Can this be fixed, or is it likely I need to move to a 64-bit version?

    The traceback is as follows:

    Python 3.8.10 (tags/v3.8.10:3d8993a, May  3 2021, 11:34:34) [MSC v.1928 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import antropy
    Traceback (most recent call last):
      File "C:\Python38\lib\site-packages\numba\core\errors.py", line 776, in new_error_context
        yield
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 235, in lower_block
        self.lower_inst(inst)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 380, in lower_inst
        val = self.lower_assign(ty, inst)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 556, in lower_assign
        return self.lower_expr(ty, value)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 1084, in lower_expr
        res = self.lower_call(resty, expr)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 815, in lower_call
        res = self._lower_call_normal(fnty, expr, signature)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 1055, in _lower_call_normal
        res = impl(self.builder, argvals, self.loc)
      File "C:\Python38\lib\site-packages\numba\core\base.py", line 1194, in __call__
        res = self._imp(self._context, builder, self._sig, args, loc=loc)
      File "C:\Python38\lib\site-packages\numba\core\base.py", line 1224, in wrapper
        return fn(*args, **kwargs)
      File "C:\Python38\lib\site-packages\numba\np\unsafe\ndarray.py", line 31, in codegen
        res = _empty_nd_impl(context, builder, arrty, shapes)
      File "C:\Python38\lib\site-packages\numba\np\arrayobj.py", line 3468, in _empty_nd_impl
        arrlen_mult = builder.smul_with_overflow(arrlen, s)
      File "C:\Python38\lib\site-packages\llvmlite\ir\builder.py", line 50, in wrapped
        raise ValueError("Operands must be the same type, got (%s, %s)"
    ValueError: Operands must be the same type, got (i32, i64)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Python38\lib\site-packages\antropy\__init__.py", line 4, in <module>
        from .fractal import *
      File "C:\Python38\lib\site-packages\antropy\fractal.py", line 304, in <module>
        def _dfa(x):
      File "C:\Python38\lib\site-packages\numba\core\decorators.py", line 226, in wrapper
        disp.compile(sig)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 979, in compile
        cres = self._compiler.compile(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 141, in compile
        status, retval = self._compile_cached(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 155, in _compile_cached
        retval = self._compile_core(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 168, in _compile_core
        cres = compiler.compile_extra(self.targetdescr.typing_context,
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 686, in compile_extra
        return pipeline.compile_extra(func)
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 428, in compile_extra
        return self._compile_bytecode()
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 492, in _compile_bytecode
        return self._compile_core()
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 471, in _compile_core
        raise e
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 462, in _compile_core
        pm.run(self.state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 343, in run
        raise patched_exception
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 334, in run
        self._runPass(idx, pass_inst, state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_lock.py", line 35, in _acquire_compile_lock
        return func(*args, **kwargs)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 289, in _runPass
        mutated |= check(pss.run_pass, internal_state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 262, in check
        mangled = func(compiler_state)
      File "C:\Python38\lib\site-packages\numba\core\typed_passes.py", line 396, in run_pass
        lower.lower()
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 138, in lower
        self.lower_normal_function(self.fndesc)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 192, in lower_normal_function
        entry_block_tail = self.lower_function_body()
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 221, in lower_function_body
        self.lower_block(block)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 235, in lower_block
        self.lower_inst(inst)
      File "C:\Python38\lib\contextlib.py", line 131, in __exit__
        self.gen.throw(type, value, traceback)
      File "C:\Python38\lib\site-packages\numba\core\errors.py", line 786, in new_error_context
        raise newerr.with_traceback(tb)
    numba.core.errors.LoweringError: Failed in nopython mode pipeline (step: native lowering)
    Operands must be the same type, got (i32, i64)

    File "lib\site-packages\antropy\fractal.py", line 313:
    def _dfa(x):
        <source elided>

        for i_n, n in enumerate(nvals):
        ^

    During: lowering "array.70 = call empty_func.71(size_tuple.69, func=empty_func.71, args=(Var(size_tuple.69, fractal.py:313),), kws=[], vararg=None, target=None)" at C:\Python38\lib\site-packages\antropy\fractal.py (313)
    >>>
    
    invalid 
    opened by LMBooth 2
  • Modify the entropy functions to support vectorized computation

    Hi, I have used your package to process ECG signals and it achieved good results in classifying different heart diseases. Thanks a lot!

    However, so far these functions can only handle one-dimensional signals, i.e. arrays of shape (N,). May I try to modify the code so that it can process data like sklearn.preprocessing.scale(X, axis=xx)? That would be more efficient for big arrays, because we would not need to run a for loop; a possible workaround is sketched below.
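
    In the meantime, a common workaround is to apply the 1D functions along an axis (my example, not part of the library):

        import numpy as np
        import antropy as ant

        X = np.random.rand(100, 1000)  # 100 signals, 1000 time points each
        # Apply the 1D function row by row; native 2D support would avoid this loop.
        pe = np.apply_along_axis(ant.perm_entropy, 1, X)
        print(pe.shape)  # (100,)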

    My email is [email protected], welcome to discuss with me!

    enhancement 
    opened by cheliu-computation 2
  • Different results of different SampEn implementations

    My own implementation:

    import math
    import numpy as np
    from scipy.spatial.distance import pdist

    def sample_entropy(signal, m, r, dist_type='chebyshev', result=None, scale=None):
        # Check errors
        if m > len(signal):
            raise ValueError('Embedding dimension must be smaller than the signal length (m<N).')
        if len(signal) != signal.size:
            raise ValueError('The signal parameter must be a [Nx1] vector.')
        if not isinstance(dist_type, str):
            raise ValueError('Distance type must be a string.')
        if dist_type not in ['braycurtis', 'canberra', 'chebyshev', 'cityblock',
                             'correlation', 'cosine', 'dice', 'euclidean', 'hamming',
                             'jaccard', 'jensenshannon', 'kulsinski', 'mahalanobis',
                             'matching', 'minkowski', 'rogerstanimoto', 'russellrao',
                             'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule']:
            raise ValueError('Distance type unknown.')

        # Useful parameters
        N = len(signal)
        sigma = np.std(signal)
        templates_m = []
        templates_m_plus_one = []
        signal = np.squeeze(signal)

        for i in range(N - m + 1):
            templates_m.append(signal[i:i + m])

        B = np.sum(pdist(templates_m, metric=dist_type) <= sigma * r)
        if B == 0:
            value = math.inf
        else:
            m += 1
            for i in range(N - m + 1):
                templates_m_plus_one.append(signal[i:i + m])
            A = np.sum(pdist(templates_m_plus_one, metric=dist_type) <= sigma * r)
            if A == 0:
                value = math.inf
            else:
                A = A / len(templates_m_plus_one)
                B = B / len(templates_m)
                value = -np.log(A / B)

        # If A = 0 or B = 0, SampEn would return an infinite value. However, the
        # lowest non-zero conditional probability that SampEn should report is
        # A/B = 2 / [(N - m - 1) * (N - m)].
        if math.isinf(value):
            # Note: SampEn has the following limits:
            #   - Lower bound: 0
            #   - Upper bound: log(N - m) + log(N - m - 1) - log(2)
            value = -np.log(2 / ((N - m - 1) * (N - m)))

        if result is not None:
            result[scale - 1] = value

        return value

    signal = np.random.rand(200)  # rand(200,1) in Matlab
    # Parameters: m = 1, r = 0.2


    Outputs:

    • My implementation: 2.1812
    • Implementation adapted: 2.1969
    • NeuroKit2 entropy_sample function: 2.5316
    • Your implementation: 2.2431
    • Different implementation from GitHub: 1.0488

    invalid question 
    opened by dmarcos97 4
  • Speed up importing antropy

    Create a file called import.py with the single line import antropy. On my machine (Linux VM), this takes at least 10 seconds to run.

    Using pyinstrument tells me that most of the time is spent importing numba. Is there any possibility of speeding this up? Seems like this is a known issue with numba, though: see e.g. https://github.com/numba/numba/issues/4927.
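
    One generic mitigation on the user side is Numba's on-disk cache, which pays the compilation cost once per machine when functions are compiled lazily (a sketch of the Numba feature, not of antropy's internals):

        import numpy as np
        from numba import njit

        @njit(cache=True)  # compiled machine code is cached on disk after the first call
        def total(x):
            s = 0.0
            for v in x:
                s += v
            return s

        total(np.arange(10.0))  # first run compiles and caches; later runs reuse the cache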

    $ pyinstrument import.py 
    
      _     ._   __/__   _ _  _  _ _/_   Recorded: 16:36:28  Samples:  7842
     /_//_/// /_\ / //_// / //_'/ //     Duration: 12.368    CPU time: 11.963
    /   _/                      v3.4.1
    
    Program: import.py
    
    12.368 <module>  import.py:1
    └─ 12.368 <module>  antropy/__init__.py:2
       ├─ 6.711 <module>  antropy/fractal.py:1
       │  └─ 6.711 wrapper  numba/core/decorators.py:191
       │        [14277 frames hidden]  numba, llvmlite, contextlib, pickle, ...
       ├─ 3.034 <module>  antropy/entropy.py:1
       │  ├─ 2.390 wrapper  numba/core/decorators.py:191
       │  │     [5009 frames hidden]  numba, abc, llvmlite, inspect, contex...
       │  └─ 0.522 <module>  sklearn/__init__.py:14
       │        [374 frames hidden]  sklearn, scipy, inspect, enum, numpy,...
       └─ 2.618 <module>  antropy/utils.py:1
          ├─ 1.584 wrapper  numba/core/decorators.py:191
          │     [5027 frames hidden]  numba, abc, functools, llvmlite, insp...
          ├─ 0.895 <module>  numba/__init__.py:3
          │     [1444 frames hidden]  numba, llvmlite, pkg_resources, warni...
          └─ 0.138 <module>  numpy/__init__.py:106
                [190 frames hidden]  numpy, pathlib, urllib, collections, ...
    
    To view this report with different options, run:
        pyinstrument --load-prev 2021-06-17T16-36-28 [options]
    
    
    enhancement 
    opened by jftsang 4
  • Allow users to pass signal in frequency domain in spectral entropy

    Currently, antropy.spectral_entropy only accepts x in the time domain. We should add freqs=None and psd=None as possible inputs so that users can calculate the spectral entropy of a pre-computed power spectrum. We should also add an example of how to calculate the spectral entropy from a multitaper power spectrum.
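
    For reference, spectral entropy from a pre-computed PSD is just the Shannon entropy of the normalized spectrum, so a psd= input could be used along these lines (my sketch, not the antropy API):

        import numpy as np
        from scipy import signal

        x = np.random.rand(3000)
        freqs, psd = signal.welch(x, fs=100)   # pre-computed power spectrum
        psd_norm = psd / psd.sum()             # normalize to a probability distribution
        se = -np.sum(psd_norm * np.log2(psd_norm))
        se_norm = se / np.log2(psd_norm.size)  # normalized to [0, 1]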

    enhancement 
    opened by raphaelvallat 0
Releases (v0.1.5)
  • v0.1.5 (Dec 17, 2022)

    This is a minor release.

    What's Changed

    • Handle the limit of p = 0 in p log2 p by @jftsang in https://github.com/raphaelvallat/antropy/pull/3
    • Correlation between entropy/FD metrics for data traces from Hodgkin-Huxley model by @antelk in https://github.com/raphaelvallat/antropy/pull/5
    • Fix docstrings and rerun by @antelk in https://github.com/raphaelvallat/antropy/pull/7
    • Improve performance in _xlog2x by @jftsang in https://github.com/raphaelvallat/antropy/pull/8
    • Prevent invalid operations in xlogx by @guiweber in https://github.com/raphaelvallat/antropy/pull/11
    • Allow readonly arrays for higuchi_fd by @jvdd in https://github.com/raphaelvallat/antropy/pull/13
    • modify the _embed function to fit the 2d input by @cheliu-computation in https://github.com/raphaelvallat/antropy/pull/15
    • Fixed division by zero in linear regression function (with test) by @Arritmic in https://github.com/raphaelvallat/antropy/pull/21
    • Add conda install instructions by @raphaelvallat in https://github.com/raphaelvallat/antropy/pull/19

    New Contributors

    • @jftsang made their first contribution in https://github.com/raphaelvallat/antropy/pull/3
    • @antelk made their first contribution in https://github.com/raphaelvallat/antropy/pull/5
    • @guiweber made their first contribution in https://github.com/raphaelvallat/antropy/pull/11
    • @jvdd made their first contribution in https://github.com/raphaelvallat/antropy/pull/13
    • @cheliu-computation made their first contribution in https://github.com/raphaelvallat/antropy/pull/15
    • @Arritmic made their first contribution in https://github.com/raphaelvallat/antropy/pull/21
    • @raphaelvallat made their first contribution in https://github.com/raphaelvallat/antropy/pull/19

    Full Changelog: https://github.com/raphaelvallat/antropy/compare/v0.1.4...v0.1.5

  • v0.1.4 (Apr 1, 2021)

Owner
Raphael Vallat
French research scientist specialized in sleep and dreaming | Strong interest in stats and signal processing | Python lover