ruptures: change point detection in Python

Overview

Welcome to ruptures


ruptures is a Python library for off-line change point detection. This package provides methods for the analysis and segmentation of non-stationary signals. Implemented algorithms include exact and approximate detection for various parametric and non-parametric models. ruptures focuses on ease of use by providing a well-documented and consistent interface. In addition, thanks to its modular structure, different algorithms and models can be connected and extended within this package.

How to cite. If you use ruptures in a scientific publication, we would appreciate citations to the following paper:

  • C. Truong, L. Oudre, N. Vayatis. Selective review of offline change point detection methods. Signal Processing, 167:107299, 2020. [journal] [pdf]

Basic usage

(Please refer to the documentation for more advanced use.)

The following snippet creates a noisy piecewise constant signal, performs a penalized kernel change point detection and displays the results (alternating colors mark true regimes and dashed lines mark estimated change points).

import matplotlib.pyplot as plt
import ruptures as rpt

# generate signal
n_samples, dim, sigma = 1000, 3, 4
n_bkps = 4  # number of breakpoints
signal, bkps = rpt.pw_constant(n_samples, dim, n_bkps, noise_std=sigma)

# detection
algo = rpt.Pelt(model="rbf").fit(signal)
result = algo.predict(pen=10)

# display
rpt.display(signal, bkps, result)
plt.show()
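
If the number of changes is known beforehand, the modular design lets you swap in an exact search method instead of the penalized one. The following is a brief sketch (not part of the original example) that reuses the signal generated above, this time with dynamic programming:

algo = rpt.Dynp(model="l2", min_size=2, jump=5).fit(signal)
result = algo.predict(n_bkps=n_bkps)  # exact segmentation for a known number of changes

rpt.display(signal, bkps, result)
plt.show()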

General information

Contact

For questions about this package, its use, and bug reports, use the issue page of the ruptures repository. For other inquiries, you can contact me here.

Important links

  • Documentation: link.
  • PyPI package index: link

Dependencies and install

Installation instructions can be found here.

Changelog

See the changelog for a history of notable changes to ruptures.

Thanks to all our contributors

License

This project is under the BSD license.

BSD 2-Clause License

Copyright (c) 2017-2021, ENS Paris-Saclay, CNRS
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Comments
  • perf: Rand index computation

    Greetings.

    I believe there is a minor mistake in the scaling done in the hamming method; it should be n_samples*(n_samples-1).

    I also provided an additional implementation of the Rand index under the method randindex_cpd. This implementation is based on a specific expression for the metric on change-point problems. If we have N samples and change-point sets of sizes r and s, this algorithm runs in O(r+s) time and O(1) memory. The traditional implementation runs in O(rs+N) time and O(rs) memory, although your implementation might use less due to sparsity. In the profiling I did, the new implementation performs some orders of magnitude better, depending on N.

    I can provide the proof of correctness of the algorithm if requested.
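
    For illustration, here is a minimal sketch of such an O(r+s) computation. It assumes the ruptures convention that breakpoint lists are sorted and end at n_samples; the helper names are mine, not those of the PR:

    def pair_count(k):
        """Number of unordered pairs among k samples."""
        return k * (k - 1) // 2

    def randindex_cpd_sketch(bkps1, bkps2):
        """Rand index between two segmentations given as breakpoint lists."""
        n_samples = bkps1[-1]
        # same-cluster pairs within each segmentation
        same1 = sum(pair_count(b - a) for a, b in zip([0] + bkps1[:-1], bkps1))
        same2 = sum(pair_count(b - a) for a, b in zip([0] + bkps2[:-1], bkps2))
        # merging both breakpoint lists yields their common refinement, whose
        # segments give the non-zero contingency counts: only O(r+s) of them
        merged = sorted(set(bkps1) | set(bkps2))
        both = sum(pair_count(b - a) for a, b in zip([0] + merged[:-1], merged))
        total = pair_count(n_samples)
        # agreements = pairs together in both + pairs apart in both
        return (total + 2 * both - same1 - same2) / total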

    Thanks.

    opened by Lucas-Prates 20
  • ci: add pytest-cov to the pypi gh action

    The GH action that releases to PyPI runs pytest ruptures/tests (see here), which requires pytest-cov (specified here).

    To run the action, pytest-cov must be installed within cibuildwheel; otherwise it runs into an error.

    Type: CI 
    opened by deepcharles 13
  • Detect only increasing trends or changes

    Is there a way in the current scheme to detect only positive changes?

    Currently, I am using the window method.

    import matplotlib.pyplot as plt
    import ruptures as rpt

    def rupture_changepoint(points):
        # drop missing values, then reshape to a column vector
        points = points.dropna()
        values = points.values.reshape((-1, 1))
        algo = rpt.Window(width=10, model="l2").fit(values)
        my_bkps = algo.predict(pen=60)
        print(my_bkps)
        fig, (ax,) = rpt.display(values, my_bkps, my_bkps, figsize=(10, 6))
        plt.show()
    

    Got the break points as [30, 40, 50, 100, 121]


    Here, I don't want the decreasing trend to appear as a breakpoint; I am only interested in the [40, 50] change.

    Please point me to the specific files that need to be changed if it's not straightforward.
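
    In case it helps frame the question, this is the kind of post-hoc filter I have in mind (a hypothetical helper, not part of ruptures), keeping only breakpoints where the segment mean increases:

    import numpy as np

    def increasing_bkps(values, bkps):
        """Keep breakpoints whose following segment has a higher mean."""
        starts = [0] + bkps[:-1]
        keep = []
        for prev_start, bkp, next_end in zip(starts[:-1], bkps[:-1], bkps[1:]):
            if np.mean(values[bkp:next_end]) > np.mean(values[prev_start:bkp]):
                keep.append(bkp)
        return keep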

    Thank you,

    opened by mancunian1792 10
  • How to select a proper method for detecting onset of signals?

    I need to detect the change point of seismic waves; the onset seems quite obvious by visual inspection, but CostAR and CostL2 seem to fail to detect it. Could you give me some suggestions on how to select a proper cost function among the ones presented?
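
    For reference, this is the kind of comparison I am running (a sketch; signal stands for one seismic trace as a 1D NumPy array):

    import ruptures as rpt

    # compare the single change point found under several cost functions
    for model in ["l1", "l2", "rbf", "normal", "ar"]:
        algo = rpt.Binseg(model=model).fit(signal)
        print(model, algo.predict(n_bkps=1))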

    opened by yijun1994 8
  • Trying to add ruptures code

    I am trying to add the ruptures code, but unfortunately I couldn't get any result.

    here is the code :

    @app.callback(
        dash.dependencies.Output(component_id='chart_1', component_property="figure"),
        [dash.dependencies.Input('btn-nclicks-1', 'n_clicks')],
        [dash.dependencies.State('drop_down_1', 'value')],
        [dash.dependencies.State('drop_down_2', 'value')],
        [dash.dependencies.State('drop_down_3', 'value')],
        [dash.dependencies.State('drop_down_4', 'value')],
        prevent_initial_call=True,
    )
    def update_values_app1(_, regions, indicators, start_date, end_date):
        fig = go.Figure()

    max_value_dict = dict()
    
    if type(regions) == str:
        regions = [regions]
    
    for region in regions:
        cd = global_df.loc[(global_df.loc[:, "country_region"] == region)]
        cd = cd.loc[((cd.loc[:, "date"] >= start_date) & (cd.loc[:, "date"] <= end_date))]
        x = cd["date"].values
    
        for iid, indicator in enumerate(indicators):
            y = cd[indicator].values
            notna_mask = ~np.isnan(y)
    
            if notna_mask.sum() == 0:
                max_value = 1
            else:
                max_value = max(abs(np.nanmax(y)), abs(np.nanmin(y)))
    
            if indicator not in max_value_dict:
                max_value_dict[indicator] = max_value
            else:
                max_value_dict[indicator] = max(max_value, max_value_dict[indicator])
    
            if iid == 0:
                fig = fig.add_trace(
                    go.Scatter(x=x, y=y, mode='lines', name=region + " " + indicator,
                               line=dict(width=4)))
            else:
                fig = fig.add_trace(
                    go.Scatter(x=x, y=y, mode='lines', name=region + " " + indicator, yaxis='y{}'.format(iid + 2),
                               line=dict(width=4)))
    
    fig.update_layout(title="", xaxis_title="Date", yaxis_title="", legend_title="Indicators",
                      font=dict(family="Arial", size=20, color="dark blue"))
    
    y_axis_label_width = 0.08
    
    fig.update_layout(xaxis=dict(domain=[y_axis_label_width * (len(indicators) - 1), 1.0]))
    
    for iid, indicator in enumerate(indicators):
        y_range = [-max_value_dict[indicator], max_value_dict[indicator]]
        if iid == 0:
            fig.update_layout({'yaxis': dict(title=indicator, constraintoward='center', position=0,
                                             range=y_range)})
        else:
            fig.update_layout({'yaxis{}'.format(iid + 2): dict(title=indicator, overlaying="y", side="left",
                                                               constraintoward='center', position=y_axis_label_width * iid,
                                                               range=y_range)})
    
    fig.update_layout(legend=dict(font=dict(family="Arial", size=30, color="black")),
                      legend_title=dict(font=dict(family="Arial", size=35, color="blue")))
    fig.update_layout(legend=dict(orientation="h", yanchor="bottom", y=1.02, xanchor="right", x=1))
    
    fig.update_layout(margin=dict(t=250))
    fig.update_layout(xaxis_tickangle=0)
    fig.update_xaxes(showline=True, linewidth=2, linecolor='black')
    fig.update_yaxes(showline=True, linewidth=2, linecolor='black')
    fig.update_xaxes(zeroline=True, zerolinewidth=2, zerolinecolor='red')
    fig.update_yaxes(zeroline=True, zerolinewidth=2, zerolinecolor='red')
    
    return fig
    
    opened by waad876 7
  • Use `KernSeg` with model selection as described in JMLR paper

    Sylvain Arlot, Alain Celisse and Zaid Harchaoui provide theory and a heuristic for model selection with KernSeg in their paper A Kernel Multiple Change-point Algorithm via Model Selection. See 3.3.2, Theorem 2 and appendix B.3.

    The penalty they propose does not scale linearly with the number of change points, so sadly it is incompatible with the current implementation. Furthermore, the heuristic they propose requires knowledge of the respective losses for a set of possible numbers of split points, which currently (to the best of my understanding) cannot be recovered without expensive refits.
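
    (A partial workaround I considered: collect the losses with a dynamic-programming search, which caches intermediate computations. A sketch, assuming an rbf model and a fitted signal array; it does not remove the cost of the search itself:)

    import ruptures as rpt

    algo = rpt.Dynp(model="rbf").fit(signal)
    losses = {
        k: algo.cost.sum_of_costs(algo.predict(n_bkps=k)) for k in range(1, 11)
    }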

    It would be great if this could be added.

    opened by mlondschien 7
  • fix(CostNormal): add small bias in covariance matrix

    On signals with truly constant segments, the CostNormal detection fails because the covariance matrix is badly conditioned, resulting in an infinite value for the cost function. See #196.
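
    The idea in a nutshell (a sketch of the approach, not the actual patch):

    import numpy as np

    def regularized_cov(sub_signal, eps=1e-6):
        """Empirical covariance with a small bias to keep it well conditioned."""
        cov = np.atleast_2d(np.cov(sub_signal.T))
        return cov + eps * np.eye(cov.shape[-1])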

    Type: Fix 
    opened by deepcharles 7
  • AttributeError: 'kernel_hyper' object has no attribute 'signal'

    import ruptures as rpt
    print(rpt.__version__)

    1.1.7

    lst_cp_det = list(np.sort(cp_detector.predict(n_bkps=n_cp)))

    ruptures\detection\dynp.py", line 132, in predict
        n_samples=self.cost.signal.shape[0],
    AttributeError: 'kernel_hyper' object has no attribute 'signal'
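
    In case it helps triage: a custom cost apparently must store the signal in fit, since Dynp.predict reads self.cost.signal. A sketch of a minimal compliant cost, with kernel_hyper standing in for my class:

    import numpy as np
    from ruptures.base import BaseCost

    class kernel_hyper(BaseCost):
        model = ""
        min_size = 2

        def fit(self, signal):
            self.signal = signal  # storing the signal avoids the AttributeError
            return self

        def error(self, start, end):
            sub = self.signal[start:end]
            return np.sum((sub - sub.mean(axis=0)) ** 2)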

    opened by 943fansi 6
  • fix: lru-cache usage

    Hi!

    I tried to use copy.deepcopy to copy a Binseg object and ran into a problem: binseg.single_bkp and deepcopy(binseg).single_bkp referred to the same object. As a result, I cannot use binseg_copy.single_bkp because it tries to use the data from the original object.

    It seems to be caused by the way lru_cache is used. Are there any reasons not to use it with the @ decorator syntax?
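
    A minimal reproduction of the mechanism, outside ruptures (the class name is made up):

    from copy import deepcopy
    from functools import lru_cache

    class Wrapper:
        def __init__(self):
            # wrapping a bound method at __init__ time, as Binseg does
            self.cached = lru_cache(maxsize=None)(self._work)

        def _work(self):
            return id(self)

    obj = Wrapper()
    copied = deepcopy(obj)
    # the cached callable is shared, so the copy calls the original's method
    print(obj.cached() == copied.cached())  # True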

    Type: Fix 
    opened by julia-shenshina 6
  • Mac C library makes KernelCPD non-deterministic, but it is deterministic in a Linux container

    Repro case:

    import ruptures as rpt
    import numpy as np

    new_list = [-0.0155, 0.0194, 0.0289, 0.0071, -0.0059, -0.0102, 0.0046, 0.0218,
                0.0153, 0.0491, 0.016, 0.0365, 0.0388, 0.0516, 0.0222, 0.0019,
                -0.0418, 0.0, -0.0262, 0.0468, 0.0, 0.0311, 0.0341, -0.0,
                0.0569, 0.0206, 0.0336, 0.0615]
    trend_error = np.asarray(new_list)

    results = set()
    for _ in range(10000):
        change_points = (
            rpt.KernelCPD(kernel="rbf", min_size=7)
            .fit(trend_error)
            .predict(pen=1.0)
        )
        results.add(len(change_points))

    print(results)

    When running this on a Mac, I get two different values in results: 2 and 4. But when running it in a Linux container, it consistently yields just 2.

    When running rpt.Pelt(model="rbf", jump=1, min_size=7) I repeatedly get a deterministic result (which matches the result of 2).

    opened by josh-boehm 6
  • Estimating confidence that a breakpoint occurs in a 1D array

    I have a few hundred 1-D timeseries, each ~120 points. Each timeseries may or may not have one or more breakpoints caused by changes in instrumentation, with the timing and nature of the instrumentation change differing from timeseries to timeseries and not known in advance for any of them. I am interested in estimating some measure of confidence for each timeseries that at least one breakpoint exists for that timeseries (in addition to when that breakpoint occurs).

    What would you recommend using for this? It sounds to me like a statistical test based on BinSeg with n=1 breakpoints, but I'm new to breakpoints overall and to the ruptures package, so it's not obvious to me whether that's conceptually correct, nor how to do it with ruptures. Apologies if I'm moving too quickly and thus missing something clear in the docs.
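
    To make the question concrete, here is the comparison I imagine, sketched with assumed names; whether this gain can be turned into a calibrated confidence is exactly what I am unsure about:

    import ruptures as rpt

    algo = rpt.Binseg(model="l2").fit(signal)  # signal: one 1D timeseries
    bkps = algo.predict(n_bkps=1)
    gain = algo.cost.sum_of_costs([len(signal)]) - algo.cost.sum_of_costs(bkps)
    # larger gain = stronger evidence for at least one breakpoint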

    opened by spencerahill 6
  • Adding New metrics to ruptures

    Hi!

    Following the issue I opened about the possibility of adding more metrics (#278), here is a PR to start working on the additional metrics.

    For now I'm starting with an implementation of the adjusted Rand index based on scikit-learn's implementation, to have a first base to start with.

    I also took care of adding scikit-learn to the dependencies; I ran the tests and the installation works fine.

    @deepcharles How do you want to proceed? Do you already have a set of additional metrics in mind that could be added in addition to the adjusted Rand index and the Intersection over Union?

    Cheers,

    Romain

    opened by rfayat 0
  • More metrics? e.g. adjusted Rand index

    Hi!

    First of all thanks a lot for this great toolbox (and for the great review that comes with it).

    While using ruptures I noticed that only a few metrics were available for comparing two segmentations and thought it would maybe be a good idea to implement additional ones.

    For instance, it is super straightforward to implement the adjusted Rand index by leveraging the efficient implementation of the adjusted Rand score in scikit-learn (although adding this to ruptures would imply adding scikit-learn as a dependency):

    import numpy as np
    from sklearn.metrics import adjusted_rand_score
    
    
    def chpt_to_label(bkps):
        """Return the segment index each sample belongs to.
        
        Example
        -------
        >>> chpt_to_label([4, 10])
        array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
        """
        duration = np.diff([0] + bkps)
        return np.repeat(np.arange(len(bkps)), duration)
    
    
    def randindex_adjusted(bkps1, bkps2):
        "Compute the adjusted rand index for two lists of changepoints."
        label1 = chpt_to_label(bkps1)
        label2 = chpt_to_label(bkps2)
        return adjusted_rand_score(label1, label2)
    

    I guess there are many more similar metrics (Intersection over Union, ...) that could be added to ruptures in order to make the package even more complete than it is right now.

    Of course this is simply a suggestion and I would be happy to give a hand if you think this direction would indeed be interesting to pursue :)

    Cheers,

    Romain

    opened by rfayat 2
  • multidimensional data

    Hello, I am wondering whether the ruptures library works with multidimensional data or not.

    Please look at the uploaded image: I used ruptures in my code, and it shows the change point lines as in the image. So if it works correctly with multidimensional data, can you tell me how?
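
    For reference, the minimal multidimensional check I compared against (a sketch adapted from the README example; ruptures expects signals of shape (n_samples, n_dims)):

    import ruptures as rpt

    signal, bkps = rpt.pw_constant(500, 3, 4, noise_std=2)  # 3-dimensional signal
    algo = rpt.Pelt(model="rbf").fit(signal)
    print(algo.predict(pen=10))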

    Regards,

    opened by waad876 2
  • Docs: Cost function illustrations

    Super neat library! The API feels very well-designed 🤩

    Reading the documentation, I miss a couple of things. Another (after #275) is grokking the cost functions. For me, plots work wonders, so I would love to be presented with a plot of a typical signal they would be useful on. For instance, I imagine that CostLinear is useful for something like the signal in the attached image. I have no clue what the CostCLinear prototypical signal would look like, however 😄
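
    Something in this direction would already help (a sketch; I assume rpt.pw_linear generates the kind of piecewise linear signal CostLinear targets):

    import matplotlib.pyplot as plt
    import ruptures as rpt

    signal, bkps = rpt.pw_linear(500, 1, 3, noise_std=1.0)
    rpt.display(signal, bkps)
    plt.show()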

    Existing work

    I see there's something in the Piecewise Gaussian generation page. It takes a little time to grok, but I got it now. I wish for something like this in the Cost function pages, too 🙏

    opened by thorbjornwolf 1
  • Docs: Penalty term explainer

    Super neat library! The API feels very well-designed 🤩

    Reading the documentation, I miss a couple of things. One of them is a central description of what pen is, and a general strategy for setting it or getting the right order of magnitude, or reasoning about why no such strategy exists.

    Perhaps something like (modified from #271)

    The penalty value is a positive float that controls how many changes you want (higher values yield fewer changepoints). Finding a correct value is really dependent on your situation, but as a rule of thumb pen can be initialized to around [rule of thumb] and tweaked up and down from there.

    Existing work

    In Binseg and sibling models there's this magic incantation:

    my_bkps = algo.predict(pen=np.log(n) * dim * sigma**2)
    

    Was it produced with some rule of thumb? In the advanced usage kernel article, it is set twice, to values two orders of magnitude apart:

    penalty_value = 100  # beta
    penalty_value = 1  # beta
    

    The suggestion in #271 for reading an article is fine; what I lack is a paragraph or two somewhere visible. The penalty term seems important enough to be worth it.
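
    In the meantime, my workaround is to scan penalties across orders of magnitude and watch the number of detected changes (a sketch, with signal standing for any array passed to fit):

    import numpy as np
    import ruptures as rpt

    algo = rpt.Pelt(model="rbf").fit(signal)
    for pen in np.logspace(-1, 3, 9):
        n_cp = len(algo.predict(pen=pen)) - 1  # last breakpoint is n_samples
        print(f"pen={pen:.2f}: {n_cp} change points")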

    opened by thorbjornwolf 1
Releases (v1.1.7)
  • v1.1.7(Jul 7, 2022)

    🐛 Bug Fixes

    • fix(CostL2): set min_size to 1 @deepcharles (#255)
    • fix: Allow Binseg to hit minsize bounds for segments @oboulant (#249)

    📚 Documentation

    • docs(typo): Fixed syntax errors @shanks847 (#250)
    • docs: Ensemble dimensions @theovincent (#248)
    • docs: pw_normal @theovincent (#241)

    🧰 Maintenance

    • ci(docs): fix the doc publishing job @deepcharles (#261)
    • chore: do not pin mkdocs version @oboulant (#247)
  • v1.1.6(Jan 19, 2022)

    Changes

    • perf: Rand index computation @Lucas-Prates (#222)

    🐛 Bug Fixes

    • fix(datasets): propagate random seed for consistency @oboulant (#221)
    • fix: random behaviour of KernelCPD (Pelt) @deepcharles (#213)

    📚 Documentation

    • docs: update landing page @deepcharles (#215)
    • docs: update license in readme @deepcharles (#214)

    🧰 Maintenance

    • ci: remove coverage from the wheel testing process @deepcharles @oboulant (#229)
    • build: adding support py310 @deepcharles @oboulant (#228)
    • ci: skipping wheel building for Linux 32-bit @deepcharles @oboulant (#227)
    • ci: add pytest-cov to the pypi gh action @deepcharles (#225)
    • ci: make coverage computation work @oboulant (#220)
    • ci: add tests in CI for windows python v3.9 @oboulant (#219)
  • v1.1.5(Oct 5, 2021)

    Changes

    • test: better handling of mahalanobis @deepcharles (#211)
    • test: remove coverage computation from test @oboulant (#192)
    • build: add aarch64 wheel build support @odidev (#180)

    🐛 Bug Fixes

    • fix: use cibuildwheel package and add tests @oboulant (#210)
    • fix: CostMl fitting behaviour @deepcharles (#209)
    • fix(CostNormal): add small bias in covariance matrix @deepcharles (#198)
    • fix: sanity_check usage when n_bkps is not explicitly specified @oboulant (#190)
    • fix: lru-cache usage @julia-shenshina (#171)
    • fix: store signal in CostMl object for compatibility reasons @oboulant (#173)

    📚 Documentation

    • docs: update changelog @deepcharles (#205)

    🧰 Maintenance

    • fix: use cibuildwheel package and add tests @oboulant (#210)
    • ci(gh-tests): change pip version for test jobs @oboulant (#204)
    • style: add module imported but unused in flake8 @oboulant (#191)
    • ci: do not triggers some gh actions for pre-commit-ci-update-config branch @oboulant (#186)
  • v1.1.4(Jun 9, 2021)

    Changes

    🚀 Features

    • feat(display): more display options @earthgecko (#160)

    🐛 Bug Fixes

    • fix: enforce deepcopy for costar to avoid inplace modifications @oboulant (#164)
    • fix(kernelcpd): add early check on segmentation parameters @oboulant (#128)

    📚 Documentation

    • docs: add text segmentation example @oboulant (#142)
    • docs: fix typo in relative path @oboulant (#137)

    🧰 Maintenance

    • build: add nltk dependency to run docs examples @oboulant (#168)
    • ci: custom commit message for autoupdate PRs @deepcharles (#154)
    • chore: set manually a working version of pip for install @oboulant (#153)
    • ci: add dependency for librosa @deepcharles (#121)
  • v1.1.3(Feb 12, 2021)

    Changes

    • ci: use joerick/[email protected] action in upload to pypi @deepcharles (#120)
    • chore: pre-commit autoupdate @pre-commit-ci (#118)
    • chore: pre-commit autoupdate @pre-commit-ci (#112)
    • test(kernelcpd): exhaustive test for kernelcpd @oboulant (#108)
    • test: Improve test coverage @oboulant (#106)
    • test(costlinear): add test for CostLinear @oboulant (#105)
    • ci: fix release-drafter and pr-semantic-check @deepcharles (#103)
    • ci: add gh actions for PR labeling @deepcharles (#102)
    • docs(readme): display codecov badge @oboulant (#101)
    • build: use PEP517/518 conventions @deepcharles (#100)

    🐛 Bug Fixes

    • fix(KernelCPD): explicitly cast signal into double @oboulant (#111)
    • fix(costclinear): make multi-dimensional compatible @oboulant (#104)

    📚 Documentation

    • docs: fix typo @deepcharles (#119)
    • docs: add music segmentation example @oboulant (#115)

    🧰 Maintenance

    • build: use oldest-supported-numpy @deepcharles (#114)
    • build: cleaner build process @deepcharles (#107)
  • v1.1.2(Jan 1, 2021)

    What's new

    Added

    • 12cbc9e feat: add piecewise linear cpd (#91)
    • a12b215 test: add code coverage badge (#97)
    • 2e9b17f docs: add binder for notebooks (#94)
    • da7544f docs(costcosine): add entry for CostCosine in docs (#93)
    • 8c9aa35 build(setup.py/cfg): add build_ext to setup.py (#88)
    • 10ef8e8 build(python39): add py39 to supported versions (#87)

    Changed

    • 069bd41 fix(kernelcpd): bug fix in pelt (#95)
    • b4abc34 fix: memory leak in KernelCPD (#89)
  • v1.1.1(Nov 26, 2020)

  • v1.0.6(Oct 23, 2020)

  • v1.0.5(Jul 22, 2020)

  • v1.0.4(Jul 22, 2020)

Owner
Charles T.
Researcher in machine learning/computer science