Segment axon and myelin from microscopy data using deep learning

Overview


Segment axon and myelin from microscopy data using deep learning. Written in Python, AxonDeepSeg is based on a convolutional neural network architecture using the TensorFlow framework. Pixels are classified as axon, myelin, or background.
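As a minimal illustration of the three-class output (the probabilities below are hypothetical; the actual model and post-processing live in the AxonDeepSeg codebase):

```python
import numpy as np

# Hypothetical per-pixel class probabilities from a CNN,
# shape (height, width, 3) for background, myelin, axon.
probs = np.array([
    [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]],
    [[0.1, 0.2, 0.7], [0.3, 0.3, 0.4]],
])

# Each pixel is assigned the class with the highest probability.
labels = np.argmax(probs, axis=-1)  # 0 = background, 1 = myelin, 2 = axon
```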

For more information, see the documentation website.


Help

Whether you are a newcomer or an experienced user, we will do our best to help and reply to you as soon as possible. Of course, please be considerate and respectful of all people participating in our community interactions.

  • If you encounter difficulties during installation and/or while using AxonDeepSeg, or have general questions about the project, you can start a new discussion on the AxonDeepSeg GitHub Discussions forum. We also encourage you, once you've familiarized yourself with the software, to continue participating in the forum by helping answer future questions from fellow users!
  • If you encounter bugs during installation and/or use of AxonDeepSeg, you can open a new issue ticket on the AxonDeepSeg GitHub issues webpage.

FSLeyes plugin

A tutorial demonstrating the installation procedure and basic usage of our FSLeyes plugin is available on YouTube.

References

Citation

If you use this work in your research, please cite it as follows:

Zaimi, A., Wabartha, M., Herman, V., Antonsanti, P.-L., Perone, C. S., & Cohen-Adad, J. (2018). AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks. Scientific Reports, 8(1), 3816. Link to paper: https://doi.org/10.1038/s41598-018-22181-4.

Copyright (c) 2018 NeuroPoly (Polytechnique Montreal)

Licence

The MIT License (MIT)

Copyright (c) 2018 NeuroPoly, École Polytechnique, Université de Montréal

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Contributors

Pierre-Louis Antonsanti, Stoyan Asenov, Mathieu Boudreau, Oumayma Bounou, Marie-Hélène Bourget, Julien Cohen-Adad, Victor Herman, Melanie Lubrano, Antoine Moevus, Christian Perone, Vasudev Sharma, Thibault Tabarin, Maxime Wabartha, Aldo Zaimi.

Comments
  • Refactored data augmentation, changed loss function, cleaned notebooks and other improvements


    This major PR improves model performance and brings an improved version of the data augmentation.

    DONE

    • Implemented data augmentation (Albumentations library) similar to the previous version of ADS

    • Changed the loss function from cross entropy to the Dice coefficient to improve model performance, as indicated in issue #19.

    • Changed interpolation from linear to nearest neighbour

    • Cleaned the notebooks and removed irrelevant ones, as indicated in #148

    • Migrated models to OSF storage to prevent bloating of the repository

    Fixes #148, Fixes #19, Fixes #241, Fixes #278, Fixes #240, Fixes #273
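    For reference, the Dice loss mentioned above can be sketched in NumPy (a generic soft-Dice formulation, not the exact implementation from this PR):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|).

    pred   -- predicted probabilities in [0, 1]
    target -- binary ground-truth mask
    """
    intersection = np.sum(pred * target)
    denominator = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denominator + eps)

# A perfect prediction yields a loss of 0; a fully wrong one approaches 1.
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
perfect = dice_loss(mask, mask)        # 0.0
worst = dice_loss(mask, 1.0 - mask)    # ~1.0
```

    Unlike cross entropy, the Dice loss directly optimizes region overlap, which is why it often behaves better on class-imbalanced segmentation masks.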

    opened by vasudev-sharma 75
  • Implement Ellipse Minor Axis as Diameter


    Following the discussions in #363 and #349, this PR implements computing the axon diameter using the minor axis of a fitted ellipse.

    DONE:

    • [x] Implement minor axis as an additional feature to compute the diameter of an axon, thickness of myelin, diameter of axon_myelin
    • [x] To let the user choose between the minor axis and the equivalent diameter when computing morphometrics, the boolean variable ellipse can be manually set to True or False.
    • [x] Made the necessary changes in the 04-compute-morphometrics.ipynb notebook file, allowing the user to set their choice for diameter computation.
    • [x] Added comprehensive tests to test this new feature.
    • [x] Implement similar behaviour in the FSLeyes plugin, where the user is prompted to choose between the equivalent diameter and the ellipse minor axis. A separate issue was opened for this (see #432); it will be dealt with in a separate PR.
    • [x] Add documentation for this feature in notebook 04-morphometrics_extraction.ipynb
    • [x] Add a flag to select the shape of the axons
    • [x] Documentation: Add literature for axon shape (circle and ellipse)
    • [x] Add cli tests for the axon shape -a flag

    What are the main contributions of this PR?

    1. Implements the ellipse minor axis as an additional way to compute morphometrics
    2. For generating morphometrics via the CLI, adds a flag -a to select the axon shape; refer to the docs for usage
    3. Updates the RTD documentation

    NOTE: The default behaviour is the equivalent diameter (circle) for measuring morphometrics. However, if the user wants to consider the axon as an "oblong ellipse", they can set the ellipse boolean variable to True.

    Fixes #363, #349
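    The two diameter definitions can be illustrated with NumPy (an illustrative computation on a binary mask, using roughly the same second-moment convention as scikit-image's minor_axis_length; the PR itself relies on the project's morphometrics code):

```python
import numpy as np

def diameters(mask):
    """Equivalent diameter vs ellipse minor axis for one binary region."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    # Diameter of the circle with the same area as the region.
    d_equiv = np.sqrt(4.0 * area / np.pi)
    # Fit an ellipse via the second central moments of the pixel coordinates.
    coords = np.stack([ys, xs], axis=1).astype(float)
    cov = np.cov(coords, rowvar=False, bias=True)
    eigvals = np.linalg.eigvalsh(cov)
    d_minor = 4.0 * np.sqrt(eigvals.min())
    return d_equiv, d_minor

# A filled 11x21 rectangle: the minor axis follows the short side,
# while the equivalent diameter reflects the total area.
mask = np.zeros((30, 30), dtype=bool)
mask[5:16, 3:24] = True
d_equiv, d_minor = diameters(mask)
```

    For elongated axons, the minor axis is smaller than the equivalent diameter, which is exactly why the choice of definition matters for the morphometrics.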

    feature 
    opened by vasudev-sharma 46
  • Add FSLeyes plugin


    This PR implements the following changes:

    • Changed numpy and scikit-image versions
    • Implemented a GUI design for the plugin
    • Implemented an image loader for the plugin
    • Implemented buttons on the control panel of the plugin to apply a prediction model
    • Implemented a button to load existing masks
    • "Active" images/masks are determined by their visibility status (eye icon on the overlay list)
    • Added the following tools: watershed segmentation, axon auto-fill

    Fixes #159, Fixes #162, Fixes #191, Fixes #192, Fixes #193, Fixes #201, Fixes #209

    TODO

    • Write tests for the plugin (and add to Travis) - Will be addressed in https://github.com/neuropoly/axondeepseg/issues/224

    How to test / install

    The installation procedure can be found here: https://github.com/neuropoly/axondeepseg/blob/FSLeyes_integration/docs/source/documentation.rst

    Tools description

    Tooltips were added to the GUI. If you hover your cursor over a button on the plugin, a description should pop up.

    opened by Stoyan-I-A 45
  • Release version 4.0.0


    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [x] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Release version 4.0.0 of AxonDeepSeg, which integrates IVADOMED into the project and provides Mac M1 compatibility.

    Linked issues

    Resolves #523, #536

    enhancement feature installation dependencies refactoring 
    opened by mathieuboudreau 43
  • Change how ADS dependencies are installed


    This branch is a child of the branch from the fork in #441, so that PR needs to be merged first. I had to branch out of that PR because we updated the OSF filenames, and the tests fail in the meantime.

    This PR seeks to resolve the tests failing in #441 (highlighted here), and also simplifies the installation of FSLeyes by merging the requirements.txt file and the FSLeyes installation commands into a single environment.yml file. (I thought @jcohenadad had opened an issue about this at a previous meeting, but maybe we just discussed it.) Now, all the tools will be installed at the conda venv creation stage, instead of afterwards. pip install -e . is still needed to install AxonDeepSeg itself.

    With this PR, FSLeyes will always be installed by default in the conda environment.

    With this PR, I don't think including ADS on PyPI is a viable option anymore. Adding it to conda-forge may be possible, though.

    To do:

    • [x] Resolve the failing test
      • This is likely due to one of the packages pulling the latest version instead of the fixed version in the previous requirements file.
    • [x] Someone with a Linux machine needs to test FSLeyes locally to make sure the GUI actually works.
    • [x] Update documentation on how to install AxonDeepSeg.
    • [x] Once this PR passes the travis tests, squash merge #441 before merging this one so that the diff is cleaner.
    opened by mathieuboudreau 38
  • Move to Python 3.6 compatibility


    This branch isn't ready for merging yet, please stand by. I'm simply making this PR to see the merge conflicts. There are still 3 failing tests and 1 errored test on Windows.

    opened by mathieuboudreau 34
  • Add pre-commit hooks


    This PR aims to use pre-commit hooks to limit the file size. We wish to set a limit on the file sizes so that contributors don't commit massive files in the repo.

    pre-commit has been added to prevent files > 500 KB from being committed, perform a YAML syntax check, and clear the outputs of Jupyter notebook files.

    Aside from the local pre-commit hooks, checks using pre-commit hooks were also added to Travis CI.

    The changes implemented are similar to what was implemented in the sister projects (see here and here).

    3 checks are being done using pre-commit hooks both locally and on Travis CI.

    • Large files greater than 500 KB are prevented from being committed.
    • YAML file syntax is checked.
    • Jupyter notebook outputs are cleared --> this hook clears the cell outputs in case you commit notebooks with outputs. At commit time, it clears the outputs of the executed cells; when you then re-commit, the notebooks are committed with no cell outputs.
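    The three checks above could be wired up with a configuration along these lines (a sketch only; the hook repos and versions shown are illustrative assumptions, not the project's actual .pre-commit-config.yaml):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-added-large-files   # reject files larger than 500 KB
        args: ['--maxkb=500']
      - id: check-yaml                # validate YAML syntax
  - repo: https://github.com/kynan/nbstripout
    rev: 0.6.1
    hooks:
      - id: nbstripout                # clear Jupyter notebook outputs
```

    check-added-large-files and check-yaml come from the standard pre-commit-hooks collection; notebook output stripping is shown here via nbstripout, which may differ from the hook the project actually uses.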

    Instructions to test this PR.

    1. In your virtual environment, first run conda env update --name ads_venv --file environment.yml

    2. Run pip install -e .

    3. You can now try to test each of the hooks individually.

      3.1 (Hook for files > 500 KB): Try to commit any ADS model, or run all the cells of 00-getting_started.ipynb (after running all the cells, the size of this file will be around 1.5 MB). Now, try to commit the model, the notebook, or both. The expected behaviour is that this pre-commit hook won't allow you to commit these files, since their size is > 500 KB.

      3.2 (Hook for YAML file syntax): Change the syntax of the .travis.yml file (i.e. introduce incorrect syntax), then try to commit it. The expected behaviour is that this pre-commit hook won't allow you to commit a YAML file with incorrect syntax.

      3.3 (Hook for notebook outputs): Execute the cells of one of the notebook files, then try to commit the notebook with its cell outputs. The commit will modify the notebook so that the cell outputs are cleared; you can then commit the notebook with cleared outputs.

    Linked Issues

    Fixes #423

    dependencies ci 
    opened by vasudev-sharma 32
  • Improve and force imread/imwrite conversion to 8bit int


    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [ ] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Changes how 8-bit depth conversion is done in ADS's imread, removes the optional bit depth argument for this function (it appeared to have been unused for a long time, always using the default value of 8), and adds a test verifying that the same image, saved at different int and float precisions and loaded with ads.imread, outputs the same 8-bit image array.
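    A generic version of such a conversion can be sketched as follows (min-max rescaling to uint8 is an assumption for illustration; it is not necessarily the exact strategy used by ads.imread):

```python
import numpy as np

def to_uint8(img):
    """Rescale an int/float image array to 8-bit, mapping min->0 and max->255."""
    img = np.asarray(img, dtype=np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # constant image: avoid division by zero
        return np.zeros(img.shape, dtype=np.uint8)
    return np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)

# The same image stored at different precisions converts to the same 8-bit array.
img16 = np.array([[0, 32768], [65535, 16384]], dtype=np.uint16)
imgf = img16.astype(np.float32) / 65535.0
```

    Normalizing through a common path like this is what makes the precision-invariance test described above possible.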

    Linked issues

    Resolves #175

    processing testing 
    opened by mathieuboudreau 31
  • No matching distribution found for tensorflow==1.3.0


    Using 2bee818b5be963b11f57733b110f1818daebf402 on rosenberg, I cannot properly install tensorflow==1.3.0:

    [...]
    Collecting tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0)
      Could not find a version that satisfies the requirement tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0) (from versions: )
    No matching distribution found for tensorflow==1.3.0 (from AxonDeepSeg==2.2.dev0)
    (venv_ads) [[email protected] axondeepseg]$ pip install tensorflow==1.3.0
    Collecting tensorflow==1.3.0
      Could not find a version that satisfies the requirement tensorflow==1.3.0 (from versions: )
    No matching distribution found for tensorflow==1.3.0
    (venv_ads) [[email protected] axondeepseg]$ pip -V
    pip 18.1 from /home/jcohen/miniconda3/envs/venv_ads/lib/python3.7/site-packages/pip (python 3.7)
    (venv_ads) [[email protected] axondeepseg]$ python
    Python 3.7.1 (default, Oct 23 2018, 19:19:42) 
    [GCC 7.3.0] :: Anaconda, Inc. on linux
    
    installation 
    opened by jcohenadad 30
  • Fix Naming Convention


    Fixes #439

    OSF: test files to upload on OSF (test_files.zip)

    DONE:

    • [x] fix naming convention on FSLeyes plugin
    • [x] fix naming convention in Notebooks
    • [x] Fix naming convention in apply_model.py script
    • [x] Upload test files on OSF

    TODO:

    • [ ] Add documentation on Wiki

    To test this PR :

    The segmented image names should follow a common convention, that is:

    1. image_name_seg-axonmyelin.png (axon + myelin segmented mask)
    2. image_name_seg-axon.png (axon mask)
    3. image_name_seg-myelin.png (myelin mask)
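    As an illustration, the convention can be derived with pathlib (a hypothetical helper, not the project's actual code):

```python
from pathlib import Path

def seg_paths(image_path):
    """Derive the segmentation mask filenames for a given input image."""
    p = Path(image_path)
    stem = p.stem  # image name without extension
    return {
        "axonmyelin": p.with_name(f"{stem}_seg-axonmyelin.png"),
        "axon": p.with_name(f"{stem}_seg-axon.png"),
        "myelin": p.with_name(f"{stem}_seg-myelin.png"),
    }

paths = seg_paths("data/image_name.tif")
```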

    To check that the naming convention is followed, verify that ADS segments images adhering to it in the cases below.

    1. FSLeyes: Test the Apply ADS Segmentation Model and Save segmentation buttons and check that the outputs follow the naming convention.
    2. Notebooks: Run all the notebooks and check whether the naming convention is followed.
    3. CLI: This has been tested in the ADS unit tests, so you should expect all the test cases to pass.
    opened by vasudev-sharma 27
  • v4 ivadomed implementation


    Checklist

    • [x] I've given this PR a concise, self-descriptive, and meaningful title
    • [x] I've linked relevant issues in the PR body
    • [x] I've applied the relevant labels to this PR
    • [x] I've added relevant tests for my contribution
    • [x] I've updated the documentation and/or added correct docstrings
    • [x] I've assigned a reviewer
    • [x] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    Implements IVADOMED automated segmentation inside the ADS framework.

    Linked issues

    Resolves #523

    enhancement feature fsleyes dependencies refactoring ivadomed-refactoring 
    opened by mathieuboudreau 26
  • RAM limitations with no-patch option


    Describe the problem

    In PR #696 and #700, we added the option for the user to segment images without patches.

    After comparing the segmentation results with our models, we conclude that:

    • Qualitatively: the "no-patch" segmentation generally produces better results, with fewer border irregularities, fewer false-positive pixel clusters, and fewer incomplete axons and/or holes in axons.
    • Quantitatively: the "no-patch" option gives segmentation metrics close to the patch option, but better detection metrics because of fewer small false-positive pixel clusters.

    However, some issues arose while testing:

    1. By design in ivadomed, when both PT and ONNX models are available, PT models are selected automatically on GPU. However, PT models require more GPU memory than ONNX. Some larger images could be segmented with ONNX but not with PT, and there is no way for the user to select the ONNX model on GPU without removing the PT model from the folder.
    2. Some images are just too big to segment without patches, even with a GPU and the ONNX model, and resulted in a "segmentation fault" (memory error) on bireli, rosenberg and romane. I was not able to identify how or whether we can intercept this error. I was also not able to reproduce it on CPU on my laptop, and had to kill the process (Ctrl+C or closing the terminal) to avoid crashing it.
    3. We currently cannot choose which GPU to use in ADS, but we can in ivadomed.

    Details of the tests and issues can be found in these slides.

    Proposed solutions

    We talked in meeting of different solutions/approaches to deal with these issues respectively:

    1. The model (PT or ONNX) is selected in ivadomed here depending on device (CPU/GPU) and availability. We could add a try-except block to try the PT model and switch to the ONNX model if PT fails and ONNX is available. This would need to be done in ivadomed.
    2. Several solutions were suggested:
      • Estimate the RAM needed based on image size, and use psutil to estimate the RAM available before launching the segmentation, then warn the user if RAM is not sufficient.
      • Estimate the maximum patch size that could be used given the free RAM, add it to the warning to the user, and implement a way to change the patch size (currently fixed for a given model).
      • Warn the user when using "no-patch" that this may not be suitable for larger images, and warn the user when not using "no-patch" that "no-patch" could potentially produce better results if RAM is sufficient --> For now, we decided to go with this last option; implementation in progress in PR #704.
    3. The implementation to choose which GPU to use in ADS is in progress in #701.

    Additional details can be found in these slides, including ideas on "where" to fix these issues (ivadomed or ADS) and what are the elements to consider in each case.
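    The first suggestion could be sketched roughly as follows (the per-pixel byte count and overhead factor are hypothetical constants that would need to be measured per model; in practice the available memory would come from psutil.virtual_memory().available):

```python
def ram_check(height, width, available_bytes, bytes_per_pixel=4, overhead=8.0):
    """Very rough feasibility check for a no-patch segmentation.

    bytes_per_pixel and overhead (activations, intermediate copies, etc.)
    are hypothetical constants, not measured values.
    """
    needed = height * width * bytes_per_pixel * overhead
    if needed > available_bytes:
        return (False, f"Estimated {needed / 1e9:.1f} GB needed; "
                       f"only {available_bytes / 1e9:.1f} GB available. "
                       "Consider patch-based segmentation.")
    return (True, "Estimated memory is sufficient for no-patch segmentation.")

# A 20000 x 20000 image would not fit in 8 GB under these assumptions.
ok_big, msg_big = ram_check(20000, 20000, available_bytes=8e9)
ok_small, msg_small = ram_check(1000, 1000, available_bytes=8e9)
```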

    enhancement discussion 
    opened by mariehbourget 0
  • Idea: stop supporting combined axon-myelin images and switch to only separate


    This came up a couple of group meetings ago. If I recall correctly, the reasoning was that this is how IVADOMED treats the images anyway, and it would make things simpler for the GUIs as well.

    It wasn't clear to me if this was for both the inputs and outputs of ADS; to me, it makes sense only for inputs, as generating a combined axon-myelin image is still quite useful for us to look at. But it might make sense to stop supporting this combined image as an input into ADS. One potential issue I can see is that it might cause problems if a user does manual correction of the separate masks in another software, as there might be overlapping pixels identified as both myelin and axon (something that is avoided by using the combined masks).

    opened by mathieuboudreau 3
  • Prepare support for 3-class segmentation


    In order to support 3-class segmentation (context: unmyelinated fibers), we will need to change some stuff on the ADS side. For ivadomed, nothing really changes: we will use 3 ground-truths in the BIDS derivatives and change the training config accordingly. However, in ADS we will need to add some flexibility, notably:

    • For the segmentation process, axon_segmentation(...) will need to also save the third prediction. Maybe also add flexibility to merge_masks(...) if we want to support it: https://github.com/axondeepseg/axondeepseg/blob/821074c2c8b539bcec69686cce72304656124d51/AxonDeepSeg/apply_model.py#L46-L50 I'm not exactly sure how we would handle the 3rd class in the grayscale format of the combined prediction image, though.
    • Most of the work will probably be on the morphometrics process. Thanks to @Stoyan-I-A's refactoring, this should be easier to do, because I think we will only need to add columns for the 3rd-class metrics (e.g. area). Fortunately, processing unmyelinated axons should be exactly the same as processing axons. I'm thinking of adding a parameter to get_axon_morphometrics(...) indicating whether we want 3-class morphometrics; if so, it will load the 3rd segmentation and run the usual axon metrics on it. I'm not sure how we could merge this data in the morphometrics file (because the "myelin" columns will not apply to unmyelinated axons). In that case, maybe we would need 2 separate morphometrics files, but I don't really like that idea.
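    As a purely illustrative sketch of the open question above, one possible grayscale encoding for a combined 3-class prediction could look like this (the intensity levels are assumptions for illustration, not a project decision):

```python
import numpy as np

# Hypothetical intensity levels for a combined grayscale prediction.
LEVELS = {"myelin": 85, "axon": 170, "unmyelinated": 255}

def merge_masks_3class(myelin, axon, unmyelinated):
    """Merge three binary masks into one grayscale image (background = 0)."""
    merged = np.zeros(myelin.shape, dtype=np.uint8)
    merged[myelin.astype(bool)] = LEVELS["myelin"]
    merged[axon.astype(bool)] = LEVELS["axon"]
    merged[unmyelinated.astype(bool)] = LEVELS["unmyelinated"]
    return merged

# Tiny example: one pixel per class plus background.
m = np.array([[1, 0], [0, 0]])
a = np.array([[0, 1], [0, 0]])
u = np.array([[0, 0], [1, 0]])
combined = merge_masks_3class(m, a, u)
```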
    enhancement refactoring discussion morphometrics 
    opened by hermancollin 4
  • Colorization instance map question


    Hello,

    So I have been playing around with the colorization feature in the morphometrics extraction pipeline. My question concerns the colorization instance map and how it relates to the morphometrics extraction. The segmentation I have with my images is pretty good at the moment. However, the colorization instance map shows myelin identity boundary creep between touching axons.

    Does the colorization identity mismatch contribute to the calculation of the myelin thickness? If so, what is the best way to address this issue?

    Thanks a lot,

    Michael

    [Attached images: LM_1, LM_1_axonmyelin_index, LM_1_instance-map]

    opened by GrimmSnark 5
  • Create a Napari plugin for ADS


    Checklist

    • [ ] I've given this PR a concise, self-descriptive, and meaningful title
    • [ ] I've linked relevant issues in the PR body
    • [ ] I've applied the relevant labels to this PR
    • [ ] I've added relevant tests for my contribution
    • [ ] I've updated the documentation and/or added correct docstrings
    • [ ] I've assigned a reviewer
    • [ ] I've consulted ADS's internal developer documentation to ensure my contribution is in line with any relevant design decisions

    Description

    This PR contains the code I used for testing a Napari plugin.

    Linked issues

    Resolves #681

    opened by Stoyan-I-A 1
Releases: v4.1.0