Training neural models with structured signals.

Overview

Neural Structured Learning in TensorFlow

Neural Structured Learning (NSL) is a new learning paradigm to train neural networks by leveraging structured signals in addition to feature inputs. Structure can be explicit as represented by a graph [1,2,5] or implicit as induced by adversarial perturbation [3,4].

Structured signals are commonly used to represent relations or similarity among samples that may be labeled or unlabeled. Leveraging these signals during neural network training harnesses both labeled and unlabeled data, which can improve model accuracy, particularly when the amount of labeled data is relatively small. Additionally, models trained with samples that are generated by adversarial perturbation have been shown to be robust against malicious attacks, which are designed to mislead a model's prediction or classification.

NSL generalizes both Neural Graph Learning [1] and Adversarial Learning [3]. The NSL framework in TensorFlow provides the following easy-to-use APIs and tools for developers to train models with structured signals:

  • Keras APIs to enable training with graphs (explicit structure) and adversarial perturbations (implicit structure).

  • TF ops and functions to enable training with structure when using lower-level TensorFlow APIs.

  • Tools to build graphs and construct graph inputs for training.

The NSL framework is designed to be flexible and can be used to train any kind of neural network. For example, feed-forward, convolutional, and recurrent neural networks can all be trained using the NSL framework. In addition to supervised and semi-supervised learning (where the amount of labeled data is small), NSL can in theory be generalized to unsupervised learning. Structured signals are incorporated only during training, so the performance of the serving/inference workflow remains unchanged. Please check out our tutorials for a practical introduction to NSL.
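
As a minimal sketch (the model architecture and hyperparameter values here are illustrative, not recommendations), wrapping an existing Keras model with adversarial regularization looks like this; graph regularization uses the same wrapper pattern:

import neural_structured_learning as nsl
import tensorflow as tf

# A plain Keras base model; the architecture is illustrative.
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28), name='feature'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Wrap the base model to add adversarial regularization during training.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(base_model, adv_config=adv_config)

adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# Training data is fed as a dictionary so the wrapper sees both features and
# labels when generating perturbations, e.g.:
# adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=5)

At serving time, the unwrapped base_model can be used directly, since structured signals are incorporated only during training.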

Getting started

You can install the prebuilt NSL pip package by running:

pip install neural-structured-learning

For more detailed instructions on how to install NSL as a package or to build it from source in various environments, please see the installation guide.

Note that NSL requires a TensorFlow version of 1.15 or higher. NSL also supports TensorFlow 2.x with the exception of v2.1, which contains a bug that is incompatible with NSL.
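
A quick runtime check of the installed versions (nsl.__version__ is available from NSL v1.1.0 onward):

import tensorflow as tf
import neural_structured_learning as nsl

print(tf.__version__)   # expect 1.15+, and not 2.1.x (see the note above)
print(nsl.__version__)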

Videos and Colab Tutorials

Get a jump-start on NSL by watching our video series on YouTube! It gives a complete overview of the framework and discusses several aspects of learning with structured signals.

The series covers the overall framework, natural graphs, synthetic graphs, and adversarial learning.

We've also created hands-on, Colab-based tutorials that let you interactively explore NSL.

You can find more examples and tutorials under the examples directory.

Contributing to NSL

Contributions are welcome and highly appreciated. There are several ways to contribute to TF Neural Structured Learning:

  • Case studies: If you are interested in applying NSL, consider wrapping up your usage as a tutorial, a new dataset, or an example model that others could use for experiments and/or development. The examples directory could be a good destination for such contributions.

  • Product excellence: If you are interested in improving NSL's product excellence and developer experience, the best way is to clone this repo, make changes directly to the implementation in your local repo, and then send us a pull request to integrate your changes.

  • New algorithms: If you are interested in developing new algorithms for NSL, the best way is to study the implementations of NSL libraries, and to think of extensions to the existing implementation (or alternative approaches). If you have a proposal for a new algorithm, we recommend starting by staging your project in the research directory and including a colab notebook to showcase the new features. If you develop new algorithms in your own repository, we would be happy to feature pointers to academic publications and/or repositories using NSL from this repository.

Please be sure to review the contribution guidelines.

Research

See our research directory for research projects in Neural Structured Learning.

Featured Usage

Please see the usage page to learn more about how NSL is being discussed and used in the open source community.

Issues, Questions, and Feedback

Please use GitHub issues to file issues, bugs, and feature requests. For questions, please direct them to Stack Overflow with the "nsl" tag. For feedback, please fill out this form; we would love to hear from you.

Release Notes

Please see the release notes for detailed version updates.

References

[1] T. Bui, S. Ravi, and V. Ramavajjala. "Neural Graph Learning: Training Neural Networks Using Graphs." WSDM 2018.

[2] T. Kipf and M. Welling. "Semi-supervised classification with graph convolutional networks." ICLR 2017.

[3] I. Goodfellow, J. Shlens, and C. Szegedy. "Explaining and harnessing adversarial examples." ICLR 2015.

[4] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. "Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning." ICLR 2016.

[5] D. Juan, C. Lu, Z. Li, F. Peng, A. Timofeev, Y. Chen, Y. Gao, T. Duerig, A. Tomkins, and S. Ravi. "Graph-RISE: Graph-Regularized Image Semantic Embedding." WSDM 2020.

Comments
  • Extending Graph regularization to images?

    Hi folks.

    I am willing to work on a tutorial that shows how to extend the graph regularization example to images, in the same way it's done for text-based problems. Is there scope for such a tutorial inside this repo?

    opened by sayakpaul 29
  • Added CNN adversarial learning tutorial notebook

    Hi @arjung as discussed. Please find the adversarial learning notebook example.

    I have added it under neural_structured_learning/examples/notebooks as you mentioned. Feel free to suggest edits to keep it in line with Google & TF standards, and I will make changes based on your feedback.

    cla: no 
    opened by dipanjanS 16
  • gnn model implementation

    Hi,

    I’m interested in neural structured learning (graph neural networks) and want to implement some basic GNN models, such as GCN and GAT, using TensorFlow 2.0. I have some ideas on how to implement them, but I am not sure whether the model architecture is clear. Here is the code structure I roughly have in mind:

    • Create a folder named ‘models’ to hold the various GNN model structures
    • In the ‘models’ folder, create a file named sublayers.py that defines single graph layers as classes: GraphConvLayer, GraphAttenLayer, etc.
    • Design base node & edge models for other GNN models to inherit (for flexibility and readability)
    • Build functions in utils.py (or a folder named utils) such as evaluate, inference, calculate_loss, etc., for user convenience

    For the above structure, I loosely referred to the Deep Graph Library (DGL) repository. As mentioned before, I want to add some GNN model applications but don’t know if my idea is appropriate. My initial thought is that we could create a folder called gnn-survey-paper, or somewhere else, to put these GNN implementations. This is the first time I have tried to post an issue, and I hope to contribute to the open-source code. If anything above is unclear, or your team has recommendations, feel free to let me know. Thanks :)

    Best regards, Josh

    question stat:awaiting response 
    opened by joshchang1112 16
  • TypeError: object of type 'AdvRegConfig' has no len()

    Hi, I'm implementing a Keras binary image classifier using VGG16 with Adversarial Regularization. After initializing the VGG16 model layers, I'm configuring the Adversarial Regularizer using the following code:

    import neural_structured_learning as nsl
    
    adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
    adv_model = nsl.keras.AdversarialRegularization(custom_vgg_model, adv_config)
    adv_model.compile(tf.keras.optimizers.SGD(learning_rate=2e-5), loss='categorical_crossentropy', metrics=['accuracy'])
    

    When I execute the code, I get the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    [<ipython-input-32-bb6bdecb015d>](https://localhost:8080/#) in <module>()
          1 adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
          2 adv_model = nsl.keras.AdversarialRegularization(custom_vgg_model, adv_config)
    ----> 3 adv_model.compile(tf.keras.optimizers.SGD(learning_rate=2e-5), loss='categorical_crossentropy', metrics=['accuracy'])
    
    2 frames
    [/usr/local/lib/python3.7/dist-packages/neural_structured_learning/keras/adversarial_regularization.py](https://localhost:8080/#) in _build_labeled_losses(self, output_names)
        554       return  # Losses are already populated.
        555 
    --> 556     if len(output_names) != len(self.label_keys):
        557       raise ValueError('The model has different number of outputs and labels. '
        558                        '({} vs. {})'.format(
    
    TypeError: object of type 'AdvRegConfig' has no len()
    

    How do I resolve this issue?

    question 
    opened by kabyanil 12
  • Adding a tutorial for graph regularization with images

    @arjung I reckoned that adding the example under g3docs/tutorials might be more appropriate because it would be the first one that shows how to build a synthetic graph for an image dataset and build a model.

    Let me know your thoughts. A parallel Colab Notebook is available here for the results. For reference, the corresponding issue thread is available here.

    cla: yes 
    opened by sayakpaul 12
  • ValueError in the tutorial on Colab environment

    Hello

    The error ValueError: Insufficient elements in branch_graphs[0].outputs. occurs in both Neural Graph Learning tutorials (sentiment and document classification).

    Both happen during training, in the graph_reg_model.fit() call. The error messages are shown below; I think both stem from the same issue.

    • Graph regularization for document classification using natural graphs
    Epoch 1/100
          1/Unknown - 0s 303ms/step
    
    ---------------------------------------------------------------------------
    
    ValueError                                Traceback (most recent call last)
    
    <ipython-input-20-0c7d19de6181> in <module>()
         10     loss='sparse_categorical_crossentropy',
         11     metrics=['accuracy'])
    ---> 12 graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
    
    25 frames
    
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/cond_v2.py in _make_indexed_slices_indices_types_match(op_type, branch_graphs)
        650                      "Expected: %i\n"
        651                      "Actual: %i" %
    --> 652                      (current_index, len(branch_graphs[0].outputs)))
        653 
        654   # Cast indices with mismatching types to int64.
    
    ValueError: Insufficient elements in branch_graphs[0].outputs.
    Expected: 11
    Actual: 10
    
    • Graph regularization for sentiment classification using synthesized graphs
    Epoch 1/10
          1/Unknown - 2s 2s/step
    
    ---------------------------------------------------------------------------
    
    ValueError                                Traceback (most recent call last)
    
    <ipython-input-30-e49eed0ffe51> in <module>()
          3     validation_data=validation_dataset,
          4     epochs=HPARAMS.train_epochs,
    ----> 5     verbose=1)
    
    25 frames
    
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/cond_v2.py in _make_indexed_slices_indices_types_match(op_type, branch_graphs)
        650                      "Expected: %i\n"
        651                      "Actual: %i" %
    --> 652                      (current_index, len(branch_graphs[0].outputs)))
        653 
        654   # Cast indices with mismatching types to int64.
    
    ValueError: Insufficient elements in branch_graphs[0].outputs.
    Expected: 18
    Actual: 17
    

    Thanks

    bug 
    opened by yenhao 11
  • Generators

    Is there a particular format to use when combining with ImageDataGenerator? I get the error: OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution

    question 
    opened by esparza83 10
  • Adding an implementation of Denoised Smoothing

    This PR adds Denoised Smoothing under the research folder.

    The success of Randomized Smoothing is proven, and it works in many different scenarios. But it also operates under the assumption that the underlying classifier performs well under Gaussian perturbations. Wouldn't it be better if we could just take our standard pre-trained image classifiers (including the Cloud APIs) and get the benefits of Randomized Smoothing in them in an easy manner?

    That is precisely what Denoised Smoothing does: by prepending a denoiser to an image classifier, it still maintains the theoretical guarantees of robustness against L2 attacks.

    Besides, the implementation includes a suite of utilities that may be helpful for generating robustness certificates. I think that gives us a unique opportunity to design an API inside NSL that would allow easy generation of robustness certificates. To the best of my knowledge, no such framework exists at the moment.

    cla: yes 
    opened by sayakpaul 9
  • link prediction?

    Hi,

    I have been thinking about using NSL in the context of link prediction. I believe it can definitely be reframed as a classification problem. The only thing I am wondering about is whether anyone has thought of an elegant way to add the neighbours (in this case, I guess, of both nodes which form the link). Has anyone been working on this?

    I guess a decent way of going about it would be changing the parse_example function:

    def parse_example(example_proto):
      """Extracts relevant fields from the `example_proto`.
    
      Args:
        example_proto: An instance of `tf.train.Example`.
    
      Returns:
        A pair whose first value is a dictionary containing relevant features
        and whose second value contains the ground truth labels.
      """
    

    so, that it can take a pair of examples which are part of a link.

    Thanks, George

    question stat:awaiting response 
    opened by ggerogiokas 9
  • Added CNN adversarial learning tutorial notebook [Updated with Feedback]

    Hi @arjung as discussed. Please find the adversarial learning notebook example updated based on feedback from your side.

    You can review it and add any comments needed to fix pending issues. It would also be great if you could close PR: https://github.com/tensorflow/neural-structured-learning/pull/67

    cla: yes 
    opened by dipanjanS 8
  • Adversarial Learning Tutorial Additions

    Zoey and I are partners working on a project to help improve NSL documentation (specifically this tutorial).

    I noticed in online communities and from personal experience that while the corresponding YouTube videos explain concepts well for beginners, the recommended interactive Colabs were a bit difficult for beginners to follow. That's why I wanted to better connect the video content with the Colab content, so I created a recap section for beginners that includes all the concepts explained in the YouTube video. Some people may also prefer reading over watching, or simply want a written reference integrated into the Colab tutorial.

    cla: yes 
    opened by angela-wang1 8
  • Clarification on attacks used in Adversarial Training

    Hello, I was trying to use NSL to implement adversarial training on my custom model, so I followed the default steps in the tutorial video, which worked like a charm. While studying the code, I noticed that the call to make_adv_reg_config() has a parameter called pgd_epsilon, which is "...Only used in Projected Gradient Descent (PGD) attack".

    This statement suggests that NSL can use different attacks in adversarial training; however, it is not clear how to select which attack to use, or which attack is currently in use. Up till now I had assumed that PGD was being used by default, as this is common in literature, but I would like to know if this is actually the case, and by extension if it is possible to use a different attack and how that can be done.

    Thanks!

    opened by madarax64 4
  • Apply adversarial training after few epochs.

    I want to train the model without adversarial perturbations for the first two epochs. After that, I would like to train the model with adversarial learning.

    In summary,

    Epoch 1: training w/o adversarial
    Epoch 2: training w/o adversarial
    Epoch 3: training with adversarial
    Epoch 4: training with adversarial
    ...

    Is it possible to adjust the starting epoch of adversarial training? I couldn't find any related parameter in nsl.configs.make_adv_reg_config.

    opened by jhss 1
  • Adding support for reading and writing to multiple tfrecords in `nsl.tools.pack_nbrs`

    The current implementation of nsl.tools.pack_nbrs does not support reading from and writing to multiple tfrecord files. Given the extensive optimizations the tf.data API makes available when working with multiple tfrecords, supporting this would yield significant performance gains in distributed training. I would be willing to contribute this.

    Relevant parts of the code

    • for reading https://github.com/tensorflow/neural-structured-learning/blob/c21dad4feff187cdec041a564193ea7b619b8906/neural_structured_learning/tools/pack_nbrs.py#L63-L71
    • for writing https://github.com/tensorflow/neural-structured-learning/blob/c21dad4feff187cdec041a564193ea7b619b8906/neural_structured_learning/tools/pack_nbrs.py#L264-L270
    enhancement 
    opened by srihari-humbarwadi 8
Releases (v1.4.0)
  • v1.4.0 (Jul 29, 2022)

    Major Features and Improvements

    • Adds params as an optional third argument to the embedding_fn argument of nsl.estimator.add_graph_regularization. This is similar to the params argument of an Estimator's model_fn, which allows users to pass arbitrary state through; implementations of embedding_fn can now access that state.
    • Both nsl.keras.AdversarialRegularization and nsl.keras.GraphRegularization now support the save method, which saves the base model.
    • nsl.keras.AdversarialRegularization now supports a tf.keras.Sequential base model with a tf.keras.layers.DenseFeatures layer.
    • nsl.configs.AdvNeighborConfig has a new field random_init. If set to True, a random perturbation is applied before the FGSM/PGD steps (sketched after this list).
    • nsl.lib.gen_adv_neighbor now has a new parameter use_while_loop. If set to True, the PGD steps are performed in a tf.while_loop, which is potentially more memory-efficient but has some restrictions.
    • New library functions:
      • nsl.lib.random_in_norm_ball for generating random tensors in a norm ball.
      • nsl.lib.project_to_ball for projecting tensors onto a norm ball.
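
    As a minimal sketch of the new random_init field (the other field values are illustrative assumptions):

    import neural_structured_learning as nsl

    # Apply a random perturbation before the FGSM/PGD steps (new in v1.4.0).
    adv_neighbor_config = nsl.configs.AdvNeighborConfig(
        adv_step_size=0.05,
        adv_grad_norm='infinity',
        random_init=True)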

    Bug Fixes and Other Changes

    • Dropped Python 2 support (which was deprecated 2+ years ago).
    • nsl.keras.AdversarialRegularization and nsl.lib.gen_adv_neighbor will not attempt to calculate gradients for tensors with a non-differentiable dtype. This doesn’t change the functionality, but only suppresses excess warnings.
    • Both estimator/adversarial_regularization.py and estimator/graph_regularization.py explicitly import estimator from tensorflow as a separate import instead of accessing it via tf.estimator and depend on the tensorflow estimator target.
    • The new top-level workshops directory contains presentation materials from tutorials we organized on NSL at KDD 2020, WSDM 2021, and WebConf 2021.
    • The new usage.md page describes featured usage of NSL, external talks, blog posts, media coverage, and more.
    • End-to-end examples under the examples directory:
      • New examples about graph neural network modules with graph-regularizer and graph convolution.
      • New README file providing an overview of the examples.
    • New tutorial examples under the examples/notebooks directory:
      • Graph regularization for image classification using synthesized graphs
      • Adversarial Learning: Building Robust Image Classifiers
      • Saving and loading NSL models

    Thanks to our Contributors

    This release contains contributions from many people at Google Research and from TF community members: @angela-wang1, @dipanjanS, @joshchang1112, @SamuelMarks, @sayakpaul, @wangbingnan136, @zoeyz101.

    Source code(tar.gz)
    Source code(zip)
  • v1.3.1 (Aug 18, 2020)

    Major Features and Improvements

    None.

    Bug Fixes and Other Changes

    • Fixed the NSL graph builder to ignore lsh_rounds when lsh_splits < 1. The prior version of the graph builder would repeat the work twice by default. In addition, the default value of lsh_rounds has been changed from 2 to 1.
    • Updated the NSL IMDB tutorial to use the new LSH support when building the graph, thereby speeding up the graph building time by ~5x.

    Thanks to our Contributors

    This release contains contributions from many people at Google.

    Source code(tar.gz)
    Source code(zip)
  • v1.3.0 (Jul 31, 2020)

    Major Features and Improvements

    • Added locality-sensitive hashing (LSH) support to the graph builder tool. This allows the graph builder to scale up to larger input datasets. As part of this change, the new nsl.configs.GraphBuilderConfig class was introduced, as well as a new nsl.tools.build_graph_from_config function. The new parameters for controlling the LSH algorithm are named lsh_rounds and lsh_splits.
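
    As a minimal sketch of the config-driven graph builder (the file paths, parameter values, and exact argument order of nsl.tools.build_graph_from_config are illustrative assumptions):

    import neural_structured_learning as nsl

    # LSH bucketing reduces the number of pairwise similarity comparisons,
    # letting the graph builder scale to larger input datasets.
    graph_builder_config = nsl.configs.GraphBuilderConfig(
        similarity_threshold=0.8, lsh_splits=32, lsh_rounds=15)
    nsl.tools.build_graph_from_config(
        ['/tmp/embeddings.tfr'], '/tmp/graph.tsv', graph_builder_config)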

    Bug Fixes and Other Changes

    • Changed nsl.tools.add_edge to return a boolean result indicating whether a new edge was added; previously, the function returned no value.
    • Fixed a bug in nsl.tools.read_tsv_graph that was incrementing the reported edge count too often.
    • Removed Python 2 unit tests.
    • Fixed a bug in nsl.estimator.add_adversarial_regularization and nsl.estimator.add_graph_regularization so that the UPDATE_OPS can be triggered correctly.
    • Updated graph-NSL tutorials not to parse neighbor features during evaluation.
    • Added scaled graph and adversarial loss values as scalars to the summary in nsl.estimator.add_graph_regularization and nsl.estimator.add_adversarial_regularization respectively.
    • Updated graph and adversarial regularization loss metrics in nsl.keras.GraphRegularization and nsl.keras.AdversarialRegularization respectively, to include scaled values for consistency with their respective loss term contributions.

    Thanks to our Contributors

    This release contains contributions from many people at Google.

    Source code(tar.gz)
    Source code(zip)
  • v1.2.0 (Jun 10, 2020)

    Release 1.2.0

    Major Features and Improvements

    • Changes nsl.tools.build_graph(...) to be more efficient and use far less memory. In particular, memory consumption is now proportional only to the size of the input, not the size of the input plus the size of the output. Since the size of the output can be quadratic in the size of the input, this can lead to large memory savings. nsl.tools.build_graph(...) now also produces a log message every 1M edges written to indicate progress.
    • Introduces nsl.lib.strip_neighbor_features, a function to remove graph neighbor features from a feature dictionary.
    • Restricts the expectation of graph neighbor features in the input to training mode for both the Keras and Estimator graph regularization wrappers. So, during evaluation, prediction, etc., neighbor features no longer need to be fed to the model.
    • Changes the default value of keep_rank from False to True and flips its semantics in nsl.keras.layers.NeighborFeatures.call and nsl.utils.unpack_neighbor_features.
    • Supports feature value constraints for adversarial neighbors. See clip_value_min and clip_value_max in nsl.configs.AdvNeighborConfig.
    • Supports adversarial regularization with PGD in Keras and Estimator models.
    • Supports generating adversarial neighbors using Projected Gradient Descent (PGD) via the nsl.lib.adversarial_neighbor.gen_adv_neighbor API (sketched below).
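
    As a minimal sketch of configuring PGD (the values are illustrative; pgd_iterations=1 corresponds to the single-step attack):

    import neural_structured_learning as nsl

    adv_config = nsl.configs.make_adv_reg_config(
        multiplier=0.2,
        adv_step_size=0.01,  # step size per PGD iteration
        pgd_iterations=10,   # number of attack iterations
        pgd_epsilon=0.05)    # radius onto which the total perturbation is projected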

    Bug Fixes and Other Changes

    • Clarifies the meaning of the nsl.AdvNeighborConfig.feature_mask field.
    • Updates notebooks to avoid invoking the nsl.tools.build_graph and nsl.tools.pack_nbrs utilities as binaries.
    • Replaces a deprecated API in notebooks when testing for GPU availability.
    • Fixes typos in documentation and notebooks.
    • Improves example trainers.
    • Fixes the metric string to 'acc' to be compatible with both TF 1.x and 2.x.
    • Allows passing dictionaries to sequential base models in adversarial regularization.
    • Supports an input feature list in nsl.lib.gen_adv_neighbor.
    • Supports input with a collection of tensors in nsl.lib.maximize_within_unit_norm.
    • Adds an optional parameter base_with_labels_in_features to nsl.keras.AdversarialRegularization for passing label features to the base model.
    • Fixes the tensor ordering issue in nsl.keras.AdversarialRegularization when used with a functional Keras base model.

    Thanks to our Contributors

    This release contains contributions from many people at Google as well as @mzahran001.

    Source code(tar.gz)
    Source code(zip)
  • v1.1.0 (Oct 15, 2019)

    Release 1.1.0

    Major Features and Improvements

    • Introduces nsl.tools.build_graph, a function for graph building.

    • Introduces nsl.tools.pack_nbrs, a function to prepare input for graph-based NSL. (Both graph utilities are sketched after this list.)

    • Adds tf.estimator.Estimator support for NSL. In particular, this release introduces two new wrapper functions named nsl.estimator.add_graph_regularization and nsl.estimator.add_adversarial_regularization to wrap existing tf.estimator.Estimator-based models with NSL. These APIs are currently supported only for TF 1.x.
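
    As a minimal sketch of the two graph utilities together (file paths and parameter values are illustrative assumptions):

    import neural_structured_learning as nsl

    # Build a similarity graph from precomputed embeddings, then augment each
    # labeled training example with its graph neighbors.
    nsl.tools.build_graph(['/tmp/embeddings.tfr'], '/tmp/graph.tsv',
                          similarity_threshold=0.8)
    nsl.tools.pack_nbrs('/tmp/train.tfr', '', '/tmp/graph.tsv',
                        '/tmp/nsl_train.tfr',
                        add_undirected_edges=True, max_nbrs=3)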

    Bug Fixes and Other Changes

    • Adds version information to the NSL package, which can be queried as nsl.__version__.

    • Fixes loss computation with Loss objects in AdversarialRegularization.

    • Adds a new parameter to nsl.keras.adversarial_loss which can be used to pass additional arguments to the model.

    • Fixes typos in documentation and notebooks.

    • Updates notebooks to use the release version of TF 2.0.

    Thanks to our Contributors

    This release contains contributions from many people at Google.

    Source code(tar.gz)
    Source code(zip)
  • v1.0.1 (Sep 18, 2019)

    Release 1.0.1

    Major Features and Improvements

    • Adds make_graph_reg_config, a new API to help construct an nsl.configs.GraphRegConfig object (sketched after this list)

    • Updates the package description on PyPI
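
    As a minimal sketch of the new helper (the base model and values are illustrative):

    import neural_structured_learning as nsl
    import tensorflow as tf

    # An illustrative base model; any Keras model works here.
    base_model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,), name='features'),
        tf.keras.layers.Dense(2, activation='softmax')])

    graph_reg_config = nsl.configs.make_graph_reg_config(
        max_neighbors=2, multiplier=0.1)
    graph_reg_model = nsl.keras.GraphRegularization(base_model, graph_reg_config)
    # During training, inputs must include packed neighbor features
    # (see nsl.tools.pack_nbrs); at evaluation time they are not required.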

    Bug Fixes and Other Changes

    • Fixes metric computation with Metric objects in AdversarialRegularization

    • Fixes typos in documentation and notebooks

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as @joaogui1 and @aspratyush.

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0 (Sep 3, 2019)
