Generic U-Net Tensorflow implementation for image segmentation

Overview

Tensorflow Unet

Documentation Status · arXiv:1609.09077 · ascl:1611.002

Warning

This project is discontinued in favour of a TensorFlow 2 compatible reimplementation, which can be found at https://github.com/jakeret/unet

This is a generic U-Net implementation, as proposed by Ronneberger et al., developed with TensorFlow. The code was developed for and used in Radio Frequency Interference mitigation using deep convolutional neural networks.

The network can be trained to perform image segmentation on arbitrary imaging data. Check out the Usage section or the included Jupyter notebooks for a toy problem or the Radio Frequency Interference mitigation discussed in our paper.

The code is not tied to a specific segmentation task, so it can be used for a toy problem such as detecting circles in a noisy image:

Segmentation of a toy problem.

for more complex applications, such as the detection of radio frequency interference (RFI) in radio astronomy:

Segmentation of RFI in radio data.

or to detect galaxies and stars in wide-field imaging data:

Segmentation of galaxies.

As you use tf_unet for your exciting discoveries, please cite the paper that describes the package:

@article{akeret2017radio,
  title={Radio frequency interference mitigation using deep convolutional neural networks},
  author={Akeret, Joel and Chang, Chihway and Lucchi, Aurelien and Refregier, Alexandre},
  journal={Astronomy and Computing},
  volume={18},
  pages={35--39},
  year={2017},
  publisher={Elsevier}
}
Comments
  • starting U-net

    starting U-net

    Hey,

    I would like to use the proposed U-Net for my work, but I am still a beginner with Python and TensorFlow. My current problem is that I can't really start the code at all, because Python crashes right at the start. I think the issue is that I don't set the required parameters correctly at the beginning. I read the documentation, but it is not yet clear to me which parameters I have to define and how (for example, the output path). Can someone help me, please?

    Kind regards, Fabian
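    For orientation, here is a minimal end-to-end sketch adapted from the toy-problem demo further down this page; the parameter values are illustrative, and the output path ("./unet_trained") is simply a writable directory where checkpoints land:

    from tf_unet import unet, image_gen

    # Toy data generator shipped with tf_unet (grayscale circles in noise).
    generator = image_gen.GrayScaleDataProvider(nx=572, ny=572, cnt=20)

    # channels/n_class are taken from the generator, so they always match the data.
    net = unet.Unet(channels=generator.channels,
                    n_class=generator.n_class,
                    layers=3, features_root=16)
    trainer = unet.Trainer(net)

    # The second argument is the output path: any directory for checkpoints.
    path = trainer.train(generator, "./unet_trained",
                         training_iters=32, epochs=10)

    x_test, y_test = generator(1)
    prediction = net.predict(path, x_test)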

    question 
    opened by Fab1900 33
  • regarding the size of input masking image and definition of "in_size" and "size" for offset

    regarding the size of input masking image and definition of "in_size" and "size" for offset

    Hello Joel,

    Thank you very much for sharing your code, which is very well written.

    I have several questions, would you mind sharing your thoughts on them?

    1. In your implementation, the input mask training data set has to be of shape row×column×2. For my use case, the input mask training data set is of shape row×column×1. Do I have to transform my input masks into the row×column×2 form? Is there a reason you specify the mask data set that way?
    2. In create_conv_net, you define in_size=1000 and size=in_size. The value of size is changed during the convolution, pooling, deconvolution, and unpooling operations. create_conv_net then returns in_size - size as the offset, which is used to compute px and py. This is copied from the docstring: the program returns prediction: the unet prediction, shape [n, px, py, labels] (px = nx - self.offset/2). I don't understand why in_size is set to 1000 and why we need this offset. It looks like un-pooling and deconvolution can resize the output map back to the original image size; in particular, conv2d should allow us to specify the shape of the output map.
    3. In the training process, you use test_x, test_y = data_provider(4) followed by pred_shape = self.store_prediction(sess, test_x, test_y, "_init"). What is the reason for generating a batch of 4 at the very beginning? Are there any considerations here?

    Thank you very much for your help.
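    (A side note on question 1, offered as an illustration rather than the library's own code: the second mask channel is just the complement of the first, so a row×column×1 binary mask can be expanded into the expected row×column×2 layout as follows.)

    import numpy as np

    # Hypothetical helper: expand a (rows, cols) or (rows, cols, 1) binary mask
    # into the (rows, cols, 2) one-hot layout, where channel 0 is background
    # and channel 1 is foreground.
    def to_one_hot(mask):
        mask = np.squeeze(mask).astype(bool)            # True = foreground
        labels = np.zeros(mask.shape + (2,), dtype=np.float32)
        labels[..., 0] = ~mask                          # background channel
        labels[..., 1] = mask                           # foreground channel
        return labels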

    opened by surfreta 15
  • Multi Class Segmentation

    Multi Class Segmentation

    I think this question has been asked by other people, but I cannot find the issue and your response. I am trying to use U-Net for segmentation of medical images. The segmentations contain more than one label. I modified the labels to be binary, but I am curious whether U-Net can handle multi-class segmentation.
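    (For reference, a hedged sketch: multi-class segmentation is expressed through the n_class argument, with the labels carrying one channel per class, one-hot per pixel. The values below, four classes on grayscale input, are purely illustrative.)

    from tf_unet import unet

    # n_class > 2 for multi-class segmentation; labels must then have shape
    # (rows, cols, n_class) with a one-hot encoding along the last axis.
    net = unet.Unet(channels=1, n_class=4, layers=3, features_root=16)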

    question 
    opened by nargeshn 14
  • Error in combine_img_prediction

    Error in combine_img_prediction

    Hi, I have a trouble while running my code:

    import numpy as np
    from tf_unet import unet, util, image_util

    # Import data
    print('Loading dataset...\n')
    X_data = np.load(DATASET_FOLDER+"X_data.npy")
    y_data = np.load(DATASET_FOLDER+"y_data.npy")
    X_test = np.load(DATASET_FOLDER+"X_test.npy")
    y_test = np.load(DATASET_FOLDER+"y_test.npy")
    
    print("TRAIN data shape: ", X_data.shape)
    print("TRAIN labels shape", y_data.shape)
    print("TEST data shape: ", X_test.shape)
    print("TEST labels shape: ", y_test.shape)
    
    X_data = np.float32(X_data)
    y_data = np.float32(y_data)
    X_test = np.float32(X_test)
    y_test = np.float32(y_test)
    
    training_iters = 20
    epochs = 100
    dropout = 0.75 # Dropout, probability to keep units
    display_step = 2
    restore = False
     
    data_provider = image_util.SimpleDataProvider(X_data, y_data, channels=2, n_class=1)
    
    net = unet.Unet(channels=2, n_class=1, layers=4, features_root=64, cost="dice_coefficient")
        
    trainer = unet.Trainer(net, optimizer="adam")
    path = trainer.train(data_provider, "./unet_trained", training_iters=training_iters, epochs=epochs, dropout=dropout, display_step=display_step, restore=restore)
         
    prediction = net.predict(path, X_test)
         
    print("Testing error rate: {:.2f}%".format(unet.error_rate(prediction, util.crop_to_shape(y_test, prediction.shape))))
       
    

    The error is:

    
    Loading dataset...
    
    TRAIN data shape:  (1560, 128, 128, 2)
    TRAIN labels shape (1560, 128, 128)
    TEST data shape:  (120, 128, 128, 2)
    TEST labels shape:  (120, 128, 128)
    2017-06-23 15:07:05,594 Layers 4, features 64, filter size 3x3, pool size: 2x2
    2017-06-23 15:07:07,878 Removing '/home/stefano/Dropbox/DeepWave/prediction'
    2017-06-23 15:07:07,878 Removing '/home/stefano/Dropbox/DeepWave/unet_trained'
    2017-06-23 15:07:07,878 Allocating '/home/stefano/Dropbox/DeepWave/prediction'
    2017-06-23 15:07:07,879 Allocating '/home/stefano/Dropbox/DeepWave/unet_trained'
    2017-06-23 15:07:07.879575: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    2017-06-23 15:07:07.879602: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    2017-06-23 15:07:07.879615: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    2017-06-23 15:07:10,201 Verification error= 0.0%, loss= -0.0000
    Traceback (most recent call last):
      File "Unet.py", line 45, in <module>
        path = trainer.train(data_provider, "./unet_trained", training_iters=training_iters, epochs=epochs, dropout=dropout, display_step=display_step, restore=restore)
      File "./tf_unet/unet.py", line 404, in train
        pred_shape = self.store_prediction(sess, test_x, test_y, "_init")
      File "./tf_unet/unet.py", line 457, in store_prediction
        img = util.combine_img_prediction(batch_x, batch_y, prediction)
      File "/home/stefano/Dropbox/DeepWave/tf_unet/util.py", line 104, in combine_img_prediction
        to_rgb(crop_to_shape(gt[..., 1], pred.shape).reshape(-1, ny, 1)), 
    IndexError: index 1 is out of bounds for axis 3 with size 1
    
    

    The combine_img_prediction function receives arguments with the following shapes: gt: (4, 128, 128, 1), data: (4, 128, 128, 2), pred: (4, 36, 36, 1).

    My datasets have the following shapes: TRAIN data (1560, 128, 128, 2), TRAIN labels (1560, 128, 128), TEST data (120, 128, 128, 2), TEST labels (120, 128, 128).

    How can I solve the issue? Thank you! :+1:

    EDIT: sorry... obviously n_class should have been 2. I corrected that error, but now I get:

    Traceback (most recent call last):
      File "Unet.py", line 43, in <module>
        path = trainer.train(data_provider, "./unet_trained", training_iters=training_iters, epochs=epochs, dropout=dropout, display_step=display_step, restore=restore)
      File "./tf_unet/unet.py", line 403, in train
        test_x, test_y = data_provider(self.verification_batch_size)
      File "./tf_unet/image_util.py", line 89, in __call__
        train_data, labels = self._load_data_and_label()
      File "./tf_unet/image_util.py", line 50, in _load_data_and_label
        labels = self._process_labels(label)
      File "./tf_unet/image_util.py", line 65, in _process_labels
        labels[..., 0] = ~label
    TypeError: ufunc 'invert' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
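    (A plausible fix, assuming the labels were loaded as floats: numpy's ~ operator (np.invert) is only defined for boolean and integer arrays, so casting the masks to bool before handing them to the data provider avoids the error. DATASET_FOLDER is the same variable as in the snippet above.)

    import numpy as np

    # Cast the float masks to bool so that `labels[..., 0] = ~label` works.
    y_data = np.load(DATASET_FOLDER + "y_data.npy").astype(bool)
    y_test = np.load(DATASET_FOLDER + "y_test.npy").astype(bool)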
    
    
    bug 
    opened by stefat77 14
  • Missing training files

    Missing training files

    Dear Author,

    I successfully downloaded and installed tf_unet following the "Tensorflow Unet Documentation Release 0.1.0". However, I get an error about missing training files when I run the code from the bottom of page 3 of the documentation: data_provider = image_util.ImageDataProvider("fishes\train*.tif")

    The error message shows:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "c:\windows\system32\tf_unet\tf_unet\image_util.py", line 166, in __init__
        assert len(self.data_files) > 0, "No training files"
    AssertionError: No training files

    I am wondering if I can download the training files from somewhere else. Can you provide a link to download these training files?
    Thank you.

    -Shupeng
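    (One possible cause, offered as an assumption since the demo data itself may simply be missing: in a normal Python string, "fishes\train*.tif" contains an embedded tab character, because \t is an escape sequence, so the glob matches no files. A sketch of the corrected call:)

    from tf_unet import image_util

    # Use forward slashes or a raw string so '\t' is not parsed as a tab.
    data_provider = image_util.ImageDataProvider("fishes/train*.tif")
    # or: data_provider = image_util.ImageDataProvider(r"fishes\train*.tif")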

    duplicate question 
    opened by ubersexualShupeng 13
  • Trying to read image with 3 classes

    Trying to read image with 3 classes

    I have training data with the following parameters: the input image is an RGB image of size 500x500, and the ground truth has 3 classes with pixel values 0, 50, and 100. I'm trying to read these images in as: generator = image_util.ImageDataProvider("data/train/*.tif", n_class=3)

    However, this produces the following error:

    Traceback (most recent call last):
      File "meta_net.py", line 19, in <module>
        x_test, y_test = generator(1)
      File "/home/Desktop/Projects/tf_unet/tf_unet/image_util.py", line 88, in __call__
        train_data, labels = self._load_data_and_label()
      File "/home/Desktop/Projects/tf_unet/tf_unet/image_util.py", line 58, in _load_data_and_label
        return train_data.reshape(1, ny, nx, self.channels), labels.reshape(1, ny, nx, self.n_class),
    ValueError: cannot reshape array of size 251001 into shape (1,501,501,3)

    How do I properly read in images with multiple classes? I tried looking at ufig_util for some ideas, but I couldn't extract much from it.
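    (A hedged sketch of one way to do this: build the one-hot labels yourself from the integer-valued masks and feed the arrays through SimpleDataProvider, which appears elsewhere on this page. The helper name, the pixel-value list, and the stand-in random data are illustrative, not part of the library.)

    import numpy as np
    from tf_unet import image_util

    pixel_values = [0, 50, 100]

    def one_hot(mask, values):
        # (ny, nx) integer mask -> (ny, nx, n_class) one-hot labels.
        labels = np.zeros(mask.shape + (len(values),), dtype=np.float32)
        for k, v in enumerate(values):
            labels[..., k] = (mask == v)
        return labels

    # Stand-in arrays; replace with images loaded from disk.
    X = np.random.rand(10, 500, 500, 3).astype(np.float32)       # RGB inputs
    masks = np.random.choice(pixel_values, size=(10, 500, 500))  # ground truth

    y = np.stack([one_hot(m, pixel_values) for m in masks])      # (n, ny, nx, 3)
    generator = image_util.SimpleDataProvider(X, y, channels=3, n_class=3)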

    question 
    opened by UCRajkumar 11
  • Weights before softmax error in the weighted loss function

    Weights before softmax error in the weighted loss function

    Hi,

    in the implementation of the weighted loss function, the weights are applied to the logits before the softmax activation function. For a two-class problem, the result is that the larger value after the softmax increases while the smaller value decreases; in other words, the network appears more confident in its predictions. If the weight is large and the prediction is wrong, the gradients will also be larger, though not necessarily by the expected amount. If the prediction is right, however, the gradients will be smaller than they would otherwise have been.

    To ensure correct scaling, the weights should be applied after the call to tf.nn.softmax_cross_entropy_with_logits() and before the call to tf.reduce_mean(), as in the sketch below.
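    (A sketch of that ordering, as an illustration of the suggestion rather than the library's code; the function name and shapes are assumptions.)

    import tensorflow as tf

    def weighted_softmax_cross_entropy(logits, labels, class_weights, n_class):
        # logits, labels: [batch, nx, ny, n_class]; labels are one-hot.
        flat_logits = tf.reshape(logits, [-1, n_class])
        flat_labels = tf.reshape(labels, [-1, n_class])

        # Per-pixel loss; the softmax inside stays undistorted by the weights.
        loss_map = tf.nn.softmax_cross_entropy_with_logits(labels=flat_labels,
                                                           logits=flat_logits)

        # Each pixel is weighted by the weight of its true class.
        weight_map = tf.reduce_sum(
            flat_labels * tf.constant(class_weights, dtype=tf.float32), axis=1)

        return tf.reduce_mean(loss_map * weight_map)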

    opened by FelixGruen 11
  • Always having a blank white image as prediction

    Always having a blank white image as prediction

    After training the model and trying to predict the segmentation, I always get a blank white image. According to another issue, it seems this might be solved by changing the clipping.

    Where can I make such change in the code?

    Thanks.

    opened by abderhasan 10
  • Is there any setting that should be considered when using this code?

    Is there any setting that should be considered when using this code?

    Hi, I am using this code right now. Is there any setting that should be considered when using it, e.g. the input pixel value range or the ground-truth numbering? I used the code, but the result is very bad: the output is below 0.5 everywhere, so it is thresholded to 0 and the whole output comes out black. How can I improve the result? Please help!
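    (Not a confirmed requirement of tf_unet, but one common convention worth checking: inputs scaled to [0, 1] and a binary {0, 1} ground truth. The arrays below are random stand-ins for your data.)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 255, size=(128, 128)).astype(np.float32)  # stand-in image
    y = rng.integers(0, 2, size=(128, 128))                      # stand-in mask

    # Scale inputs to [0, 1].
    X -= X.min()
    X /= X.max()

    # Binary boolean ground truth: False = background, True = object.
    y = y.astype(bool)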

    opened by bhralzz 10
  • Training Problem

    Training Problem

    Hi, thanks for putting up a clean and neat implementation of u-net.

    I've been playing around with your code and managed to adapt the data_provider for my multi-class problem and run the training without any errors. However, the results I'm getting from training are rather strange and not right. The training finishes with what seems to be OK performance:

    18:22:22,965 Iter 6397, Minibatch Loss= 0.2209, Training Accuracy= 0.9514, Minibatch error= 4.9%
    18:22:23,229 Iter 6398, Minibatch Loss= 0.5043, Training Accuracy= 0.8385, Minibatch error= 16.2%
    18:22:23,510 Iter 6399, Minibatch Loss= 0.1701, Training Accuracy= 0.9685, Minibatch error= 3.1%
    18:22:23,511 Epoch 99, Average loss: 0.3064, learning rate: 0.0012
    18:22:23,560 Verification error= 3.0%, loss= 0.1624
    18:22:25,814 Optimization Finished!

    but when I look at the prediction folder and the epoch images, the prediction column looks very strange for all the epochs. When I tried to run a prediction, it also came out all blank, with no meaningful results.

    (attached image: epoch_55)

    I realised some people had similar problems, so I tried to take their advice: adding batch normalisation and increasing the depth, the number of features, the iterations, and the batch size, but none of it seemed to make a difference. When I increased the batch size from the default 1 to 4, the epoch images changed to a smaller window too.

    (attached image: epoch_0)

    Could this be a problem of an unbalanced dataset? I feel I'm doing something wrong and was wondering if anyone can help.
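    (A hedged pointer for the unbalanced-dataset hypothesis: tf_unet's cross-entropy cost accepts optional class weights via cost_kwargs, which can make the rare class contribute more to the loss. The weight values below are placeholders for illustration.)

    from tf_unet import unet

    # Up-weight the rare class so it contributes more to the loss.
    net = unet.Unet(channels=3, n_class=2, layers=3, features_root=16,
                    cost="cross_entropy",
                    cost_kwargs=dict(class_weights=[0.1, 0.9]))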

    opened by DoraUniApp 9
  • How to get multiple images as output?

    How to get multiple images as output?

    Hi, your code is very helpful for me. But the problem is that it takes only the first image from the dataset as input and gives a single image as output. I am a beginner with TensorFlow, so can you please give me a suggestion on how to get multiple images as output?

    Here is my code:

    # preparing data loading
    data_provider = ImageDataProvider("C:/Users/path/*.png")

    # setup & training
    net = unet.Unet(channels=1, n_class=2, layers=3, features_root=16)
    trainer = unet.Trainer(net)
    path = trainer.train(data_provider, output_path, training_iters=10, epochs=4)

    x_test, y_test = data_provider(4)
    prediction = net.predict(path, x_test)

    fig, ax = plt.subplots(1, 3, figsize=(12, 4))
    ax[0].imshow(x_test[0, ..., 0], aspect="auto")
    ax[1].imshow(y_test[0, ..., 1], aspect="auto")
    ax[2].imshow(prediction[0, ..., 1], aspect="auto")

    fig.tight_layout()
    plt.show()
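    (For reference: net.predict already returns one prediction per image in the batch, so with the snippet above, prediction has shape [4, px, py, n_class] and can simply be iterated. A sketch, reusing the variables defined there:)

    # One figure per image in the test batch.
    for i in range(prediction.shape[0]):
        fig, ax = plt.subplots(1, 3, figsize=(12, 4))
        ax[0].imshow(x_test[i, ..., 0], aspect="auto")       # input
        ax[1].imshow(y_test[i, ..., 1], aspect="auto")       # ground truth
        ax[2].imshow(prediction[i, ..., 1], aspect="auto")   # prediction
        fig.tight_layout()
    plt.show()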

    question 
    opened by monicakapadia 9
  • UnsupportedPluginTypeException: Coordinate frame barycentricmeanecliptic not in allowed values

    UnsupportedPluginTypeException: Coordinate frame barycentricmeanecliptic not in allowed values

    I am using tf_unet for the first time, so I tried the demo demo_radio_data.ipynb. When I ran the command seek --file-prefix='/home/sgwhua/workspace/tf_unet-master/demo/bgs_example_data' --post-processing-prefix='/home/sgwhua/workspace/tf_unet-master/demo/bgs_example_data/seek_cache' --chi-1=20 --overwrite=True seek.config.process_survey_fft, I got this error: Coordinate frame barycentricmeanecliptic not in allowed values ['altaz', 'barycentrictrueecliptic', ...

    Traceback (most recent call last):
      File "/home/sgwhua/.local/bin/seek", line 11, in <module>
        load_entry_point('seek==0.1.0', 'console_scripts', 'seek')()
      File "/home/sgwhua/.local/lib/python2.7/site-packages/ivy-0.1.0-py2.7.egg/ivy/cli/main.py", line 28, in run
        _main(*sys.argv[1:])
      File "/home/sgwhua/.local/lib/python2.7/site-packages/ivy-0.1.0-py2.7.egg/ivy/cli/main.py", line 37, in _main
        mgr.launch()
      File "/home/sgwhua/.local/lib/python2.7/site-packages/ivy-0.1.0-py2.7.egg/ivy/workflow_manager.py", line 107, in launch
        executor.run(ctx().params.plugins)
      File "/home/sgwhua/.local/lib/python2.7/site-packages/ivy-0.1.0-py2.7.egg/ivy/backend.py", line 48, in run
        return map(LoopWrapper(loop), mapPlugin.getWorkload())
      File "/home/sgwhua/.local/lib/python2.7/site-packages/ivy-0.1.0-py2.7.egg/ivy/backend.py", line 126, in __call__
        for plugin in self.loop:
      File "/home/sgwhua/.local/lib/python2.7/site-packages/ivy-0.1.0-py2.7.egg/ivy/loop.py", line 95, in next
        return self._instantiate(plugin)
      File "/home/sgwhua/.local/lib/python2.7/site-packages/ivy-0.1.0-py2.7.egg/ivy/loop.py", line 136, in _instantiate
        return PluginFactory.createInstance(pluginName, self.ctx)
      File "/home/sgwhua/.local/lib/python2.7/site-packages/ivy-0.1.0-py2.7.egg/ivy/plugin/plugin_factory.py", line 61, in createInstance
        raise UnsupportedPluginTypeException("Module '%s' could not be instantiated'" % pluginName, ex)
    ivy.exceptions.exceptions.UnsupportedPluginTypeException: (u"Module 'seek.plugins.initialize' could not be instantiated'", ValueError(u"Coordinate frame barycentricmeanecliptic not in allowed values ['altaz', 'barycentrictrueecliptic', 'cirs', 'fk4', 'fk4noeterms', 'fk5', 'galactic', 'galacticlsr', 'galactocentric', 'gcrs', 'geocentrictrueecliptic', 'hcrs', 'heliocentrictrueecliptic', 'icrs', 'itrs', 'lsr', 'precessedgeocentric', 'supergalactic']",))
    
    opened by white3 0
  • TypeError: Fetch argument None has invalid type <class 'NoneType'>

    TypeError: Fetch argument None has invalid type

    from __future__ import division, print_function
    %matplotlib inline
    import matplotlib.pyplot as plt
    import matplotlib
    import numpy as np
    plt.rcParams['image.cmap'] = 'gist_earth'
    np.random.seed(98765)

    from tf_unet import image_gen
    from tf_unet import unet
    from tf_unet import util

    nx = 572
    ny = 572

    generator = image_gen.GrayScaleDataProvider(nx, ny, cnt=20)

    x_test, y_test = generator(1)

    fig, ax = plt.subplots(1, 2, sharey=True, figsize=(8, 4))
    ax[0].imshow(x_test[0, ..., 0], aspect="auto")
    ax[1].imshow(y_test[0, ..., 1], aspect="auto")

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    net = unet.Unet(channels=generator.channels, n_class=generator.n_class, layers=3, features_root=16)

    trainer = unet.Trainer(net, optimizer="momentum", opt_kwargs=dict(momentum=0.2))

    path = trainer.train(generator, "./unet_trained", training_iters=32, epochs=10, display_step=2)

    The error raised by the trainer.train call:

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
    ----> 1 path = trainer.train(generator, "./unet_trained", training_iters=32, epochs=10, display_step=2)

    ~/.local/lib/python3.8/site-packages/tf_unet-0.1.2-py3.8.egg/tf_unet/unet.py in train(self, data_provider, output_path, training_iters, epochs, dropout, display_step, restore, write_graph, prediction_path)
        447
        448                     if step % display_step == 0:
    --> 449                         self.output_minibatch_stats(sess, summary_writer, step, batch_x,
        450                                                     util.crop_to_shape(batch_y, pred_shape))
        451

    ~/.local/lib/python3.8/site-packages/tf_unet-0.1.2-py3.8.egg/tf_unet/unet.py in output_minibatch_stats(self, sess, summary_writer, step, batch_x, batch_y)
        486     def output_minibatch_stats(self, sess, summary_writer, step, batch_x, batch_y):
        487         # Calculate batch loss and accuracy
    --> 488         summary_str, loss, acc, predictions = sess.run([self.summary_op,
        489                                                         self.net.cost,
        490                                                         self.net.accuracy,

    ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
        955
        956     try:
    --> 957       result = self._run(None, fetches, feed_dict, options_ptr,
        958                          run_metadata_ptr)
        959       if run_metadata:

    ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
       1163
       1164     # Create a fetch handler to take care of the structure of fetches.
    -> 1165     fetch_handler = _FetchHandler(
       1166         self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
       1167

    ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/client/session.py in __init__(self, graph, fetches, feeds, feed_handles)
        475     """
        476     with graph.as_default():
    --> 477       self._fetch_mapper = _FetchMapper.for_fetch(fetches)
        478     self._fetches = []
        479     self._targets = []

    ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/client/session.py in for_fetch(fetch)
        264     elif isinstance(fetch, (list, tuple)):
        265       # NOTE(touts): This is also the code path for namedtuples.
    --> 266       return _ListFetchMapper(fetch)
        267     elif isinstance(fetch, collections_abc.Mapping):
        268       return _DictFetchMapper(fetch)

    ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/client/session.py in __init__(self, fetches)
        376     else:
        377       self._fetch_type = type(fetches)
    --> 378     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
        379     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
        380

    ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/client/session.py in <listcomp>(.0)
        376     else:
        377       self._fetch_type = type(fetches)
    --> 378     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
        379     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
        380

    ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/client/session.py in for_fetch(fetch)
        260     """
        261     if fetch is None:
    --> 262       raise TypeError('Fetch argument %r has invalid type %r' %
        263                       (fetch, type(fetch)))
        264     elif isinstance(fetch, (list, tuple)):

    TypeError: Fetch argument None has invalid type <class 'NoneType'>

    opened by rubbyaworka 1
  • What is the difference between Jaccard similarity and intersection over union?

    What is the difference between Jaccard similarity and intersection over union?

    What is the difference between Jaccard similarity and intersection over union? If both are the same, then why do they have different formulas?

    Jaccard similarity = |A ∩ B| / (|A| + |B| − |A ∩ B|)

    Intersection over union = |A ∩ B| / |A ∪ B|
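    (The two are in fact the same quantity; the denominators coincide by inclusion-exclusion. A short derivation:)

    \[
    \mathrm{IoU}(A,B) = \frac{|A \cap B|}{|A \cup B|}
                      = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}
                      = J(A,B),
    \quad\text{since } |A \cup B| = |A| + |B| - |A \cap B|.
    \]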

    opened by alicug 0
  • Training Accuracy is always 1.00 and the Minibatch error is always 0.0%

    Training Accuracy is always 1.00 and the Minibatch error is always 0.0%

    Hi, I ran into trouble when training the model: the minibatch loss seems normal, but the training accuracy is always 1 and the minibatch error is always 0.0%. (attached screenshot) I just want to extract buildings from images, and my mask labels have 3 channels; should I set n_class=3?

    Here is my code:

    from tf_unet import unet, util, image_util

    data_provider = image_util.ImageDataProvider("data/train/*.tif")
    net = unet.Unet(layers=3, features_root=64, channels=3, n_class=3)
    trainer = unet.Trainer(net)
    path = trainer.train(data_provider, "./data/unet_trained_bgs_example_data", training_iters=32, epochs=100, dropout=0.5)

    verification:

    ...
    data_provider = image_util.ImageDataProvider("data/test/*.tif")
    x_test, y_test = data_provider(1)
    prediction = net.predict("./data/unet_trained_bgs_example_data/model.ckpt", x_test)
    unet.error_rate(prediction, util.crop_to_shape(y_test, prediction.shape))
    img = util.combine_img_prediction(x_test, y_test, prediction)
    util.save_image(img, "prediction.jpg")

    opened by ChristmasLatte 3
Releases (0.1.2)
  • 0.1.2 (Jan 8, 2019)

    • Name scopes to improve TensorBoard layout
    • Move bias addition before dropout
    • Numerically stable cross-entropy computation
    • Parametrized verification batch size
    • Bugfix if all pixel values are 0
    • Cleaned examples
  • 0.1.1 (Dec 29, 2017)

  • 0.1.0 (Mar 27, 2017)

Owner
Joel Akeret