A simple command-line tool for text-to-image generation, using OpenAI's CLIP and a BigGAN.

Overview

Example prompts (the generated images are not reproduced here):

  • artificial intelligence
  • cosmic love and attention
  • fire in the sky
  • a pyramid made of ice
  • a lonely house in the woods
  • marriage in the mountains
  • lantern dangling from a tree in a foggy graveyard
  • a vivid dream
  • balloons over the ruins of a city
  • the death of the lonesome astronomer - by moirage
  • the tragic intimacy of the eternal conversation with oneself - by moirage
  • demon fire - by WiseNat

Big Sleep

Ryan Murdock has done it again, combining OpenAI's CLIP and the generator from a BigGAN! This repository wraps up his work so it is easily accessible to anyone who owns a GPU.

You will be able to have the GAN dream up images using natural language with a one-line command in the terminal.
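Under the hood, the idea is to keep BigGAN's weights frozen and optimize its latent inputs so that CLIP rates the generated image as a good match for the prompt. Below is a minimal sketch of that loop, using hypothetical stand-ins for the generator and the CLIP similarity; the real package wires up the actual BigGAN and CLIP models for you.

import torch

# Hypothetical stand-ins, for illustration only (not the real models)
def generate_image(latent):                      # BigGAN generator: latent -> image
    return torch.sigmoid(latent).view(1, 3, 8, 8)

def clip_similarity(image, text_embedding):      # CLIP critic: higher = better match
    return torch.cosine_similarity(image.flatten(), text_embedding, dim=0)

text_embedding = torch.randn(192)                # pretend CLIP-encoded prompt
latent = torch.randn(192, requires_grad=True)    # the latent being optimized
optimizer = torch.optim.Adam([latent], lr=5e-2)

for step in range(100):
    image = generate_image(latent)
    loss = -clip_similarity(image, text_embedding)   # maximize similarity to the prompt
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()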

Original notebook Open In Colab

Simplified notebook Open In Colab

Install

$ pip install big-sleep

Usage

$ dream "a pyramid made of ice"

Images will be saved to the directory from which the command is invoked

Advanced

You can invoke this in code with

from big_sleep import Imagine

dream = Imagine(
    text = "fire in the sky",
    lr = 5e-2,               # learning rate
    save_every = 25,         # save an image every 25 iterations
    save_progress = True     # save the progression of images during training
)

dream()

You can now train on more than one phrase by using the "|" delimiter

Train on Multiple Phrases

In this example we train on three phrases:

  • an armchair in the form of pikachu
  • an armchair imitating pikachu
  • abstract
from big_sleep import Imagine

dream = Imagine(
    text = "an armchair in the form of pikachu|an armchair imitating pikachu|abstract",
    lr = 5e-2,
    save_every = 25,
    save_progress = True
)

dream()

Penalize certain prompts as well!

In this example we train on the three phrases from before, and penalize the phrases:

  • blur
  • zoom
from big_sleep import Imagine

dream = Imagine(
    text = "an armchair in the form of pikachu|an armchair imitating pikachu|abstract",
    text_min = "blur|zoom",
)
dream()

You can also set new text by using the .set_text() command

dream.set_text("a quiet pond underneath the midnight moon")

And reset the latents with .reset()

dream.reset()
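Putting the two together, a small sketch (using the same Imagine API shown above) that trains on one prompt and then starts over on a new one:

from big_sleep import Imagine

dream = Imagine(text = "fire in the sky")
dream()                          # train on the first prompt

dream.set_text("a quiet pond underneath the midnight moon")
dream.reset()                    # start from fresh latents for the new prompt
dream()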

To save the progression of images during training, you simply have to supply the --save-progress flag

$ dream "a bowl of apples next to the fireplace" --save-progress --save-every 100

Due to the class-conditioned nature of the GAN, Big Sleep often steers off the manifold into noise. You can use a flag to save the best high-scoring image (per the CLIP critic) to {filepath}.best.png in your folder.

$ dream "a room with a view of the ocean" --save-best

Experimentation

You can restrict the number of BigGAN classes Big Sleep is allowed to use with the --max-classes flag, as follows (e.g. 15 classes). This may lead to extra stability during training, at the cost of some expressivity.

$ dream 'a single flower in a withered field' --max-classes 15
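If you are driving Big Sleep from Python instead of the CLI, the same restriction can presumably be passed to the constructor; a sketch, assuming Imagine accepts a max_classes argument that mirrors the --max-classes flag:

from big_sleep import Imagine

# assumes `max_classes` mirrors the --max-classes CLI flag
dream = Imagine(
    text = "a single flower in a withered field",
    max_classes = 15
)
dream()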

Alternatives

Deep Daze - CLIP and a deep SIREN network


Citations

@misc{unpublished2021clip,
    title  = {CLIP: Connecting Text and Images},
    author = {Alec Radford and Ilya Sutskever and Jong Wook Kim and Gretchen Krueger and Sandhini Agarwal},
    year   = {2021}
}
@misc{brock2019large,
    title   = {Large Scale GAN Training for High Fidelity Natural Image Synthesis}, 
    author  = {Andrew Brock and Jeff Donahue and Karen Simonyan},
    year    = {2019},
    eprint  = {1809.11096},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
Comments
  • Possible bug in latent vector loss calculation?

    I'm confused by this, and wondering if it could be a bug? It seems as though latents is of size (32,128), which means that for array in latents: iterates 32 times. However, the results from these iterations aren't stored anywhere, so they are at best a waste of time and at worst causing a miscalculation. Perhaps the intention was to accumulate the kurtoses and skews for each array in latents, and then compute lat_loss using all the accumulated values?

    for array in latents:
        mean = torch.mean(array)
        diffs = array - mean
        var = torch.mean(torch.pow(diffs, 2.0))
        std = torch.pow(var, 0.5)
        zscores = diffs / std
        skews = torch.mean(torch.pow(zscores, 3.0))
        kurtoses = torch.mean(torch.pow(zscores, 4.0)) - 3.0
    
    lat_loss = lat_loss + torch.abs(kurtoses) / num_latents + torch.abs(skews) / num_latents
    

    Occurs at https://github.com/lucidrains/big-sleep/blob/main/big_sleep/big_sleep.py#L211
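
    For reference, a sketch of what the suggested accumulation might look like (an illustration of the commenter's proposal, not the repository's actual code):

    import torch

    def latent_moment_loss(latents, num_latents):
        # Accumulate |skew| and |kurtosis| over every latent vector,
        # rather than keeping only the values from the final loop iteration.
        lat_loss = torch.tensor(0.0)
        for array in latents:
            mean = torch.mean(array)
            diffs = array - mean
            var = torch.mean(torch.pow(diffs, 2.0))
            std = torch.pow(var, 0.5)
            zscores = diffs / std
            skews = torch.mean(torch.pow(zscores, 3.0))
            kurtoses = torch.mean(torch.pow(zscores, 4.0)) - 3.0
            lat_loss = lat_loss + (torch.abs(kurtoses) + torch.abs(skews)) / num_latents
        return lat_loss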

    opened by walmsley 14
  • Failure on first epoch

    It opens the folder where the picture should be saved, but this error shows up immediately:

    RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasGemmEx( handle, opa, opb, m, n, k, &falpha, a, CUDA_R_16F, lda, b, CUDA_R_16F, ldb, &fbeta, c, CUDA_R_16F, ldc, CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP)

    torch version: 1.7.1 torch.cuda.is_available() == true

    What am I missing?

    opened by nhalsteadvt 11
  • Generated images are completely black?! 😡 What am I doing wrong?

    Hello, I am on Windows 10, and my gpu is a PNY Nvidia GTX 1660 TI 6 Gb. I installed Big Sleep like so:

    • conda create --name bigsleep python=3.8
    • conda activate bigsleep
    • conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch (as per Pytorch website instructions)
    • pip install big-sleep

    The problem is that when I launch dream --text="a beautiful mind" --num-cutouts=64 --image-size=512 --save_every=10 --seed=12345 the generated images are completely black (although the inference process seems to run nicely and without errors).

    Things I've tried:

    • installing previous pytorch version with conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
    • removing the big-sleep conda environment completely and recreating it anew
    • uninstalling nvidia drivers and performing a new clean driver install (I tried both Nvidia Studio drivers and Nvidia Game Ready drivers)
    • uninstalling and reinstalling Conda completely

    But nothing helped... and at this point I don't know what else to try...

    The only interesting piece of information I could gather is that for some reason this problem also happens with another text-to-image project called v-diffusion-pytorch where similar to Big Sleep the inference process appears to run correctly but the generated images are all black.

    I think there must be some simple detail I'm overlooking... which is making me go insane... 😡 Please let me know something if you think you can help! THANKS!

    opened by illtellyoulater 7
  • dream.set_text("a quiet pond underneath the midnight moon") = object has no attribute 'set_text'

    Looks like def set_text is now def set_clip_encoding? Also, various settings aren't honoured using dream.set_clip_encoding then dream (such as save_best = True). Would it be possible / make sense to retain settings from the previous dream?
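
    A possible workaround, sketched under the assumption that the rename described above holds and that the new method accepts the prompt as its first argument:

    # assumes set_clip_encoding replaced set_text and takes the prompt directly
    dream.set_clip_encoding("a quiet pond underneath the midnight moon")
    dream.reset()
    dream()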

    opened by nerdyrodent 6
  • Overwrite Progress Bars instead of printing each iteration

    Update tqdm calls to overwrite a single line. This leads to less console spam. Also use tqdm for the image update notifications (I think the math on this is wrong, please check).

    opened by DrJKL 6
  • Can't install because of conflicting dependencies

    OS: Arch GNU/Linux
    Python: 3.10.1
    Command run: pip3 install big-sleep
    Error:

      Downloading big_sleep-0.0.1-py3-none-any.whl (1.4 MB)
         |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.4 MB 9.1 MB/s
    ERROR: Cannot install big-sleep==0.0.1, big-sleep==0.0.2, big-sleep==0.1.0, big-sleep==0.1.1, big-sleep==0.1.2, big-sleep==0.1.4, big-sleep==0.2.0, big-sleep==0.2.2, big-sleep==0.2.3, big-sleep==0.2.4, big-sleep==0.2.5, big-sleep==0.2.6, big-sleep==0.2.7, big-sleep==0.2.8, big-sleep==0.2.9, big-sleep==0.3.0, big-sleep==0.3.1, big-sleep==0.3.2, big-sleep==0.3.3, big-sleep==0.3.4, big-sleep==0.3.5, big-sleep==0.3.6, big-sleep==0.3.7, big-sleep==0.3.8, big-sleep==0.4.0, big-sleep==0.4.1, big-sleep==0.4.10, big-sleep==0.4.11, big-sleep==0.4.2, big-sleep==0.4.3, big-sleep==0.4.4, big-sleep==0.4.5, big-sleep==0.4.6, big-sleep==0.4.7, big-sleep==0.4.8, big-sleep==0.4.9, big-sleep==0.5.0, big-sleep==0.5.1, big-sleep==0.5.2, big-sleep==0.5.3, big-sleep==0.6.0, big-sleep==0.6.1, big-sleep==0.6.2, big-sleep==0.7.0, big-sleep==0.7.1, big-sleep==0.7.2, big-sleep==0.8.0, big-sleep==0.8.1, big-sleep==0.8.2, big-sleep==0.8.3, big-sleep==0.8.4 and big-sleep==0.8.5 because these package versions have conflicting dependencies.
    
    The conflict is caused by:
        big-sleep 0.8.5 depends on torchvision>=0.8.2
        big-sleep 0.8.4 depends on torchvision>=0.8.2
        big-sleep 0.8.3 depends on torchvision>=0.8.2
        big-sleep 0.8.2 depends on torchvision>=0.8.2
        big-sleep 0.8.1 depends on torchvision>=0.8.2
        big-sleep 0.8.0 depends on torchvision>=0.8.2
        big-sleep 0.7.2 depends on torchvision>=0.8.2
        big-sleep 0.7.1 depends on torchvision>=0.8.2
        big-sleep 0.7.0 depends on torchvision>=0.8.2
        big-sleep 0.6.2 depends on torchvision>=0.8.2
        big-sleep 0.6.1 depends on torchvision>=0.8.2
        big-sleep 0.6.0 depends on torchvision>=0.8.2
        big-sleep 0.5.3 depends on torchvision>=0.8.2
        big-sleep 0.5.2 depends on torchvision>=0.8.2
        big-sleep 0.5.1 depends on torchvision>=0.8.2
        big-sleep 0.5.0 depends on torchvision>=0.8.2
        big-sleep 0.4.11 depends on torchvision>=0.8.2
        big-sleep 0.4.10 depends on torchvision>=0.8.2
        big-sleep 0.4.9 depends on torchvision>=0.8.2
        big-sleep 0.4.8 depends on torchvision>=0.8.2
        big-sleep 0.4.7 depends on torchvision>=0.8.2
        big-sleep 0.4.6 depends on torchvision>=0.8.2
        big-sleep 0.4.5 depends on torchvision>=0.8.2
        big-sleep 0.4.4 depends on torchvision>=0.8.2
        big-sleep 0.4.3 depends on torchvision>=0.8.2
        big-sleep 0.4.2 depends on torchvision>=0.8.2
        big-sleep 0.4.1 depends on torchvision>=0.8.2
        big-sleep 0.4.0 depends on torchvision>=0.8.2
        big-sleep 0.3.8 depends on torchvision>=0.8.2
        big-sleep 0.3.7 depends on torchvision>=0.8.2
        big-sleep 0.3.6 depends on torchvision>=0.8.2
        big-sleep 0.3.5 depends on torchvision>=0.8.2
        big-sleep 0.3.4 depends on torchvision>=0.8.2
        big-sleep 0.3.3 depends on torchvision>=0.8.2
        big-sleep 0.3.2 depends on torchvision>=0.8.2
        big-sleep 0.3.1 depends on torchvision>=0.8.2
        big-sleep 0.3.0 depends on torchvision>=0.8.2
        big-sleep 0.2.9 depends on torchvision>=0.8.2
        big-sleep 0.2.8 depends on torchvision>=0.8.2
        big-sleep 0.2.7 depends on torch>=1.7.1
        big-sleep 0.2.6 depends on torch>=1.7.1
        big-sleep 0.2.5 depends on torch>=1.7.1
        big-sleep 0.2.4 depends on torch>=1.7.1
        big-sleep 0.2.3 depends on torch>=1.7.1
        big-sleep 0.2.2 depends on torch>=1.7.1
        big-sleep 0.2.0 depends on torch>=1.7.1
        big-sleep 0.1.4 depends on torch>=1.7.1
        big-sleep 0.1.2 depends on torch>=1.7.1
        big-sleep 0.1.1 depends on torch>=1.7.1
        big-sleep 0.1.0 depends on torch>=1.7.1
        big-sleep 0.0.2 depends on torch>=1.7.1
        big-sleep 0.0.1 depends on torch>=1.7.1
    
    opened by ThatOneCalculator 5
  • Method 'forward' is not defined

    I installed the module via

    $ pip install deep-daze

    and just tried the provided example with

    $ imagine "a house in the forest"

    but after it loaded something for a few minutes (the first time I ran the command) it throws this error:

    Traceback (most recent call last):

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-2-32b6fbd8f807> in <module>()
    ----> 1 from deep_daze import Imagine
         2 
         3 imagine = Imagine(
         4     text = 'cosmic love and attention',
         5     num_layers = 24,
    
    E:\Anaconda\lib\site-packages\deep_daze\__init__.py in <module>()
    ----> 1 from deep_daze.deep_daze import DeepDaze, Imagine
    
    E:\Anaconda\lib\site-packages\deep_daze\deep_daze.py in <module>()
        37 signal.signal(signal.SIGINT, signal_handling)
        38 
    ---> 39 perceptor, normalize_image = load()
        40 
        41 # Helpers
    
    E:\Anaconda\lib\site-packages\deep_daze\clip.py in load()
       190                     node.copyAttributes(device_node)
       191 
    --> 192     model.apply(patch_device)
       193     patch_device(model.encode_image)
       194     patch_device(model.encode_text)
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       471         """
       472         for module in self.children():
    --> 473             module.apply(fn)
       474         fn(self)
       475         return self
    
    E:\Anaconda\lib\site-packages\torch\nn\modules\module.py in apply(self, fn)
       472         for module in self.children():
       473             module.apply(fn)
    --> 474         fn(self)
       475         return self
       476 
    
    E:\Anaconda\lib\site-packages\deep_daze\clip.py in patch_device(module)
       181 
       182     def patch_device(module):
    --> 183         graphs = [module.graph] if hasattr(module, "graph") else []
       184         if hasattr(module, "forward1"):
       185             graphs.append(module.forward1.graph)
    
    E:\Anaconda\lib\site-packages\torch\jit\_script.py in graph(self)
       447             ``forward`` method. See :ref:`interpreting-graphs` for details.
       448             """
    --> 449             return self._c._get_method("forward").graph
       450 
       451         @property
    
    RuntimeError: Method 'forward' is not defined.
    

    My system is: Windows 10, GeForce GTX 1060 6G, pytorch 1.8.0+cu111, python 3.7.0

    opened by Jeffrey0Liao 4
  • Question about Colab environments affecting results

    This isn't really an issue, but I've been using this package to morph from one text input to the next by updating the input while it's running and storing all intermediate output. I'm using variations of the notebook I've checked in here: https://github.com/lots-of-things/Story2Hallucination

    When doing this I've gotten to really see inside how the algorithm converges on its solution. And I've noticed that there are at least two distinct modes. Sometimes, the algorithm quickly converges and sticks and other times the algorithm wobbles around an image and then much more easily warps to something new.

    Here's two images with the same input and framerate: jerky/sticky output

    wobbly/warpy output

    If these two modes happened randomly I'd understand it, but here is the really strange part: this behavior is consistent within the same colab environment. If I pull up a colab env, run the notebook, and it goes the sticky way, then no matter how many times I restart the run, it will always be sticky. I have to factory reset the env to get it to change. And if it starts out the wobbly way, it'll stay wobbly.

    It sounds bizarre but is there any reason this would be possible? I know there are different CUDA environments, but not sure if/why that would make it so different.

    opened by stedn 4
  • Backslash character in file names.

    I'm now getting file names containing "\" which were previously removed automatically. This is causing havoc when I attempt to use Windows to read files over the network which were created on my Linux system.

    opened by pmonck 3
  • Image saving error

    Hello, I am using Big Sleep straight out of the command line on Windows with no extra flags, but for some reason whenever I get to image update 2 or 5 (most times 2) it errors out with OSError: [Errno 22] Invalid argument. I couldn't find any fix that would work in this situation. Any and all help solving this error is greatly appreciated.

    opened by Wemmons831 2
  • Not Working with Nemo File Manager

    Installed successfully on Arch 5.11.16 with pip install big-sleep and it worked great, but after a reboot it fails to generate images. After running any command, e.g. prime-run dream --num-cutouts=25 --save-progress --save-every 100 "whatever" or just dream --num-cutouts 25 "whatever", it opens the file manager (Nemo) to the directory but no images are generated. It sits there still using RAM but no GPU or CPU. Previously I was able to get it to work by reinstalling torch and big-sleep, but that doesn't solve the problem anymore.

    Edit: After letting it sit for a few minutes, cancelling the command with CTRL+C shows detecting keyboard interrupt, gracefully exiting with empty progress bars before exiting.

    Running with the --open-folder False option fixes this

    opened by tilktilk5 2
  • Man in Sky

    A man who is levitating above the city, towards the clouds, part of his body is disintegrating and turning into binary numbers. At the bottom of the sky there is a transparent, blinding hand that receives it and welcomes it

    opened by FrankLaheraOcallaghan 0
  • Error: tensor is not a torch image

    I tried to run several examples on GCP with torch installed correctly and a GPU.

    dream = Imagine(
        text = "an armchair in the form of pikachu",
    )
    dream()
    

    Sadly, I get this error:

    TypeError: tensor is not a torch image.
    

    Maybe someone else encountered that?

    opened by dokato 0
  • How to use CUDA with big-sleep?

    I am trying to run big-sleep on my Ubuntu machine. CUDA is installed and big-sleep runs normally, but during the run my GPU is not used and CPU usage reaches 100%. For a 512*512 image I need to train for 6 hours, which is too long.

    opened by andy1994220 0
  • Does it work on mac ?

    Hello, newbie here. I'm on a Mac M1. Installation worked with "pip install big-sleep", but then when I try "dream..." it says "command not found". Does anyone know why, and how to make it work? Thanks a lot.

    opened by NtnmrnNtnmrn 6
  • Option for initializing the generation with an image file?

    Hi @lucidrains, @afiaka87, @DrJKL, @enricoros, and all other contributors to this amazing project.

    I am trying to figure out the following: I saw there is an "--img" flag for using an image as prompt, but is there a way to use an image as initializer for the generation?

    If not, then do you guys have any plans to implement it? I'd say this is probably one of the most important features still missing from Big Sleep and... I would be happy to contribute to this myself, but this is not my field (yet!) so I honestly have no idea where to start...

    But ! If you think this could be similar to how other projects implemented it (for example deep daze), then I could take a look at it and try to make my way around it... and I could probably get somewhere...

    So yeah, let me know what you think!

    opened by illtellyoulater 15
Releases(0.9.1)
Owner: Phil Wang (Working with Attention.)