A fastai/PyTorch package for unpaired image-to-image translation.

Overview

This is a package for training and testing unpaired image-to-image translation models. It currently includes implementations of the CycleGAN, DualGAN, and GANILLA models, and more models will be added in the future.

This package uses fastai to accelerate deep learning experimentation. Additionally, nbdev was used to develop the package and produce documentation based on a series of notebooks.

Install

To install, use pip:

pip install git+https://github.com/tmabraham/UPIT.git

The package uses torch 1.7.1, torchvision 0.8.2, and fastai 2.3.0 (and their dependencies). nbdev 1.1.13 is also required if you would like to add features to the package, and gradio 1.1.6 is used for the web-app model interface.
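
If your environment pulls in newer, incompatible releases, one option is to pin the tested versions before installing UPIT (a sketch; the exact pins may need adjusting for your platform):

pip install torch==1.7.1 torchvision==0.8.2 fastai==2.3.0 nbdev==1.1.13 gradio==1.1.6
pip install git+https://github.com/tmabraham/UPIT.git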

How to use

Training a CycleGAN model is easy with UPIT! Given the paths to the images from the two domains, trainA_path and trainB_path, you can do the following:

#cuda
from upit.data.unpaired import *
from upit.models.cyclegan import *
from upit.train.cyclegan import *
from fastai.vision.all import *   # provides Adam, partial, etc.

dls = get_dls(trainA_path, trainB_path)   # unpaired DataLoaders built from the two image folders
cycle_gan = CycleGAN(3, 3, 64)            # 3-channel inputs and outputs, 64 filters in the first generator layer
learn = cycle_learner(dls, cycle_gan, opt_func=partial(Adam, mom=0.5, sqr_mom=0.999))
learn.fit_flat_lin(100, 100, 2e-4)        # 100 epochs at a flat LR of 2e-4, then 100 epochs of linear decay

The GANILLA model differs only in its generator architecture (which is meant to strike a better balance between style and content), so the same cycle_learner can be used:

#cuda
from upit.models.ganilla import *

ganilla = GANILLA(3, 3, 64)   # GANILLA generator, constructed with the same arguments as CycleGAN above
learn = cycle_learner(dls, ganilla, opt_func=partial(Adam, mom=0.5, sqr_mom=0.999))
learn.fit_flat_lin(100, 100, 2e-4)

Finally, we provide separate functions/classes for the DualGAN model and its training:

#cuda
from upit.models.dualgan import *
from upit.train.dualgan import *

dual_gan = DualGAN(3, 64, 3)
learn = dual_learner(dls, dual_gan, opt_func=RMSProp)
learn.fit_flat_lin(100, 100, 2e-4)

Additionally, we provide metrics for quantitative evaluation of the models, as well as experiment tracking with Weights and Biases. Check the documentation for more information!
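
For example, experiment tracking can be hooked into any of the learners above through fastai's standard callback mechanism. Below is a minimal sketch assuming fastai's WandbCallback; the project name is hypothetical, and prediction logging is disabled since the CycleGAN output format may not be what the callback expects out of the box:

#cuda
import wandb
from fastai.callback.wandb import WandbCallback

wandb.init(project='upit-example')                 # hypothetical project name
learn = cycle_learner(dls, cycle_gan, opt_func=partial(Adam, mom=0.5, sqr_mom=0.999))
learn.add_cb(WandbCallback(log_preds=False))       # log losses/metrics to Weights and Biases
learn.fit_flat_lin(100, 100, 2e-4)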

Citing UPIT

If you use UPIT in your research, please use the following BibTeX entry:

@Misc{UPIT,
    author =       {Tanishq Mathew Abraham},
    title =        {UPIT - A fastai/PyTorch package for unpaired image-to-image translation.},
    howpublished = {Github},
    year =         {2021},
    url =          {https://github.com/tmabraham/UPIT}
}
Comments
  • AttributeError: 'Learner' object has no attribute 'pred'

    Hi. I am getting the following error:

    from upit.data.unpaired import *
    from upit.models.cyclegan import *
    from upit.train.cyclegan import *
    from fastai.vision.all import *
    
    horse2zebra = untar_data('https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/horse2zebra.zip')
    
    
    folders = horse2zebra.ls().sorted()
    trainA_path = folders[2]
    trainB_path = folders[3]
    testA_path = folders[0]
    testB_path = folders[1]
    
    dls = get_dls(trainA_path, trainB_path,num_A=100)
    cycle_gan = CycleGAN(3,3,64)
    learn = cycle_learner(dls, cycle_gan,show_img_interval=1)
    learn.show_training_loop()
    learn.lr_find()
    
    AttributeError: 'Learner' object has no attribute 'pred'
    
    opened by turgut090 8
  • 'str' object has no attribute "__stored_args__" (ISSUE #7)

    I opened an issue (#7) on 03/09/2020. The problem and the solution are mentioned below with the colab files as well.

    On 03/09/2020, a commit was made to fastai/fastcore/utils.py (https://github.com/fastai/fastcore/commit/ea1c33f1c3543e6e4403b1d3b7702a98471f3515, line 86) which means we no longer need to pass "self" when calling the store_attr method (the attribute names are simply separated by commas).

    I made a small change to the code, i.e. in upit/train/cyclegan.py, so that it follows the changed fastcore/utils.py structure: I removed the self argument from the store_attr call in cyclegan.py.

    The error can be seen here

    Screenshot from 2020-09-04 22-27-26

    The colab link: https://colab.research.google.com/drive/1lbXhX-bWvTsQcLb2UUFI0UkRpvx_WK2Y?usp=sharing

    After the change, the error does not occur as shown here.

    Screenshot from 2020-09-04 22-50-17

    The Colab link: https://colab.research.google.com/drive/13rrNLBgDulgeHclovxJNtQVbVYcGMKm0?usp=sharing

    opened by lohithmunakala 7
  • Expose tfms in dataset generation

    Hey there,

    I think it would be a good idea to expose the tfms of the Datasets in the get_dls function, so users can set their own. https://github.com/tmabraham/UPIT/blob/c6c769cf8cedeec42865deddba26c2e413772303/upit/data/unpaired.py#L30

    The same goes for the dataloaders' batch_tfms (e.g. if I want to disable flipping I have to rewrite the dataloaders).

    https://github.com/tmabraham/UPIT/blob/c6c769cf8cedeec42865deddba26c2e413772303/upit/data/unpaired.py#L33

    opened by hno2 4
  • How do I turn fake_A and fake_B into images and save them?

    I would like to see what fake_A and fake_B look like at this step in the process saved as images. I can't seem to figure out how to convert them properly.

    def forward(self, output, target):
        """
        Forward function of the CycleGAN loss function. The generated images are passed in
        as output (which comes from the model) and the generator loss is returned.
        """
        fake_A, fake_B, idt_A, idt_B = output
        # Save and look at png images of fake_A and fake_B here
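
    A minimal sketch of one way to do this, assuming the generator outputs are roughly in [-1, 1] (the same rescaling used in the inference example further down this page); save_fakes is a hypothetical helper:

    import torchvision.transforms as T

    to_pil = T.ToPILImage()

    def save_fakes(fake_A, fake_B, idx=0):
        # Rescale from [-1, 1] to [0, 1], detach, move to CPU, and convert to PIL images
        for name, fake in [("fake_A", fake_A), ("fake_B", fake_B)]:
            img = to_pil((fake[idx].detach().cpu() / 2 + 0.5).clamp(0, 1))
            img.save(f"{name}_{idx}.png")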

    opened by rbunn80110 3
  • Possible typo in the loss

    https://github.com/tmabraham/UPIT/blob/1f272eac299181348c31988289c1420936cb580b/upit/train/cyclegan.py#L132

    Was this intended or was this line supposed to be: self.learn.loss_func.D_B_loss = loss_D_B.detach().cpu() ?

    opened by many-hats 2
  • Inference - Can not load state_dict

    Hey,

    me again. Sorry to bother you again!

    So I am trying to do inference on a trained model (with default values). I exported the generator with the export_generator function. Now I try to load my generator as shown in the Web App Example.

    But I get errors when loading the state_dict. The state dict seems to have extra keys for nine extra layers, if I understand the error message correctly:

    Error Message

    model.load_state_dict(torch.load("generator.pth", map_location=device))
      File "/Applications/Utilities/miniconda3/envs/ml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for Sequential:
            Missing key(s) in state_dict: "10.conv_block.5.weight", "10.conv_block.5.bias", "11.conv_block.5.weight", "11.conv_block.5.bias", "12.conv_block.5.weight", "12.conv_block.5.bias", "13.conv_block.5.weight", "13.conv_block.5.bias", "14.conv_block.5.weight", "14.conv_block.5.bias", "15.conv_block.5.weight", "15.conv_block.5.bias", "16.conv_block.5.weight", "16.conv_block.5.bias", "17.conv_block.5.weight", "17.conv_block.5.bias", "18.conv_block.5.weight", "18.conv_block.5.bias". 
            Unexpected key(s) in state_dict: "10.conv_block.6.weight", "10.conv_block.6.bias", "11.conv_block.6.weight", "11.conv_block.6.bias", "12.conv_block.6.weight", "12.conv_block.6.bias", "13.conv_block.6.weight", "13.conv_block.6.bias", "14.conv_block.6.weight", "14.conv_block.6.bias", "15.conv_block.6.weight", "15.conv_block.6.bias", "16.conv_block.6.weight", "16.conv_block.6.bias", "17.conv_block.6.weight", "17.conv_block.6.bias", "18.conv_block.6.weight", "18.conv_block.6.bias".
    

    Minimal Working Example

    
    import torch
    from upit.models.cyclegan import resnet_generator
    import torchvision.transforms
    from PIL import Image
    
    
    device = torch.device("cpu")
    model = resnet_generator(ch_in=3, ch_out=3)
    model.load_state_dict(torch.load("generator.pth", map_location=device))
    model.eval()
    
    
    totensor = torchvision.transforms.ToTensor()
    normalize_fn = torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    topilimage = torchvision.transforms.ToPILImage()
    
    
    def predict(input):
        im = normalize_fn(totensor(input))
        print(im.shape)
        preds = model(im.unsqueeze(0)) / 2 + 0.5
        print(preds.shape)
        return topilimage(preds.squeeze(0).detach().cpu())
    
    
    im = predict(Image.open("test.jpg"))
    im.save("out.jpg")
    

    Thanks again for your support!

    opened by hno2 2
  • Disable Identity Loss

    Hey, thanks for your awesome work. If I want to set l_idt of the CycleGANLoss to zero, how would I do this? Can I pass some argument to the cycle_learner? On a quick look this seems to be hardcoded in the cycle_learner, right? So I would have to write "my own" cycle_learner?

    Thanks for the answers to my - most likely - stupid questions!

    opened by hno2 2
  • How to show images after fit?

    Hi. The learner has a method learn.progress.show_cycle_gan_imgs. However, how can I plot it with matplotlib's plt.show() if I use the Python REPL? There is an argument event_name in learn.progress.show_cycle_gan_imgs. I would like to do it after fit:

    >>> learn.fit_flat_lin(1,1,2e-4)
    epoch     train_loss  id_loss_A  id_loss_B  gen_loss_A  gen_loss_B  cyc_loss_A  cyc_loss_B  D_A_loss  D_B_loss  time    
    /home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastprogress/fastprogress.py:74: UserWarning: Your generator is empty.
      warn("Your generator is empty.")
    0         10.809340   1.621755   1.690260   0.420364    0.452442    3.359353    3.504819    0.370692  0.370692  00:08     
    1         9.847495    1.283465   1.510985   0.353303    0.349454    2.682495    3.135504    0.253919  0.253919  00:07 
    

    https://github.com/tmabraham/UPIT/blob/020f8e2d8dbab6824cd4bef2690ea93e3ff69a6e/upit/train/cyclegan.py#L167-L176

    opened by turgut090 2
  • How to make predictions?

    With an image classifier I usually do:

    test_dl = object.dls.test_dl("n02381460_1052.jpg") # object is model/learner
    predictions = object.get_preds(dl = test_dl)
    

    However, it throws:

      TypeError: 'NoneType' object cannot be interpreted as an integer 
    
    Traceback (most recent call last):
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 155, in _with_events
        try:       self(f'before_{event_type}')       ;f()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 161, in all_batches
        for o in enumerate(self.dl): self.one_batch(*o)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 179, in one_batch
        self._with_events(self._do_one_batch, 'batch', CancelBatchException)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 155, in _with_events
        try:       self(f'before_{event_type}')       ;f()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 133, in __call__
        def __call__(self, event_name): L(event_name).map(self._call_one)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/foundation.py", line 226, in map
        def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 537, in map_ex
        return list(res)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 527, in __call__
        return self.fn(*fargs, **kwargs)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in _call_one
        [cb(event_name) for cb in sort_by_run(self.cbs)]
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in <listcomp>
        [cb(event_name) for cb in sort_by_run(self.cbs)]
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/callback/core.py", line 44, in __call__
        if self.run and _run: res = getattr(self, event_name, noop)()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/upit/train/cyclegan.py", line 112, in before_batch
        self.learn.xb = (self.learn.xb[0],self.learn.yb[0]),
    IndexError: tuple index out of range
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/torch_core.py", line 268, in to_concat
        try:    return retain_type(torch.cat(xs, dim=dim), xs[0])
    TypeError: expected Tensor as element 0 in argument 0, but got NoneType
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "<string>", line 2, in <module>
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 235, in get_preds
        self._do_epoch_validate(dl=dl)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 188, in _do_epoch_validate
        with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 157, in _with_events
        finally:   self(f'after_{event_type}')        ;final()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 133, in __call__
        def __call__(self, event_name): L(event_name).map(self._call_one)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/foundation.py", line 226, in map
        def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 537, in map_ex
        return list(res)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 527, in __call__
        return self.fn(*fargs, **kwargs)
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in _call_one
        [cb(event_name) for cb in sort_by_run(self.cbs)]
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in <listcomp>
        [cb(event_name) for cb in sort_by_run(self.cbs)]
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/callback/core.py", line 44, in __call__
        if self.run and _run: res = getattr(self, event_name, noop)()
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/callback/core.py", line 123, in after_validate
        if not self.save_preds: self.preds   = detuplify(to_concat(self.preds, dim=self.concat_dim))
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/torch_core.py", line 270, in to_concat
        for i in range_of(o_)) for o_ in xs], L())
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/torch_core.py", line 270, in <listcomp>
        for i in range_of(o_)) for o_ in xs], L())
      File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/basics.py", line 425, in range_of
        return list(range(a,b,step) if step is not None else range(a,b) if b is not None else range(a))
    TypeError: 'NoneType' object cannot be interpreted as an integer
    
    opened by turgut090 2
  • Add HuggingFace Hub integration

    Pretrained models to be available on HuggingFace Hub, as well as allowing users to save their own models to HuggingFace Hub.

    Relevant links:

    • https://huggingface.co/docs/hub/adding-a-model
    • https://huggingface.co/docs/hub/adding-a-model#using-the-huggingface_hub-client-library
    enhancement 
    opened by tmabraham 1
  • Validate with Existing Model Trained on Both Classes

    This is really great work! I noticed you appear to be using this code for creating new examples of pathology images. I am doing something similar, just not pathology. Let's say I already have a separate model trained to classify these types of images, and I want to run it against a validation set every epoch to determine how well the CycleGAN is performing at generating new examples that fool the existing classifier. I'm trying to figure out where the best place might be to add code which does that. I can see a few places where it would probably work, but I was curious whether you have already thought about adding this functionality to monitor training progress?

    Thanks,

    Bob

    opened by rbunn80110 4
  • Add more unit tests

    Here are some tests of interest:

    • [ ] Test the loss (example fake and real images for the reconstruction loss and discriminator)

    • [ ] Check the batch independence and the model parameter updates

    • [ ] Test successful overfitting on a single batch

    • [ ] Test for rotation invariance and other invariance properties

    enhancement 
    opened by tmabraham 0
  • multi-GPU support

    I need to check if fastai's multi-GPU support works with my package, and if not, what needs to be modified to get it to work. Additionally, I may need to add a simpler interface for DDP, or at least clear examples/documentation. This will enable quicker model training on multi-GPU servers, like those at USF.

    enhancement 
    opened by tmabraham 1
  • Add metrics and test model tracking callbacks

    I want to add support for metrics, and potentially include some common ones such as FID, mi-FID, and KID, as well as segmentation metrics (for paired translation).

    Additionally, by monitoring the losses and metrics, I want to be able to use fastai's built-in callbacks for saving the best model, early stopping, and reducing the LR on plateau.

    This shouldn't be too hard to include. A major part of this feature is finding good PyTorch/numpy implementations of some of these metrics and getting them to work.

    enhancement 
    opened by tmabraham 5