BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment

Overview

BasicVSR++

BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment

Ported from https://github.com/open-mmlab/mmediting

Dependencies

Installing mmcv-full on Windows is a bit complicated, as it requires Visual Studio and other tools to compile the CUDA ops. I have therefore uploaded a prebuilt wheel compiled with CUDA 11.1 for Windows users; you can install it with the following command.

pip install https://github.com/HolyWu/vs-basicvsrpp/releases/download/v1.0.0/mmcv_full-1.3.12-cp39-cp39-win_amd64.whl

Installation

pip install --upgrade vsbasicvsrpp
python -m vsbasicvsrpp

Usage

from vsbasicvsrpp import BasicVSRPP

ret = BasicVSRPP(clip)

See __init__.py for the description of the parameters.
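
A more complete script typically converts the source to RGBS before the call and back to a YUV format afterwards, since the issue threads below all prepare the clip that way. The sketch that follows is only an illustration of that pattern: the source path and the interval/fp16 values are placeholders, not recommendations, and the authoritative parameter list is in __init__.py.

import vapoursynth as vs
from vsbasicvsrpp import BasicVSRPP

core = vs.core

# load the source clip (ffms2 is used here only as an example source filter)
clip = core.ffms2.Source(source='input.mp4')

# convert to RGB single precision before calling the filter
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')

# run BasicVSR++ (the interval and fp16 values here are illustrative)
clip = BasicVSRPP(clip, interval=30, fp16=True)

# convert back to a YUV format for encoding
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s='709')

clip.set_output()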

Comments
  • Question about the tiling,...

    Question about the tiling,...

    I have a GeForce GTX 1070 Ti with 8 GB of VRAM. (I know it's not new and not really that suited for this, but that's what I've got. :)) If I crop my source to a 480x480 chunk and run BasicVSR++ on it, roughly 1.4-3.5 GB of VRAM are used. Without cropping my source, I thought that even with some padding, using "BasicVSRPP(clip=clip, model=5, tile_x=480, tile_y=480)" would allow me to filter HD and UHD clips, but it does not. Question is: why? Shouldn't this work with 480x480 tiling?

    What are the minimum tile width and height (I assumed it would be 64*4=256 plus padding)? When using 320x320 I get: Python exception: Analyse: failed to retrieve first frame from super clip. Error message: The height and width of low-res inputs must be at least 64, but got 84 and 44. Using 392x392 I get: The height and width of low-res inputs must be at least 64, but got 102 and 26. -> Just from testing, I don't get what the tiling does at all. :)

    opened by Selur 5
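
    As background for the question above: tile-based processing in filters of this kind usually follows a split-process-stitch pattern, where the frame is cut into tile_x by tile_y pieces, each piece is extended by tile_pad pixels of surrounding context, run through the model on its own, and the results are stitched back together, so peak VRAM scales with the tile size rather than the frame size. The snippet below is only a generic illustration of that pattern under those assumptions, not the plugin's actual implementation:

    # generic illustration of split-process-stitch tiling (not the plugin's code)
    def tile_regions(width, height, tile_x, tile_y, tile_pad):
        """Yield (left, top, right, bottom) crop regions, each grown by tile_pad
        pixels of context and clamped to the frame borders."""
        for top in range(0, height, tile_y):
            for left in range(0, width, tile_x):
                yield (max(left - tile_pad, 0),
                       max(top - tile_pad, 0),
                       min(left + tile_x + tile_pad, width),
                       min(top + tile_y + tile_pad, height))

    # example: a 1920x1080 frame cut into 480x480 tiles with 16 px of padding
    for region in tile_regions(1920, 1080, 480, 480, 16):
        print(region)
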
  • Question regarding some Arguments

    Question regarding some Arguments

    Hi, my apologies; I'm not sure if I forgot to submit the issue I typed earlier or if it was removed. If it was removed, feel free to close this without a comment. I typed it this morning and honestly can't remember whether I submitted it or closed the browser window by accident.

    I have a few questions about some of the arguments one can set, as they are a bit different from the mmediting interface, if I have seen that correctly.

    Interval: This specifies the number of images per batch, is that correct? In addition to reducing the VRAM footprint, I assume it influences what the network can see during the upscale process? The smaller the batch, the smaller the window it can include in the temporal calculations?

    Tiling: This splits the image into tiles for processing instead of processing the whole image at once, is that correct? What is the purpose, or the best-case scenario someone would use this for?

    FP16: I run the network with FP16 at the moment, as FP32 usually blows up my 11 GB of VRAM if I don't reduce the interval. As I didn't have enough time to finish all my tests yesterday: have you noticed any degradation in image quality when FP16 is used? If I want to run with FP32 I need to reduce the interval, but if my assumption is correct, this narrows the time window the network can calculate across. So I'm weighing precision against interval size in terms of VRAM usage. Do you have any experience with this and what a good trade-off might look like?

    opened by Memnarch 4
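
    For reference, the three knobs discussed in this question are all plain keyword arguments on the same call, so the trade-offs can be explored by simply varying them. A minimal sketch, assuming clip is an RGBS clip prepared as in the Usage section; the values are placeholders and the comments only restate the behaviour as discussed in this thread, not anything authoritative:

    from vsbasicvsrpp import BasicVSRPP

    # interval: frames processed together per batch (smaller = less VRAM,
    #           but also a shorter temporal window for propagation)
    # tile_x/tile_y/tile_pad: process the frame in padded tiles to cap VRAM
    # fp16: half precision, roughly halving memory at some cost in precision
    ret = BasicVSRPP(clip, interval=30, tile_x=480, tile_y=480, tile_pad=16, fp16=True)
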
  • mmcv-full on Windows and Python 3.10

    mmcv-full on Windows and Python 3.10

    Since Vapoursynth RC58 requires either Python 3.8 (Win7 compatible) or Python 3.10 (which I'm using), the current mmcv_full-1.3.16-cp39-cp39-win_amd64.whl is not a supported wheel, since it is built for Python 3.9. It would be nice if you could create an mmcv-full wheel for Python 3.10 on Windows. Thanks!

    opened by Selur 2
  • Sizes of tensors must match except in dimension 3. Got 213 and 214 (The offending index is 0)

    Sizes of tensors must match except in dimension 3. Got 213 and 214 (The offending index is 0)

    Environment: Python 3.9.7, torch 1.9.1, mmcv 1.3.13/1.4.4, VapourSynth R57 (FatPack).

    Script:

    import vapoursynth as vs
    from vsbasicvsrpp import BasicVSRPP

    core = vs.core

    video = core.ffms2.Source(source=r'D:\winpython\VapourSynth64Portable\in.mp4')
    video = core.resize.Bicubic(clip=video, format=vs.RGBS, matrix_in_s="709")
    video = BasicVSRPP(clip=video, interval=10, model=5, fp16=True, tile_pad=0)
    video = core.resize.Bicubic(video, format=vs.YUV420P8, matrix_s="709")
    video.set_output()

    opened by oblessnoob 2
  • meshgrid() got an unexpected keyword argument 'indexing'

    meshgrid() got an unexpected keyword argument 'indexing'

    Using v1.4.0 and clip = BasicVSRPP(clip=clip, model=3, tile_x=352, tile_y=480, fp16=True), full script:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="I:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_02_1-Sample-Beginning.demuxed.m2v'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_02_1-Sample-Beginning.demuxed.m2v using D2VSource
    clip = core.d2v.Source(input="E:/Temp/m2v_154f3f0f52f994b09117b9c8650e17d2_853323747.d2v")
    # making sure input color matrix is set as 470bg
    clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
    # making sure frame rate is set to 29.97
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip)# new fps: 23.976
    # make sure content is perceived as frame based
    clip = core.std.SetFieldBased(clip, 0)
    # DEBUG: vsTIVTC changed scanorder to: progressive
    # cropping the video to 704x480
    clip = core.std.CropRel(clip=clip, left=6, right=10, top=0, bottom=0)
    # adjusting color space from YUV420P8 to RGBS for vsBasicVSRPPFilter
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # Quality enhancement using BasicVSR++
    from vsbasicvsrpp import BasicVSRPP
    clip = BasicVSRPP(clip=clip, model=3, tile_x=352, tile_y=480, fp16=True)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 23.976fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    # Output
    clip.set_output()
    

    I get:

    Error on frame 0 request:
    meshgrid() got an unexpected keyword argument 'indexing'
    
    opened by Selur 2
  • ImportError: DLL load failed while importing _ext: The specified procedure could not be found.

    ImportError: DLL load failed while importing _ext: The specified procedure could not be found.

    Using Python 3.9.7, I'm having the following issue when I try to finish installing vsbasicvsrpp with python -m vsbasicvsrpp.

    I had installed it just fine weeks ago, but last night I attempted to update PyTorch with pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html, which seemed to break everything. I managed to fix the rest, but vsbasicvsrpp remains broken with the following output.

    (c) Microsoft Corporation. All rights reserved.
    
    C:\WINDOWS\system32>pip install --upgrade vsbasicvsrpp
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    Collecting vsbasicvsrpp
      Using cached vsbasicvsrpp-1.3.0-py3-none-any.whl (21 kB)
    Requirement already satisfied: torchvision in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (0.11.1+cu113)
    Requirement already satisfied: mmcv-full>=1.3.13 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.3.14)
    Requirement already satisfied: torch>=1.9.0 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.10.0+cu113)
    Requirement already satisfied: numpy in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from vsbasicvsrpp) (1.21.3)
    Requirement already satisfied: Pillow in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (8.3.1)
    Requirement already satisfied: packaging in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (21.0)
    Requirement already satisfied: addict in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.0)
    Requirement already satisfied: yapf in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (0.31.0)
    Requirement already satisfied: regex in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2021.8.28)
    Requirement already satisfied: pyyaml in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (5.4.1)
    Requirement already satisfied: typing-extensions in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from torch>=1.9.0->vsbasicvsrpp) (3.10.0.0)
    Requirement already satisfied: pyparsing>=2.0.2 in c:\users\mainuser\appdata\local\programs\python\python39\lib\site-packages (from packaging->mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.7)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    Installing collected packages: vsbasicvsrpp
    Successfully installed vsbasicvsrpp-1.3.0
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    WARNING: Ignoring invalid distribution mvtools-float- (c:\users\mainuser\appdata\roaming\python\python39\site-packages)
    
    C:\WINDOWS\system32>python -m vsbasicvsrpp
    Traceback (most recent call last):
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 188, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 147, in _get_module_details
        return _get_module_details(pkg_main_name, error)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 111, in _get_module_details
        __import__(pkg_name)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\vsbasicvsrpp\__init__.py", line 10, in <module>
        from .basicvsr_pp import BasicVSRPlusPlus
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\vsbasicvsrpp\basicvsr_pp.py", line 8, in <module>
        from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\ops\__init__.py", line 2, in <module>
        from .ball_query import ball_query
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\ops\ball_query.py", line 7, in <module>
        ext_module = ext_loader.load_ext('_ext', ['ball_query_forward'])
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\site-packages\mmcv\utils\ext_loader.py", line 13, in load_ext
        ext = importlib.import_module('mmcv.' + name)
      File "C:\Users\MainUser\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ImportError: DLL load failed while importing _ext: The specified procedure could not be found.
    opened by AIisCool 2
  • Adjust strength?

    Adjust strength?

    First off I wish to say thank you very much for all the work you do HolyWu!

    I love how much noise vs-basicvsrpp removes from the video, but sometimes it's a little too much (i.e. it removes something that isn't noise at all). Is there any way to adjust its strength, similar to DPIR?

    Thank you.

    opened by AIisCool 2
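
    If the plugin itself does not expose a strength parameter, a generic VapourSynth way to soften any filter is to blend its output with a plain resize of the source using std.Merge. A minimal sketch of that general technique (not a BasicVSRPP feature); the source path and the weight value are placeholders:

    import vapoursynth as vs
    from vsbasicvsrpp import BasicVSRPP

    core = vs.core

    # prepare the RGBS source as in the Usage section
    clip = core.ffms2.Source(source='input.mp4')
    clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')

    filtered = BasicVSRPP(clip)

    # plain resize of the untouched source to the filtered resolution
    resized = core.resize.Bicubic(clip, width=filtered.width, height=filtered.height)

    # weight=0.0 keeps the plain resize, weight=1.0 keeps the full BasicVSR++ result
    softened = core.std.Merge(resized, filtered, weight=0.7)

    softened.set_output()
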
  • Model 5 seems to downscale by 4

    Model 5 seems to downscale by 4

    Hi, while I got good results, they did not seem to be quite as sharp as I expected. When I tried to process a 136x264 video, I suddenly got this error:

    Traceback (most recent call last):
      File "E:\Git\RivenTools\Reproduce.py", line 26, in <module>
        video.output(f, y4m=True)
      File "src\cython\vapoursynth.pyx", line 1790, in vapoursynth.VideoNode.output
      File "src\cython\vapoursynth.pyx", line 1655, in frames
      File "D:\Program Files\Python39\lib\concurrent\futures\_base.py", line 445, in result
        return self.__get_result()
      File "D:\Program Files\Python39\lib\concurrent\futures\_base.py", line 390, in __get_result
        raise self._exception
    vapoursynth.Error: The height and width of low-res inputs must be at least 64, but got 66 and 34.
    

    But my video is larger than that. Looking into it, running model 5 downscales the video, or at least its internal data seems to be downscaled: while the returned clip reports the correct size, the frames appear to be a quarter of that size.

    This seems to be the code causing it: https://github.com/HolyWu/vs-basicvsrpp/blob/da066461f66c6e7deedf354630899b815393836b/vsbasicvsrpp/basicvsr_pp.py#L291

    And this is the script to reproduce it (it's a trimmed-down version, which is why models 5 and 1 are executed directly after one another; my pipeline has some steps in between): Reproduce.zip

    If you need that specific video from my script, I can share that with you but would prefer to do that outside of this report, as it's a game asset.

    opened by Memnarch 2
  • got a warning,...

    got a warning,...

    Running vs-basicvsrpp I get:

    I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\nn\functional.py:3657: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
      warnings.warn(
    

    Should I do something, or can I ignore this?

    opened by Selur 2
  • calling "python -m vsbasicvsrpp" fails

    calling "python -m vsbasicvsrpp" fails

    Using a portable Vapoursynth R58 and calling python -m pip install --upgrade vsbasicvsrpp gives me

    Requirement already satisfied: vsbasicvsrpp in i:\hybrid\64bit\vapoursynth\lib\site-packages (1.4.1)
    Requirement already satisfied: torchvision in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (0.12.0)
    Requirement already satisfied: mmcv-full>=1.3.13 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.5.0)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.22.3)
    Requirement already satisfied: torch>=1.9.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsbasicvsrpp) (1.11.0)
    Requirement already satisfied: regex in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2022.3.15)
    Requirement already satisfied: pyyaml in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (6.0)
    Requirement already satisfied: packaging in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (21.3)
    Requirement already satisfied: Pillow in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (9.1.0)
    Requirement already satisfied: addict in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (2.4.0)
    Requirement already satisfied: yapf in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full>=1.3.13->vsbasicvsrpp) (0.32.0)
    Requirement already satisfied: typing-extensions in i:\hybrid\64bit\vapoursynth\lib\site-packages (from torch>=1.9.0->vsbasicvsrpp) (4.2.0)
    Requirement already satisfied: requests in i:\hybrid\64bit\vapoursynth\lib\site-packages (from torchvision->vsbasicvsrpp) (2.27.1)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from packaging->mmcv-full>=1.3.13->vsbasicvsrpp) (3.0.8)
    Requirement already satisfied: idna<4,>=2.5 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (3.3)
    Requirement already satisfied: certifi>=2017.4.17 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (2021.10.8)
    Requirement already satisfied: urllib3<1.27,>=1.21.1 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (1.26.9)
    Requirement already satisfied: charset-normalizer~=2.0.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from requests->torchvision->vsbasicvsrpp) (2.0.12)
    

    which looks fine to me. Problem is, calling python -m vsbasicvsrpp gives me:

    Traceback (most recent call last):
      File "runpy.py", line 187, in _run_module_as_main
      File "runpy.py", line 146, in _get_module_details
      File "runpy.py", line 110, in _get_module_details
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsbasicvsrpp\__init__.py", line 10, in <module>
        from .basicvsr_pp import BasicVSRPlusPlus
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsbasicvsrpp\basicvsr_pp.py", line 8, in <module>
        from mmcv.ops import ModulatedDeformConv2d, modulated_deform_conv2d
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\ops\__init__.py", line 2, in <module>
        from .active_rotated_filter import active_rotated_filter
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\ops\active_rotated_filter.py", line 8, in <module>
        ext_module = ext_loader.load_ext(
      File "i:\Hybrid\64bit\Vapoursynth\Lib\site-packages\mmcv\utils\ext_loader.py", line 13, in load_ext
        ext = importlib.import_module('mmcv.' + name)
      File "importlib\__init__.py", line 126, in import_module
    

    Any idea what I'm doing wrong or where the problem is?

    Calling python -m pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html gives me:

    Looking in links: https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
    Requirement already satisfied: mmcv-full in i:\hybrid\64bit\vapoursynth\lib\site-packages (1.5.0)
    Requirement already satisfied: pyyaml in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (6.0)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (1.22.3)
    Requirement already satisfied: addict in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (2.4.0)
    Requirement already satisfied: Pillow in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (9.1.0)
    Requirement already satisfied: yapf in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (0.32.0)
    Requirement already satisfied: regex in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (2022.3.15)
    Requirement already satisfied: packaging in i:\hybrid\64bit\vapoursynth\lib\site-packages (from mmcv-full) (21.3)
    Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from packaging->mmcv-full) (3.0.8)
    

    which also seems fine to me.

    Cu Selur

    opened by Selur 1
  • NTIRE 2021 Video Super-Resolution

    NTIRE 2021 Video Super-Resolution

    opened by AIisCool 1
  • Multi-gpu support?

    Multi-gpu support?

    I tried to utilize 4 GPUs for inference with the following code, but it didn't work: only one of the GPUs was doing the job at a time while the others were idling. Are there any suggested ways to do multi-GPU inference?

    import vapoursynth as vs
    import os
    from vsbasicvsrpp import BasicVSRPP
    core = vs.core
    
    folder = r'C:\Users\test\Desktop\vs-58'
    file = r'test.m4v'
    
    src = os.path.join(folder, file)
    
    src = core.ffms2.Source(src)
    
    src = core.fmtc.resample (clip=src, css="444")
    src = core.fmtc.matrix (clip=src, mat="709", col_fam=vs.RGB)
    src = core.fmtc.bitdepth (clip=src, bits=32)
    
    interval = 180
    n = 4
    
    # pad the clip so its length is a multiple of interval*n frames
    add = (interval*n) - len(src) % (interval*n)
    if add > 0:
        src = src + core.std.BlankClip(src, length=add)
    
    # split the padded clip into four blocks of `interval` frames per cycle
    # and assign each block to its own GPU via device_index
    c1 = core.std.SelectEvery(clip=src, cycle=interval*n, offsets=[i for i in range(interval*0, interval)])
    c1 = BasicVSRPP(c1, model=5, interval=interval, device_index=0, fp16=True)
    
    c2 = core.std.SelectEvery(clip=src, cycle=interval*n, offsets=[i for i in range(interval*1, interval*2)])
    c2 = BasicVSRPP(c2, model=5, interval=interval, device_index=1, fp16=True)
    
    c3 = core.std.SelectEvery(clip=src, cycle=interval*n, offsets=[i for i in range(interval*2, interval*3)])
    c3 = BasicVSRPP(c3, model=5, interval=interval, device_index=2, fp16=True)
    
    c4 = core.std.SelectEvery(clip=src, cycle=interval*n, offsets=[i for i in range(interval*3, interval*4)])
    c4 = BasicVSRPP(c4, model=5, interval=interval, device_index=3, fp16=True)
    
    # interleave the four processed blocks, then reorder the frames back
    # into their original sequence
    c = core.std.Interleave(clips=[c1, c2, c3, c4])
    
    a = [i for i in range(interval*n) if i % n == 0] + [i for i in range(interval*n) if i % n == 1] + [i for i in range(interval*n) if i % n == 2] + [i for i in range(interval*n) if i % n == 3]
    
    c = core.std.SelectEvery(clip=c, cycle=interval*n, offsets=a)
    
    c = core.fmtc.matrix (clip=c, mat="709", col_fam=vs.YUV)
    c = core.fmtc.resample (clip=c, css="420")
    c = core.fmtc.bitdepth(clip =c, bits=16)   
    
    # drop the padding frames that were added above
    if add > 0:
        c = c[:-add]
    
    c.set_output()
    
    opened by Bouby308 0
  • deepfillv2 support

    deepfillv2 support

    Thanks for your effort :smile:. At present there isn't a good inpainting method in VapourSynth. I noticed mmediting also supports deepfillv2; how about porting it to VapourSynth? (It performs well for images, though there may be temporal flicker for video.)

    opened by soldivelot 0
Releases (v1.4.1)

Owner: Holy Wu