Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

Overview

Ported from https://github.com/xinntao/Real-ESRGAN

Dependencies

  • NumPy
  • PyTorch, preferably with CUDA. Note that torchvision and torchaudio are not required and can be omitted from the installation command (see the example after this list).
  • VapourSynth
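
For example, PyTorch with CUDA support can typically be installed with a command along the lines of the one below; the cu113 tag is only an assumption here, so use the index URL matching your CUDA version as listed on pytorch.org:

pip install torch --extra-index-url https://download.pytorch.org/whl/cu113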

Installation

pip install --upgrade vsrealesrgan
python -m vsrealesrgan

Usage

from vsrealesrgan import RealESRGAN

ret = RealESRGAN(clip)

See __init__.py for the description of the parameters.
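
As a fuller illustration, a script along the lines of the sketch below converts the source to RGB before calling RealESRGAN and back to YUV afterwards (the source filter, file name, color matrix, and output format are assumptions; the scripts quoted in the comments below follow the same pattern):

import vapoursynth as vs
from vsrealesrgan import RealESRGAN

core = vs.core

# open the source clip (hypothetical file name; any source filter producing a YUV clip works)
clip = core.ffms2.Source(source="input.mkv")
# RealESRGAN expects an RGB clip, so convert from YUV first (the matrix is an assumption)
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="709")
# upscale; see __init__.py for the available parameters
clip = RealESRGAN(clip)
# convert back to YUV for encoding or preview
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709")
clip.set_output()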

Comments
  • Installing on portable vapoursynth?

    I'm getting this error:

    python -m pip install --upgrade vsrealesrgan
    Collecting vsrealesrgan
      Using cached vsrealesrgan-3.1.0-py3-none-any.whl (7.4 kB)
    Collecting tqdm
      Using cached tqdm-4.64.0-py2.py3-none-any.whl (78 kB)
    Requirement already satisfied: numpy in d:\vapoursynth\lib\site-packages (from vsrealesrgan) (1.22.3)
    Collecting VapourSynth>=55
      Using cached VapourSynth-58.zip (558 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error

      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          Traceback (most recent call last):
            File "C:\Users*\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 64, in <module>
              dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
            File "C:\Users*\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 38, in query
              reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
          FileNotFoundError: [WinError 2] The system cannot find the file specified

          During handling of the above exception, another exception occurred:

          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "C:\Users\**\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 67, in <module>
              raise OSError("Couldn't detect vapoursynth installation path")
          OSError: Couldn't detect vapoursynth installation path
          [end of output]

      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed

    × Encountered error while generating package metadata.
    ╰─> See above for output.

    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.

    opened by manus693 8
  • 'vapoursynth.VideoFrame' object is not subscriptable

    Error on frame 15 request: 'vapoursynth.VideoFrame' object is not subscriptable

    py3.6.4
    vs.core.version: VapourSynth Video Processing Library
    Copyright (c) 2012-2018 Fredrik Mellbin
    Core R44
    API R3.5
    Options: -
    torch.version: 1.10.0+cu111

    vpy:

    import vapoursynth as vs
    import sys
    sys.path.append("C:\C\Transcoding\VapourSynth\core64\plugins\Scripts")
    import mvsfunc as mvf
    sys.path.append(r"C:\Users\liujing\AppData\Local\Programs\Python\Python36\Lib\site-packages\vsrealesrgan")
    from vsrealesrgan import RealESRGAN

    core = vs.get_core(accept_lowercase=True)
    source = core.ffms2.Source(sourcename)
    source = mvf.ToRGB(source, depth=32)
    source = RealESRGAN(source)
    source = mvf.ToYUV(source, depth=16)
    source.set_output()

    opened by splinter21 4
  • TensorRT "Ran out of input"?

    Using:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    import site
    import os
    # Adding torch dependencies to PATH
    path = site.getsitepackages()[0]+'/torch_dependencies/'
    path = path.replace('\\', '/')
    os.environ["PATH"] = path + os.pathsep + os.environ["PATH"]
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
    # source: 'G:\TestClips&Co\files\test.avi'
    # current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
    # Loading G:\TestClips&Co\files\test.avi using LWLibavSource
    clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/files/test.avi", format="YUV420P8", stream_index=0, cache=0, prefer_hw=0)
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 25
    clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
    clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0)
    original = clip
    from vsrealesrgan import RealESRGAN
    # adjusting color space from YUV420P8 to RGBH for VsRealESRGAN
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")
    # resizing using RealESRGAN
    clip = RealESRGAN(clip=clip, device_index=0, trt=True, trt_cache_path="G:/Temp", num_streams=4) # 2560x1408
    # resizing 2560x1408 to 640x352
    # adjusting resizing
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="limited")
    clip = core.fmtc.resample(clip=clip, w=640, h=352, kernel="lanczos", interlaced=False, interlacedd=False)
    original = core.resize.Bicubic(clip=original, width=640, height=352)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
    original = core.text.Text(clip=original,text="Original",scale=1,alignment=7)
    clip = core.text.Text(clip=clip,text="Filtered",scale=1,alignment=7)
    stacked = core.std.StackHorizontal([original,clip])
    # Output
    stacked.set_output()
    

    I get

    Failed to evaluate the script: Python exception: Ran out of input

    Traceback (most recent call last):
      File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
      File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
      File "C:\Users\Selur\Desktop\test_2.vpy", line 32, in <module>
        clip = RealESRGAN(clip=clip, device_index=0, trt=True, trt_cache_path="G:/Temp", num_streams=4) # 2560x1408
      File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsrealesrgan\__init__.py", line 284, in RealESRGAN
        module = [torch.load(trt_engine_path) for _ in range(num_streams)]
      File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsrealesrgan\__init__.py", line 284, in <listcomp>
        module = [torch.load(trt_engine_path) for _ in range(num_streams)]
      File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\serialization.py", line 795, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\serialization.py", line 1002, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    EOFError: Ran out of input
    

    Works fine with trt=False.

    Any idea what is going wrong there?

    opened by Selur 3
  • [REQ] SwinIR port

    opened by forart 1
  • Vapoursynth R58 support

    When trying to install vs-realesrgan in Vapoursynth R58 I get:

    I:\Hybrid\64bit\Vapoursynth>python -m pip install --upgrade vsrealesrgan
    Collecting vsrealesrgan
      Using cached vsrealesrgan-2.0.0-py3-none-any.whl (12 kB)
    Collecting VapourSynth>=55
      Using cached VapourSynth-57.zip (567 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          Traceback (most recent call last):
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-7_na63f8\vapoursynth_4864864388024a95a1e8b4adda80b293\setup.py", line 64, in <module>
              dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-7_na63f8\vapoursynth_4864864388024a95a1e8b4adda80b293\setup.py", line 38, in query
              reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
          FileNotFoundError: [WinError 2] The system cannot find the file specified
    
          During handling of the above exception, another exception occurred:
    
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-7_na63f8\vapoursynth_4864864388024a95a1e8b4adda80b293\setup.py", line 67, in <module>
              raise OSError("Couldn't detect vapoursynth installation path")
          OSError: Couldn't detect vapoursynth installation path
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    

    any idea how to fix this?

    opened by Selur 0
  • 'vapoursynth.VideoFrame' object has no attribute 'get_read_array'

    I have been trying to use this plugin; however, I get the error below when trying to preview the video in VapourSynth Editor r19-mod-2-x86_64.

    Error on frame 0 request: 'vapoursynth.VideoFrame' object has no attribute 'get_read_array'

    The code I am getting this error from is below

    from vapoursynth import core
    from vsrealesrgan import RealESRGAN
    import havsfunc as haf
    import vapoursynth as vs
    video = core.ffms2.Source(source='EDIT.mkv')
    video = haf.QTGMC(video, Preset="slow", MatchPreset="slow", MatchPreset2="slow", SourceMatch=3, TFF=True)
    video = core.std.SelectEvery(clip=video, cycle=2, offsets=0)
    video = core.std.Crop(clip=video, left=8, right=8, top=0, bottom=0)
    video = core.resize.Spline36(clip=video, width=640, height=480)
    video = core.resize.Bicubic(clip=video, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    video = RealESRGAN(clip=video, device_index=0)
    video = core.resize.Bicubic(clip=video, format=vs.YUV420P10, matrix_s="470bg", range_s="limited")
    video = core.resize.Spline36(clip=video, width=1440, height=1080)
    video = core.std.AssumeFPS(clip=video, fpsnum=30000, fpsden=1001)
    video.set_output()
    
    opened by silentsudin 0
Releases (v4.0.1)

Owner
Holy Wu
Code for "Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans" CVPR 2021 best paper candidate

News 05/17/2021 To make the comparison on ZJU-MoCap easier, we save quantitative and qualitative results of other methods at here, including Neural Vo

ZJU3DV 748 Jan 07, 2023
A Python package for generating concise, high-quality summaries of a probability distribution

GoodPoints A Python package for generating concise, high-quality summaries of a probability distribution GoodPoints is a collection of tools for compr

Microsoft 28 Oct 10, 2022
Capsule endoscopy detection DACON challenge

capsule_endoscopy_detection (DACON Challenge) Overview Yolov5, Yolor, mmdetection기반의 모델을 사용 (총 11개 모델 앙상블) 모든 모델은 학습 시 Pretrained Weight을 yolov5, yolo

MAILAB 11 Nov 25, 2022
Automated image registration. Registrationimation was too much of a mouthful.

alignimation Automated image registration. Registrationimation was too much of a mouthful. This repo contains the code used for my blog post Alignimat

Ethan Rosenthal 9 Oct 13, 2022
Official PyTorch implementation of the paper "Graph-based Generative Face Anonymisation with Pose Preservation" in ICIAP 2021

Contents AnonyGAN Installation Dataset Preparation Generating Images Using Pretrained Model Train and Test New Models Evaluation Acknowledgments Citat

Nicola Dall'Asen 10 May 24, 2022
Deepfake Scanner by Deepware.

Deepware Scanner (CLI) This repository contains the command-line deepfake scanner tool with the pre-trained models that are currently used at deepware

deepware 110 Jan 02, 2023
RodoSol-ALPR Dataset

RodoSol-ALPR Dataset This dataset, called RodoSol-ALPR dataset, contains 20,000 images captured by static cameras located at pay tolls owned by the Ro

Rayson Laroca 45 Dec 15, 2022
style mixing for animation face

An implementation of StyleGAN on Animation dataset. Install git clone https://github.com/MorvanZhou/anime-StyleGAN cd anime-StyleGAN pip install -r re

Morvan 46 Nov 30, 2022
TCube generates rich and fluent narratives that describes the characteristics, trends, and anomalies of any time-series data (domain-agnostic) using the transfer learning capabilities of PLMs.

TCube: Domain-Agnostic Neural Time series Narration This repository contains the code for the paper: "TCube: Domain-Agnostic Neural Time series Narrat

Mandar Sharma 7 Oct 31, 2021
PyTorch implementation of a Real-ESRGAN model trained on custom dataset

Real-ESRGAN PyTorch implementation of a Real-ESRGAN model trained on custom dataset. This model shows better results on faces compared to the original

Sber AI 160 Jan 04, 2023
This repository contains source code for the Situated Interactive Language Grounding (SILG) benchmark

SILG This repository contains source code for the Situated Interactive Language Grounding (SILG) benchmark. If you find this work helpful, please cons

Victor Zhong 17 Nov 27, 2022
Resources related to our paper "CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain"

CLIN-X (CLIN-X-ES) & (CLIN-X-EN) This repository holds the companion code for the system reported in the paper: "CLIN-X: pre-trained language models a

Bosch Research 4 Dec 05, 2022
LibMTL: A PyTorch Library for Multi-Task Learning

LibMTL LibMTL is an open-source library built on PyTorch for Multi-Task Learning (MTL). See the latest documentation for detailed introductions and AP

765 Jan 06, 2023
Learning cell communication from spatial graphs of cells

ncem Features Repository for the manuscript Fischer, D. S., Schaar, A. C. and Theis, F. Learning cell communication from spatial graphs of cells. 2021

Theis Lab 77 Dec 30, 2022
Python version of the amazing Reaction Mechanism Generator (RMG).

Reaction Mechanism Generator (RMG) Description This repository contains the Python version of Reaction Mechanism Generator (RMG), a tool for automatic

Reaction Mechanism Generator 284 Dec 27, 2022
Python scripts for performing stereo depth estimation using the MobileStereoNet model in ONNX

ONNX-MobileStereoNet Python scripts for performing stereo depth estimation using the MobileStereoNet model in ONNX Stereo depth estimation on the cone

Ibai Gorordo 23 Nov 29, 2022
TLDR: Twin Learning for Dimensionality Reduction

TLDR (Twin Learning for Dimensionality Reduction) is an unsupervised dimensionality reduction method that combines neighborhood embedding learning with the simplicity and effectiveness of recent self

NAVER 105 Dec 28, 2022
Python code for loading the Aschaffenburg Pose Dataset.

Aschaffenburg Pose Dataset (APD) This repository contains Python code for loading and filtering the Aschaffenburg Pose Dataset. The dataset itself and

1 Nov 26, 2021
Pytorch implementation of Depth-conditioned Dynamic Message Propagation forMonocular 3D Object Detection

DDMP-3D Pytorch implementation of Depth-conditioned Dynamic Message Propagation forMonocular 3D Object Detection, a paper on CVPR2021. Instroduction T

Li Wang 32 Nov 09, 2022
[ECCVW2020] Robust Long-Term Object Tracking via Improved Discriminative Model Prediction (RLT-DiMP)

Feel free to visit my homepage Robust Long-Term Object Tracking via Improved Discriminative Model Prediction (RLT-DIMP) [ECCVW2020 paper] Presentation

Seokeon Choi 35 Oct 26, 2022