pixray

Overview

Pixray is an image generation system that combines a number of previous ideas, exposing them through interchangeable "drawers" such as pixel, vdiff, and super_resolution.

pixray is itself a Python library and command line utility, but it is also friendly to running online in Google Colab notebooks.
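
As a quick illustration of library-style use, here is a minimal sketch of a scripted run. The exact entry points (reset_settings, add_settings, apply_settings, do_run) are assumptions drawn from the demo notebooks and the settings-style keyword arguments that appear in the issues below, and may differ between versions; only do_init is directly visible in the tracebacks later on this page.

    import pixray

    # Assumed settings-based entry points; adjust to the version you have installed.
    pixray.reset_settings()
    pixray.add_settings(prompts="sunrise", drawer="pixel", outdir="outputs/sunrise")
    settings = pixray.apply_settings()

    # do_init/do_run mirror what the command line interface does internally.
    pixray.do_init(settings)
    pixray.do_run(settings)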

There is currently some documentation on options. Also check out THE DEMO NOTEBOOKS or join the discussion on Discord.

Usage

Pixray can be run in Docker using Cog.

First install Docker and Cog, then use cog run to run Pixray inside Docker. For example:

cog run python pixray.py --drawer=pixel --prompt=sunrise --output myfile.png
Comments
  • Implement a basic log for debugging

    Saw an open issue about improving outputs so I took a stab at it. Didn't want to do too much as I saw you may already have some updates in mind regarding the file name / directory structure.

    Summary of changes:

    • Pixray will now take an optional parameter "--debug": a boolean value that indicates whether or not to write a debug log alongside the final output (a rough sketch of the idea follows this list).
    • The debug log currently includes the settings used to generate an image; more can be added later.
    • Added a reusable file-path calculation function to the utility class.
    • Added some unit tests.
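
    A rough sketch of the general idea (not the PR's actual implementation; the helper name, file naming, and JSON format here are made up for illustration):

        import json
        from pathlib import Path

        def write_debug_log(settings: dict, output_path: str) -> Path:
            """Write the settings used for a run next to the final output."""
            log_path = Path(output_path).with_suffix(".debug.json")
            with open(log_path, "w") as f:
                # default=str keeps non-JSON-serializable setting values readable
                json.dump(settings, f, indent=2, sort_keys=True, default=str)
            return log_path

    For example, write_debug_log(settings, "outputs/sunrise.png") would leave an outputs/sunrise.debug.json beside the image.
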
    opened by sgallag-insta 18
  • Add overlay until option

    Summary of changes:

    • Added --overlay_until option.
      • Takes an integer argument that is the number of iterations.
      • Default value is None.

    Tests can be run by executing python -m unittest tests/test_pixray.py from the main pixray directory.

    opened by sgallag-insta 10
  • BLIP loss

    I had to lower num_cuts when running.

        prompts="warrior. concept art. trending on artstation",
        drawer="super_resolution",
        size=[512, 512],
        num_cuts=8,
        quality="normal",
        learning_rate=0.1,
        init_image="human.jpg",
    
    opened by samedii 9
  • ImportError: cannot import name 'SimpleTokenizer' from 'tokenizer'

    I keep getting this error...

    ImportError: cannot import name 'SimpleTokenizer' from 'tokenizer' (C:\Users\micro\anaconda3\envs\pixel\lib\site-packages\tokenizer\__init__.py)

    opened by dillfrescott 7
  • Parse units in arguments

    Summary of changes:

    • Added a new parse_unit function that parses strings with units ("20 iterations", "50%", etc.) into a raw iteration integer (an illustrative sketch follows below).
    • Slightly refactored how parameters with pipes are handled.
    • Overlay-related arguments now take strings with units rather than plain integers.
    • Added test cases.

    Haven't had time to hook up the other arguments yet but I can continue with that.
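
    A minimal illustrative sketch of what such a unit parser could look like (hypothetical code, not the PR's actual parse_unit; it assumes only "%"/"percent" and "iteration(s)"/"it(s)" units):

        import re

        def parse_unit(value, total_iterations):
            """Parse "20 iterations", "50%", or a bare number into an iteration count."""
            if value is None:
                return None
            text = str(value).strip().lower()
            match = re.fullmatch(r"(\d+(?:\.\d+)?)\s*(%|percent|iterations?|its?)?", text)
            if match is None:
                raise ValueError(f"could not parse unit string: {value!r}")
            number, unit = float(match.group(1)), match.group(2)
            if unit in ("%", "percent"):
                return int(round(total_iterations * number / 100.0))
            return int(round(number))

    With 300 total iterations, parse_unit("50%", 300) gives 150 and parse_unit("20 iterations", 300) gives 20.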

    opened by sgallag-insta 7
  • Pixray not loading on chromebook with Lightspeed

    Hey! I like Pixray, it can make some cool art (better than me D:). However, on my Chromebook at school, it cannot load the Input or Output areas without a proxy. I'm using the Lightspeed blocker, if that helps.

    opened by Erisfiregamer1 7
  • vdiff model is no longer available

    Hello! It looks like the model for vdiff is no longer available at the URL currently used in Pixray. Attempting to use the vdiff drawer gives me this:

    (base) [email protected]:~/pixray$ python pixray.py --drawer=vdiff --prompts="an excessively fuzzy panda" --outdir panda
    Running with 30x1 = 30 cuts
    Using seed: 7387649654636579532
    Downloading models/yfcc_2.pth from https://v-diffusion.s3.us-west-2.amazonaws.com/yfcc_2.pth, please wait
    --2022-05-04 13:39:59--  https://v-diffusion.s3.us-west-2.amazonaws.com/yfcc_2.pth
    Resolving v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)... 52.218.228.113
    Connecting to v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)|52.218.228.113|:443... connected.
    HTTP request sent, awaiting response... 404 Not Found
    2022-05-04 13:39:59 ERROR 404: Not Found.
    
    Ignoring non-zero exit:  b''
    Traceback (most recent call last):
      File "/home/fox/pixray/pixray.py", line 2135, in <module>
        main()
      File "/home/fox/pixray/pixray.py", line 2129, in main
        do_init(settings)
      File "/home/fox/pixray/pixray.py", line 613, in do_init
        drawer.load_model(args, device)
      File "/home/fox/pixray/vdiff.py", line 85, in load_model
        model.load_state_dict(torch.load(checkpoint, map_location='cpu'))
      File "/home/fox/miniconda3/lib/python3.9/site-packages/torch/serialization.py", line 608, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/home/fox/miniconda3/lib/python3.9/site-packages/torch/serialization.py", line 777, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    EOFError: Ran out of input
    

    It also spits out a 0-byte yfcc_2.pth file in the models folder. Attempting to follow the link it gives also yields a 404 page saying the bucket no longer exists.

    opened by graytFox 5
  • update predictors to use Cog's new Pydantic API

    Hey @dribnet 👋🏼

    This PR updates pixray's Cog predictor classes to be compatible with Cog's new Python API.

    Cog v0.1.0 has a new predictor API that makes use of Python's built-in type annotations to declare input and output types. The new API also has a different way of declaring inputs, based on pydantic, a Python package for data validation. Instead of using the @cog.input decorators, inputs are now declared inline as parameters to the predict() method.

    There's a bunch of other useful foundational stuff in this new release of Cog that gets us closer to having a standardized type system that leans on JSON Schema and OpenAPI instead of re-inventing our own thing. For more details, see the release notes here: https://github.com/replicate/cog/releases/tag/v0.1.0
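
    For context, a predictor under the new API generally has this shape (a hedged sketch of the Cog interface, not pixray's actual cogrun.py; the prompts/drawer inputs and the placeholder output are illustrative):

        from cog import BasePredictor, Input, Path

        class Predictor(BasePredictor):
            def setup(self):
                # Load models into memory once, before any predictions run.
                self.ready = True

            def predict(
                self,
                prompts: str = Input(description="Text prompt to render", default="sunrise"),
                drawer: str = Input(description="Which drawer to use", default="pixel"),
            ) -> Path:
                # Placeholder for the real generation loop: a real predictor would
                # run pixray here and return the rendered image file.
                out_path = "output.txt"
                with open(out_path, "w") as f:
                    f.write(f"{drawer}: {prompts}")
                return Path(out_path)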

    Process

    Here's the process I followed to set things up, make changes, and test:

    1. created a new model https://replicate.com/zeke/pydantic-pixray
    2. forked this repo, recursively cloned it, and cog pushed it to zeke/pydantic-pixray using "old cog" (0.0.x)
    3. verified that my unchanged fork of pixray worked by running some predictions
    4. upgraded my local Cog version to the latest, 0.1.1
    5. updated all the Predictors in cogrun.py to use cog.BasePredictor, cog.Input, cog.Path etc.
    6. set the image field in cog.yaml to publish to my own copy of the model and the predict field to cogrun.py:PixrayVdiff
    7. published using cog push
    8. ran cog predict from a GCP image with a GPU. See https://gist.github.com/zeke/a36c059bebb751fb21b26c1d14ed1996

    Progress

    I am now able to cog build, cog push, and cog predict the changes herein using the latest version of Cog, but still hitting a few snags:

    • The PixrayVdiff predictor (which I think corresponds to https://replicate.com/pixray/text2image) produces output, but it's yielding the same image over and over. See https://gist.github.com/zeke/a36c059bebb751fb21b26c1d14ed1996
    • Some of the existing predictors accept a kwargs argument, but the new version of Cog has a strict list of allowed input types. In order to be compatible with Cog's new type checking stuff, these predictors that accept arbitrary keyword arguments will need to be expanded to explicitly list all the arguments and their types.

    Next steps

    @dribnet hopefully this gives you a head start for updating pixray to work with the new version of Cog. Let me know if this all makes sense, and if you need more help getting these changes shipped.

    opened by zeke 5
  • New option to load from yaml without using run.py

    @dribnet Can you please check and merge this feature?

    It is meant to take a new argument, --config-file <path_to_yaml>, without using the run.py script. (Sorry for the delays; this is my first PR on a repo I forked ;))

    See ya! And, btw, great job on pixray...

    opened by syllebra 5
  • Fast pixel drawer

    Only supports "rectangular pixels".

    250it [00:31,  7.82it/s]
    vs.
    235it [02:22,  1.73it/s]
    

    It also uses a little less memory: 6433 MiB vs. 8747 MiB.

    opened by samedii 4
  • Make apt-get update/install single line

    This is a little Docker gotcha: at some point the apt repositories will change, and then apt-get install will fail because it's using the old cached output of apt-get update.

    We should make it harder to trip up on this in Cog, or document it, or something...

    opened by bfirsh 3
  • Python 3.8.5 torch missing version

    ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0)

    Python 3.8.5 is the oldest version available on miniconda.

    (3.8) [email protected] pixray % pip install -r requirements.txt
    Looking in links: https://download.pytorch.org/whl/torch_stable.html
    Collecting git+https://github.com/bfirsh/taming-transformers.git@7a6e64ee (from -r requirements.txt (line 28))
      Cloning https://github.com/bfirsh/taming-transformers.git (to revision 7a6e64ee) to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-zaqc64ss
      Running command git clone --filter=blob:none --quiet https://github.com/bfirsh/taming-transformers.git /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-zaqc64ss
      WARNING: Did not find branch or tag '7a6e64ee', assuming revision or ref.
      Running command git checkout -q 7a6e64ee
      Resolved https://github.com/bfirsh/taming-transformers.git to commit 7a6e64ee
      Preparing metadata (setup.py) ... done
    Collecting git+https://github.com/openai/CLIP (from -r requirements.txt (line 29))
      Cloning https://github.com/openai/CLIP to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-wdkc5c31
      Running command git clone --filter=blob:none --quiet https://github.com/openai/CLIP /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-wdkc5c31
      Resolved https://github.com/openai/CLIP to commit d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
      Preparing metadata (setup.py) ... done
    Collecting git+https://github.com/pvigier/perlin-numpy@6f077f8 (from -r requirements.txt (line 30))
      Cloning https://github.com/pvigier/perlin-numpy (to revision 6f077f8) to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-iyuy_gin
      Running command git clone --filter=blob:none --quiet https://github.com/pvigier/perlin-numpy /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-iyuy_gin
      WARNING: Did not find branch or tag '6f077f8', assuming revision or ref.
      Running command git checkout -q 6f077f8
      Resolved https://github.com/pvigier/perlin-numpy to commit 6f077f8
      Preparing metadata (setup.py) ... done
    Collecting git+https://github.com/fbcotter/pytorch_wavelets (from -r requirements.txt (line 46))
      Cloning https://github.com/fbcotter/pytorch_wavelets to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-a0yl7grc
      Running command git clone --filter=blob:none --quiet https://github.com/fbcotter/pytorch_wavelets /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-a0yl7grc
      Resolved https://github.com/fbcotter/pytorch_wavelets to commit 9a0c507f04f43c5397e384bb6be8340169b2fd9a
      Preparing metadata (setup.py) ... done
    Collecting git+https://github.com/pixray/aphantasia@7e6b3bb (from -r requirements.txt (line 49))
      Cloning https://github.com/pixray/aphantasia (to revision 7e6b3bb) to /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-jbsijre8
      Running command git clone --filter=blob:none --quiet https://github.com/pixray/aphantasia /private/var/folders/01/sn57hs8566145w17svn1c9780000gn/T/pip-req-build-jbsijre8
      WARNING: Did not find branch or tag '7e6b3bb', assuming revision or ref.
      Running command git checkout -q 7e6b3bb
      Resolved https://github.com/pixray/aphantasia to commit 7e6b3bb
      Preparing metadata (setup.py) ... done
    ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0)
    ERROR: No matching distribution found for torch==1.9.0+cu102
    (3.8) [email protected] pixray % python --version
    Python 3.8.5
    
    
    
    opened by codymarcel 0
  • Installation error with `torch`

    I am trying to install using the requirements.txt. However, I am getting this error:

    Collecting package metadata (current_repodata.json): ...working... done
    Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
    Collecting package metadata (repodata.json): ...working... done
    Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.

    PackagesNotFoundError: The following packages are not available from current channels:

    • torch==1.9.0+cu102

    Current channels:

    • https://repo.anaconda.com/pkgs/main/win-64
    • https://repo.anaconda.com/pkgs/main/noarch
    • https://repo.anaconda.com/pkgs/r/win-64
    • https://repo.anaconda.com/pkgs/r/noarch
    • https://repo.anaconda.com/pkgs/msys2/win-64
    • https://repo.anaconda.com/pkgs/msys2/noarch

    To search for alternate channels that may provide the conda package you're looking for, navigate to

    https://anaconda.org
    

    My computer runs Windows 10, and I created a Python 3.8 virtual env using Conda. Here are my CUDA settings:

    import torch
    import tensorflow as tf
    import tensorflow.keras as ks
    
    print(tf)
    print(ks)
    print(torch.cuda.is_available())
    print(torch.version.cuda)
    print(torch.backends.cudnn.version())
    print('//////////')
    
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    X_train = torch.FloatTensor([0., 1., 2.])
    X_train = X_train.to(device)
    
    
    print(X_train.is_cuda)
    print(torch.cuda.current_device())
    print(torch.cuda.device_count())
    print(torch.cuda.get_device_name(0))
    

    Output

    <module 'tensorflow' from 'C:\Venv\conda_python3_8_tensorflow_gen_art\lib\site-packages\tensorflow\__init__.py'>
    <module 'tensorflow.keras' from 'C:\Venv\conda_python3_8_tensorflow_gen_art\lib\site-packages\tensorflow\keras\__init__.py'>
    True
    11.3
    8302
    //////////
    True
    0
    1
    NVIDIA GeForce GTX 1070 Ti

    Process finished with exit code 0

    opened by kaionwong 3
  • Replace wget with requests for windows compatibility

    The file-download utility has been changed from a Linux utility (wget) to a Python library (requests). It downloads at a good speed without too much RAM or CPU usage. It requires a new import, unfortunately, but it is a common one.
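
    A minimal sketch of the streaming-download pattern described here (illustrative only; the PR's actual helper name, chunk size, and error handling may differ):

        import requests

        def download_file(url: str, destination: str, chunk_size: int = 1 << 20) -> str:
            """Stream a remote file to disk without holding it all in memory."""
            with requests.get(url, stream=True, timeout=60) as response:
                response.raise_for_status()
                with open(destination, "wb") as f:
                    for chunk in response.iter_content(chunk_size=chunk_size):
                        if chunk:  # skip keep-alive chunks
                            f.write(chunk)
            return destination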

    This is tested to be working on my local Windows 10 machine and a Google Colab instance.

    Fixes #47 Partial for #76

    opened by cjpeterson 0