Synthetic data for the people.

Overview

zpy: Synthetic data in Blender.

Website • Install • Docs • Examples • CLI • Contribute • Licence


[Image: synthetic raspberry pi render]

Abstract

Collecting, labeling, and cleaning data for computer vision is a pain. Jump into the future and create your own data instead! Synthetic data is faster to develop with, effectively infinite, and gives you full control to prevent bias and privacy issues from creeping in. We created zpy to make synthetic data easy, by simplifying the scene creation process and providing a straightforward way to generate synthetic data at scale.

Install

Install: Using Blender GUI

First download the latest zip (you want the one called zpy_addon-v*.zip). Then open up Blender. Navigate to Edit -> Preferences -> Add-ons. You should be able to install and enable the addon from there.

[GIF: enabling the addon]

Install: Linux: Using Install Script

$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/ZumoLabs/zpy/main/install.sh)"

Set these environment variables for specific versions:

export BLENDER_VERSION="2.91"
export BLENDER_VERSION_FULL="2.91.0"
export ZPY_VERSION="v1.0.0"

Documentation

More documentation can be found here.

Examples

Tutorials

Projects

CLI

We provide a simple CLI; you can find its documentation here.

Contributing

We welcome community contributions! Search through the current issues or open your own.

Licence

This release of zpy is under the GPLv3 license, a free copyleft license used by Blender. TL;DR: It's free, use it!

BibTeX

If you use zpy in your research, we would appreciate the citation!

@article{zpy,
  title={zpy: Synthetic data for Blender.},
  author={Ponte, H. and Ponte, N. and Karatas, K.},
  journal={GitHub. Note: https://github.com/ZumoLabs/zpy},
  volume={1},
  year={2020}
}
Comments
  • Improving Rendering & Processing Speed with AWS

    Is your feature request related to a problem? Please describe. I'm frustrated by the time it is taking to process images on my local device (the rendering isn't the worst but the annotations are taking a long time at the end). I would like to use AWS EC2 instances to reduce the time required to create images and annotations (especially the category segmentation which seems to take a long time to encode into the annotation).

    Do you have any suggestions as to the kinds of instances or methods on AWS that can be used to reduce rendering and processing time? That would be immensely helpful. Thank you.
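
    Not an official recommendation, but since Blender can render headlessly, a compute-optimized EC2 instance can run a sim entirely from the command line; a minimal sketch (the file names are placeholders for your own scene and run script):

        # Render headlessly on the instance, no GUI required.
        $ blender -b scene.blend -P run.py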

    question 
    opened by tgorm1965 14
  • Is it possible to segment parts of an object based on its material?

    Is your feature request related to a problem? Please describe. I would like to segment a complex mesh that is made up of different components, with a different material for every component. Is it possible to segment based on the material? Or do I need to separate the single mesh into multiple components?
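
    One hedged approach (not necessarily zpy's intended workflow) is to split the mesh by material with Blender's built-in separate operator, then segment each resulting object; the object name here is hypothetical:

        import bpy
        import zpy

        # Split the mesh into one object per material, then give each part
        # its own segmentation color.
        obj = bpy.data.objects["complex_mesh"]  # hypothetical object name
        bpy.context.view_layer.objects.active = obj
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.separate(type='MATERIAL')  # one object per material
        bpy.ops.object.mode_set(mode='OBJECT')

        for part in bpy.context.selected_objects:
            color = zpy.color.random_color(output_style="frgb")
            zpy.objects.segment(part.name, color=color)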

    question 
    opened by tgorm1965 13
  • Bounding Box generation Error

    I've created this Blender scene where I have a dice, and I'm using ZPY to generate a dataset composed of images obtained by rotating around the object and jittering both the dice position and the camera. Everything seems to be working properly, but the bounding boxes in the annotation file get progressively worse with each picture.

    For example, this is the first image's bounding box: [image]

    This one we get halfway through: [image]

    And this is one of the last ones: [image]

    This is my code (I've cut some stuff, I can't paste it all for some reason):

    
    import math
    import random
    from pathlib import Path

    import bpy
    import zpy

    def run(num_steps=20):
        
        # Random seed results in unique behavior
        zpy.blender.set_seed()
    
        # Create the saver object
        saver = zpy.saver_image.ImageSaver(description="Domain randomized dado")
    
        # Add the dado category
        dado_seg_color = zpy.color.random_color(output_style="frgb")
        saver.add_category(name="dado", color=dado_seg_color)
    
        # Segment the dado (make sure a material exists for the object!)
        zpy.objects.segment("dado", color=dado_seg_color)
        
        # Original dice pose
        zpy.objects.save_pose("dado", "dado_pose_og")
        
        # Original camera pose
        zpy.objects.save_pose("Camera", "Camera_pose_og")
    
        # Save the positions of objects so we can jitter them later
        zpy.objects.save_pose("Camera", "cam_pose")
        zpy.objects.save_pose("dado", "dado_pose")
        
        asset_dir = Path(bpy.data.filepath).parent
        texture_paths = [
            asset_dir / Path("textures/madera.png"),
            asset_dir / Path("textures/ladrillo.png"),
        ]
        
    
        # Run the sim.
        for step_idx in range(num_steps):
            # Example logging
            # stp = zpy.blender.step()
            # print("BLENDER STEPS: ", stp.num_steps)
            # log.debug("This is a debug log")
    
            # Return camera and dado to original positions
            zpy.objects.restore_pose("Camera", "cam_pose")
            zpy.objects.restore_pose("dado", "dado_pose")
            
            # Rotate camera
            location = bpy.context.scene.objects["Camera"].location
            angle = step_idx * 360 / num_steps
            # rotate() is a helper from the author's full script, omitted here
            location = rotate(location, angle, axis=(0, 0, 1))
            bpy.data.objects["Camera"].location = location
            
            # Jitter dado pose
            zpy.objects.jitter(
                "dado",
                translate_range=((-300, 300), (-300, 300), (0, 0)),
                rotate_range=(
                    (0, 0),
                    (0, 0),
                    (-math.pi, math.pi),
                ),
            )
    
            # Jitter the camera pose
            zpy.objects.jitter(
                "Camera",
                translate_range=(
                    (-5, 5),
                    (-5, 5),
                    (-5, 5),
                ),
            )
    
            # Camera should be looking at dado
            zpy.camera.look_at("Camera", bpy.data.objects["dado"].location)
            
            texture_path = random.choice(texture_paths)
            
            # HDRIs are like a pre-made background with lighting
            # zpy.hdris.random_hdri()
    
            # Pick a random texture from the 'textures' folder (relative to blendfile)
            # Textures are images that we will map onto a material
            new_mat = zpy.material.make_mat_from_texture(texture_path)
            # zpy.material.set_mat("dado", new_mat)
    
            # Have to segment the new material
            zpy.objects.segment("dado", color=dado_seg_color)
    
            # Jitter the dado material
            # zpy.material.jitter(bpy.data.objects["dado"].active_material)
    
            # Jitter the HSV for empty and full images
            '''
            hsv = (
                random.uniform(0.49, 0.51),  # (hue)
                random.uniform(0.95, 1.1),  # (saturation)
                random.uniform(0.75, 1.2),  # (value)
            )
            '''
    
            # Name for each of the output images
            rgb_image_name = zpy.files.make_rgb_image_name(step_idx)
            iseg_image_name = zpy.files.make_iseg_image_name(step_idx)
            depth_image_name = zpy.files.make_depth_image_name(step_idx)
    
            # Render image
            zpy.render.render(
                rgb_path=saver.output_dir / rgb_image_name,
                iseg_path=saver.output_dir / iseg_image_name,
                depth_path=saver.output_dir / depth_image_name,
                # hsv=hsv,
            )
    
            # Add images to saver
            saver.add_image(
                name=rgb_image_name,
                style="default",
                output_path=saver.output_dir / rgb_image_name,
                frame=step_idx,
            )
            saver.add_image(
                name=iseg_image_name,
                style="segmentation",
                output_path=saver.output_dir / iseg_image_name,
                frame=step_idx,
            )
            saver.add_image(
                name=depth_image_name,
                style="depth",
                output_path=saver.output_dir / depth_image_name,
                frame=step_idx,
            )
    
            # Add annotation to segmentation image
            saver.add_annotation(
                image=rgb_image_name,
                seg_image=iseg_image_name,
                seg_color=dado_seg_color,
                category="dado",
            )
            
    
        # Write out annotations
        saver.output_annotated_images()
        saver.output_meta_analysis()
    
        # ZUMO Annotations
        zpy.output_zumo.OutputZUMO(saver).output_annotations()
    
        # COCO Annotations
        zpy.output_coco.OutputCOCO(saver).output_annotations()
        
        # Restore the initial state
        zpy.objects.restore_pose("dado", "dado_pose_og")
        zpy.objects.restore_pose("Camera", "Camera_pose_og")
    

    Is this my fault or an actual bug?

    • OS: Ubuntu 20.04
    • Python 3.9
    • Blender 2.93
    • zpy: latest
    bug 
    opened by franferraz98 7
  • Better Run button

    To properly run the "run" script, a zpy user has to click the "Run" button under the ZPY panel. If they click the "Run" button in the Blender script panel, it will not save and reset the scene, resulting in simulation drift. We should provide a warning, or simply make the "Run" button in the Blender script panel a valid option as well.
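
    Until something like that lands, a defensive pattern on the user side, sketched here with the pose helpers from the README example (object names are hypothetical), is to snapshot poses up front and always restore them:

        import zpy

        # Save poses up front and restore them in a finally block, so the
        # scene resets even when launched from Blender's script panel
        # instead of the ZPY panel.
        zpy.objects.save_pose("Camera", "cam_start")
        zpy.objects.save_pose("Suzanne", "obj_start")  # hypothetical object
        try:
            run()  # the sim loop
        finally:
            zpy.objects.restore_pose("Camera", "cam_start")
            zpy.objects.restore_pose("Suzanne", "obj_start")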

    bug enhancement 
    opened by hu-po 7
  • Question: Is there a method to obtain the height and width of every pixel of a depth map?

    Hi, I would like to obtain this information to accompany the depth maps produced by ZPY. Is this possible using ZPY's implementation? How would I go about this? Thank you!
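
    zpy itself may not expose this, but under a pinhole camera model the per-pixel footprint follows from the depth and the focal length expressed in pixels; a hedged sketch using standard bpy camera attributes (the depth file name is a placeholder):

        import bpy
        import numpy as np

        # Pinhole model: a pixel at depth Z covers roughly Z / f_pix scene
        # units, where f_pix is the focal length in pixels. Assumes square
        # pixels and horizontal sensor fit.
        scene = bpy.context.scene
        cam = bpy.data.objects["Camera"].data
        f_pix = scene.render.resolution_x * cam.lens / cam.sensor_width

        depth = np.load("depth.npy")  # placeholder: depth map in scene units
        pixel_width = depth / f_pix   # metric width of each pixel
        pixel_height = depth / f_pix  # same under the square-pixel assumption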

    question 
    opened by tgorm1965 6
  • Code Formatting

    Ran automatic code formatting with the following git pre-commit hooks:

    • black
    • trailing-whitespace-fixer
    • end-of-file-fixer
    • add-trailing-comma

    Added a pre-commit config file in case any contributors would like to use these pre-commit git hooks going forward.
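
    For contributors who want to pick these hooks up locally, the standard pre-commit workflow applies:

        $ pip install pre-commit
        $ pre-commit install          # run the hooks automatically on each commit
        $ pre-commit run --all-files  # run every hook once across the repo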

    opened by zkneupper 6
  • Cannot create COCO annotations when RGB and segmentation images are in separate folders

    Hi, I am trying to generate data with annotations, where my output_dir is the parent folder and I save my RGB and ISEG images in child folders, for example data/rgb_images and data/iseg_images. I also have another folder for annotations, data/coco_annotations. I am successfully able to render images; however, I cannot generate COCO annotations. As far as I understand from the source code, it parses the image path as saver.output_dir + filename instead of the directories where the images are actually stored.
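
    A hedged workaround sketch based on that reading of the source (names mirror the README example, and assume a saver set up as shown there): write images directly under saver.output_dir so the output_dir + filename lookup resolves:

        # Keep rendered images directly in saver.output_dir (no rgb/ or
        # iseg/ subfolders) so OutputCOCO can find them.
        rgb_image_name = zpy.files.make_rgb_image_name(step_idx)
        zpy.render.render(rgb_path=saver.output_dir / rgb_image_name)
        saver.add_image(
            name=rgb_image_name,
            style="default",
            output_path=saver.output_dir / rgb_image_name,  # not a subfolder
            frame=step_idx,
        )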

    Thanks!!!

    opened by aadi-mishra 4
  • Allow Custom scaling for HDRI Background

    Hi, this is a small feature I'd like added to the zpy.hdris.random_hdri() function. Currently, the function doesn't accept a custom scale from the user and renders the HDRI background at the default scale w.r.t. the 3D object.

    It would be helpful if a parameter could be added to random_hdri() and passed through to load_hdri(), as sketched below. This would allow us to play with the HDRI background scaling.
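
    A sketch of the requested signature (the parameter and the helper for the path lookup are hypothetical, not zpy's actual internals):

        # Hypothetical sketch of the feature request, not the real zpy source:
        def random_hdri(scale: float = 1.0):
            hdri_path = pick_random_hdri_path()  # placeholder for the existing lookup
            return load_hdri(hdri_path, scale=scale)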

    hdri enhancement 
    opened by Resh-97 4
  • Give multiple objects the same tag

    Describe the solution you'd like: Not sure if this already exists, but I'd like to be able to give numerous object meshes one shared label. If it does, please let me know!
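
    If it doesn't, a hedged workaround with the existing API is to reuse one category color across several meshes (object and category names here are hypothetical, and a saver as in the README example is assumed):

        import zpy

        # Give several meshes the same label by segmenting them all with a
        # single shared category color.
        shared_color = zpy.color.random_color(output_style="frgb")
        saver.add_category(name="widget", color=shared_color)
        for obj_name in ["widget_a", "widget_b", "widget_c"]:
            zpy.objects.segment(obj_name, color=shared_color)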

    question 
    opened by yashdutt20 4
  • AOV Pass Error

    Describe the bug: A KeyError is occurring in zpy. Can you help me figure out how to fix this?

    To Reproduce: I have followed the Suzanne script from the first video of the YouTube channel.

    Expected behavior: the images are saved and rendered.

    Screenshots:

      File "/Applications/Blender.app/Contents/Resources/2.91/python/lib/python3.7/site-packages/zpy/render.py", line 42, in make_aov_pass
        view_layer['aovs'][-1]['name'] = style
    KeyError: 'bpy_struct[key]: key "aovs" not found'
      In call to configurable 'make_aov_pass' (<function make_aov_pass at 0x1576194d0>)
      In call to configurable 'make_aov_file_output_node' (<function make_aov_file_output_node at 0x15761db00>)
      In call to configurable 'render_aov' (<function render_aov at 0x157623560>)
    Error: Python script failed, check the message in the system console
    Executing script failed with exception
    Error: Python script failed, check the message in the system console

    Desktop (please complete the following information):

    • OS: Mac OS Catalina 10.15.5
    • latest zpy
    opened by yashdutt20 3
  • KeyError: 'bpy_struct[key]: key "aovs" not found'

    Describe the bug: I downloaded the Suzanne 1 example from your repository and ran it using Blender 2.91 and your library.

    I got the following error:

    Traceback (most recent call last):
      File "/run.py", line 78, in <module>
      File "/run.py", line 35, in run
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 204, in render_aov
        output_node = make_aov_file_output_node(style=style)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 82, in make_aov_file_output_node
        zpy.render.make_aov_pass(style)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1069, in gin_wrapper
        utils.augment_exception_message_and_reraise(e, err_str)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
        raise proxy.with_traceback(exception.__traceback__) from None
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/gin/config.py", line 1046, in gin_wrapper
        return fn(*new_args, **new_kwargs)
      File "/home/piotr/Files/blender/bar/lib/python3.7/site-packages/zpy/render.py", line 42, in make_aov_pass
        view_layer['aovs'][-1]['name'] = style
    KeyError: 'bpy_struct[key]: key "aovs" not found'
      In call to configurable 'make_aov_pass' (<function make_aov_pass at 0x7f73bfd25f80>)
      In call to configurable 'make_aov_file_output_node' (<function make_aov_file_output_node at 0x7f73bfd25ef0>)
      In call to configurable 'render_aov' (<function render_aov at 0x7f73bfd28f80>)
    Error: Python script failed, check the message in the system console
    

    To Reproduce Steps to reproduce the behavior:

    1. Install Blender 2.91.
    2. Install zpy addon and zpy-zumo Python library.
    3. Download Suzanne 1 run.py script.
    4. Run Suzanne 1 run.py script.

    Expected behavior: the Suzanne 1 run.py script runs without errors.
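
    A hedged guess at a workaround, not a confirmed fix: the traceback suggests make_aov_pass writes into a per-view-layer AOV list that appears to be Cycles-specific, so ensuring Cycles is the active render engine before rendering may avoid the KeyError:

        import bpy

        # Hedged workaround sketch: zpy's AOV pass seems to depend on the
        # Cycles engine's AOV list, which Eevee does not expose in 2.91.
        bpy.context.scene.render.engine = 'CYCLES'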

    bug 
    opened by VictorAtPL 3
  • Material jitter

    Hi

    How would I go about jittering just one object's material, whilst leaving the others constant?

    So for example, say I've got a car object. Car tyres are always matt black, and the taillights are always red, but the paintwork varies: it might be gloss red or matt blue.

    Is this to do with the way that materials are assigned in Blender? I.e., tyres would be fixed as black and paintwork would be inherited or determined by zpy (see the sketch below).

    thank you

    Andrew
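
    A hedged sketch of one way this might work with the existing API (object and material names are hypothetical): pass only the paintwork material to the jitter call and leave the others untouched:

        import bpy
        import zpy

        # Randomize only the body paintwork; the tyre and taillight
        # materials are simply never passed to jitter().
        paint = bpy.data.objects["car_body"].active_material  # hypothetical
        zpy.material.jitter(paint)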

    opened by barney2074 0
  • HDRI & texture randomisation

    Hi,

    thank you for zpy, loving it and it's just what I need, thank you!

    I've worked through the tutorials. The Suzanne 3 YouTube content seems to differ from the latest Suzanne 3 Python run.py on GitHub. YouTube has a Python list of explicit texture and HDRI paths, whereas GitHub seems to have moved it into a function?

    To get it to work, from the documentation I thought I just needed to create a textures directory and an hdris/1k directory in the same folder as the .blend file:

    so:

    path/foo.blend
    path/textures/1.jpg
    path/textures/2.jpg
    path/hdris/1k/a.exr
    path/hdris/1k/b.exr

    However, I get a bunch of errors, so it looks like this is not correct. I would be very grateful if you could point me in the right direction. Thanks again.

    Andrew
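
    For reference, a hedged sketch of the directory-driven variant, mirroring the relative-path pattern in the dice example above (the glob pattern is an assumption):

        import random
        from pathlib import Path

        import bpy
        import zpy

        # Collect textures relative to the .blend file and map a random one
        # onto a new material each step.
        asset_dir = Path(bpy.data.filepath).parent
        texture_paths = sorted((asset_dir / "textures").glob("*.jpg"))
        new_mat = zpy.material.make_mat_from_texture(random.choice(texture_paths))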

    opened by barney2074 1
  • How to get the normalised segmentation?

    I want to generate the segmentation_float with zpy.output_coco.OutputCOCO(saver).output_annotations().

    When I changed the code, it didn't return any segmentation, just the bbox.

    The reason I want to normalize these is that I am rendering at 4K image size and getting large annotation files. For example, for 5 images the annotation file was 17 MB.

            {
                "category_id": 0,
                "image_id": 0,
                "id": 0,
                "iscrowd": false,
                "bbox": [
                    311.01,
                    239.01,
                    16.980000000000018,
                    240.98000000000002
                ],
                "area": 4091.8404000000046
            },
            {
                "category_id": 1,
                "image_id": 0,
                "id": 1,
                "iscrowd": false,
                "bbox": [
                    279.01,
                    223.01,
                    79.98000000000002,
                    120.98000000000002
                ],
                "area": 9675.980400000004
            },
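
    Since a COCO JSON records each image's width and height, one hedged post-processing sketch (the file name is a placeholder, and this is not a zpy option) normalizes the bbox values after export:

        import json

        # Normalize COCO bbox coordinates by image size after export.
        with open("coco_annotations.json") as f:  # placeholder path
            coco = json.load(f)
        sizes = {img["id"]: (img["width"], img["height"]) for img in coco["images"]}
        for ann in coco["annotations"]:
            w, h = sizes[ann["image_id"]]
            x, y, bw, bh = ann["bbox"]
            ann["bbox"] = [x / w, y / h, bw / w, bh / h]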
    
    opened by c3210927 0
Releases: v1.4.1rc9
Owner: Zumo Labs ("The world is a simulation.")