Easy to use Python camera interface for NVIDIA Jetson

JetCam

JetCam is an easy-to-use Python camera interface for NVIDIA Jetson.

  • Works with various USB and CSI cameras using Jetson's Accelerated GStreamer Plugins

  • Easily read images as numpy arrays with image = camera.read()

  • Set the camera to running = True to attach callbacks to new frames

JetCam makes it easy to prototype AI projects in Python, especially within the Jupyter Lab programming environment installed in JetCard.

If you find an issue, please let us know!

Setup

git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install

JetCam is tested against a system configured with the JetCard setup. Different system configurations may require additional steps.
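
As a quick sanity check after installation, you can try importing the package. The snippet below is a minimal smoke test (not part of the official instructions); it only verifies that the jetcam package and its OpenCV dependency are importable by the interpreter you installed it with.

import jetcam
from jetcam.csi_camera import CSICamera   # raises ImportError if OpenCV is missing or broken
from jetcam.usb_camera import USBCamera

print("jetcam installed at:", jetcam.__file__)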

Usage

Below we show some usage examples. You can find more in the notebooks.

Create CSI camera

Call CSICamera to use a compatible CSI camera. capture_width, capture_height, and capture_fps control the shape and rate at which images are acquired from the sensor, while width and height control the final output shape of the image returned by the read function.

from jetcam.csi_camera import CSICamera

camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)

Create USB camera

Call USBCamera to use a compatible USB camera. The same parameters as CSICamera apply, along with a capture_device parameter that indicates the device index. You can check the device index by running ls /dev/video*.

from jetcam.usb_camera import USBCamera

camera = USBCamera(capture_device=1)
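
As with the CSI camera, the capture and output shapes can be set explicitly. The values below are purely illustrative (a 640x480 capture from device 0, resized to 224x224 on read):

from jetcam.usb_camera import USBCamera

# Illustrative values only: adjust capture_device to match your /dev/video* index.
camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0)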

Read

Call read() to read the latest image as a numpy.ndarray of data type np.uint8 and shape (224, 224, 3). The color format is BGR8.

image = camera.read()

The read function also updates the camera's internal value attribute.

camera.read()
image = camera.value
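
Because the color format is BGR8, you may want to convert to RGB before displaying the image with libraries that expect RGB channel order (matplotlib, for example). A minimal sketch, assuming OpenCV is available:

import cv2

image_bgr = camera.read()                               # BGR8, shape (224, 224, 3), dtype uint8
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)  # convert for RGB-based display libraries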

Callback

You can also set the camera to running = True, which will spawn a thread that acquires images from the camera and updates the camera's value attribute automatically. You can then attach a callback to the value using the traitlets library; the callback is called with both the new and the old camera value.

camera.running = True

def callback(change):
    new_image = change['new']
    # do some processing...

camera.observe(callback, names='value')
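
To stop receiving updates, detach the callback with the corresponding traitlets call and stop the acquisition thread. A short sketch:

camera.unobserve(callback, names='value')  # stop calling the callback on new frames
camera.running = False                     # stop the background acquisition thread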

Cameras

CSI Cameras

These cameras work with the CSICamera class. Try them out by following the example notebook.

| Model | Infrared | FOV (degrees) | Resolution | Cost |
| --- | --- | --- | --- | --- |
| Raspberry Pi Camera V2 | | 62.2 | 3280x2464 | $25 |
| Raspberry Pi Camera V2 (NOIR) | x | 62.2 | 3280x2464 | $31 |
| Arducam IMX219 CS lens mount | | | 3280x2464 | $65 |
| Arducam IMX219 M12 lens mount | | | 3280x2464 | $60 |
| LI-IMX219-MIPI-FF-NANO | | | 3280x2464 | $29 |
| WaveShare IMX219-77 | | 77 | 3280x2464 | $19 |
| WaveShare IMX219-77IR | x | 77 | 3280x2464 | $21 |
| WaveShare IMX219-120 | | 120 | 3280x2464 | $20 |
| WaveShare IMX219-160 | | 160 | 3280x2464 | $23 |
| WaveShare IMX219-160IR | x | 160 | 3280x2464 | $25 |
| WaveShare IMX219-200 | | 200 | 3280x2464 | $27 |

USB Cameras

These cameras work with the USBCamera class. Try them out by following the example notebook.

| Model | Infrared | FOV (degrees) | Resolution | Cost |
| --- | --- | --- | --- | --- |
| Logitech C270 | | 60 | 1280x720 | $18 |

See also

  • JetBot - An educational AI robot based on NVIDIA Jetson Nano

  • JetRacer - An educational AI racecar using NVIDIA Jetson Nano

  • JetCard - An SD card image for web programming AI projects with NVIDIA Jetson Nano

  • torch2trt - An easy to use PyTorch to TensorRT converter

Comments
  • Camera works, Jetcam does not

    I am trying to get a Raspberry Pi v2 camera module working on a Jetson Xavier NX with Jetpack 4.4 installed.

    (Specifically, I want to use Jetcam because one of your other projects, https://github.com/NVIDIA-AI-IOT/trt_pose uses Jetcam in its live demo.)

    I know my camera is connected properly and working because if I run:

    gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink
    

    ... I get a video image on screen immediately, no problem.

    However, running even the most basic example (csi_camera notebook), I always get errors:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    /usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
         23             if not re:
    ---> 24                 raise RuntimeError('Could not read image from camera.')
         25         except:
    
    RuntimeError: Could not read image from camera.
    
    During handling of the above exception, another exception occurred:
    
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-2-4d23bcae2fae> in <module>
          1 from jetcam.csi_camera import CSICamera
          2 
    ----> 3 camera = CSICamera(width=224, height=224, capture_width=1980, capture_height=1080, capture_fps=30)
    
    /usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py in __init__(self, *args, **kwargs)
         25         except:
         26             raise RuntimeError(
    ---> 27                 'Could not initialize camera.  Please see error trace.')
         28 
         29         atexit.register(self.cap.release)
    
    RuntimeError: Could not initialize camera.  Please see error trace
    

    I've even tried the fix (hack?) suggested in https://github.com/NVIDIA-AI-IOT/jetcam/issues/12 but this makes no difference.

    Any advice on what to look for or what the issue could be?

    opened by anselanza 3
  • remove duplicate comma

    This duplicate comma causes an error on Jetpack 4.3 (OpenCV 4): error opening bin: could not parse caps "video/x-raw, , format=(string)BGR". Fix #17.

    opened by borongyuan 3
  • camera can not initial

    Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from jetcam.csi_camera import CSICamera
    >>> camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 24, in __init__
    RuntimeError: Could not read image from camera.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/csi_camera.py", line 27, in __init__
    RuntimeError: Could not initialize camera. Please see error trace.

    opened by wangnan31415926 1
  • cv2.cpython-36m-aarch64-linux-gnu.so: undefined symbol

    [email protected]:/usr/lib$ python3
    Python 3.6.8 (default, Jan 14 2019, 11:02:34)
    [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from jetcam.usb_camera import USBCamera
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/dist-packages/jetcam-0.0.0-py3.6.egg/jetcam/usb_camera.py", line 3, in <module>
    ImportError: /usr/local/lib/python3.6/dist-packages/cv2.cpython-36m-aarch64-linux-gnu.so: undefined symbol: _ZTIN2cv3dnn14dnn4_v201809175LayerE
    >>>
    
    
    

    My HW is jetson nano and SW env is

    [email protected]:/usr/lib$ jetson-release
     - NVIDIA Jetson NANO/TX1
       * Jetpack 4.2 [L4T 32.1.0]
       * CUDA GPU architecture 5.3
     - Libraries:
       * CUDA 10.0.166
       * cuDNN 7.3.1.28-1+cuda10.0
       * TensorRT 5.0.6.3-1+cuda10.0
       * Visionworks 1.6.0.500n
       * OpenCV 4.0.1 compiled CUDA: YES
     - Jetson Performance: active
    [email protected]:/usr/lib$
    
    
    opened by hgnan 0
  • Install failure

    I run sudo python3 setup.py install

    I get the following:

    /usr/local/lib/python3.8/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.1.36ubuntu1 is an invalid version and will not be supported in a future release
      warnings.warn(
    /usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.23ubuntu1 is an invalid version and will not be supported in a future release
      warnings.warn(
    running bdist_egg
    running egg_info
    writing jetcam.egg-info/PKG-INFO
    writing dependency_links to jetcam.egg-info/dependency_links.txt
    writing top-level names to jetcam.egg-info/top_level.txt
    reading manifest file 'jetcam.egg-info/SOURCES.txt'
    adding license file 'LICENSE.md'
    writing manifest file 'jetcam.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-aarch64/egg
    running install_lib
    running build_py
    creating build/bdist.linux-aarch64/egg
    creating build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/csi_camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/__init__.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/usb_camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/camera.py -> build/bdist.linux-aarch64/egg/jetcam
    copying build/lib/jetcam/utils.py -> build/bdist.linux-aarch64/egg/jetcam
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/csi_camera.py to csi_camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/__init__.py to __init__.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/usb_camera.py to usb_camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/camera.py to camera.cpython-38.pyc
    byte-compiling build/bdist.linux-aarch64/egg/jetcam/utils.py to utils.cpython-38.pyc
    creating build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/PKG-INFO -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/SOURCES.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/dependency_links.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    copying jetcam.egg-info/top_level.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
    zip_safe flag not set; analyzing archive contents...
    creating 'dist/jetcam-0.0.0-py3.8.egg' and adding 'build/bdist.linux-aarch64/egg' to it
    removing 'build/bdist.linux-aarch64/egg' (and everything under it)
    Processing jetcam-0.0.0-py3.8.egg
    Removing /usr/lib/python3.8/site-packages/jetcam-0.0.0-py3.8.egg
    Copying jetcam-0.0.0-py3.8.egg to /usr/lib/python3.8/site-packages
    jetcam 0.0.0 is already the active version in easy-install.pth
    
    Installed /usr/lib/python3.8/site-packages/jetcam-0.0.0-py3.8.egg
    Processing dependencies for jetcam==0.0.0
    Finished processing dependencies for jetcam==0.0.0
    

    import jetcam returns ModuleNotFoundError: No module named 'jetcam'

    What am I doing wrong?

    opened by master0v 1
  • Jetbot Camera Not Working- RuntimeError: Could not initialize camera.  Please see error trace.

    Hello, For some reason I can't get my camera to work again. For context, I tried to use a custom dataset from roboflow but then my kernel kept dying after installing roboflow. I reconfigured the right numpy and edited my .bashrc as I saw in NVIDIA's forum. But now the camera won't initialize. I know it works because it used to work before. I also am able to save a short video with it and able to call it in the terminal. But whenever I try to run a cell in Jupyter that requires the camera, it fails. I've tried restarting the camera too. But no luck :( Any help would be appreciated!

    opened by niiita 1
  • Camera ON LED continues to be on unless I restart the OS.

    Hi,

    How can I close the camera after camera.unobserve(update_image, names='value') ? The camera ON LED continues to be on unless I restart the OS. I am using Logitech C270 USB camera. Is there a command to close the camera?

    opened by jam244 0
  • jetcam thread race - read thread and processing thread

    with camera.running = True, jetcam spawns a thread which reads into camera.value

    Now let's say we do new_image = change['new'] and do some processing. I guess Python does shallow copying and only assigns a reference to the original image array to the new_image variable. So, effectively, new_image and camera.value are pointing to the same memory region. Let's say my processing thread takes a very long time. In the meantime, camera.value is updated by the jetcam thread. This can cause a thread race. Is that right?

    opened by PhilipsKoshy 0
  • Cannot query video position: status=0, value=-1, duration=-1

    I tried camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0) and the reply is: [ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

    opened by Chenhait 0
  • AttributeError: 'directional_link' object has no attribute 'link'

    The beginning steps are OK, but camera_link.link() fails to execute and I get an error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    in <module>
    ----> 1 camera_link.link()

    AttributeError: 'directional_link' object has no attribute 'link'

    Don't know what the reason is.

    opened by watershade 0
Releases: v0.0.0
Owner: NVIDIA AI IOT