Lightweight Python library for adding real-time object tracking to any detector.

Overview

Norfair is a customizable lightweight Python library for real-time 2D object tracking.

Using Norfair, you can add tracking capabilities to any detector with just a few lines of code.

Features

  • Any detector expressing its detections as a series of (x, y) coordinates can be used with Norfair. This includes detectors performing object detection, pose estimation, and instance segmentation.

  • The function used to calculate the distance between tracked objects and detections is defined by the user, making the tracker extremely customizable. This function can make use of any extra information, such as appearance embeddings, which can heavily improve tracking performance.

  • Modular. It can easily be inserted into complex video processing pipelines to add tracking to existing projects. At the same time, it is possible to build a video inference loop from scratch using just Norfair and a detector.

  • Fast. The only thing limiting inference speed is the detection network feeding detections to Norfair.

Norfair is built, used and maintained by Tryolabs.

Installation

Norfair currently supports Python 3.7+.

For the minimal version, install as:

pip install norfair

To install Norfair with the additional dependencies that enable more features, install as:

pip install norfair[video]  # Adds several video helper features running on OpenCV
pip install norfair[metrics]  # Supports running MOT metrics evaluation
pip install norfair[metrics,video]  # Everything included

If the needed dependencies are already present on the system, installing the minimal version of Norfair is enough to enable the extra features. This is particularly useful for embedded devices, where installing compiled dependencies can be difficult, but where they sometimes come preinstalled with the system.

How it works

Norfair works by estimating the future position of each point based on its past positions. It then tries to match these estimated positions with newly detected points provided by the detector. For this matching to occur, Norfair can rely on any distance function specified by the user of the library. Therefore, each object tracker can be made as simple or as complex as needed.

The following is an example of a particularly simple distance function, which calculates the Euclidean distance between tracked objects and detections. This is possibly the simplest distance function you could use in Norfair, as it uses just a single point per detection/object.

import numpy as np

def euclidean_distance(detection, tracked_object):
    return np.linalg.norm(detection.points - tracked_object.estimate)

As an example, we use Detectron2 to get single-point detections to use with this distance function: we take the centroids of the bounding boxes it produces around cars as our detections, and get the following results.

Tracking cars with Norfair

On the left you can see the points we get from Detectron2, and on the right how Norfair tracks them assigning a unique identifier through time. Even a straightforward distance function like this one can work when the tracking needed is simple.
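
When simple spatial matching is not enough, the same distance signature admits richer functions. The following is a hedged sketch, not taken from the demo, of a distance mixing the Euclidean term above with an appearance term; it assumes each Detection was created with an embedding vector in its data attribute, and that the object's latest matched detection is reachable as tracked_object.last_detection:

import numpy as np

def embedding_distance(detection, tracked_object):
    # Spatial term: the same Euclidean distance used above
    spatial = np.linalg.norm(detection.points - tracked_object.estimate)
    # Appearance term: cosine distance between embeddings, assuming they
    # were stored in Detection's data attribute by the detection pipeline
    emb_det = detection.data
    emb_obj = tracked_object.last_detection.data
    cosine = 1 - np.dot(emb_det, emb_obj) / (
        np.linalg.norm(emb_det) * np.linalg.norm(emb_obj)
    )
    # Scale the spatial distance up when appearances disagree
    return spatial * (1 + cosine)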

Norfair also provides several useful tools for creating a video inference loop. Here is what the full code for creating the previous example looks like, including the code needed to set up Detectron2:

import cv2
import numpy as np
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

from norfair import Detection, Tracker, Video, draw_tracked_objects

def euclidean_distance(detection, tracked_object):
    return np.linalg.norm(detection.points - tracked_object.estimate)

# Set up Detectron2 object detector
cfg = get_cfg()
cfg.merge_from_file("demos/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl"
detector = DefaultPredictor(cfg)

# Norfair
video = Video(input_path="video.mp4")
tracker = Tracker(distance_function=euclidean_distance, distance_threshold=20)

for frame in video:
    detections = detector(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = [Detection(p) for p in detections['instances'].pred_boxes.get_centers().cpu().numpy()]
    tracked_objects = tracker.update(detections=detections)
    draw_tracked_objects(frame, tracked_objects)
    video.write(frame)

The video and drawing tools use OpenCV frames, so they are compatible with most Python video code available online. The point tracking is based on SORT, generalized to detections consisting of a dynamically changing number of points per detection.
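
For instance, a pose estimator returning several keypoints per person can be tracked with a per-point distance. A minimal sketch, assuming every detection carries the same keypoint layout as an (n_keypoints, 2) array; the threshold value is illustrative:

import numpy as np
from norfair import Detection, Tracker

def keypoint_distance(detection, tracked_object):
    # Mean per-keypoint Euclidean distance; works for any number of points
    distances = np.linalg.norm(detection.points - tracked_object.estimate, axis=1)
    return distances.mean()

tracker = Tracker(distance_function=keypoint_distance, distance_threshold=30)

# poses stands in for the output of a pose estimator: one (n_keypoints, 2)
# array per detected person
poses = [np.random.rand(17, 2) * 100]
detections = [Detection(points) for points in poses]
tracked_objects = tracker.update(detections=detections)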

Motivation

Trying out the latest state-of-the-art detectors normally requires running repositories that weren't intended to be easy to use. These tend to be repositories associated with research papers describing novel ways of doing detection, and are therefore intended to be run as one-off evaluation scripts that produce the metrics reported in the paper. This explains why they tend not to be easy to run as inference scripts, and why extracting the core model to use in another standalone script isn't always trivial.

Norfair was born out of the need to quickly add a simple layer of tracking over a wide range of newly released SOTA detectors. It was designed to be plugged seamlessly into a complex, highly coupled code base, with minimum effort. Norfair provides a series of modular but compatible tools, which you can pick and choose to use in your project.

Documentation

You can find the documentation for Norfair's API here.

Examples

We provide several examples of how Norfair can be used to add tracking capabilities to different detectors.

  1. Simple tracking of cars using Detectron2.
  2. Faster tracking of cars/pedestrians and 78 other classes using YOLOv5. Try it on any YouTube video in this Google Colab notebook.
  3. Faster tracking of cars using YOLOv4. Try it on any YouTube video in this Google Colab notebook.
  4. Inserting Norfair into an existing project: Simple tracking of pedestrians using AlphaPose.
  5. Speed up pose estimation by extrapolating detections using OpenPose.

Norfair OpenPose Demo

Comparison to other trackers

Norfair's contribution to Python's object-tracker library repertoire is twofold: it can work with any object detector, since it handles a variable number of points per detection, and it lets the user heavily customize the tracker by writing their own distance function.

If you are looking for a tracker, here are some other projects worth noting:

  • OpenCV includes several tracking solutions, like the KCF Tracker and the MedianFlow Tracker, which are initialized by having the user select a part of the frame to track, and then letting the tracker follow that area. They tend not to be run on top of a detector and are not very robust.
  • dlib includes a correlation-based single-object tracker. If you want to track multiple objects with it, you have to build your own multiple-object tracker on top of it.
  • AlphaPose just released a new version of their human pose tracker. This tracker is tightly integrated into their code base, and to the task of tracking human poses.
  • SORT and Deep SORT are similar to this repo in that they use Kalman filters (plus a deep appearance embedding, in the case of Deep SORT), but they are hardcoded to a fixed distance function and to tracking boxes. Norfair also adds some filtering when matching tracked objects with detections, and replaces the Hungarian Algorithm with its own distance minimizer. Both repos are also released under the GPL license, which might be an issue for some individuals or companies, because the source code of derivative works needs to be published.

Benchmarks

MOT17 results obtained using the motmetrics4norfair demo script. Hyperparameters were tuned to reach a high MOTA on this dataset. A more balanced set of hyperparameters, like the defaults used in the other demos, is recommended for production.

Sequence Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP
MOT17-13-DPM 18.0% 83.5% 110 5 28 77 416 9543 120 125 13.4% 26.8%
MOT17-04-FRCNN 56.3% 93.2% 83 18 43 22 1962 20778 90 104 52.0% 10.7%
MOT17-11-FRCNN 61.5% 93.1% 75 19 33 23 431 3631 64 61 56.3% 10.1%
MOT17-04-SDP 77.6% 97.4% 83 48 26 9 1001 10672 225 254 75.0% 13.2%
MOT17-13-SDP 57.0% 82.1% 110 45 28 37 1444 5008 229 209 42.6% 20.1%
MOT17-05-DPM 38.0% 82.2% 133 10 58 65 570 4291 96 96 28.3% 24.2%
MOT17-09-DPM 59.9% 75.4% 26 4 18 4 1042 2137 119 113 38.1% 26.2%
MOT17-10-DPM 37.3% 84.6% 57 6 20 31 871 8051 127 154 29.5% 24.8%
MOT17-02-SDP 51.0% 76.1% 62 11 39 12 2979 9103 268 284 33.5% 18.2%
MOT17-11-DPM 54.2% 84.5% 75 12 24 39 935 4321 88 64 43.4% 21.7%
MOT17-09-FRCNN 58.6% 98.5% 26 7 17 2 49 2207 40 39 56.9% 9.5%
MOT17-11-SDP 75.8% 91.1% 75 34 30 11 697 2285 103 101 67.3% 14.0%
MOT17-02-FRCNN 36.6% 79.7% 62 7 26 29 1736 11783 119 131 26.6% 13.4%
MOT17-05-FRCNN 54.7% 89.2% 133 24 68 41 457 3136 95 96 46.7% 18.1%
MOT17-04-DPM 42.5% 83.6% 83 7 44 32 3965 27336 401 420 33.3% 21.0%
MOT17-10-SDP 74.2% 88.1% 57 30 24 3 1292 3316 308 289 61.7% 19.8%
MOT17-10-FRCNN 61.0% 75.8% 57 16 35 6 2504 5013 319 313 39.0% 17.3%
MOT17-09-SDP 67.6% 94.6% 26 12 14 0 204 1726 52 55 62.8% 13.0%
MOT17-02-DPM 20.2% 81.6% 62 5 14 43 843 14834 111 112 15.0% 24.6%
MOT17-13-FRCNN 58.8% 73.6% 110 29 57 24 2455 4802 371 366 34.5% 18.5%
MOT17-05-SDP 66.7% 87.6% 133 32 81 20 653 2301 134 140 55.4% 16.5%
OVERALL 53.6% 87.2% 1638 381 727 530 26506 156274 3479 3526 44.7% 16.3%

Commercial support

Tryolabs can provide commercial support, implement new features in Norfair or build video analytics tools for solving your challenging problems. Norfair powers several video analytics applications, such as the face mask detection tool.

If you are interested, please contact us.

Citing Norfair

For citations in academic publications, please export your desired citation format (BibTeX or other) from Zenodo.

License

Copyright © 2021, Tryolabs. Released under the BSD 3-Clause License.

Comments
  • Embeddings in TrackedObject

    Hi,

    I'm trying to implement a tracker which uses embeddings. I'm storing each embedding in detection.data; however, I also need a place to store the TrackedObject embedding, in order to compute a suitable distance. Is there something similar to detection.data for TrackedObject?
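
    A minimal sketch of one workaround, assuming embeddings are stored in each Detection's data and the Tracker was created with a nonzero past_detections_length so that objects keep their recent detections:

    import numpy as np

    def embedding_distance(detection, tracked_object):
        # Average the embeddings kept in the object's past detections to get
        # a reference vector, then compare with cosine distance
        past = [d.data for d in tracked_object.past_detections if d.data is not None]
        if not past:
            return 1.0  # no appearance information yet
        reference = np.mean(past, axis=0)
        return 1 - np.dot(detection.data, reference) / (
            np.linalg.norm(detection.data) * np.linalg.norm(reference)
        )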

    feature request 
    opened by LorBordin 19
  • How to evaluate Norfair on my own dataset?

    Hello,

    I am trying to use Norfair along with YOLOv4 on my own dataset. I want to know how well Norfair is tracking the objects in my dataset. Can you please help me evaluate it?
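
    A minimal sketch of one way to do this with the motmetrics library (the same one the norfair[metrics] extra installs), assuming you have per-frame ground-truth ids and points for your dataset; frames is a placeholder for that per-frame data:

    import motmetrics as mm
    import numpy as np

    acc = mm.MOTAccumulator(auto_id=True)
    for gt_ids, gt_points, tracked_objects in frames:  # placeholder iterable
        hyp_ids = [obj.id for obj in tracked_objects]
        hyp_points = np.array([obj.estimate.mean(axis=0) for obj in tracked_objects])
        # Squared Euclidean distances, gated at a maximum matching distance
        dists = mm.distances.norm2squared_matrix(gt_points, hyp_points, max_d2=40 ** 2)
        acc.update(gt_ids, hyp_ids, dists)

    mh = mm.metrics.create()
    print(mh.compute(acc, metrics=["mota", "motp", "num_switches"], name="mydata"))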

    help wanted 
    opened by vishnuvardhan58 18
  • Darknet integration

    Hello, thanks for sharing this awesome tracker. I'm trying to integrate it with darknet, but it's not behaving as expected.

    Here's what I do:

    detections = darknet.detect_image(network, class_names, darknet_image, thresh=0.2)
    detections2 = [
        Detection(get_centroid(detection[2], width, height), data=detection[2])
        for detection in detections
    ]
    tracked_objects = tracker.update(detections=detections2)
    norfair.draw_tracked_objects(frame_resized, tracked_objects)

    The tracker is somewhat populated if I do a print(tracker.tracked_objects), but it doesn't look right and draw_tracked_objects draws nothing. Since darknet outputs x and y by default, I've tried a variety of variations with converting to bbox etc.; nothing has worked and I'm hoping for a little push :)

    Is the data parameter necessary in my case?

    Edit: I forgot to output darknet's centroid as a numpy array; it's working now :) For those who end up with the same issue, swap out get_centroid for something like this (just clean up my messy demo def):

    def get_detlist(detections):
        arr = np.empty((0, 2), int)
        arr = np.append(arr, np.array([[detections[0], detections[1]]]), axis=0)
        return arr

    Detection(get_detlist(detection[2]), data=detection[2])
    opened by haviduck 16
  • Replace gifs for videos on demos

    Demos:

    • [x] 3d_track
      • already uses videos
    • [x] alphapose
    • [x] camera_motion
    • [x] detectron2
    • [x] keypoints_bounding_boxes
    • [x] mmdetection
    • [x] motmetrics4norfair
    • [x] openpose
      • skipped 1/2 frames instead of 4/5 because it didn't work well
    • [x] profiling
    • [x] reid
      • had to change the table for a single video because tables don't work with mp4
    • [x] yolov4
    • [x] yolov5
    • [x] yolov7
      • updated the demo to use bboxes by default

    closes #169 and #167

    opened by javiber 13
  • CPU bottleneck when running the pose estimation demo

    Hi,

    I am trying to track pose estimates using the "Tracking pedestrians with AlphaPose" demo as a reference. However, I am using Nvidia trt_pose (https://github.com/NVIDIA-AI-IOT/trt_pose) instead of AlphaPose as given in the demo.

    The pose estimation alone runs well at around 25 fps (with about 50% CPU usage). However, when I include the pose tracking, my fps drops to about 10-12, and it's definitely a CPU bottleneck, as my CPU usage is around 98% when running tracking. I would like to know if this is considered "normal" with pose estimation tracking or if I am doing something wrong on my end.

    PC specs: GTX 1060 6GB, Intel i7 8500H, 6GB RAM

    Thanks for the great work.

    help wanted 
    opened by pramod-wick 12
  • "reid_hit_counter" Keeps Decreasing Causing Long-living Objects To Killed Suddenly

    Hi & Thanks for your wonderful library.

    I ran your tracker with the "reid" feature enabled, and it seems like there is a small bug in the calculation of "reid_hit_counter". (Although it seems too obvious, so I am probably missing a point.)

    As the documentation states:

    Each tracked object keeps an internal ReID hit counter which tracks how often it's getting recognized by another tracker, each time it gets a match this counter goes up, and each time it doesn't it goes down. If it goes below 0 the object gets destroyed. If used, this argument (reid_hit_counter_max) defines how long an object can live without getting matched to any detections, before it is destroyed.

    But there is no code that increases the value of reid_hit_counter. The only part of the code I found is in tracker_step:

        def tracker_step(self):
            self.hit_counter -= 1
            if self.reid_hit_counter is None:
                if self.hit_counter <= 0:
                    self.reid_hit_counter = self.reid_hit_counter_max
            else:
                self.reid_hit_counter -= 1
            self.point_hit_counter -= 1
            self.age += 1
            # Advances the tracker's state
            self.filter.predict()
    

    So even if the object is "alive", reid_hit_counter keeps decreasing, and as soon as it gets to zero, according to this section of the code:

    if self.reid_hit_counter_max is None:
        self.tracked_objects = [
            o for o in self.tracked_objects if o.hit_counter_is_positive
        ]
        alive_objects = self.tracked_objects
    else:
        tracked_objects = []
        for o in self.tracked_objects:
            if o.reid_hit_counter_is_positive:
                tracked_objects.append(o)
                if o.hit_counter_is_positive:
                    alive_objects.append(o)
                else:
                    dead_objects.append(o)

    that object suddenly gets destroyed. This happens mostly to long-living objects in the scene, whose reid_hit_counter has had time to reach zero. It looks like "alive" objects should not get their reid_hit_counter decreased.

    Thanks in advance

    bug 
    opened by h-sh-h 8
  • Dependency Issue

    I'm using Norfair and another library. However, I'm getting a version conflict for rich.

    Is it possible to change Norfair's dependency to allow a higher version of rich? I'm using poetry to install Norfair and I'm getting "SolveProblemError: Because norfair (0.4.0) depends on rich (>=9.10.0,<10.0.0)". I need to use at least version 11.2.0.

    I can force-install a higher version of rich using pip, but I think it would be cleaner if the dependencies were explicit. Norfair seems to work fine even with a higher version of rich.

    bug 
    opened by mgmalana 7
  • Import metrics error

    @gerhc @draix @dekked Hi, thanks for sharing this source code. When I try to install with

    pip install norfair[video]  # Adds several video helper features running on OpenCV
    pip install norfair[metrics]  # Supports running MOT metrics evaluation
    pip install norfair[metrics,video]  # Everything included

    I get the following error: "ImportError: cannot import name 'metrics'"

    Please let me know what the issue is.

    help wanted 
    opened by abhigoku10 7
  • a question about score value in PredictionsTextFile export

    Thanks for this great library :+1:

    I'm wondering why the prediction score of the detection is exported as -1 here: https://github.com/tryolabs/norfair/blob/4d59a4d5dd0cf4738c70de6e93782c93bce10d41/norfair/metrics.py#L93

    help wanted 
    opened by fcakyon 7
  • distance function examples

    Are there any more examples of distance functions? I'm doing tracking with bounding boxes and a feature extractor on the box. Is there an example that uses this type of setup with Norfair?
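
    A hedged sketch of one such setup: encode each box as its two corner points and keep the extracted feature in Detection's data; the box, embedding size, and threshold below are placeholders:

    import numpy as np
    from norfair import Detection, Tracker

    def box_distance(detection, tracked_object):
        # Mean distance between corresponding box corners
        return np.linalg.norm(detection.points - tracked_object.estimate, axis=1).mean()

    tracker = Tracker(distance_function=box_distance, distance_threshold=50)

    x1, y1, x2, y2 = 10, 20, 110, 220  # placeholder box from the detector
    embedding = np.random.rand(128)    # placeholder feature from the extractor
    detection = Detection(
        points=np.array([[x1, y1], [x2, y2]]),
        data=embedding,  # later reachable via tracked_object.last_detection.data
    )
    tracked_objects = tracker.update(detections=[detection])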

    documentation 
    opened by nikky4D 7
  • class value and export coordinates to csv

    Hi, I have a question related to issue # 19: I'm working through the Detectron2 demo (detectron2_cars.py) to understand the code and wanted to change the detected class from cars to person. I found that changing row 32 to "if c == 0" does this. How do I export/print a list of the detected classes?

    Also, I would like to export to a CSV file the centroid/center coordinates of each tracked object for each frame (similar to print_objects_as_table, but ideally with the data from each frame in one row). Any suggestions?
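
    A minimal sketch of one way to write per-frame centroids to a CSV, slotting into the demo's loop where video and tracker already exist; it assumes single-point (centroid) detections as in detectron2_cars.py:

    import csv

    with open("tracks.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "id", "x", "y"])
        for frame_number, frame in enumerate(video):
            detections = []  # replace with the detector call from the demo
            tracked_objects = tracker.update(detections=detections)
            for obj in tracked_objects:
                x, y = obj.estimate[0]  # single-point (centroid) objects
                writer.writerow([frame_number, obj.id, x, y])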

    opened by nachonacho2 7
  • feed norfair with screenshots

    I'm using YOLOv5 to detect objects on my screen and mss to take screenshots and feed them to YOLOv5. How can I implement Norfair with a setup like this?
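
    A hedged sketch of one way to wire this up, using torch.hub to load YOLOv5 and mss for the capture; the distance function and threshold are illustrative:

    import mss
    import numpy as np
    import torch
    from norfair import Detection, Tracker

    model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # or your own weights
    tracker = Tracker(
        distance_function=lambda d, o: np.linalg.norm(d.points - o.estimate),
        distance_threshold=30,
    )

    with mss.mss() as sct:
        monitor = sct.monitors[1]
        while True:
            # mss returns BGRA; drop alpha and flip to RGB for YOLOv5
            frame = np.ascontiguousarray(np.array(sct.grab(monitor))[:, :, :3][:, :, ::-1])
            results = model(frame)
            # each xyxy row: x1, y1, x2, y2, confidence, class
            detections = [
                Detection(np.array([(x1 + x2) / 2, (y1 + y2) / 2]))
                for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist()
            ]
            tracked_objects = tracker.update(detections=detections)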

    help wanted 
    opened by KendoClaw1 1
  • draw bounding box

    Describe the situation you are working on: I would like to thank you for this amazing git. I am using a yolov5 detection model with the Norfair tracker.

    Describe what it is that you need help with: In the part where we want to draw the bounding boxes, I initially used the function draw_tracked_boxes but got the message that this function is deprecated. Now, to use the draw_box function, I am not sure how the input should be given: should I pass the detections of yolov5, or should I pass tracked_objects?

    Additional context: I saw the part where an argument drawables is used; it is said that this should be a union of detections and tracked_objects, but the detection object is this <tracker.Detection object at 0x7f7f7580c2e8> and the tracked_object is Object_7(age: 7, hit_counter: 8, last_distance: 0.62, init_id: 1).

    Not sure what can be done.
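
    A hedged sketch of one way this may work: recent Norfair versions accept either detections or tracked objects as drawables, and passing the tracker's output keeps the ids visible; the draw_ids argument is an assumption, mirroring draw_points:

    from norfair import draw_boxes

    # Inside the video loop, after updating the tracker:
    tracked_objects = tracker.update(detections=detections)
    # Pass the tracked objects (not the raw yolov5 output) to draw their ids;
    # draw_ids is assumed to be available here, as it is for draw_points
    draw_boxes(frame, tracked_objects, draw_ids=True)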

    help wanted 
    opened by Pranav-India 2
  • Do not store past detections by default.

    Until now, the default value for past_detections_length was 4. This meant that even if you didn't use the past detections in your workflow, Norfair was still storing them, performing unnecessary operations and potentially using a great amount of memory to store embeddings or image crops.

    We now set the default value of past_detections_length to 0. If the user wants to use the TrackedObject's past detections when matching, this must be specified when instantiating the Tracker.
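
    A minimal sketch of opting back in now that the default is 0; the distance function and threshold are illustrative:

    import numpy as np
    from norfair import Tracker

    tracker = Tracker(
        distance_function=lambda d, o: np.linalg.norm(d.points - o.estimate),
        distance_threshold=30,
        past_detections_length=4,  # keep the last 4 detections per object
    )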

    opened by facundo-lezama 2
Releases
  • v2.2.0(Jan 4, 2023)

    Release notes

    We release Norfair 2.2.0, which includes a major improvement to distance-function performance as well as many minor fixes and improvements, including a refactoring of the drawing code.

    This version also drops support for Python 3.6.

    Changelog

    Features

    • New vectorized distance functions. (#211) @facundo-lezama
    • Added optimized IoU function (#226) @facundo-lezama
    • Added the flag draw_scores to the function draw_boxes. (#219) @moooises
    • Enable users to draw custom objects (#229) @javiber
    • Add estimate_velocity property to the TrackedObject class (#238) @DiegoFernandezC

    Demos

    • Introduce Norfair ROS in the README (#231) @DiegoFernandezC

    Documentation

    • Fixed typehint of draw_points and draw_boxes (#230) @javiber
    • update sahi version (#217) @fcakyon

    Other features & fixes

    • Drop support for Python 3.6. (#234) @facundo-lezama
    • Drawing refactor (#209) @javiber
    • Fix reid_hit_counter initialization. (#224) @facundo-lezama
    • Automatically deploy new doc versions (#239) @javiber
    • Make numpy a direct dependency of Norfair. (#233) @facundo-lezama
    • New issue templates (#206) @javiber
    • Set Ubuntu version to 20.04 in CI. (#222) @facundo-lezama
    • Fixed draw_box and removed infinite distance check on Tracker (#220) @javiber
    • Fixed a problem with the hex color parsing (#215) @javiber
  • v2.1.1(Nov 18, 2022)

    Release notes

    We release this patch to fix a missing dependency on Python 3.6 and 3.7. A new demo for panoptic perception and some updates to the sahi demo and documentation are included as well.

    Changelog

    Fixes

    • Fix importlib-metadata dependency bug affecting python 3.6 and 3.7 (#212) @javiber

    Demos

    • Add panoptic driving perception demo using YOLOPv2 (#204) @DiegoFernandezC
    • Update sahi demo to use their latest version (#214) @fcakyon

    Docs

    • Add Getting started guide (#208) @javiber
  • v2.1.0(Oct 19, 2022)

    Release notes

    We release Norfair 2.1, with support for Python 3.10. The rest is mostly small features and fixes, and a new demo.

    Changelog

    Features

    • Support Python 3.10 (#195) @dekked
    • Support for object count when using multiple Tracker instances (#196) @javiber
    • Added optional quality_level field to MotionEstimator class (#200) @facundo-lezama

    Demos

    • Add small object tracking demo using SAHI (#188) @JuanFKurucz

    Documentation

    • Added Contributing guide (#205) @javiber
    • Documentation site for different Norfair versions (#201) @javiber
    • Added documentation for drawers related to camera motion (#203) @javiber

    Other features & fixes

    • Update MOT17 metrics and add MOT20 results (#202) @facundo-lezama
    • Version has single source of truth in pyproject.toml (#187) @dekked
    • Fixed bug when transforming coordinates of 1-rank detections (#198) @javiber
    • Fixed the formatting of warning strings (#194) @javiber
    • Fix Python 3.6 dependency in PyPI & upgrades in pyproject.toml (#191) @dekked
    • Fix broken link to documentation in error message (#192) @dekked
    • tox works again for running tests locally (#193) @dekked
  • v2.0.0(Sep 20, 2022)

    Release notes

    We are excited to announce the release of Norfair 2.0, the biggest upgrade to Tryolabs’ open-source multi-object tracking library since its first release two years ago 🙌

    Read announcement blog post.

    Changelog

    New features

    • Support re-identification with appearance embeddings (#118) @facundo-lezama
    • Estimate camera motion using the mode of the Optical Flow (#139) @aguscas
    • Support n-dimensional tracking (#138) @aguscas
    • Add new documentation using MkDocs (#154) @javiber
    • Added option to control output video extension (#176) @javiber
    • Added predefined distance functions for typical cases (#135) @javiber

    Demos

    • Add demo in Hugging Face Spaces (#178) @DiegoFernandezC @dekked
    • Add official demo in Google Colab (#184) @DiegoFernandezC
    • Revamp demos and dockerize them for reproducibility (#146) @aguscas @dekked
    • Add YOLOv7 demo (#147) @aguscas
    • Added example with MMDetection (#134) @javiber
    • Update OpenPose demo (#81) @rocioxl
    • Added IoU function to the YOLOv5 demo (#90) @ffedee7

    Other features & fixes

    • Set optimized filter as default (#145) @aguscas
    • Add version info in __init__.py (#123) @fcakyon
    • Blackify the whole repo (#141) @javiber
    • Fixed reid video (#179, #180) @javiber
    • Avoid drawing paths of dead objects (#175) @javiber
    • Fixed dependency on GitHub Actions (#174) @javiber
    • Fixes in YOLO demos when tracking bounding boxes (#161) @javiber
    • Fix dependency on older version of rich (#160) @javiber
    • Fixed draw_tracked_boxes when draw_box is False (#150) @javiber
    • Fix many initializations in the trackers (#142) @aguscas
    • Fix profiling demo (#137) @aguscas
    • Unit tests refactor (#162) @javiber
    • Remove redundant CI lines (#116) @joaqo
    • Replace gifs for videos on demos (#177) @javiber
  • v1.0.0(May 30, 2022)

    Release notes

    One year and 8 months after the first public release, we got the API stable enough to be comfortable releasing Norfair 1.0! 🥂

    Changelog

    • Simplify API (#106) @aguscas
      • hit_inertia_min -> Removed
      • hit_inertia_max -> hit_counter_max
      • point_transience -> pointwise_hit_counter_max
    • Re-add support for Python 3.6 (#113) @joaqo
    • Add 'No filter' and 'Optimized KF' setups (#108) @aguscas
    • Add mot metrics GitHub action (#114) @joaqo
    • Create scripts to compare norfair and ByteTrack on MOT17 (#100) @aguscas
    • Profiling demo (#110) @aguscas
  • v0.4.3(Apr 19, 2022)

    Changelog

    • Add demo tracking keypoints and bounding boxes using OpenPose and YOLOv5 (#99) @facundo-lezama
    • Add documentation for draw_boxes and draw_tracked_boxes (#101) @aguscas
    • Fix bug by removing reference to 'points_of_interest' (#95) @aguscas
    • Add support for path drawing in video (#89) @aguscas
    • Added labels to the drawing functions (#93) @facundo-lezama
    • Added feature for tracking objects with different classes (#91) @facundo-lezama
  • v0.4.0(Feb 15, 2022)

    Changelog

    • Fix yolov4 demo (#87) @joaqo
    • Update metrics docs (#86) @joaqo
    • Improve docs (#85) @joaqo
    • Update dependencies due to Pillow<0.9 containing vulnerabilities (#83) @joaqo
    • :pencil2: Fixed argument parsing (#78) @Huh-David
    • Add support for storing several past detections on TrackedObjects (#74) @joaqo
    yolov4_fixed_layer_names.pth(245.97 MB)
  • v0.3.1(Jul 29, 2021)

  • v0.3.0(May 31, 2021)

    Changelog

    New features

    • [Backwards incompatible] Support custom list of colors in draw_tracked_boxes (renames line_color to border_colors and line_width to border_width). (#54) @aguscas
    • Support receiving custom Kalman filters on Tracker (#53) @aguscas

    Documentation & demos

    • Add demo of tracking cars and pedestrians using YOLOv5 (#52) @fcakyon
    • Add table with MOT17 metrics to Readme (#43) @aguscas
  • v0.2.0(Feb 17, 2021)

    Changelog

    • Make OpenCV and motmetrics optional dependencies (#40) @joaqo
    • Small refactor to metrics (#33) @aguscas
    • Add support for evaluating trackers on the MOT Challenge dataset (#17) @aguscas
  • v0.1.8(Dec 7, 2020)

    Changelog

    Improvements

    • Allow the user to select the delay added when initializing objects (435babce9bfeb7f9f47d859dbadff1e597c7520b) @joaqo
    • Improve input validation for detections (3a6189637e453c9e2c5b8bd004e7a3bcc0f379a2) @joaqo
    • Non-integer downsample_ratio (#18) @3dgiordano
    • Add GoogleColabratory YOLOv4 demo (#14) @wakamezake
    • Add type_hints (#11) @wakamezake
    • Get terminal size from native Python (#7) @3dgiordano

    Bug fixes

    • Fix path error in YOLO demo (089bad31c98f02127906f1a33c11ca322a136e01) @joaqo
    • Fix bug in draw_debug_metrics (c535edd20ca241ae0b0809b4ecc3fe085d2190b2) @joaqo
    • Fix bug with initialization_delay (a2633c9e974da38f3681d8c196682fd925fe0aaf) @joaqo
  • v0.1.7(Sep 11, 2020)

  • v0.1.6(Sep 10, 2020)
