A modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning

About

The aim of the OpenDR project is to develop a modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning to provide advanced perception and cognition capabilities, thereby meeting the general requirements of robotics applications in the application areas of healthcare, agri-food and agile production. OpenDR provides the means to link robotics applications both to software libraries (deep learning frameworks, e.g., PyTorch and TensorFlow) and to the operating environment (ROS). OpenDR focuses on the AI and Cognition core technology in order to provide tools that make robotic systems cognitive, giving them the ability to:

  1. interact with people and environments by developing deep learning methods for human-centric and environment-active perception and cognition,
  2. learn and categorize by developing deep learning tools for training and inference in common robotics settings, and
  3. make decisions and derive knowledge by developing deep learning tools for cognitive robot action and decision making.

As a result, the developed OpenDR toolkit will also enable cooperative human-robot interaction as well as the development of cognitive mechatronics, where sensing and actuation are closely coupled with cognitive systems, thus contributing to two further core technologies beyond AI and Cognition. OpenDR aims to develop, train, deploy and evaluate deep learning models that improve the technical capabilities of these core technologies beyond the current state of the art.

Installing OpenDR Toolkit

OpenDR can be installed in the following ways:

  1. By cloning this repository (CPU/GPU support)
  2. Using pip (CPU only)
  3. Using docker (CPU/GPU support)

You can find detailed installation instructions in the documentation.
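Once installed, a quick sanity check is to import the package from Python (a minimal sketch; it assumes the package-level __version__ attribute mentioned in the install-scripts PR notes further below):

```python
# Quick check that the OpenDR package is importable after installation.
# Assumes the package exposes __version__ (see the install-scripts PR below).
import opendr

print(opendr.__version__)
```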

Using OpenDR toolkit

OpenDR provides an intuitive and easy-to-use Python interface, a C API for performance-critical applications, a wealth of usage examples and supporting tools, as well as ready-to-use ROS nodes. OpenDR is built to support the Webots open-source robot simulator, and it closely follows industry standards such as the ONNX model format and the OpenAI Gym interface. You can find detailed documentation in the OpenDR wiki, as well as in the tools index.
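The tools share a common learner interface (fit(), eval(), infer(), save(), load(), optimize()). Below is a minimal inference sketch using the pose estimation tool; the exact class and method names are recalled rather than quoted from the documentation, so verify them against the docs:

```python
# Minimal inference sketch using OpenDR's common learner interface.
# Class/module paths follow the pose estimation tool; verify against the docs.
from opendr.engine.data import Image
from opendr.perception.pose_estimation import LightweightOpenPoseLearner

learner = LightweightOpenPoseLearner(device="cpu")
learner.download(path=".")        # fetch a pretrained model
learner.load("openpose_default")  # load the downloaded checkpoint

img = Image.open("example.png")   # wrap an input image in the OpenDR data type
poses = learner.infer(img)        # run inference; returns detected poses
for pose in poses:
    print(pose)
```

The same lifecycle (construct, load or fit, infer, and optionally optimize() to ONNX) applies across the toolkit's learners.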

Roadmap

OpenDR has the following roadmap:

  • v1.0 (2021): Baseline deep learning tools for core robotic functionalities
  • v2.0 (2022): Optimized lightweight and high-resolution deep learning tools for robotics
  • v3.0 (2023): Active perception-enabled deep learning tools for improved robotic perception

How to contribute

Please follow the instructions provided in the wiki.

Acknowledgments

The OpenDR project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871449.

Comments
  • Install scripts, bdist_wheel, x86 docker and instructions

    This PR adds the following:

    • [x] Scripts to install the OpenDR toolkit on clean Ubuntu 20.04 systems (even when running from a minimal image, e.g., docker ones)
    • [x] setup.py to correctly install OpenDR package
    • [x] Corrected scripts to activate OpenDR venv environment
    • [x] Scripts to create bdist wheels for CPU-only usage
    • [x] Dockerfile for assembling a CPU-only OpenDR inference image
    • [x] Readme listing different installation options
    • [x] Update wiki to reflect the changes made in this PR

    This PR also adds a missing __init__.py in the toolkit and a __version__ variable, in line with typical Python usage.

    test sources test tools 
    opened by passalis 38
  • Upgrade to CUDA 11.2 and improve GPU support

    This PR upgrades the toolkit to CUDA 11.2. This also ensures that the toolkit will be compatible with NVIDIA 30xx GPUs. For PyTorch we are using precompiled packages that bundle CUDA 11.1. This does not affect the system-wide CUDA version.
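    Since the precompiled PyTorch wheels bundle their own CUDA, the version actually in use can be confirmed from Python with standard PyTorch calls:

```python
# Check the CUDA version bundled with the installed PyTorch wheel
# (independent of the system-wide CUDA installation).
import torch

print(torch.version.cuda)         # e.g. '11.1' for the bundled-CUDA wheels
print(torch.cuda.is_available())  # True if a compatible GPU is visible
```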

    This PR also improves testing using GPUs, and fixes some documentation issues regarding the use of the OPENDR_DEVICE variable.

    Tasks to be performed

    • [x] Change the Dockerfile to use CUDA 11.2
    • [x] Update PyTorch and mxnet
    • [x] Update detectron
    • [x] Update DCNv2
    • [x] Make sure that pip installation does not need any kind of update
    • [x] Update the documentation if needed

    We need to restore the GitHub branch in the Dockerfile prior to merging.

    test sources test tools test release 
    opened by passalis 33
  • Synthetic multi view facial generator

    This is a PR for the synthetic multi-view facial image generator, which will be a standalone OpenDR tool for generating data (facial images) for procedures such as training.

    test sources test tools 
    opened by ekakalet 33
  • Mobile rl

    Hi everyone,

    This is an initial version of our approach to mobile manipulation, based on our paper (https://arxiv.org/abs/2101.05325). It's not completely ready to be merged yet, but should already include all the main parts.

    • It implements the LearnerRL interface
    • Formatted according to PEP-8, .clang-format
    • Most unnecessary functionality should already be removed
    • Includes a first version of the documentation, including examples to train and to evaluate provided checkpoints

    But there are also a few questions from my side, mainly because this is a project that relies on a Python 3-based interface for the user and an environment implemented in C++, which additionally draws on functionality from ROS (mainly MoveIt).

    • At the moment I am keeping the C++ source and header files within the module, combined with its own CMakeLists.txt (i.e. within src/control/mobile_manipulation/). Is that appropriate?
    • How should I define ROS and C++ dependencies? The module provides environments for several robotic platforms (PR2, Tiago, HSR). This means that to compile or run the module the user needs (i) a ROS installation (developed and tested for Melodic), (ii) a separate catkin_ws for each robot, and (iii) to launch a launchfile before running the Python scripts. Because of this, I feel it makes sense not to require every user of other OpenDR modules to install these, but rather to specify them per module. Some of the robot-specific dependencies should furthermore be compiled in separate catkin workspaces. The model checkpoints are tiny (3x 3 MB) and are currently located directly in the git repo. Is that OK for such small files?
    • Licenses: this module includes slightly modified launchfiles from openly available ROS packages (~/robots_worlds/[pr2/hsr/tiago]). Do these have to be marked or treated specially somehow?
    • This was developed as part of WP5.2 Deep Navigation. As there is no navigation folder and this approach can be seen as a combination of navigation and control, I have located it within control for now. Let me know in case I should move it elsewhere.

    Any help on the above would be much appreciated. Other comments on what is already in here are welcome as well.

    Some remaining to-dos for myself to remember:

    • update checkpoints
    • test that gazebo evaluation works
    • test that examples in readme work
    test sources test tools 
    opened by dHonerkamp 31
  • Skeleton based action recognition

    This PR adds two learners (which train and evaluate a baseline model and three proposed models) for skeleton-based human action recognition.

    • A new data type named SkeletonSequence is added to engine.data, and a new target class named ActionCategory is added to engine.target.

    • The learners' implementation follows the provided template, and tests are provided for all functions that will be directly called by the user, including fit(), eval(), infer(), save(), load(), optimize(), multi_stream_eval() and network_builder(); a usage sketch follows below.
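    For orientation, a rough sketch of driving such a learner through that interface (the learner class name, constructor arguments and data variables here are hypothetical placeholders, not this PR's actual identifiers; SkeletonSequence and ActionCategory are the types named above):

```python
# Hypothetical lifecycle sketch for a skeleton-based action recognition
# learner. SkeletonActionLearner, its arguments and the dataset/sequence
# variables are placeholders for illustration only.
learner = SkeletonActionLearner(device="cuda")       # placeholder class

learner.fit(dataset=train_set, val_dataset=val_set)  # train on skeleton data
results = learner.eval(val_set)                      # evaluate the model
learner.save("./checkpoints/model")                  # persist a checkpoint

learner.load("./checkpoints/model")
learner.optimize()                                   # optional (e.g. ONNX)
category = learner.infer(skeleton_sequence)          # SkeletonSequence -> ActionCategory
```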

    test sources test tools 
    opened by negarhdr 28
  • ROS2 workspace and example nodes

    This PR contains a new ROS2 (Foxy Fitzroy) workspace located in the projects directory and, for now, it serves to ~~test and finalize the structure, naming, etc.,~~ gather and finalize all ROS2 nodes in a unified PR. Right now there are no ~~docstrings~~ (docstrings added), documentation or READMEs. This description will get updated with any additions.

    Contents:

    1. opendr_perception python package
      • Contains a pose estimation node
      • Contains fall detection node
      • Contains object detection 2d centernet/detr/ssd/yolov3 nodes
      • Contains face detection retinaface node
      • Contains face recognition node
      • Contains semantic segmentation bisenet node
      • ~~Contains a subscriber tester node (tester), that subscribes to the messages published by the pose estimation node for testing~~ Testing can be performed as described in steps 9 and 10 of Building and Running below.
    2. opendr_ros2_bridge python package
      • Contains bridge.py which includes a class with methods to convert images, poses, etc. from and to ROS2 messages
      • This uses cv_bridge which is included in the vision_opencv package
    3. opendr_ros2_messages CMake package

    The logic behind the structuring of the packages and nodes is similar to OpenDR's ROS1 packages/nodes.
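    As a rough illustration of the bridge pattern described above (the class and method names here mirror the ROS1 bridge and are assumptions; check bridge.py for the actual API):

```python
# Sketch of converting between ROS2 and OpenDR image types inside a node
# callback. Import path and method names are assumptions based on the
# package layout above; consult bridge.py for the real API.
from opendr_ros2_bridge import ROS2Bridge      # assumed import path
from sensor_msgs.msg import Image as ROS2Image

bridge = ROS2Bridge()

def image_callback(ros_msg: ROS2Image):
    img = bridge.from_ros_image(ros_msg)       # ROS2 Image -> OpenDR Image
    # ... run an OpenDR learner on img here ...
    out_msg = bridge.to_ros_image(img)         # OpenDR Image -> ROS2 Image
    return out_msg
```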

    Below you can find instructions to install, build and run the nodes for testing. Note that I did everything on a system with ROS1 already installed.

    I faced many issues along the way that might reappear in a fresh install of ROS2, etc., so if any problems/errors occur while following the instructions, please get in touch with me to possibly save you some time.

    Installation

    • To install ROS2 I followed this tutorial (section 2), which installs the 'foxy' release of ROS2. (Note that in '(7) configure environment variables', you need to replace dashing with foxy.)
    • Edit: At this point you might need to run sudo apt-get install ros-foxy-vision-msgs as discussed below
    • Install colcon: basically just sudo apt install python3-colcon-common-extensions
    • Install ros2 usb cam to test with a local webcam. In my case I use ros2 run usb_cam usb_cam_node_exe to run it after installation, which seems to work fine

    Building and Running

    1. Navigate to your OpenDR installation and activate it as usual
    2. Navigate to workspace root, opendr_ws_2 directory
    3. Install cv_bridge via the instructions in its README, excluding the last step (build). There seems to be no need to build it, as it will get built along with the rest of the packages later.
    4. Navigate to the workspace root (opendr_ws_2) as the previous step leaves you inside vision_opencv dir
    5. Run colcon build
    6. Run . install/setup.bash
    7. Run ros2 run opendr_perception pose_estimation to start the pose estimation node (or any other existing node)
    8. In a new terminal run ros2 run usb_cam usb_cam_node_exe to grab images from a webcam
    9. In a new terminal run ros2 run rqt_image_view rqt_image_view and select the corresponding topic to view the image result
    10. In a new terminal run ros2 topic echo opendr/poses to view the pose message. Note that it is not really human-readable in that form; it should be read in another node and converted into an OpenDR pose object to gain access to human-friendly print methods (see the sketch after this list).
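    As a rough sketch of such a consumer node (the rclpy scaffolding is standard; the bridge import path, method name and message type are assumptions based on the packages listed above):

```python
# Minimal ROS2 subscriber sketch that converts incoming pose messages back
# into OpenDR pose objects. The message type, bridge import path and
# from_ros_poses() method are assumptions, not verified against the PR.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray  # assumed pose message type
from opendr_ros2_bridge import ROS2Bridge     # assumed import path


class PoseListener(Node):
    def __init__(self):
        super().__init__("pose_listener")
        self.bridge = ROS2Bridge()
        self.create_subscription(Detection2DArray, "opendr/poses",
                                 self.callback, 10)

    def callback(self, msg):
        for pose in self.bridge.from_ros_poses(msg):
            print(pose)  # OpenDR poses have human-friendly printing


rclpy.init()
rclpy.spin(PoseListener())
rclpy.shutdown()
```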

    * If you are using conda, check out Illia's comment down below. Thanks @iliiliiliili !

    To be added

    ROS2 nodes to be added, according to the ROS1 nodes that already exist:


    Perception package:

    • [x] Object detection 2D detr (update from original author) (#296)
    • [x] Video activity recognition (#323)
    • [x] RGBD hand gesture recognition (#341)
    • [x] Panoptic segmentation EfficientPS (#270)
    • [x] Heart anomaly detection (#337)
    • [x] Speech command recognition (#340)
    • [x] Audiovisual emotion recognition (#342)
    • [x] Skeleton based action recognition (#344)
    • [x] Landmark-based facial expression recognition (#345)
    • [x] Image-based facial emotion estimation (new tool #264, #346)
    • [x] Object detection 2D gem (#295)
    • [x] Object detection 2D YOLOv5 (added in #360, I will directly add the ROS2 node on ros2 branch)
    • [x] Object detection 2D Nanodet (added in #278, I will directly add the ROS2 node on ros2 branch)
    • [x] Object tracking 2D SiamRPN (added in #367, ~~I will directly add the~~ WIP ROS2 node on ros2 branch)
    • [x] High resolution pose estimation (added in #356, I will directly add the ROS2 node on ros2 branch)
    • [x] Image dataset (#319)
    • [x] Point cloud dataset (#319)
    • [x] Object detection 3D voxel (#319)
    • [x] Object tracking 2D deep sort (#319)
    • [x] Object tracking 2D fair mot (#319)
    • [x] Object tracking 3D ab3dmot (#319)

    Data generation package:

    • [x] Synthetic facial recognition (#288)

    Simulation package:

    • [x] Human model generation client/service (#291)

    Planning package:

    • [x] End to end planner (this is new for ROS1 too) (~~#286, new PR will be opened for ROS2~~ #358)

    Edit 1: Updated the last steps of the instructions as well as the contents list as per the latest changes. Edit 2: Added information in the contents list about the new opendr_ros2_messages; added a TODO list for remaining nodes.

    enhancement test sources test release 
    opened by tsampazk 24
  • Fer va estimation

    This PR adds image-based facial expression recognition and valence-arousal estimation. It includes the learner, unit tests, a demo, documentation, and a ROS node. This is a replacement for a previous PR which had conflicts with other tools.

    test sources test tools 
    opened by negarhdr 23
  • Panoptic segmentation

    Hi, this PR adds the EfficientPS network. The original repo can be found here.

    ~~Please also check issue #90.~~

    Todos:

    • [x] Upload pre-trained models to OpenDR server and adjust the URLs in efficient_ps_learner.py.
    • [x] Add unit tests
    • [x] Add documentation to /docs/reference
    • [x] Merge Heatmap implementation with version proposed in #100 ~~and updates pending in #98~~ ~~Install CUDA in GitHub CI~~ --> will not be resolvable since the code requires GPUs. See comment.

    Known issues:

    • ~~reason for failing tests: 3rd party dependencies assume an existing pytorch installation since they attempt to load torch in their setup.py~~
    test sources test tools 
    opened by vniclas 21
  • End to end planning

    Hi All,

    This is an initial version of our method for end-to-end local planning. It's not completely ready to be merged yet, but should already include all the main parts.

    • It implements the LearnerRL interface
    • Formatted according to PEP-8
    • Includes a first version of the documentation

    Remaining to-dos for myself:

    • Tests for code
    test sources test tools 
    opened by halil93ibrahim 21
  • Mobile rl 2

    Creating a new PR due to a force push. See #68 for the initial discussion. To recap the open points from the initial PR:

    • the license on the tiago urdf -> contacted PAL
    • the unittests -> currently blocked by the missing linting support for typing
    test sources test tools 
    opened by dHonerkamp 19
  • rosnode - rgbd_hand_gesture_recognition.py - parameter

    Parameters need to be made consistent with the other tools using argparse.

    I am using OpenDR installed on my computer from the develop branch; I am feeding the RGB camera topic and the depth_image topic as shown in the attached screenshot.

    I cannot get an output from the /opendr/gestures topic. Is the depth_image topic different from the one that you are using?

    opened by thomaspeyrucain 16
  • Fix package creator

    The creation was successful; however, the root package wasn't uploaded due to a missing newline in packages.txt. I've uploaded the missing one manually; this is just to ensure everything is fine.

    test sources test release 
    opened by ad-daniel 1
  • C api implementations

    The following PR contains:

    1. More tools in the C API.
    2. New data structures for tensor manipulation in C.
    3. A better JSON parser with arrays and floats.
    4. Docs.
    5. Small changes in face_recognition and nanodet_jit (naming parameters as described in the wiki).
    6. Tests for the new data structures and tools.
    7. Python API bug fixes in OpenPose and FairMOT for ONNX optimizations.
    enhancement test sources 
    opened by ManosMpampis 1
  • Several tools have deprecation warnings, especially those relying on numpy

    As emerged in https://github.com/opendr-eu/opendr/pull/381, without the upper restriction on numpy, version 1.24.0 might be installed, and in it several deprecations have expired. Even when things work, several deprecation warnings are printed when running the tests. Both issues should be addressed.
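    A typical example of such an expired deprecation (illustrative; not necessarily the exact call sites in the toolkit): the np.float alias, deprecated since numpy 1.20, was removed in numpy 1.24.

```python
import numpy as np

# Fails with AttributeError on numpy >= 1.24 (the alias expired):
# x = np.array([1.0, 2.0], dtype=np.float)

# Portable replacements that work on old and new numpy versions:
x = np.array([1.0, 2.0], dtype=float)       # builtin float
y = np.array([1.0, 2.0], dtype=np.float64)  # explicit numpy dtype
```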

    bug 
    opened by ad-daniel 0
  • ROS1 Object Tracking 2D DeepSort error with input from webcam

    I was unable to find it documented, so I am opening a new issue with the following error for the DeepSort ROS1 node:

    [ERROR] [1671020211.255272]: bad callback: <bound method ObjectTracking2DDeepSortNode.callback of <__main__.ObjectTracking2DDeepSortNode object at 0x7edf5c2f28>>
    Traceback (most recent call last):
      File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
        cb(msg)
      File "/opendr/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_deep_sort_node.py", line 105, in callback
        tracking_boxes = self.learner.infer(image_with_detections, swap_left_top=True)
      File "/opendr/src/opendr/perception/object_tracking_2d/deep_sort/object_tracking_2d_deep_sort_learner.py", line 289, in infer
        result = self.tracker.infer(image, frame_id, swap_left_top=swap_left_top)
      File "/opendr/src/opendr/perception/object_tracking_2d/deep_sort/algorithm/deep_sort_tracker.py", line 81, in infer
        bbox_xywh[:, 3:] *= 1.2
    IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
    

    From what I found, the node works properly when provided with images from the image_dataset_node, but throws this error when taking input from a webcam.
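    The traceback suggests that bbox_xywh arrives as a 1-D array (e.g., a single detection squeezed to shape (4,)) where the line at deep_sort_tracker.py:81 expects shape (N, 4). A rough guard illustrating the failure mode (a sketch only, not the project's actual fix):

```python
import numpy as np

# A single detection squeezed to shape (4,) reproduces the IndexError:
bbox_xywh = np.array([10.0, 20.0, 30.0, 40.0])

# Guard: promote to shape (1, 4) so 2-D column slicing is always valid.
bbox_xywh = np.atleast_2d(bbox_xywh)
bbox_xywh[:, 3:] *= 1.2  # the line that failed in deep_sort_tracker.py
print(bbox_xywh)
```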

    bug 
    opened by tsampazk 2
  • ROS2 Node for EfficientLPS

    Hi all,

    This PR adds a ROS2 node for EfficientLPS. It should be merged after #359 is merged, and will remain a draft until then. It includes the EfficientLPS node and a PointCloud2 publisher node.

    test sources test tools 
    opened by aselimc 0
Releases (v2.0.0)
  • v2.0.0 (Dec 30, 2022)

    Released on December 31st, 2022.

    New Features:

    • Added YOLOv5 as an inference-only tool (#360).
    • Added Continual Transformer Encoders (#317).
    • Added Continual Spatio-Temporal Graph Convolutional Networks tool (#370).
    • Added AmbiguityMeasure utility tool (#361).
    • Added SiamRPN 2D tracking tool (#367).
    • Added Facial Emotion Estimation tool (#264).
    • Added High resolution pose estimation tool (#356).
    • Added ROS2 nodes for all included tools (#256).
    • Added missing ROS nodes and homogenized the interface across the tools (#305).

    Bug Fixes:

    • Fixed BoundingBoxList, TrackingAnnotationList, BoundingBoxList3D and TrackingAnnotationList3D confidence warnings (#365).
    • Fixed undefined image_id and segmentation for COCO BoundingBoxList (#365).
    • Fixed Continual X3D ONNX support (#372).
    • Fixed several issues with ROS nodes and improved performance (#305).
  • v1.1.1 (Jun 30, 2022)

  • v1.1 (Jun 14, 2022)

    Released on June 14th, 2022.

    New Features:

    • Added end-to-end planning tool (https://github.com/opendr-eu/opendr/pull/223).
    • Added the seq2seq-nms module, along with other custom NMS implementations for 2D object detection (https://github.com/opendr-eu/opendr/pull/232).

    Enhancements:

    • Added support for modular pip packages allowing tools to be installed separately (https://github.com/opendr-eu/opendr/pull/201).
    • Simplified the installation process for pip by including the appropriate post-installation scripts (https://github.com/opendr-eu/opendr/pull/201).
    • Improved the structure of the toolkit by moving io from utils to engine.helper (https://github.com/opendr-eu/opendr/pull/201).
    • Added support for post-install scripts and opendr dependencies in .ini files (https://github.com/opendr-eu/opendr/pull/201).
    • Updated toolkit to support CUDA 11.2 and improved GPU support (https://github.com/opendr-eu/opendr/pull/215).
    • Added a standalone pose-based fall detection tool (https://github.com/opendr-eu/opendr/pull/237).

    Bug Fixes:

    • Updated the wheel building pipeline to include missing files and removed unnecessary dependencies (https://github.com/opendr-eu/opendr/pull/200).
    • panoptic_segmentation/efficient_ps: updated dataset preparation scripts to create correct validation ground truth (https://github.com/opendr-eu/opendr/pull/221).
    • panoptic_segmentation/efficient_ps: added specific configuration files for the provided pretrained models (https://github.com/opendr-eu/opendr/pull/221).
    • c_api/face_recognition: pass key by const reference in json_get_key_string() (https://github.com/opendr-eu/opendr/pull/221).
    • pose_estimation/lightweight_open_pose: fixed height check on transformations.py according to original tool repo (https://github.com/opendr-eu/opendr/pull/242).
    • pose_estimation/lightweight_open_pose: fixed two bugs where ONNX optimization failed on specific learner parameterization (https://github.com/opendr-eu/opendr/pull/242).

    Dependency Updates:

    • heart anomaly detection: upgraded scikit-learn runtime dependency from 0.21.3 to 0.22 (https://github.com/opendr-eu/opendr/pull/198).
    • Relaxed all dependencies to allow future versions of non-critical tools to be used (https://github.com/opendr-eu/opendr/pull/201).
  • v1.0 (Dec 31, 2021)

    This is the first public version of the OpenDR toolkit, which provides baseline deep learning tools for core robotic functionalities. The first version includes (among others):

    • an intuitive and easy-to-use Python interface
    • a wealth of usage examples and supporting tools
    • ready-to-use ROS nodes
    • a partial C API

    You can find detailed installation instructions in the OpenDR repository, while detailed documentation can be found in the OpenDR wiki.
