Keras Realtime Multi-Person Pose Estimation - a Keras version of the Realtime Multi-Person Pose Estimation project

Overview

This repository is not compatible with TensorFlow 2.0, which is now the latest and recommended version. Instead of painfully refactoring this code, I created a fresh new repository with some additional features:

  • Training code for a smaller model based on MobileNetV2.

  • Visualisation of predictions (heatmaps, PAFs) in TensorBoard.

  • Additional scripts to convert and test models for TensorFlow Lite.

Here is the link to the new repo: tensorflow_Realtime_Multi-Person_Pose_Estimation


Realtime Multi-Person Pose Estimation (DEPRECATED)

This is a Keras version of the Realtime Multi-Person Pose Estimation project.

Introduction

Code repo for reproducing the 2017 CVPR paper using Keras.

This is a new, improved version. The main objective was to remove the dependency on the separate C++ server, which, besides being complex to compile, contained some bugs and was very slow. The old version utilizing rmpe_dataset_server is still available under the tag v0.1 if you would like to take a look.

Results

 

Contents

  1. Converting caffe model
  2. Testing
  3. Training
  4. Changes

Requirements

  1. Keras
  2. Caffe - Docker is required if you would like to convert the Caffe model to a Keras model. You don't have to compile/install Caffe on your local machine.

Converting Caffe model to Keras model

The authors of the original implementation released a trained Caffe model which you can use to extract the weights.

  • Download the Caffe model: cd model; sh get_caffe_model.sh
  • Dump the Caffe layers to numpy data: cd ..; docker run -v [absolute path to your keras_Realtime_Multi-Person_Pose_Estimation folder]:/workspace -it bvlc/caffe:cpu python dump_caffe_layers.py. Note that Docker accepts only absolute paths, so you have to set the full path to the folder containing this project.
  • Convert the Caffe model (from numpy data) to a Keras model: python caffe_to_keras.py (a sketch of the weight transfer follows this list)
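
For orientation, below is a minimal sketch of what the numpy-to-Keras weight transfer amounts to. This is not the repo's caffe_to_keras.py: the .npy file naming and the layer name are illustrative assumptions, and the transpose maps Caffe's (out, in, h, w) kernel layout to Keras' (h, w, in, out).

```python
# Hypothetical sketch - not the actual caffe_to_keras.py.
import numpy as np

def load_caffe_conv(keras_model, layer_name, dump_dir):
    """Copy one convolution's weights from dumped numpy files into a Keras layer."""
    w = np.load('%s/%s_w.npy' % (dump_dir, layer_name))  # Caffe kernel: (out, in, h, w)
    b = np.load('%s/%s_b.npy' % (dump_dir, layer_name))  # Caffe bias: (out,)
    w = np.transpose(w, (2, 3, 1, 0))                    # -> Keras kernel: (h, w, in, out)
    keras_model.get_layer(layer_name).set_weights([w, b])
```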

Testing steps

  • Convert the Caffe model to a Keras model, or download the already converted Keras model: https://www.dropbox.com/s/llpxd14is7gyj0z/model.h5
  • Run the notebook demo.ipynb.
  • Run python demo_image.py --image sample_images/ski.jpg for the single-picture demo. The result will be stored in the file result.png. You can use any image file as input. (A minimal inference sketch follows this list.)
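
Below is a minimal inference sketch, assuming the converted model.h5 stores the full model (if it only holds weights, build the network first and call load_weights instead). The input size and normalisation are assumptions borrowed from the original approach; demo_image.py is the authoritative reference for preprocessing and for decoding the outputs into keypoints.

```python
# Illustrative only - see demo_image.py for the real preprocessing and decoding.
import cv2
import numpy as np
from keras.models import load_model

model = load_model('model.h5')                 # assumes a fully saved model

img = cv2.imread('sample_images/ski.jpg')      # BGR image, as used in the demo
inp = cv2.resize(img, (368, 368)).astype(np.float32)
inp = inp / 256.0 - 0.5                        # assumed input normalisation
outputs = model.predict(inp[np.newaxis, ...])  # part affinity field and heatmap branches
# demo_image.py shows how these tensors are upsampled and decoded into keypoints.
```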

Training steps

  • Install gsutil: curl https://sdk.cloud.google.com | bash. This is a really helpful tool for downloading large datasets.
  • Download the dataset (~25 GB): cd dataset; sh get_dataset.sh
  • Download the official COCO toolbox into dataset/coco/.
  • Install pycocotools: cd coco/PythonAPI; sudo python setup.py install
  • Go to the training folder: cd ../../../training
  • Optionally, set the number of processes used to generate samples in parallel: in dataset.py, find the line df = PrefetchDataZMQ(df, nr_proc=4) (see the sketch after this list).
  • Run the training in a terminal: python train_pose.py
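
For reference, here is a minimal sketch of the tensorpack DataFlow pattern behind that line. It is not the repo's full pipeline from training/dataset.py, the list source is only a placeholder, and note that newer tensorpack releases rename PrefetchDataZMQ to MultiProcessRunnerZMQ.

```python
# Placeholder sketch of multi-process sample generation with tensorpack.
from tensorpack.dataflow import DataFromList, PrefetchDataZMQ

df = DataFromList([[i] for i in range(1000)], shuffle=False)  # stand-in for the COCO dataflow
df = PrefetchDataZMQ(df, nr_proc=4)  # raise nr_proc to generate samples in more processes

df.reset_state()                     # must be called once before iterating
for datapoint in df.get_data():      # a DataFlow behaves like a normal Python iterator
    pass                             # each datapoint would feed the Keras training loop
```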

Changes

25/06/2018

  • Performance improvement thanks to replacing the C++ server rmpe_dataset_server with tensorpack dataflow. Tensorpack is a very efficient library for preprocessing and data loading for TensorFlow models. A DataFlow object behaves like a normal Python iterator, but it can generate samples using many processes. This significantly reduces the latency while the GPU waits for the next sample.

  • Masks are generated on the fly - there is no need to run separate scripts to generate them. In fact, most of the masks were all-positive (nothing to mask out).

  • Masking out the discarded persons who are too close to the main person in the picture, so that the network never sees unlabelled people. Previously we only filtered out the keypoints of such smaller persons, but they were still visible in the picture. (A conceptual sketch of this masking follows this list.)

  • Incorrect handling of masks has been fixed. The rmpe_dataset_server sometimes assigned a wrong mask to the image, misleading the network.
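
A conceptual sketch of that on-the-fly masking, under the assumption that the discarded persons are available as COCO annotations; the function and variable names are illustrative, not the repo's.

```python
# Conceptual sketch - not the repo's mask generation code.
import numpy as np

def build_miss_mask(coco, img_height, img_width, discarded_anns):
    """Return a mask that is 0 over discarded persons and 1 elsewhere.

    Multiplying the loss by this mask means the network is never penalised
    on unlabelled people.
    """
    mask = np.ones((img_height, img_width), dtype=np.float32)
    for ann in discarded_anns:
        person = coco.annToMask(ann)  # pycocotools: segmentation -> binary map
        mask[person > 0] = 0.0
    return mask
```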

26/10/2017

Fixed a problem with the training procedure. Here are my results after training for 5 epochs = 25000 iterations (1 epoch is ~5000 batches). The loss values are quite similar to those in the original training - output.txt

Results of running python demo_image.py --image sample_images/ski.jpg --model training/weights.best.h5 with a model trained for only 25000 iterations. Not too bad! Training on my single 1070 GPU took around 10 hours.

22/10/2017

Augmented samples are fetched from the server. The network never sees the same image twice, which was a problem with the previous approach (the rmpe_dataset_transformer tool). This allows you to run augmentation locally or on a separate node. You can start 2 instances, one serving the training set and a second serving the validation set (on different ports if run locally).

Related repository

Citation

Please cite the paper in your publications if it helps your research:

@InProceedings{cao2017realtime,
  title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2017}
}