Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data

Overview

LiDAR-MOS: Moving Object Segmentation in 3D LiDAR Data

This repo contains the code for our paper: Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data PDF.

Our approach accurately segments the scene into moving and static objects, e.g., distinguishing between moving cars and parked cars. It runs faster than the frame rate of the sensor and can be used to improve 3D LiDAR-based odometry/SLAM and mapping results, as shown below.

Additionally, we created a new benchmark for LiDAR-based moving object segmentation based on SemanticKITTI here.

The complete demo video can be found on YouTube here. LiDAR-MOS in action:

Table of Contents

  1. Introduction of the repo and benchmark
  2. Publication
  3. Dependencies
  4. How to use
  5. Applications
  6. Collection of downloads
  7. License

Publication

If you use our code and benchmark in your academic work, please cite the corresponding paper:

@article{chen2021ral,
    title={{Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data}},
    author={X. Chen and S. Li and B. Mersch and L. Wiesmann and J. Gall and J. Behley and C. Stachniss},
    year={2021},
    journal={IEEE Robotics and Automation Letters (RA-L)},
    doi = {10.1109/LRA.2021.3093567},
    issn = {2377-3766},
}

Dependencies

We built and tested our work based on SalsaNext, RangeNet++ and MINet. We thank the original authors for their nice work and implementation. If you are interested in fast LiDAR-based semantic segmentation, we strongly recommend having a look at the original repositories.

Note that in this repo we show how easily LiDAR-based moving object segmentation can be achieved by exploiting sequential information with existing segmentation networks. We did not change the original pipeline of the segmentation networks, but only the data loader and the input of the network, as shown in the figure below. Therefore, our method can be used with any range-image-based LiDAR segmentation network.
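
To make this concrete, here is a minimal sketch of what that input change amounts to: the usual five range-image channels (range, x, y, z, remission) with N residual images appended as extra channels. The channel layout and helper names are illustrative assumptions, not the repo's actual code.

  import numpy as np

  def build_mos_input(range_img, xyz_img, remission_img, residual_imgs):
      # range_img: (H, W), xyz_img: (H, W, 3), remission_img: (H, W)
      # residual_imgs: list of N (H, W) residual images from past scans
      base = np.concatenate([range_img[None],             # (1, H, W)
                             xyz_img.transpose(2, 0, 1),  # (3, H, W)
                             remission_img[None]],        # (1, H, W)
                            axis=0)                       # -> (5, H, W)
      residuals = np.stack(residual_imgs, axis=0)         # (N, H, W)
      return np.concatenate([base, residuals], axis=0)    # (5 + N, H, W)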

Our method is based on range images. To use the range projection with a fast C++ library, please see the usage doc here.
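
For reference, below is a minimal numpy sketch of such a spherical range projection (the C++ library is just a faster variant of the same idea). The image size and vertical field of view follow the common KITTI HDL-64E convention and are assumptions:

  import numpy as np

  def range_projection(points, H=64, W=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
      """Project an (N, 3) point cloud into an (H, W) spherical range image."""
      points = points[np.linalg.norm(points, axis=1) > 0]  # drop degenerate points
      fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
      fov = abs(fov_up) + abs(fov_down)

      depth = np.linalg.norm(points, axis=1)
      yaw = -np.arctan2(points[:, 1], points[:, 0])
      pitch = np.arcsin(points[:, 2] / depth)

      # normalized image coordinates in [0, 1]
      proj_x = 0.5 * (yaw / np.pi + 1.0)
      proj_y = 1.0 - (pitch + abs(fov_down)) / fov

      # scale to image size and clamp to valid pixel indices
      px = np.clip(np.floor(proj_x * W), 0, W - 1).astype(np.int32)
      py = np.clip(np.floor(proj_y * H), 0, H - 1).astype(np.int32)

      # write far points first so the closest point per pixel wins
      order = np.argsort(depth)[::-1]
      image = np.full((H, W), -1.0, dtype=np.float32)
      image[py[order], px[order]] = depth[order]
      return image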

How to use

For a quick test of all the steps below, one can download a toy dataset here and decompress it into the data/ folder, following the data structure described in data/README.md.

Prepare training data

To use our method, one needs to generate the residual images. Here is a quick demo:

  $ python3 utils/gen_residual_images.py

More data preparation settings can be found in the yaml file config/data_preparing.yaml. To prepare the training data for the whole KITTI-Odometry dataset, please download it from the original website.
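
For intuition, here is a minimal sketch of how one residual image can be formed, following the idea in the paper: transform a past scan into the current frame using the odometry poses, re-project it, and take the normalized absolute range difference. The range_projection helper and the 4x4 KITTI-style pose matrices are assumptions; the actual implementation is utils/gen_residual_images.py.

  import numpy as np

  def gen_residual(curr_range, prev_points, pose_curr, pose_prev, project):
      """curr_range: (H, W) range image of the current scan;
      prev_points: (N, 3) points of a past scan; poses: 4x4 matrices."""
      # relative transform taking the previous frame into the current one
      T = np.linalg.inv(pose_curr) @ pose_prev
      hom = np.hstack([prev_points, np.ones((len(prev_points), 1))])
      prev_in_curr = (hom @ T.T)[:, :3]

      prev_range = project(prev_in_curr)  # e.g. the range_projection sketch above
      valid = (curr_range > 0) & (prev_range > 0)  # pixels observed in both scans
      residual = np.zeros_like(curr_range)
      residual[valid] = np.abs(curr_range[valid] - prev_range[valid]) / curr_range[valid]
      return residual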

Using SalsaNext as the baseline

To use SalsaNext as the baseline segmentation network for LiDAR-MOS, one should follow mos_SalsaNext/README.md to set it up.

Inferring

To generate the LiDAR-MOS predictions with a pretrained model for a quick test on the toy dataset, run:

  $ cd mos_SalsaNext/train/tasks/semantic
  $ python3 infer.py -d ../../../../data -m ../../../../data/model_salsanext_residual_1 -l ../../../../data/predictions_salsanext_residual_1_new -s valid

To infer the whole dataset, please download the KITTI-Odometry dataset from the original website and change the corresponding paths:

  $ cd mos_SalsaNext/train/tasks/semantic
  $ python3 infer.py -d path/to/kitti/dataset -m path/to/pretrained_model -l path/to/log -s train/valid/test # depending on the desired split to evaluate

Training

To train a LiDAR-MOS network with SalsaNext from scratch, one has to download the KITTI-Odometry and SemanticKITTI datasets, change the corresponding paths, and run:

  $ cd mos_SalsaNext/train/tasks/semantic
  $ ./train.sh -d path/to/kitti/dataset -a salsanext_mos.yml -l path/to/log -c 0  # the GPU(s) to use

Using RangeNet++ as the baseline

To use RangeNet++ as the baseline segmentation network for LiDAR-MOS, one should follow the mos_RangeNet/README.md to set it up.

Inferring

To infer the whole dataset, please download the KITTI-Odometry dataset from the original website and the [pretrained model](todo: add pretrained model for rangenet), and change the corresponding paths:

  $ cd mos_RangeNet/tasks/semantic
  $ python3 infer.py -d path/to/kitti/dataset -m path/to/pretrained_model -l path/to/log -s train/valid/test # depending on the desired split to evaluate

Training

To train a LiDAR-MOS network with RangeNet++ from scratch, one has to download the KITTI-Odometry and SemanticKITTI datasets, change the corresponding paths, and run:

  $ cd mos_RangeNet/tasks/semantic
  $ python3 train.py -d path/to/kitti/dataset -ac rangenet_mos.yaml -l path/to/log

More pretrained models and LiDAR-MOS predictions can be found in the collection of downloads.

Evaluation and visualization

How to evaluate

Evaluation metrics. Let's denote the moving (dynamic) status as D and the static status as S.

Since we ignore the unlabelled and invalid status, there are only two classes in MOS:

| GT \ Prediction | dynamic | static |
|-----------------|---------|--------|
| dynamic         | TD      | FS     |
| static          | FD      | TS     |

$$ IoU_{MOS} = \frac{TD}{TD+FD+FS} $$
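
In code, the metric boils down to counting the entries of this 2x2 confusion matrix. A minimal sketch on boolean per-point masks (the actual evaluation script is utils/evaluate_mos.py):

  import numpy as np

  def iou_mos(gt_moving, pred_moving, valid):
      """gt_moving, pred_moving: boolean (N,) masks; valid masks out the
      unlabelled/invalid points, which are ignored as described above."""
      gt, pred = gt_moving[valid], pred_moving[valid]
      td = np.sum(gt & pred)    # true dynamic
      fd = np.sum(~gt & pred)   # false dynamic
      fs = np.sum(gt & ~pred)   # false static
      return td / float(td + fd + fs)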

To evaluate the MOS results on the toy dataset just run:

  $ python3 utils/evaluate_mos.py -d data -p data/predictions_salsanext_residual_1_valid -s valid

To evaluate the MOS results on our LiDAR-MOS benchmark please have a look at our semantic-kitti-api and benchmark website.

How to visualize the predictions

To visualize the MOS results on the toy dataset just run:

  $ python3 utils/visualize_mos.py -d data -p data/predictions_salsanext_residual_1_valid -s 8  # here we use a specific sequence number

where:

  • sequence is the sequence to be accessed.
  • dataset is the path to the kitti dataset where the sequences directory is.

Navigation:

  • n is next scan,
  • b is previous scan,
  • esc or q exits.

Applications

LiDAR-MOS is very important for building consistent maps, making future state predictions, avoiding collisions, and planning. It can also improve and robustify pose estimation, sensor data registration, and SLAM. Here we show two obvious applications of our LiDAR-MOS: LiDAR-based odometry/SLAM and 3D mapping. Before that, we show two simple examples of how to combine our method with semantics and how to clean the scans. With the cleaned scans, we obtain better odometry/SLAM and 3D mapping results.

Note that here we show two direct use cases of our MOS approach without any further optimization.

Enhanced with semantics

To show a simple way of combining our LiDAR-MOS with semantics, we provide a quick demo with the toy dataset:

  $ python3 utils/combine_semantics.py

It simply checks whether the predicted moving objects belong to movable semantic classes; if not, they are re-assigned as static.
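
A minimal sketch of that check is below. The movable class ids are placeholders, and the moving/static label ids assume the SemanticKITTI-MOS convention; the actual logic is in utils/combine_semantics.py.

  import numpy as np

  # hypothetical set of semantic class ids that can move at all (car, person, ...)
  MOVABLE_CLASSES = np.array([10, 11, 13, 15, 18, 20, 30, 31, 32])

  def combine_semantics(mos_labels, sem_labels, moving_id=251, static_id=9):
      """Keep a 'moving' prediction only where the semantic class is movable."""
      moving = mos_labels == moving_id
      not_movable = ~np.isin(sem_labels, MOVABLE_CLASSES)
      cleaned = mos_labels.copy()
      cleaned[moving & not_movable] = static_id  # re-assign as static
      return cleaned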

Clean the scans

To clean the LiDAR scans with our LiDAR-MOS as masks, we also provide a quick demo on the toy dataset:

  $ python3 utils/scan_cleaner.py
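
The idea is simply to drop every point whose prediction says "moving". A minimal sketch, assuming KITTI-style .bin scans and .label files (lower 16 bits hold the label) and the SemanticKITTI-MOS moving id:

  import numpy as np

  def clean_scan(scan_path, label_path, moving_id=251):
      points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)  # x, y, z, remission
      labels = np.fromfile(label_path, dtype=np.uint32) & 0xFFFF        # semantic label part
      return points[labels != moving_id]  # keep only points predicted static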

Odometry/SLAM

Using the cleaned LiDAR scans, we see that simply applying our MOS predictions as a preprocessing mask improves the odometry results on both the KITTI training and test data, even slightly surpassing the carefully designed, full-class, semantics-enhanced SuMa++.

The test results of our method can also be found on the KITTI-Odometry benchmark.

Mapping

We compare the aggregated point cloud maps built (left) directly from the raw LiDAR scans and (right) from the LiDAR scans cleaned using our MOS predictions as masks. As can be seen, moving objects pollute the raw map, which can have adverse effects when the map is used for localization or path planning. By using our MOS predictions as masks, we effectively remove these artifacts and obtain a clean map.

Collection of downloads

License

This project is free software made available under the MIT License. For details see the LICENSE file.

Comments
  • How to use SalsaNet with my own dataset?

    Hi, I have read the paper and built and run your LiDAR-MOS.
    Thanks for sharing your awesome project here.

    I have a question: how can I use the code with my own dataset?
    I'll use the pretrained model you uploaded, so I think all I need to do is convert my data into the appropriate format for LiDAR-MOS.
    My data consists of .bag and .pcd files.

    I'd appreciate any advice.

    Best regards.

    opened by bigbigpark 15
  • Questions about LIDAR-MOS visualization

    Hello, your LiDAR-MOS code is very good, but I have a problem: I cannot visualize the results when reproducing your code. After running the visualization command, the program seems to be stuck, and I don't know why; I want to get visualized results like yours. P.S. Author reply: judging from the output, it seems to be a tkinter problem, but the tkinter module does not seem to be missing. If anyone knows how to solve it, I hope you can help me, thanks.

    opened by MrNeoJeep 13
  • TRAIN BUGS

    Thanks for your quick response! @Chen-Xieyuanli

    I trained salsa_mos on SemanticKITTI. When the run reached

    Lr: 3.977e-03 | Update: 3.258e-04 mean,5.209e-04 std | Epoch: [0][950/2391] | Time 0.641 (0.623) | Data 0.081 (0.067) | Loss 0.6863 (0.9777) | acc 0.830 (0.855) | IoU 0.417 (0.434) | [1 day, 3:19:41]
    LiDAR-MOS/mos_SalsaNext/train/tasks/semantic/../../common/laserscan.py:166: RuntimeWarning: invalid value encountered in true_divide
      pitch = np.arcsin(scan_z / depth)

    I got this error:

    File "conda/envs/salsa/lib/python3.9/site-packages/torch/_utils.py", line 457, in reraise
      raise exception
    IndexError: Caught IndexError in DataLoader worker process 1.

    and

    File "LiDAR-MOS/mos_SalsaNext/train/tasks/semantic/../../common/laserscan.py", line 201, in do_range_projection
      self.proj_range[proj_y, proj_x] = depth
    IndexError: index -2147483648 is out of bounds for axis 0 with size 64

    The conda env is:

    python=3.9.12=h12debd9_0
    python-flatbuffers=1.12=pyhd3eb1b0_0
    pytorch=1.11.0=py3.9_cuda11.3_cudnn8.2.0_0
    pytorch-mutex=1.0=cuda
    tensorflow-base=2.6.0=mkl_py39h3d85931_0
    tensorflow-estimator=2.6.0=pyh7b7c402_0

    Is there anything wrong with this env? Is it too new? Or is there something wrong with the data in the residual images?

    opened by LiXiang0021 10
  • Multiple input frames

    Thank you for your great code!

    I notice in rangenet_mos.yaml and salsanext_mos.yaml you only use one input frame. If I want to use multiple scans for training, how can I do it?

    As far as I know, I need to change n_input_scans in both backbone and dataset, and transform in the dataset. What else do I need?

    Another question is about the pose transform. When using a sequence of scans to train the model, why do you transform the pose to the last scan but not the first scan? here

    opened by Trainingzy 9
  • Migrating the model to Livox Horizon

    Thanks for your excellent work. I downloaded and inferred the toy dataset you provide and the performance is good. Now I want to use the model on my Livox Horizon, which has an 80x25 FoV and a point density similar to a 64-beam LiDAR. When I use your tools to generate the range and residual images, I just change pose.txt and calib.txt to my own and everything is alright; I can generate correct range and residual images. But when I try to infer on my data using the model, I get an error in the range image generation function in SalsaNext. What are the possible reasons?

    opened by Psyclonus2887 8
  • dims of multi_residuals_images, thanks!

    Dear author,

    If n_input_scans = 2, is the dimension of proj_full 12, i.e., (x, y, z, r, e, x, y, z, r, e, residual1, residual2)?

    Is that right?

    I'm so sorry to bother you; this really confuses me.

    Thanks.

    opened by emilyemliyM 8
  • Question about loading the pretrained salsanext model

    Hi!

    Thanks so much for the code! I have a question about loading the pretrained SalsaNext model. When I followed the steps outlined in the "How to use" section and tested on the toy example, I came across an issue when trying to run infer.py on the toy dataset (python3 infer.py -d ../../../../data -m ../../../../data/model_salsanext_residual_1 -l ../../../../data/predictions_salsanext_residual_1_new -s valid) and got this error:

    RuntimeError: ../../../../data/model_salsanext_residual_1/SalsaNext_valid_best is a zip archive (did you mean to use torch.jit.load()?)

    I tried to switch from torch.load() to torch.jit.load() in user.py as suggested, but it leads to other errors. What did I do wrong, or did I miss something along the way? I set up the environment according to the instructions linked on GitHub (using PyTorch 1.1).

    Thank you in advance for your help!

    opened by maneekwant 8
  • How can I train 'SalsaNext' successfully? (A problem occurred when training 'SalsaNext')

    Hi, thanks for sharing your great code. I'm trying to go through the whole process of your work, but I can't train SalsaNext.

    I tried :

    ./train.sh -d ../../../../dataset/KITTI_dataset/velodyne_laser/dataset/ -a salsanext_mos.yml -l logs/ -c 0
    

    learning process arrived at :

    Lr: 5.944e-03 | Update: 2.381e-04 mean,3.611e-04 std | Epoch: [0][11370/19130] | Time 0.203 (0.204) | Data 0.030 (0.031) | Loss 0.3839 (0.2800) | acc 0.962 (0.980) | IoU 0.685 (0.517) | [7 days, 18:10:40]
    

    and the error msgs i got :

    proj_full = torch.cat([proj_full, torch.unsqueeze(eval("proj_residuals_" + str(i+1)), 0)])
    RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 2048 and 2058 in dimension 2 at ../aten/src/TH/generic/THTensor.cpp:711
    

    Is the conda environment setting wrong? currently using :

    tensorboard               1.13.1                   pypi_0    pypi
    tensorboard-data-server   0.6.1                    pypi_0    pypi
    tensorboard-plugin-wit    1.8.0                    pypi_0    pypi
    tensorflow                1.13.1                   pypi_0    pypi
    tensorflow-estimator      1.13.0                   pypi_0    pypi
    

    Also, could you check the links in Collection of downloads again? They seem to be inaccessible now. Thank you!

    bug 
    opened by Sunghooon 7
  • prediction labels in toy dataset

    Hi,

    I see there are some segmentation results already present in the toy dataset.

    The one that says salsanext, does it use one residual image?

    Best Regards Sambit

    opened by SM1991CODES 6
  • How to use the pretrained model to test my own Lidar Scans?

    Thanks for your work! Now I want to clean my own Lidar Scans.

    1. Use infer.py to get the MOS prediction labels.
    2. Then use utils/scan_cleaner.py to clean the scans.

    Is that right? If not, can you give me some advice? Thanks a lot!
    opened by Cxz-dev 5
  • How to change the "n_input_scans"?

    I only changed the "arch_cfg.yaml" in the model, but there is an error: size mismatch for module.downCntx.conv1.weight: copying a param with shape torch.Size([32, 6, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 13, 1, 1]). So how do I change the "n_input_scans"? Thank you :)

    opened by pikaqiu0131 5
  • About converting my own LiDAR poses to KITTI-format poses

    Hi @Chen-Xieyuanli! Thanks for your code! Now I have another question. I know the code needs poses as input (e.g., in gen_residual_images.py), but when I use my own LiDAR without a camera, how can I get the pose.txt from LiDAR odometry? I think the poses need to be transformed into the LiDAR coordinate frame, so can I just use the pose (1x3) in the LiDAR frame that comes from LiDAR odometry as input?

    opened by Cxz-dev 3
  • How to re-train when training is unexpectedly interrupted?

    Hi, thank you for your wonderful open-source work. I would like to know whether training can continue from the last checkpoint when it is unexpectedly interrupted, and if so, how to do it.

    opened by beyounged 8
  • RuntimeError("grad can be implicitly created only for scalar outputs")

    Hello, thank you for your great work. I am training my own dataset and encountered the following error.

    Ignoring class  0  in IoU evaluation
    [IOU EVAL] IGNORE:  tensor([0])
    [IOU EVAL] INCLUDE:  tensor([1, 2])
    Lr: 3.106e-05 | Update: 2.258e-01 mean,4.181e-01 std | Epoch: [0][0/322] | Time 3.170 (3.170) | Data 0.154 (0.154) | Loss 1.9250 (1.9250) | acc 0.533 (0.533) | IoU 0.363 (0.363) | [1 day, 20:35:54]
    Traceback (most recent call last):
      File "/content/LiDAR-MOS/mos_SalsaNext/train/tasks/semantic/train.py", line 178, in <module>
        trainer.train()
      File "../../tasks/semantic/modules/trainer.py", line 274, in train
        show_scans=self.ARCH["train"]["show_scans"])
      File "../../tasks/semantic/modules/trainer.py", line 391, in train_epoch
        loss_m.backward()
      File "/usr/local/lib/python3.7/dist-packages/torch/_tensor.py", line 396, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py", line 166, in backward
        grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
      File "/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py", line 67, in _make_grads
        raise RuntimeError("grad can be implicitly created only for scalar outputs")
    RuntimeError: grad can be implicitly created only for scalar outputs
    
    optimizer.zero_grad()
    if self.n_gpus > 1:
        idx = torch.ones(self.n_gpus).cuda()
        loss_m.backward(idx)
    else:
        loss_m.backward()  # here I got the error
    optimizer.step()
    

    I have looked up the error on Google, and it usually happens when using two or more GPUs. However, I am using only one GPU and still got this error. Could you please help me solve it?

    opened by e1339g 1
  • Training setups (tested with different GPUs)

    Dear author,

    Thanks for the sharing code.

    I'm trying to reproduce the metrics from the paper, but haven't been successful yet. Could I ask about the training parameters and the hardware used for the experiments? Also, regarding indicators such as IoU in the paper, do you mean mIoU or just the IoU of the moving class?

    Thanks!

    good first issue 
    opened by emilyemliyM 6
  • Tweaking the model for partial azimuth FOV Lidar

    Hi, my LiDAR's azimuth FOV is only ~100 [deg]. What would be the best way to tweak the model or some configuration so it will work? Currently the range images (and also the residual images) are very sparse at the right and left sides, and I think that is one of the reasons for the bad performance I get. Thanks

    opened by boazMgm 7