Multiple objects detection, tracking and classification from LIDAR scans/point-clouds

Sample demo of multiple object tracking using LIDAR scans

PCL-based ROS package, implemented in C++, to detect/cluster, track, and classify static and dynamic objects in real time from LIDAR scans.

Features:

  • K-D tree based point-cloud processing for object feature detection
  • Unsupervised Euclidean cluster extraction (3D) or k-means clustering with RANSAC-based refinement (2D) on the detected features (see the sketch after this list)
  • Stable tracking (object IDs & data association) with an ensemble of Kalman filters
  • More robust than k-means clustering with mean-flow tracking
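
For concreteness, here is a minimal sketch of the K-D tree based Euclidean cluster extraction stage using standard PCL APIs. This is an illustration, not this package's exact code; the function name clusterCloud and the tolerance/cluster-size values are assumptions to be tuned for your sensor.

#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// Group nearby points into object clusters using a K-D tree for neighbor search.
std::vector<pcl::PointIndices> clusterCloud(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  // K-D tree over the input cloud for fast nearest-neighbor queries.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  // Euclidean cluster extraction: points closer than the tolerance
  // (in metres) end up in the same cluster.
  std::vector<pcl::PointIndices> cluster_indices;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.3);   // illustrative value; tune for your sensor
  ec.setMinClusterSize(10);
  ec.setMaxClusterSize(600);
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);
  ec.extract(cluster_indices);
  return cluster_indices;
}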

Usage:

Follow the steps below to use this package (multi_object_tracking_lidar):

  1. Create a catkin workspace (if you do not already have one set up).
  2. Navigate to the src folder of your catkin workspace: cd ~/catkin_ws/src
  3. Clone this repository: git clone https://github.com/praveen-palanisamy/multiple-object-tracking-lidar.git
  4. Build the package: cd ~/catkin_ws && catkin_make
  5. Source the workspace to add it to your ROS environment: source ~/catkin_ws/devel/setup.bash
  6. Run the kf_tracker ROS node from this package: rosrun multi_object_tracking_lidar kf_tracker

If all went well, the ROS node should now be up and running. As long as point clouds are being published on the filtered_cloud topic, this node publishes its outputs on the obj_id, cluster_0, cluster_1, …, cluster_5 topics, along with markers on the viz topic that you can visualize in RViz. A minimal adapter node that feeds filtered_cloud is sketched below.
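
If your sensor driver publishes on a different topic, a small adapter node can republish its clouds on filtered_cloud. Below is a minimal sketch under that assumption; the node name cloud_relay and the input topic /velodyne_points are placeholders for illustration, not names defined by this package.

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Forwards clouds from an example input topic ("/velodyne_points")
// to the "filtered_cloud" topic that kf_tracker subscribes to.
ros::Publisher pub;

void cloudCallback(const sensor_msgs::PointCloud2::ConstPtr& msg)
{
  pub.publish(*msg);  // republish the incoming cloud unchanged
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_relay");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::PointCloud2>("filtered_cloud", 1);
  ros::Subscriber sub = nh.subscribe("/velodyne_points", 1, cloudCallback);
  ros::spin();
  return 0;
}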

Supported point-cloud streams/sources:

The input point-clouds can be from:

  1. A real LiDAR,
  2. A simulated LiDAR,
  3. A point-cloud dataset, or
  4. Any other data source that produces point clouds.

Note: This package expects valid point-cloud data as input. The point clouds you publish to the "filtered_cloud" topic must not contain NaNs. Point-cloud filtering is somewhat task- and application-dependent, so it is not done by this module. The PCL library provides the pcl::removeNaNFromPointCloud(...) method to filter out NaN points; a minimal example is sketched below.
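
A minimal sketch of such a filter, assuming pcl::PointXYZ points; the helper name removeNaNs is a placeholder for illustration.

#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/filter.h>  // pcl::removeNaNFromPointCloud

// Returns a copy of the input cloud with NaN points removed,
// suitable for publishing to "filtered_cloud".
pcl::PointCloud<pcl::PointXYZ>::Ptr
removeNaNs(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& input)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  std::vector<int> kept_indices;  // indices of the surviving points in `input`
  pcl::removeNaNFromPointCloud(*input, *output, kept_indices);
  return output;
}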

Citing

If you use the code or snippets from this repository in your work, please cite:

@software{praveen_palanisamy_2019_3559187,
  author       = {Praveen Palanisamy},
  title        = {{praveen-palanisamy/multiple-object-tracking-lidar: 
                   Multiple-Object-Tracking-from-Point-Clouds_v1.0.2}},
  month        = dec,
  year         = 2019,
  publisher    = {Zenodo},
  version      = {1.0.2},
  doi          = {10.5281/zenodo.3559187},
  url          = {https://doi.org/10.5281/zenodo.3559186}
}

Wiki

Check out the Wiki pages:

  1. Multiple-object tracking from pointclouds using a Velodyne VLP-16