Survey - 3D Point Cloud Fusion
3D Point Cloud Fusion
The key to 3D point cloud fusion is finding 3D point correspondences, from which a similarity transformation can be estimated to roughly align the point clouds; algorithms such as ICP then register them accurately (a minimal sketch of this pipeline follows the list below). Robustly determining 3D point correspondences between noisy point clouds requires expressive descriptors. There are two main types of 3D feature descriptors to date:
- Multi-view texture-based descriptor [1].
- Local geometry-based descriptor. 3D geometry can be described in different native 3D formats, including voxel grids and polygon meshes. This approach is better suited to dense and watertight point clouds.
2.1 Voxel grid. Use a binary voxel representation, e.g., 3D ShapeNets [5].
2.2 Polygon mesh. Use surface normals (1st order), surface curvature (2nd order), etc., e.g., [2].
2.3 Combination of the two above.
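To make the coarse-to-fine pipeline above concrete, below is a minimal sketch in Python (NumPy/SciPy), assuming the descriptor-matching step has already produced matched keypoint pairs `src_kp`/`dst_kp`; those names, the parameter values, and the point-to-point ICP variant are illustrative assumptions, not the method of any cited paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_similarity(src, dst, with_scale=True):
    """Closed-form (Umeyama-style) fit of a similarity (or rigid) transform
    mapping matched correspondences src[i] -> dst[i]; src, dst are (N, 3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)                        # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = U @ D @ Vt                                    # proper rotation, det(R) = +1
    s = np.trace(np.diag(S) @ D) * len(src) / (sc ** 2).sum() if with_scale else 1.0
    t = mu_d - s * R @ mu_s
    return s, R, t

def icp(src, dst, s, R, t, iters=30, max_corr_dist=0.05):
    """Point-to-point ICP refinement, starting from the rough alignment (s, R, t)."""
    tree = cKDTree(dst)
    cur = s * src @ R.T + t
    for _ in range(iters):
        dists, idx = tree.query(cur)                  # nearest-neighbour correspondences
        keep = dists < max_corr_dist                  # reject distant (likely wrong) pairs
        if keep.sum() < 3:
            break
        _, dR, dt = estimate_similarity(cur[keep], dst[idx[keep]], with_scale=False)
        cur = cur @ dR.T + dt                         # apply the rigid update
        R, t = dR @ R, dR @ t + dt                    # accumulate it into the result
    return s, R, t

# Usage (hypothetical inputs): src_kp/dst_kp are matched keypoints from descriptor
# matching; src/dst are the full clouds to be registered.
#   s, R, t = estimate_similarity(src_kp, dst_kp)    # rough alignment
#   s, R, t = icp(src, dst, s, R, t)                 # accurate registration
```

The rough alignment from the matched keypoints absorbs any scale difference between the clouds; the ICP loop then refines only rotation and translation against nearest-neighbour correspondences.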
Multi-View Fusion
- View pooling [1]: per-view feature maps are merged by element-wise max pooling, overlaying the varying local structures seen from different views into a single representation (see the sketch below).
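A rough sketch of the idea in Python/NumPy: view pooling takes the stack of per-view CNN feature maps and reduces it with an element-wise max over the view axis, so the most salient local structure visible in any view survives into the single pooled descriptor. The shapes and the random toy input below are purely illustrative assumptions.

```python
import numpy as np

def view_pool(view_features):
    """Element-wise max pooling over the view axis.
    view_features: (num_views, C, H, W) per-view CNN feature maps.
    Returns a single (C, H, W) pooled feature map."""
    return view_features.max(axis=0)

# Toy usage: 12 rendered views with illustrative feature-map shapes.
feats = np.random.rand(12, 256, 7, 7)
pooled = view_pool(feats)   # (256, 7, 7), passed on to the remaining layers / classifier
```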
References
[1] Multi-view Convolutional Neural Networks for 3D Shape Recognition
[2] Aligning Point Cloud Views using Persistent Feature Histograms
[3] Comparison of 3D interest point detectors and descriptors for point cloud fusion
[4] Multi-View 3D Object Detection Network for Autonomous Driving
[5] 3D ShapeNets: A Deep Representation for Volumetric Shapes
[6] Location Recognition Using Prioritized Feature Matching