Virtual Interest Point for Registration
Point Cloud, Virtual Interest Point, Feature Extraction, Feature Descriptor, Registration, 3D Sensor, 3D Vision, Machine Vision, Mapping, Computer Vision
A new method is presented for the registration of two partially overlapping point clouds that is robust to noise and data density variation and offers improved computational efficiency. The registration is driven by establishing correspondences of virtual interest points, which do not exist in the original point cloud data and are defined by the intersection of three implicit surfaces extracted from the point cloud. Implicit surfaces exist in abundance in both natural and built environments and can be used to represent stable regions in the data, which in turn leads to repeatable virtual interest points. Large regions in a point cloud can be represented by a few implicit surfaces, which reduces the computational cost of registration and also makes the algorithm robust to noise and data density variations. The main contribution of this work is the representation of the point cloud as implicit surfaces, which results in repeatable interest points. The effect of noise is reduced during the modelling phase. Additionally, the feature descriptors computed for the virtual interest points differ significantly from state-of-the-art techniques: surface properties and their relationships with each other are used to define a descriptor that is more robust to data density variations than conventional support-region-based descriptors. Furthermore, the transformation between two point clouds can be computed from only one true correspondence, which makes the technique efficient compared to recently proposed competing techniques. Experiments were performed on 11 data sets to characterize robustness to noise and data density variations as well as computational efficiency. The data sets included natural scenes, such as plants and rocks, and indoor architectural scenes, such as offices and laboratories. Several 3D models were also tested for registration to demonstrate the generality of the technique.
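To illustrate the core idea of a virtual interest point, the sketch below intersects three implicit surfaces in the simplest case, planes, by solving a 3×3 linear system. This is an illustrative assumption for exposition, not the authors' implementation, which supports more general implicit surfaces:

```python
import numpy as np

def virtual_interest_point(normals, offsets):
    """Intersect three implicit planes n_i . x = d_i.

    normals: 3x3 array whose rows are the plane normals n_i.
    offsets: length-3 array of offsets d_i.
    Returns the unique intersection point when the normals are
    linearly independent; this point need not coincide with any
    measured point in the cloud, hence "virtual" interest point.
    """
    N = np.asarray(normals, dtype=float)
    d = np.asarray(offsets, dtype=float)
    if abs(np.linalg.det(N)) < 1e-9:
        raise ValueError("planes are (nearly) parallel; no unique intersection")
    return np.linalg.solve(N, d)

# Example: the planes x = 1, y = 2, z = 3 meet at the point (1, 2, 3).
p = virtual_interest_point(np.eye(3), [1.0, 2.0, 3.0])
```

Because the planes are fitted to large, stable regions of the cloud, the intersection point is repeatable even when the raw measurements are noisy or unevenly sampled.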
A range of sensors was used to collect the data sets, including the Microsoft Kinect version 1 and version 2, Swiss Ranger, and a NextEngine 3D scanner. For most data sets, the proposed method outperformed the Iterative Closest Point (ICP), Generalized Iterative Closest Point, a 2.5D SIFT-based RANSAC method, Super 4-Point Congruent Sets (4PCS), Super Generalized 4PCS (SG4PCS), and Go-ICP methods in registering overlapping point clouds, achieving both a higher success rate and a lower computational cost. The convergence rate of the virtual interest point (VIP) method exceeded 75% for all data sets. The minimum improvement ratio for computational efficiency was more than 1.5 compared to 4PCS, SG4PCS, and Go-ICP for all data sets except Sailor.
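The claim that one true correspondence suffices can be illustrated as follows: a VIP carries the three surface normals that meet at it, so a single matched pair of VIPs provides a full local frame in each cloud. The sketch below (a hedged illustration under the assumption of planar surfaces with known, linearly independent normals; not the authors' code) recovers the rigid transform from one such correspondence:

```python
import numpy as np

def transform_from_one_vip(p1, n1, p2, n2):
    """Rigid transform (R, t) from a single VIP correspondence.

    p1, p2: matched virtual interest points (3-vectors).
    n1, n2: 3x3 arrays whose rows are the three surface normals
            meeting at each point, assumed linearly independent
            and listed in corresponding order.
    """
    A = np.asarray(n1, dtype=float).T  # columns = normals in cloud 1
    B = np.asarray(n2, dtype=float).T  # columns = normals in cloud 2
    R = B @ np.linalg.inv(A)           # maps each n1_i onto n2_i
    # Project onto the nearest rotation (polar decomposition via SVD)
    # to absorb small noise in the estimated normals.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    t = np.asarray(p2, dtype=float) - R @ np.asarray(p1, dtype=float)
    return R, t
```

In contrast, point-only correspondence methods need at least three non-collinear matches to fix a rigid transform, which is one reason the VIP approach can prune the correspondence search so aggressively.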