New Algorithms for 3D Registration Using Raw Point Techniques
Abstract
Registration of two 3D point clouds is a problem encountered in many domains of 3D computer vision. A correct correspondence between the overlapping portions of the point clouds can be used to generate a transform that registers the two clouds. The search for correspondences can be done using techniques that attempt to find similarity between local or global surface geometry. However, such methods have their limitations: global methods fail when there is significant surface occlusion, while local ones degrade in performance in the face of noise and outliers. In cases of significant occlusion, noise, and outliers, it is best to rely on a large number of correspondences between very small subsets of points [3]. The higher the number of correspondences, the more likely we are to find the correspondence that represents the correct transformation to achieve registration. In this thesis, we introduce a generalization of these subsets that allows us to control their degree of ambiguity. Our generalization provides a way to optimize the number of correspondences so as to achieve the maximum speed-up without sacrificing robustness. We show that, for the problem of offline registration, we can achieve a speed-up factor of up to 4.4x using our generalized version of the algorithm. We also use our generalization with an improved version of 4PCS [52], and show that we can achieve further efficiency improvements over the state of the art in raw point registration.
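To make the correspondence-to-transform step concrete, the sketch below shows how a rigid transform can be recovered from a small set of corresponding points using the standard SVD-based Kabsch/Umeyama solution. It is a minimal illustration only, written in Python with NumPy (an assumption; the thesis does not prescribe an implementation), and is not the thesis's code.

    import numpy as np

    def rigid_transform_from_correspondences(P, Q):
        """Estimate the rigid transform (R, t) mapping points P onto points Q.

        P, Q: (N, 3) arrays of corresponding points, N >= 3.
        Uses the SVD-based Kabsch/Umeyama closed-form solution.
        """
        centroid_P = P.mean(axis=0)
        centroid_Q = Q.mean(axis=0)
        # Cross-covariance of the centred point sets.
        H = (P - centroid_P).T @ (Q - centroid_Q)
        U, _, Vt = np.linalg.svd(H)
        # Reflection correction keeps R a proper rotation (det(R) = +1).
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = centroid_Q - R @ centroid_P
        return R, t

Given even a handful of correct correspondences between the overlapping portions of the two clouds, this closed-form step yields the aligning rotation and translation; the difficulty addressed in this thesis lies in finding such correspondences robustly in the presence of occlusion, noise, and outliers.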
In addition to the generalization of the 4-Points Congruent Sets method, we present a novel RANSAC framework for 3D registration. Unlike the standard RANSAC approach, our approach requires sampling only two points, thus reducing the worst-case time complexity of the algorithm. We present two flavours of this approach and evaluate them by comparing them to 4PCS and Super 4PCS. We achieve a speed-up of up to 57x over 4PCS and are on par with Super 4PCS in most cases.
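For context, the standard RANSAC baseline referred to above samples a minimal set of three correspondences per iteration, estimates a candidate rigid transform, and keeps the hypothesis with the most inliers. The sketch below illustrates only that generic baseline, under assumed names and parameters (ransac_register, the inlier threshold, the iteration count), and reuses the rigid_transform_from_correspondences helper sketched earlier; the two-point variants proposed in this thesis are not reproduced here.

    import numpy as np

    def ransac_register(P, Q, candidate_pairs, n_iters=1000, inlier_thresh=0.05, seed=None):
        """Generic RANSAC baseline for rigid 3D registration.

        candidate_pairs: (M, 2) integer array of putative correspondences
            (index into P, index into Q), e.g. obtained from feature matching.
        Returns the best (R, t) found and its inlier count.
        """
        rng = np.random.default_rng(seed)
        candidate_pairs = np.asarray(candidate_pairs)
        best = (np.eye(3), np.zeros(3), 0)
        for _ in range(n_iters):
            # Minimal sample: three correspondences determine a rigid transform.
            sample = candidate_pairs[rng.choice(len(candidate_pairs), 3, replace=False)]
            R, t = rigid_transform_from_correspondences(P[sample[:, 0]], Q[sample[:, 1]])
            # Score the hypothesis by counting correspondences it explains.
            residuals = np.linalg.norm(P[candidate_pairs[:, 0]] @ R.T + t
                                       - Q[candidate_pairs[:, 1]], axis=1)
            inliers = int((residuals < inlier_thresh).sum())
            if inliers > best[2]:
                best = (R, t, inliers)
        return best

The number of iterations needed to hit an all-inlier minimal set grows with the size of that set, which is why reducing the sample from three points to two, as proposed here, lowers the worst-case cost of the search.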
URI for this record
http://hdl.handle.net/1974/24252