Machine Vision Based Inspection: Case Studies on 2D Illumination Techniques and 3D Depth Sensors
Yan, Michael T.
This paper investigates two distinct but related topics in machine vision. The first is the effect of lighting on the performance of a 2D vision-based inspection system. The lighting component of machine vision is often overlooked; this work quantifies its impact on existing machine vision algorithms. The second topic explores applications of a data-rich 3D vision sensor that provides depth data across a wide range of ambient lighting conditions for industrial use, with a focus on inspection systems built on the sensor's depth data.

Three basic lighting geometries were compared quantitatively, using discriminant analysis, on an inspection task that checked for the presence of J-clips on an aluminum carrier. Two LabVIEW® machine vision algorithms were used to evaluate backlight, bright field, and dark field illumination on their ability to minimize the span of the pass (clip present) and fail (clip absent) sample sets and to maximize the separation between these sets. Results showed clear performance differences among the lighting geometries, with over a 30% change in performance. Although it has long been accepted that choosing lighting for a machine vision system is not a trivial exercise, this paper provides a quantitative measure of the impact lighting has on the performance of feature-based machine vision.

The Microsoft Kinect® is a commercial vision sensor that simultaneously provides a colour video stream, comparable to current webcam technologies, and a depth stream that gives three-dimensional information about the camera's field of view and is invariant to environmental lighting. An experiment was carried out to characterize the sensor's accuracy and precision, and to evaluate its performance as an inspection system for determining the orientation of a wheel.
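The span and separation measures used above to compare lighting geometries could, for illustration, be computed as follows. This is a minimal sketch: the score values are hypothetical, and the simple range-based definitions are an assumption, not necessarily the exact discriminant formulation used in the study.

```python
def span(scores):
    """Span of a sample set: the range of feature scores it covers."""
    return max(scores) - min(scores)

def separation(pass_scores, fail_scores):
    """Distance between the closest edges of the two sets
    (assumes pass scores sit above fail scores); negative means overlap."""
    return min(pass_scores) - max(fail_scores)

# Hypothetical pattern-match scores for one lighting geometry:
# clip present (pass) vs. clip absent (fail).
pass_scores = [0.82, 0.85, 0.80, 0.88]
fail_scores = [0.35, 0.40, 0.30, 0.42]

pass_span = span(pass_scores)                        # tight pass set is better
gap = separation(pass_scores, fail_scores)           # larger gap is better
```

Under these definitions, a lighting geometry that shrinks each set's span while widening the gap between the sets makes the pass/fail decision threshold more robust.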
Further tests determined the effect of changes in the physical environment, including camera height, lighting, and surface material, on the Kinect's performance. Results showed that the sensor has an average precision of ±0.12 cm and an average accuracy of 0.5 cm, each changing by less than 30% as physical features were varied. A discriminant analysis was performed to measure inspection performance; set separation changed by less than 30%, but set span did not, and no trends were apparent relating the change in set span to the changes in physical features.
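As a rough illustration of how precision and accuracy figures like those above could be derived from repeated depth readings: the readings below are hypothetical, and the definitions (sample standard deviation for precision, absolute mean error for accuracy) are a common convention assumed here, not necessarily the thesis's exact procedure.

```python
from statistics import mean, stdev

def characterize(readings_cm, true_distance_cm):
    """Precision as the sample standard deviation of repeated readings;
    accuracy as the absolute error of their mean versus ground truth."""
    precision = stdev(readings_cm)
    accuracy = abs(mean(readings_cm) - true_distance_cm)
    return precision, accuracy

# Hypothetical repeated depth readings of a flat target at 150.0 cm.
readings = [150.4, 150.6, 150.3, 150.7, 150.5]
precision_cm, accuracy_cm = characterize(readings, 150.0)
```

Repeating such a characterization while varying camera height, lighting, and surface material would show how much each environmental factor shifts the two figures.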