Student teams earn prizes in EECS 556: Image Processing (Winter 2011)
Congratulations to the winning students!
Two teams earned prizes in the graduate-level course, EECS 556: Image Processing, thanks to the sponsorship of KLA-Tencor. The course, taught this past term by Prof. Jeff Fessler, covers the theory and application of digital image processing, with applications in biomedical imaging, time-varying imagery, robotics, and optics. Kris Bhaskar (VP Software Engineering) and Mohan Mahadevan (Scientist), representatives from KLA-Tencor, and Prof. Dave Neuhoff, Associate Chair for ECE, judged the projects.
KLA Challenge: Detection of Defects in Integrated Circuits
First place went to the project, “Detection of Defects in Integrated Circuits,” by Xiyu Duan, Chris Fink, Hao Sun and Meng Wu. Each student received their own iPad.
In this project, KLA-Tencor challenged students to develop automated algorithms to detect as many defects as possible in images of integrated circuits, with a zero false-positive rate. KLA-Tencor supplied twelve image triplets, five with defects and seven without (each triplet consists of two defect-free images and one image with potential defects). KLA-Tencor also supplied a simple baseline algorithm that detected 154 out of 209 defects. The team tried many approaches in their attempt to improve upon KLA’s algorithm, including image interpolation and alignment, Fourier filtering, principal component analysis (PCA), Markov random field segmentation, morphological image processing, and wavelet analysis. In the end, their best algorithm employed some simple pre-processing steps followed by median filtering, and it successfully detected 182 defects with a zero false-positive rate.
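The core idea of reference-based defect detection described above can be sketched in a few lines: difference the test image against both defect-free references, median-filter the difference to suppress isolated noise, then threshold. This is a minimal illustration only, not the team's actual code; the threshold value and the pure-NumPy 3×3 median filter are assumptions for the sake of the example.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter using edge padding (pure NumPy sketch)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the nine shifted views of the padded image, take the
    # per-pixel median across them.
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def detect_defects(test_img, ref_a, ref_b, threshold=50.0):
    """Flag pixels that differ from BOTH defect-free references.

    Taking the minimum difference over the two references discards
    variation that appears in only one reference; the median filter
    then suppresses isolated noise pixels before thresholding.
    """
    diff = np.minimum(np.abs(test_img - ref_a), np.abs(test_img - ref_b))
    return median3x3(diff) > threshold

# Hypothetical example: flat references, one 3x3 defect, one noise pixel.
ref = np.full((16, 16), 100.0)
test = ref.copy()
test[5:8, 5:8] = 200.0   # a real 3x3 defect
test[12, 12] = 200.0     # a single noisy pixel
mask = detect_defects(test, ref, ref.copy())
```

In this toy case the center of the 3×3 defect survives the median filter while the lone noisy pixel is rejected, which is the property that helps drive the false-positive rate toward zero.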
Using Optical Flow Plane Detection and Depth Maps for Augmented Reality
Second place went to the project, “Using Optical Flow Plane Detection and Depth Maps for Augmented Reality,” by Leng-Chun Chen, Yu-Hui Chen, Yi-Sing Hsiao and Srinath Sridhar. Each student received an iPod touch.
In this project, the three-dimensional (3D) geometry of a given scene was reconstructed from pairs of images through dominant plane detection and depth map estimation. The dominant plane was detected by comparing the optical flow and planar flow of two sequential images, and the depth map was estimated from a disparity map obtained from an image pair, together with parameters trained on images of known depth. The combined information provides a reconstructed 3D geometry for registering virtual objects into the real scene, which is the essence of augmented reality (AR). AR typically requires knowing the geometry of the scene along with the camera positions; currently, it is common to place specific ‘marker’ patterns in the scene to mark object positions. With the students’ method, 3D spatial templates can be built for AR without placing markers.
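The disparity-to-depth step above rests on the standard pinhole stereo relation, depth = focal length × baseline / disparity. A minimal sketch follows; the focal length and baseline values are hypothetical, and the trained-parameter refinement the team applied is omitted here.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map to metric depth.

    Uses the pinhole stereo relation Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in meters, and d
    the per-pixel disparity in pixels. Pixels with zero (or negative)
    disparity carry no depth information and are marked infinite.
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Hypothetical camera: 700 px focal length, 10 cm baseline.
# A 10 px disparity then corresponds to 7 m depth.
depth = depth_from_disparity(np.array([[10.0, 0.0]]), focal_px=700.0,
                             baseline_m=0.1)
```

Larger disparities map to nearer points, which is why disparity maps and depth maps carry the same information once the camera parameters are known.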