Vol: 57(71) No: 4 / December 2012

Range Image Smoothing and Viewpoint Normalized Images for Feature Based Landmark Detection

Viktor Kovács
Department of Automation and Applied Informatics, Budapest University of Technology and Economics, Faculty of Electrical Engineering and Informatics, Magyar Tudósok krt. 2., 1117 Budapest, Hungary, phone: (+36 1) 463-2857, e-mail: kovacsv@aut.bme.hu

Gábor Tevesz
Department of Automation and Applied Informatics, Budapest University of Technology and Economics, Faculty of Electrical Engineering and Informatics, Magyar Tudósok krt. 2., 1117 Budapest, Hungary, e-mail: tevesz@aut.bme.hu

Keywords: image feature extraction, landmark detection, viewpoint invariance, SLAM

Abstract

Feature extraction algorithms are used to preprocess images and extract relevant information from them. A feature consists of two parts: a keypoint and a feature descriptor. Advanced feature extraction algorithms (such as SIFT and SURF) offer robust keypoint detection and distinctive descriptor generation. Slight changes in brightness, contrast, translation, rotation, scale or viewpoint do not affect their feature matching abilities, so these algorithms are widely used in machine vision for mobile robotics. As a robot moves through its environment, the perceived images are subject to many distortions (changes in viewpoint, brightness, rotation, scale, translation, etc.), so landmark detection algorithms must be robust and tolerate these changes. Viewpoint invariance is especially important in SLAM applications. In this paper we evaluate a method that improves viewpoint invariance by supplementing intensity images with the additional information provided by range images. We also present a method for smoothing range images of low depth resolution in order to detect the planes required by the former algorithm. Range images are smoothed by locally fitting surfaces using pixel data near the borders caused by depth quantization. In the smoothed range images local normals are estimated and a histogram of their directions is generated; planar surfaces are identified from the histogram peaks. The intensity image data is then viewpoint normalized and features are extracted. We compare feature point matching with and without the improvement.
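To make the normal-histogram step of the abstract concrete, the following is a minimal sketch of estimating per-pixel normals from a range image and picking dominant plane orientations from a histogram of normal directions. It is not the paper's implementation: the camera intrinsics fx, fy, cx, cy, the use of central differences of back-projected points, and the azimuth/elevation binning are all assumptions made for illustration.

```python
# Sketch only: dense range image (H x W, metres), known pinhole intrinsics.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a range image into one 3-D point per pixel (H x W x 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))

def estimate_normals(points):
    """Per-pixel unit normals from cross products of local tangent vectors."""
    du = np.gradient(points, axis=1)   # derivative along image columns
    dv = np.gradient(points, axis=0)   # derivative along image rows
    n = np.cross(du, dv)
    return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-9)

def dominant_normals(normals, bins=18, top=3):
    """Histogram the normal directions (azimuth/elevation) and return the
    strongest peaks as candidate plane orientations."""
    az = np.arctan2(normals[..., 1], normals[..., 0])
    el = np.arcsin(np.clip(normals[..., 2], -1.0, 1.0))
    hist, az_edges, el_edges = np.histogram2d(az.ravel(), el.ravel(), bins=bins)
    flat = np.argsort(hist, axis=None)[::-1][:top]
    peaks = np.dstack(np.unravel_index(flat, hist.shape))[0]
    planes = []
    for i, j in peaks:
        a = 0.5 * (az_edges[i] + az_edges[i + 1])
        e = 0.5 * (el_edges[j] + el_edges[j + 1])
        planes.append(np.array([np.cos(e) * np.cos(a),
                                np.cos(e) * np.sin(a),
                                np.sin(e)]))
    return planes
```

In practice the histogram peaks only give candidate orientations; pixels would still have to be grouped into connected planar regions before viewpoint normalization, which the sketch omits.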
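The viewpoint normalization step can likewise be illustrated with a short, hedged sketch: given a detected plane normal in camera coordinates, rotate a virtual camera so the plane appears fronto-parallel and warp the intensity image with the induced homography H = K R K^-1 before running a feature extractor. The function names, the use of OpenCV, and the sign convention for the normal are assumptions, not the paper's method.

```python
# Sketch only: rectify the view of one detected plane, then extract features.
import numpy as np
import cv2

def rotation_aligning(normal, target=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix turning `normal` into `target` (Rodrigues formula)."""
    n = normal / np.linalg.norm(normal)
    axis = np.cross(n, target)
    s = np.linalg.norm(axis)            # sin of the angle between n and target
    c = float(np.dot(n, target))        # cos of the angle
    if s < 1e-9:
        if c > 0:
            return np.eye(3)            # already aligned
        # opposite direction: rotate 180 degrees about an axis perpendicular to n
        perp = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        perp -= perp.dot(n) * n
        perp /= np.linalg.norm(perp)
        return 2.0 * np.outer(perp, perp) - np.eye(3)
    k = axis / s
    kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + s * kx + (1 - c) * (kx @ kx)

def normalize_viewpoint(image, plane_normal, K):
    """Warp the intensity image as if the plane were viewed fronto-parallel.
    Valid as a pure camera rotation; sign of the normal may need flipping."""
    R = rotation_aligning(plane_normal)
    H = K @ R @ np.linalg.inv(K)
    return cv2.warpPerspective(image, H, (image.shape[1], image.shape[0]))

# Feature extraction on the rectified patch, e.g. (hypothetical usage):
# orb = cv2.ORB_create()   # SIFT/SURF as in the paper, if available in the build
# kp, desc = orb.detectAndCompute(normalize_viewpoint(img, n, K_mat), None)
```

Matching descriptors extracted from such rectified patches against those from the raw image is the kind of with/without comparison the abstract refers to.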