Indoor Place Recognition System for Localization of Mobile Robots
Raghavender Sahdev and John K. Tsotsos, Department of Electrical Engineering and Computer Science and Centre for Vision Research (CVR), York University, Canada


Introduction
• Place Categorization - Recognizing previously seen similar places ("Have I been here?")
1. Compute the HOUP (Histogram of Oriented Uniform Patterns) descriptor for each image sub-block
2. Compute the feature vector for the whole image
3. Use classifiers for classification
4. Output the image class
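Step 1 above can be sketched as follows — a minimal, illustrative take on the per-orientation 59-bin histogram of uniform patterns (58 uniform bins plus one catch-all bin for non-uniform codes). The function names and the normalisation are assumptions, not the authors' implementation:

```python
import numpy as np

def circular_transitions(code, bits=8):
    # number of 0->1 / 1->0 changes when the bit string is read circularly
    return sum(((code >> i) & 1) != ((code >> ((i + 1) % bits)) & 1)
               for i in range(bits))

# Bins 0..57 hold the 58 uniform 8-bit patterns; bin 58 collects the rest.
UNIFORM = [c for c in range(256) if circular_transitions(c) <= 2]
BIN = {c: (UNIFORM.index(c) if c in UNIFORM else 58) for c in range(256)}

def houp_histogram(lbp_codes):
    """59-bin normalised histogram of uniform patterns for one sub-block
    (hypothetical helper; one such histogram per Gabor orientation)."""
    hist = np.zeros(59)
    for c in np.ravel(lbp_codes):
        hist[BIN[int(c)]] += 1
    return hist / max(hist.sum(), 1)

h = houp_histogram(np.arange(256))   # one occurrence of each possible code
print(h.shape, h[58])                # (59,) 0.7734375 (198 of 256 codes are non-uniform)
```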
[Pipeline diagram: Input Image → Gabor Filter → Local Binary Patterns]
Local Binary Patterns
• Certain LBPs, termed ‘uniform’, represent fundamental properties of local image texture.
• Uniform = at most 2 transitions from 0 to 1 or 1 to 0 in the circular bit pattern.
Computing LBP for a 3×3 region
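For concreteness, a minimal sketch of the basic LBP code for one 3×3 region; the clockwise-from-top-left bit ordering is one common convention, assumed here:

```python
import numpy as np

def lbp_3x3(patch):
    """8-bit LBP code for the centre pixel of a 3x3 patch: each neighbour
    contributes a 1-bit where it is >= the centre value (bit order:
    clockwise from the top-left neighbour, an assumed convention)."""
    centre = patch[1, 1]
    neighbours = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(neighbours):
        if patch[r, c] >= centre:
            code |= 1 << bit
    return code

print(lbp_3x3(np.array([[9, 9, 9], [9, 5, 9], [9, 9, 9]])))  # 255: all neighbours brighter
print(lbp_3x3(np.array([[0, 0, 0], [0, 5, 0], [0, 0, 0]])))  # 0: all neighbours darker
```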
Uniform Local Binary Patterns
Highlighted numbers represent the decimal representations of the 58 uniform patterns out of the
total 256 binary patterns in an LBP.
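The count of 58 can be checked directly — a pattern is uniform exactly when its circular bit string changes value at most twice:

```python
def circular_transitions(code, bits=8):
    # 0->1 and 1->0 changes when the 8-bit string is read circularly
    return sum(((code >> i) & 1) != ((code >> ((i + 1) % bits)) & 1)
               for i in range(bits))

uniform = [c for c in range(256) if circular_transitions(c) <= 2]
print(len(uniform))  # 58 uniform patterns out of 256
```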
Subdivision
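A minimal sketch of the subdivision step, assuming a regular 3×3 grid (consistent with the 9 blocks mentioned later in the feature aggregation):

```python
import numpy as np

def subdivide(image, grid=3):
    """Split an image into grid x grid equal sub-blocks; grid=3 gives the
    9 blocks used for the per-block HOUP descriptors."""
    h, w = image.shape[:2]
    bh, bw = h // grid, w // grid
    return [image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(grid) for c in range(grid)]

blocks = subdivide(np.zeros((480, 640)))  # the dataset's 640x480 resolution
print(len(blocks), blocks[0].shape)       # 9 (160, 213)
```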
Gabor Filter + LBP
• Each block is passed through a Gabor filter, then LBPs are computed for that block
• Each block yields 58 uniform + 1 non-uniform pattern bins for each orientation
[Pipeline for feature computation: 6 Gabor orientations]
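A bank of six orientations can be built from the standard real Gabor formulation; the kernel size and parameter values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_kernel(ksize=11, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope modulating a cosine
    carrier at orientation theta (parameter values are illustrative)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xt = x * np.cos(theta) + y * np.sin(theta)
    yt = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xt ** 2 + (gamma * yt) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xt / lambd)

# One kernel per orientation, 6 orientations as on the slide
bank = [gabor_kernel(theta=k * np.pi / 6) for k in range(6)]
print(len(bank), bank[0].shape)  # 6 (11, 11)
```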
Feature Aggregation
• Total dimensionality per block: 59 × 6 = 354
• PCA reduces each block from 354 to 70 dimensions (retaining ~95% of the variance)
• Dimensionality of the feature vector for the image: 70 × 9 blocks = 630
• Classifiers used are
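The dimensionality arithmetic above can be checked with a quick PCA-by-SVD sketch on synthetic block features (random data standing in for real HOUP histograms; 70 components per block, 9 blocks — not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 200
raw = rng.normal(size=(n_images, 59 * 6))      # 354-dim per-block features (synthetic)

# PCA via SVD on mean-centred data, keeping the top 70 components
centred = raw - raw.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
reduced = centred @ vt[:70].T                  # (n_images, 70) per block

# Final per-image feature: 70 dims x 9 blocks = 630
image_feature_dim = reduced.shape[1] * 9
print(raw.shape[1], reduced.shape[1], image_feature_dim)  # 354 70 630
```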
Source: A. Pronobis, B. Caputo, P. Jensfelt, and H. Christensen, “A Discriminative Approach to Robust Visual Place Recognition,” 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
Results on the KTH-IDOL dataset
• KTH-IDOL dataset has 5 places only
• Places captured under varying illumination conditions

  #  Train    Test     Lighting    Performance (%)
  1  Dumbo    Dumbo    Same        97.26  97.62  98.24  97.22
  2  Minnie   Minnie   Different   71.90  90.17  92.01  85
     Dumbo    Dumbo    Different   80.55  94.98  95.76  88
  3  Dumbo    Minnie   Same        66.63  77.78  80.05  72.46
     Minnie   Dumbo    Same        62.20  72.44  75.43  75.48
• Pronobis et al. 2006 - High dimensional Composed Receptive Field Histograms for feature computation
• Wu & Rehg, 2011 - CENsus TRansform hISTogram (CENTRIST) features
• Fazl-Ersi & Tsotsos, 2012 – HOUP with feature selection
• Ours – HOUP without feature selection
Results on the UIUC Dataset
Source: S. Lazebnik, C. Schmid, and J. Ponce, “Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories,” 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2 (CVPR'06).
We achieved 75.33% accuracy on the UIUC dataset, which consists of 15 different places.
Comparing with state-of-the-art
• First real-time implementation of HOUP
• Feature selection algorithm from Fazl-Ersi and Tsotsos (2012) avoided due to its high computational cost (it takes 20× more time on average)
• We use 9 blocks compared to the 43 blocks used by Fazl-Ersi and Tsotsos (2012)
Our Dataset
• 2 robots (VirtualMe and Pioneer).
• Robots were manually driven.
• Our dataset includes the following places: Hotel Corridor, Hotel Bedroom, Conference Room, Dining Room, Hotel Lobby, Hotel Hallway
• Pictures from the above categories were collected at the Coast Capri Hotel
Dataset
• Image resolution 640 × 480, captured at 3 frames per second
• Each place has 100–500 images
• 4000–4200 images in total for each training/testing sequence
Places captured from VirtualMe at night
Varying lighting conditions
• High performance suggests the robustness of our approach
• Our approach also generalizes to new environments (learning is required only once)
Experiment  Training Set  Testing Set  Lighting Conditions  Accuracy (%)
1           VirtualMe     VirtualMe    Same                 98
2           VirtualMe     VirtualMe    Different            93
3           VirtualMe     Pioneer      Same                 92
4           VirtualMe     Pioneer      Different            85
Acknowledgements
• Amir Rasoulli for providing image capturing code
• Ehsan Fazl-Ersi for discussions
Conclusions
• Novel dataset for place recognition generated
• Our approach does well for both place recognition and categorization
• Results comparable to the state of the art
• Changing both lighting conditions and the robot greatly increases the difficulty of the task
• Project page: