Random Ferns for patch description

Pi19404

January 20, 2014
Contents

  Random Ferns for patch description
    0.1 Introduction
    0.2 Implementation Details
    0.3 Code
  References
Random Ferns for patch description
0.1 Introduction
• Let us consider the problem of object recognition, in which we have to decide whether a given image patch contains the desired object or not.

• One way to achieve this is to characterize the texture around key-points across images acquired under widely varying poses and lighting conditions. Due to its robustness to partial occlusions and its computational efficiency, recognition of image patches extracted around detected key-points is crucial for many vision problems.
• Another way to put this is that we compute a statistical model for the object/patch and then estimate whether a given patch has a high probability of being sampled from the object model.

• Let us consider a mathematical representation of the patch to be classified by representing it in terms of a feature vector. Let f be the set of features computed over the patch.
    ĉ_i = argmax_{c_i} P(C = c_i | f)                                (1)

    P(C = c_i | f) = P(f | C = c_i) P(C = c_i) / P(f)                (2)

Since P(f) does not depend on the class, and assuming a uniform class prior, this reduces to

    ĉ_i = argmax_{c_i} P(f | C = c_i)                                (3)
For real-time applications we require features that can be easily computed and encoded. The concept of Local Binary Patterns has become popular for texture analysis and patch description due to its small memory and computational requirements.

• In the case of local binary features, each patch is described by a bit string. The value of each bit is derived by performing a binary test on the image patch.
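As a minimal sketch of such a test (the helper name and the flat row-major patch layout are illustrative, not taken from the article's code), a single bit can be produced by comparing the intensities at two pixel locations:

```cpp
#include <cstdint>
#include <vector>

// One binary test over a patch stored as a row-major grid of 8-bit
// intensities: compare two pixel locations and emit a single bit.
int binaryTest(const std::vector<std::uint8_t>& patch, int width,
               int x1, int y1, int x2, int y2)
{
    return patch[y1 * width + x1] > patch[y2 * width + x2] ? 1 : 0;
}
```

A set of such bits, each from a different pair of locations, forms the patch's bit string.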
• The tests can in principle be any binary tests that effectively encode the information present in the patch as 0 or 1.

• Typically each binary feature is determined by performing intensity comparisons. The comparison can be between two different pixel locations, or between the image and a transformed image at a specified pixel location.
• A set of binary tests constitutes the feature vector.

• However, these features are very simple, and hence a large number of them may be required to accurately describe a patch.
• If we model the full joint probability distribution over N binary features, we are required to store 2^N entries for each class. If we instead assume independence between the features using the Naive Bayes assumption, we are required to store only N parameters; however, this completely ignores any correlation between the features.
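To make the counts concrete (a hypothetical helper, not part of the article's code): for N = 10 binary features the full joint table already needs 1024 entries per class, while the independence assumption needs only 10.

```cpp
#include <cstdint>

// Entries per class needed by the full joint PDF over n binary features.
std::uint64_t jointEntries(int n)  { return std::uint64_t(1) << n; }

// Parameters per class under the Naive Bayes independence assumption.
int naiveEntries(int n) { return n; }
```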
• We therefore consider a randomized fern algorithm, which consists of randomly selecting S features out of the total pool of N features to form a single group.

• All the features within a group are considered dependent, and a joint distribution is constructed over these features.

• Each group is called a fern. For each fern f_k we can compute P(f_k | C = c_i); thus we have K random classifiers, each providing the probability P(f_k | C = c_i) that the feature vector f_k belongs to class c_i.

• Each fern is constructed from a random subset of S features.
• F_k denotes the binary feature vector of fern k. Since each group contains S features and we are required to maintain a joint PDF, we require 2^S entries per fern; with K groups, the total number of entries required is K × 2^S.
• The pixel locations used in the binary tests are selected randomly, and the grouping of features into ferns is also performed randomly.
• We approximate the joint PDF of each fern using a histogram. Each fern consists of S binary features.
• Since the features are binary, the total number of bins in the joint histogram is 2^S.

• For example, for a fern with 2 features the histogram bins are indexed as 00, 01, 10, 11; each bin can be indexed by an integer value.
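The bin indexing above can be sketched as packing the S bits of a fern into an integer (the function name is illustrative, not from the article's code):

```cpp
#include <vector>

// Pack a fern's S binary test results into a histogram bin index,
// most significant bit first: {1,0} -> binary 10 -> bin 2.
int binIndex(const std::vector<int>& bits)
{
    int index = 0;
    for (int b : bits)
        index = (index << 1) | b;
    return index;
}
```

This is the same shift-and-or pattern used later in computeFeatures.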
• During the training phase we need to estimate the class-conditional probabilities, which requires the estimation of 2^S × K parameters. An ensemble technique is used to combine the results of the K classifiers:
    ĉ_i = argmax_{c_i} (1/K) Σ_{k=1..K} P(f_k | C = c_i)             (4)
• An important feature of this modelling scheme is that it does not require us to store the training data.

• We can also perform incremental learning, since we only have to update the counts in the statistical model.
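The incremental update can be sketched as follows; this is a simplified per-bin view (the FernBin struct is hypothetical), whereas the actual code keeps separate positive, negative, and posterior vectors:

```cpp
// Simplified view of one histogram bin: learning a sample only
// increments a count, then the posterior for that bin is refreshed
// from the counts, so no training data needs to be stored.
struct FernBin
{
    int pos = 0;
    int neg = 0;
    float posterior = 0.f;
};

void learn(FernBin& bin, bool positive)
{
    if (positive) ++bin.pos; else ++bin.neg;
    bin.posterior = static_cast<float>(bin.pos) / (bin.pos + bin.neg);
}
```

Because only counts are touched, new samples can be folded in at any time without retraining from scratch.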
0.2 Implementation Details
• The points where the features are computed are stored in a 2D vector of length numFerns × 2 × numFeatures; each element is of type Point2f.
• As mentioned above, to maintain a joint PDF we have to maintain a joint histogram with 2^numFeatures bins for each of the numFerns groups. The data is stored in a single data structure of type std::vector.
• Three such vectors are maintained: the positive, negative, and posterior probability distributions.
• Each location in the joint PDF can be addressed using an integer data type, i.e. the feature vector extracted for each fern/group is an integer.
• Whenever we learn a positive/negative sample, we increment the count corresponding to the bin indexed by the integer-valued feature. Since the feature vector is of length numFerns, the vectors are indexed as i × numIndices + features[i], where numIndices = 2^numFeatures, i represents the fern, and features[i] represents the integer feature value.
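The flat indexing scheme can be sketched as (the helper is illustrative; the real code computes this expression inline):

```cpp
// Fern i's joint histogram occupies numIndices consecutive slots in
// the flat vector, so bin `feature` of fern i lives at
// i * numIndices + feature.
int arrayIndex(int fern, int feature, int numIndices)
{
    return fern * numIndices + feature;
}
```

For example, with numFeatures = 2 each fern spans numIndices = 4 slots, so bin 1 of fern 2 maps to flat index 9.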
• Thus, to maintain the histogram, the bin index corresponding to the decimal equivalent of the binary features is incremented.
• The positive vector is updated upon learning a positive sample, the negative vector upon learning a negative sample, and the posterior probability vector is updated after either update; it contains the confidence/probability that a binary feature belongs to a positive patch.
• Given a binary feature vector, to compute the probability that it represents a positive sample we simply look up the posterior probabilities P(F_k | C = c_i) of each group and compute the average over all groups k = 1 ... numFerns.
0.3 Code
• The code can be found in the git repo https://github.com/pi19404/OpenVision/ in the files ImgFeatures/randomferns.cpp and ImgFeatures/randomferns.hpp.

• Some important functions are also provided below.

• The class RandomFerns is the main class of the file.
#include "randomferns.h"

/* Given a feature vector of size numFerns, compute the output of the
   ensemble classifier as the average of the individual classifiers. */
float RandomFerns::calcConfidence(vector<int> features)
{
    float conf = 0.0;
    for (int i = 0; i < features.size(); i++)
    {
        // posteriors is a vector holding the posterior probabilities of the joint PDFs;
        // i*_numIndices marks the start of the joint PDF of fern i
        conf = conf + posteriors[i * _numIndices + features[i]];
    }
    return conf / (features.size());
}

/* Update the posterior probabilities after updating the class histograms. */
void RandomFerns::updatePosterior(vector<int> features, bool class1, int ammount)
{
    for (int i = 0; i < features.size(); i++)
    {
        int arrayIndex = (i * _numIndices) + features[i];
        class1 ? positive[arrayIndex] += ammount : negatives[arrayIndex] += ammount;
        // update the posterior
        posteriors[arrayIndex] = ((float)positive[arrayIndex]) /
                                 ((float)positive[arrayIndex] + (float)negatives[arrayIndex]);
    }
    // writeToFile("/home/pi19404/config_oc.txt");
}
/* Compute the location points for the binary tests.
   The points are selected at random locations within a rectangular ROI
   of size 1, such that adjacent points will most likely not lie in the
   same quadrant; both random point locations and random quadrant
   selection are used. */
void RandomFerns::init()
{
    points.resize(_numFerns);
    float toggle = 0;
    for (int i = 0; i < _numFerns; i++)
    {
        vector<Point2f> px = points[i];
        for (int j = 0; j < 2 * _numFeatures; j++)
        {
            Point2f p;
            p.x = ((float)std::rand()) / (float)RAND_MAX;
            p.y = ((float)std::rand()) / (float)RAND_MAX;
            p.x = p.x / 2;
            p.y = p.y / 2;

            toggle = ((float)std::rand()) / (float)RAND_MAX;

            if (toggle < 0.25)
            {
                p.x = 0.5 - p.x;
                p.y = 0.5 - p.y;
            }
            else if (toggle < 0.5)
            {
                p.x = 0.5 + p.x;
                p.y = 0.5 + p.y;
            }
            else if (toggle < 0.75)
            {
                p.x = 0.5 - p.x;
                p.y = 0.5 + p.y;
            }
            else
            {
                p.x = 0.5 + p.x;
                p.y = 0.5 - p.y;
            }
            px.push_back(p);
        }
        points[i] = px;
    }
}
/* Compute the fern features for a rectangular region in the image. */
vector<int> RandomFerns::computeFeatures(const Rect r, const Mat image)
{
    vector<int> features;
    features.resize(0);

    Mat roi = image(r);

    for (int i = 0; i < points.size(); i++)
    {
        int index = 0;
        vector<Point2f> pp = points[i];

        for (int j = 0; j < pp.size(); j = j + 2)
        {
            index <<= 1;
            // scale the normalized points to the patch size
            Point2f p(pp[j].x * r.width, pp[j].y * r.height);
            Point2f p1(pp[j + 1].x * r.width, pp[j + 1].y * r.height);
            // cv::Mat::at takes (row, col), i.e. (y, x)
            uchar val1 = roi.at<uchar>((int)p.y, (int)p.x);
            uchar val2 = roi.at<uchar>((int)p1.y, (int)p1.x);

            if ((int)val1 > (int)val2)
            {
                index |= 1;
            }
        }

        features.push_back(index);
    }
    return features;
}
Bibliography

[1] Martin Godec et al. "On-Line Random Naive Bayes for Tracking". In: ICPR. IEEE, 2010, pp. 3545-3548. url: http://dblp.uni-trier.de/db/conf/icpr/icpr2010.html#GodecLSB10.

[2] Zdenek Kalal, Krystian Mikolajczyk, and Jiri Matas. "Tracking-Learning-Detection". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 34.7 (2012), pp. 1409-1422. issn: 0162-8828. doi: 10.1109/TPAMI.2011.239.

[3] Mustafa Özuysal, Pascal Fua, and Vincent Lepetit. "Fast keypoint recognition in ten lines of code". In: Proc. IEEE Conference on Computer Vision and Pattern Recognition. 2007.