
Traffic Sign Detection for U.S. Roads: Remaining Challenges and a Case for Tracking

Andreas Møgelmose1,2, Dongran Liu2, and Mohan M. Trivedi2

Abstract— Traffic sign detection is crucial in intelligent vehicles, no matter if one's objective is to develop Advanced Driver Assistance Systems or autonomous cars. Recent advances in traffic sign detection, especially the great effort put into the competition German Traffic Sign Detection Benchmark, have given rise to very reliable detection systems when tested on European signs. The U.S., however, has a rather different approach to traffic sign design. This paper evaluates whether a current state-of-the-art traffic sign detector is useful for American signs. We find that for colorful, distinctively shaped signs, Integral Channel Features work well, but they fail on the large superclass of speed limit signs and similar designs.

We also introduce an extension to the largest public dataset of American signs, the LISA Traffic Sign Dataset, and present an evaluation of tracking in the context of sign detection. We show that tracking essentially suppresses all false positives in our test set, and argue that in order to be useful for higher level analysis, any traffic sign detection system should contain tracking.

I. INTRODUCTION

Advanced Driver Assistance Systems (ADAS) and autonomous cars are currently very popular research topics, with many applications, as car manufacturers continue to put efforts into making cars safer and easier to drive. In both scenarios it is crucial for the onboard systems to have a solid view and understanding of the world around the car. This can include looking at pedestrians [1], other cars [2]–[4], or lane markings [5], [6].

Part of this understanding comes from being able to read traffic signs. Whether the aim of the system is to assist the driver, or remove him completely from the equation, traffic signs provide valuable information about the driving scenario at any given time. Traffic Sign Recognition (TSR) has seen considerable work over the past decade. The task is generally split into two: detection and recognition [7]. Some full-flow systems of course combine the two tasks, but generally as two separate steps in a pipeline. Other systems focus on one or the other.

This work is focused on detection, and particularly detection of American traffic signs. A recent competition called the German Traffic Sign Detection Benchmark (GTSDB) [8] had several excellent entries, and it was even suggested by one of the entries, Mathias et al. [9], that the problem was largely solved. Indeed Mathias et al., along with other top contenders in the competition, showed very impressive detection results on the GTSDB. However, as the name

1 Visual Analysis of People Lab, Aalborg University, Denmark. [email protected]

2 Laboratory for Intelligent and Safe Automobiles, UC San Diego, United States. [dol060, mtrivedi]@ucsd.edu

suggests, it includes only German traffic signs, which are substantially different from those in the United States.

The purpose of this paper is threefold:

1) Test the method successfully employed in [9] on American signs in order to ascertain whether the traffic sign detection problem is really solved.

2) Add a second layer of intelligence to traffic sign detection in the form of tracking.

3) Introduce a recently captured extension of the LISA Traffic Sign Dataset which adds more high-resolution color imagery to what is already the world's largest and only public dataset of American traffic signs.

The remainder of this paper is structured as follows: In section II, related work is briefly discussed. Section III explains the methods used for both detection and tracking. In section IV the results are shown and discussed for several traffic sign classes, and finally the paper is rounded off with conclusions in section V.

II. RELATED WORK

Traffic sign detection has been heavily researched in the past decade. A comprehensive overview is given in the survey [7]. There are two main approaches: model-based and learning-based. The earliest approaches [10]–[13] were model-based and relied on static knowledge of traffic sign appearance, such as shape or color. This works well under good conditions, reliable lighting, etc. However, the learning-based methods have taken over lately, since they cope better with real-world variations in the input data. They use a wealth of features, such as Haar-wavelets [14], HOG [15], or Integral Channel Features [9].

Traffic sign classification has a history of being improved through competition, and in 2011, the problem was generally considered solved with the publication of the results of the German Traffic Sign Recognition Benchmark (GTSRB, not to be confused with GTSDB) [16]. Recently, the same authors endeavored to repeat the success for detection with the GTSDB [8]. They provided a dataset of 900 images captured in Germany containing 1206 traffic signs in four categories: prohibitory signs, danger signs, mandatory signs, and other signs. A baseline detection performance was computed using standard methods, and the community was asked to contribute their best detection results.

18 teams contributed, generally with very good results. The best competitors were Team Litsi [17], Team VISICS [9], and Team wgy@HIT501 [18]. Team Litsi used color classification and shape matching to find regions of interest, then detected signs using HOG and color histograms classified with support vector machines. Team VISICS used the Integral Channel Features (ChnFtrs) proposed by Dollár et al. [19] for pedestrian detection. ChnFtrs use a combination of edge features and LUV color features in a boosted cascade. Team wgy@HIT501 used HOG features, first with LDA to find sign candidates and then HOG with IK-SVM for further refinement. It is interesting to see that the most successful detectors all use detection schemes adapted from pedestrian detection.

IEEE Conference on Intelligent Transportation Systems, Oct. 2014

Note that all of the above work has been done using European traffic signs. Only a few works explicitly take on American signs: In [20] a shape based detector and classifier is used for American speed limit signs. Detection results are not shown on their own, but the final recognition rate is around 88%, with no mention of the number of false detections. The system is tested on a relatively small dataset. [21] evaluates whether synthetic training data is useful for sign detection, but without great success. Finally, [22], [23] propose an adaptive Bayesian classifier cascade, again for speed limit sign detection. It shows promising detection results in the 90% range, albeit with several false positives per image, much higher than what would be acceptable in a production system. Older contributions include [14], [23], [24].

Little work has been done to extend the pipeline further than raw detection and classification. A few groups work on traffic sign inventory and surveying, including [25], [26]. For any system which aspires to use TSR in a practical setting, be it ADAS, inventory systems or autonomous driving, tracking is beneficial, even necessary [7], but building tracking on top of detections has not been thoroughly researched, except very recently, where Boumediene et al. proposed tracking traffic signs using the Transferable Belief Model (TBM) [27].

This work takes ChnFtrs, the method employed by Team VISICS, and applies it to American traffic signs. On top of the detection we build an association and tracking layer using Kalman filters.

III. METHODS

The overall structure of the system described in the paper is simple: Detection using ChnFtrs and tracking using Kalman filters. In the following, the implementation and use of each method is described in further detail. Everything explained here has been implemented in Matlab.

A. Detection

1) Features: The detection scheme used is Integral Channel Features/ChnFtrs by Dollár et al. [19], as also used by Mathias et al. [9] in their winning entry in the GTSDB. This method works by computing features similar to Haar-like features in the integral image of several “channels” of the input image. Channel refers to different representations of the input image. We use 10 different channels spanning 6 gradient histogram channels, 1 for unoriented gradient magnitude, and 3 for the color channels in the CIE-LUV color space. The oriented gradient channels are computed

Fig. 1. The flow of the detection system.

as:

Qθ(x, y) = G(x, y) · 1[Θ(x, y) = θ]    (1)

where G(x, y) is the gradient magnitude at pixel (x, y) and Θ(x, y) is the quantized gradient angle. Qθ(x, y) is then computed for 6 equally spaced angles θ.

The features we use are the same for all channels, namely first-order Haar-like features. Traditional Haar-like features are expressed as the difference of the sums of two rectangular image regions. First-order Haar-like features are even simpler and expressed as the sum of a rectangular image region in a given channel. The ChnFtrs framework allows for more complex features, but according to [19], the gain from using higher-order features is negligible. The features can be computed fast from an integral image representation of the relevant channels.
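The implementation behind this paper is in Matlab; purely as an illustration, the channel computation of eq. (1) and the first-order rectangle-sum features can be sketched in Python/NumPy as follows. Function names are ours, and the three CIE-LUV color channels are omitted for brevity:

```python
import numpy as np

def gradient_channels(gray, n_orient=6):
    """1 gradient-magnitude channel plus n_orient oriented channels
    Q_theta(x, y) = G(x, y) * 1[Theta(x, y) == theta], eq. (1)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    # Quantize the unsigned gradient angle into n_orient equal bins.
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    bins = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
    oriented = [mag * (bins == t) for t in range(n_orient)]
    return [mag] + oriented

def integral(ch):
    """Integral image with a zero top row/left column for easy sums."""
    return np.pad(ch, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, y0, x0, y1, x1):
    """First-order Haar-like feature: the sum over [y0:y1, x0:x1)
    in one channel, computed in constant time from the integral image."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

Since every pixel falls into exactly one orientation bin, the oriented channels sum back to the magnitude channel, which is a useful sanity check on an implementation.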

The features are evaluated using a modified AdaBoost classifier with depth-2 decision trees as weak learners.

2) Training: The system is trained with a set of positive training images, all resized to 15x15 pixels, since that is the smallest size of sign we are interested in detecting. Any less, and the features become unreliable. Similarly, a set of negative samples is also input to the system. For a 15x15 px image patch, we generate 3025 rectangular windows for the first-order features. Since there are 10 channels for each training patch, the total number of computed features is 30250 per patch. As opposed to [9], we do not use different aspect ratios in our training, as we observe that this is very rarely a source of missed detections. After extraction, the features are sent to AdaBoost to train the classifier. We use a 3-stage cascade with a different number of weak learners per stage. The first stage uses only 10 weak learners to quickly discard a lot of non-traffic-sign windows. The second stage has 100 weak learners and the last stage uses up to 2000 weak learners to refine the detection. However, training of the last stage usually stops early at 300-600 weak learners due to convergence.
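The staged cascade described above can be sketched roughly as follows. This is not the paper's code: it uses scikit-learn's AdaBoost (the SAMME family) rather than the modified AdaBoost of [19], and the stage-wise negative filtering is a simplified stand-in; all names and stage sizes in the test are illustrative:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_cascade(X, y, stage_sizes=(10, 100, 2000)):
    """Train a cascade of AdaBoost stages with depth-2 decision trees
    as weak learners. Each stage trains on all positives plus the
    negatives that earlier stages failed to reject."""
    stages, keep = [], np.ones(len(y), dtype=bool)
    for n in stage_sizes:
        clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                                 n_estimators=n)
        clf.fit(X[keep], y[keep])
        stages.append(clf)
        pred = clf.predict(X)
        # Keep all positives; keep only negatives this stage accepts.
        keep &= (y == 1) | (pred == 1)
        if not np.any(keep & (y == 0)):
            break  # no hard negatives left to mine
    return stages

def cascade_predict(stages, X):
    """A window counts as a detection only if every stage accepts it."""
    out = np.ones(len(X), dtype=bool)
    for clf in stages:
        out &= clf.predict(X) == 1
    return out.astype(int)
```

The cascade layout mirrors the 10/100/up-to-2000 weak-learner stages in the text: cheap early stages discard most windows, and only survivors reach the expensive final stage.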

3) Detection: We slide a 15x15 px window across the integral image of each channel in the test image. Like in the original ChnFtrs, the window is moved by 4 pixels for each evaluation, and to ensure scale invariance, the input image is scaled in steps of 2^(1/10) (approximately 1.07).
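The window enumeration implied by these numbers (fixed 15x15 window, 4 px stride, pyramid scale step 2^(1/10)) can be sketched as below; the function name and the coordinate convention are ours:

```python
def pyramid_windows(h, w, win=15, stride=4, scale_step=2 ** (1 / 10)):
    """Enumerate detection windows: the image is repeatedly shrunk by
    2^(1/10) (about 1.07) and a fixed win x win window slides with a
    4 px stride at every pyramid level. Each window is reported as
    (x, y, size) mapped back to original-image coordinates."""
    boxes, level = [], 0
    while True:
        s = scale_step ** level          # total downscale at this level
        sh, sw = int(h / s), int(w / s)  # scaled image size
        if sh < win or sw < win:
            break                        # window no longer fits
        for y in range(0, sh - win + 1, stride):
            for x in range(0, sw - win + 1, stride):
                boxes.append((x * s, y * s, win * s))
        level += 1
    return boxes
```

The pyramid stops once the downscaled image is smaller than the window, so the smallest detectable sign is exactly 15x15 px, matching the training size above.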

An overview of the detection flow is shown in fig. 1.



TABLE I
DETECTION RATE FOR PROHIBITORY SIGNS OF THE GTSDB.

                     DR      FPPF
Our implementation   93%     0.07
Team VISICS [9]      100%    0

B. Tracking

Tracking is built on top of the detection. It has two purposes: It can be used to minimize false detections by suppressing those which are not part of a track, and more importantly, it can keep track of which signs have been seen before. In real-world use of a sign detector, we want to ensure that every sign is seen, but also that no sign is seen more than once. Since each specific sign relates to a specific traffic situation, the system needs to know whether that situation has been handled or not, and not blindly repeat itself when a new instance of the same physical sign is detected.

The tracking is handled using the Hungarian algorithm for assignment and Kalman filters for tracking. Every time a detection happens, it is either assigned to an existing track, or a new track is created for it. Any existing track which does not get a sign assigned to it is marked as invisible. If a track is visible for more than 3 frames, a new detection is announced. Any track which does not last for 3 frames is considered a false detection. Conversely, if a track is invisible for more than 2 frames, or more than 40% of its contained frames are marked invisible, it is deleted.
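The track lifecycle above can be sketched as follows. This is a simplified stand-in for the paper's Matlab implementation: tracks carry only a constant-velocity Kalman state over sign centers, SciPy's linear_sum_assignment provides the Hungarian step, and the gating distance and noise covariances are illustrative guesses:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class KalmanTrack:
    """Constant-velocity Kalman filter over a sign's (x, y) centre."""
    def __init__(self, xy):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])  # state: pos + vel
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                        # observe position
        self.Q, self.R = np.eye(4) * 0.1, np.eye(2) * 1.0
        self.age, self.invisible, self.total_invisible = 1, 0, 0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def step(tracks, detections, gate=30.0):
    """One frame: predict, assign detections with the Hungarian
    algorithm, update matched tracks, spawn tracks for the rest, and
    delete tracks invisible >2 frames or >40% of their lifetime."""
    preds = np.array([t.predict() for t in tracks]).reshape(-1, 2)
    dets = np.asarray(detections, dtype=float).reshape(-1, 2)
    matched_t, matched_d = set(), set()
    if len(tracks) and len(dets):
        cost = np.linalg.norm(preds[:, None] - dets[None], axis=2)
        for r, c in zip(*linear_sum_assignment(cost)):
            if cost[r, c] < gate:        # reject far-away pairings
                tracks[r].update(dets[c])
                tracks[r].age += 1
                tracks[r].invisible = 0
                matched_t.add(r); matched_d.add(c)
    for i, t in enumerate(tracks):       # unmatched tracks go invisible
        if i not in matched_t:
            t.age += 1; t.invisible += 1; t.total_invisible += 1
    tracks = [t for t in tracks
              if t.invisible <= 2 and t.total_invisible / t.age <= 0.4]
    tracks += [KalmanTrack(d) for j, d in enumerate(dets)
               if j not in matched_d]    # unmatched detections spawn
    return tracks
```

A detection stream for one sign moving across the image keeps a single track alive; once detections stop, the invisibility rules delete the track after a few frames.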

IV. EVALUATIONS OF DETECTION AND TRACKING

We have two separate evaluations: Firstly, we test the detection scheme on American signs; secondly, we show the advantages of employing tracking for traffic sign detection.

A. Detection

No matter the test set or sign type, throughout this section we report two numbers per test:

DR   Detection rate, computed as TP/P where TP is the number of correct detections in a frame (true positives) and P is the actual number of signs in the frame (positives). This is equivalent to recall.

FPPF False positives per frame, computed as FP/f where FP is the number of false positives across all frames, and f is the number of frames analyzed.
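Both numbers follow directly from per-frame counts; a minimal sketch (the tuple layout per frame is our own convention):

```python
def detection_metrics(frames):
    """Compute DR = TP / P and FPPF = FP / f from a list of per-frame
    (true_positives, false_positives, positives) counts."""
    tp = sum(f[0] for f in frames)  # correct detections over all frames
    fp = sum(f[1] for f in frames)  # false positives over all frames
    p = sum(f[2] for f in frames)   # actual signs over all frames
    dr = tp / p if p else 0.0
    fppf = fp / len(frames) if frames else 0.0
    return dr, fppf
```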

While this paper is about detecting American traffic signs, we first test our system on the GTSDB superset of prohibitory signs as a sanity check to verify that our implementation is not faulty. The result can be seen in table I. While our performance is significantly lower than that of [9] (perfect detection is hard to beat), it is still respectably above 90%. The difference is probably due to varying parameter settings between the two implementations.

With this baseline check out of the way, we turn our attention to American signs. Specifically, we test on three superclasses: Stop signs, warning signs (all yellow diamond-shaped classes), and speed limit signs (all speed limit signs, regardless of speed). See fig. 2 for examples.

Fig. 2. Examples of American traffic signs. (a) Stop sign, (b) warning sign, (c) speed limit sign. Image source: [28]

TABLE II
STATISTICS FOR THE LISA DATASET AND THE NEW EXTENSION.

              LISA Traffic Sign   LISA Dataset
              Dataset             Extension
Classes       49                  15
Annotations   7855                1326
Images        6610                1445

The detector is trained on the LISA traffic sign dataset [7]. Note that this detection method requires color information and not all images in the LISA dataset are in color, so only the 3156 annotations on color images are used. The detector is tested on a recently captured and annotated extension to the LISA traffic sign dataset, expanding the dataset by more than 1300 annotations. The extension will be integrated with the existing publicly available dataset1. Statistics can be seen in table II.

Detection results for each class can be seen in table III and precision-recall curves for each superclass are shown in fig. 3. Stop signs and warning signs both perform well, basically on par with European signs. This is to be expected, as they are quite similar to European signs with distinctive shapes and clear colors. For speed limit signs, the story is different. Detection rates are abysmally low. Clearly, the model is not able to capture what distinguishes the speed limit signs from everything else. The shape is not particularly distinctive (we tried adding a margin to the training images, in the hope that the edges would be detected better, but to no avail) and the signs have no color that makes them stand out.

The problems are also obvious in fig. 4 and 5, showing false detections and misses for each superclass. False detections of stop signs and warning signs are generally objects which bear some resemblance to the signs. Missed detections are generally signs of very poor quality, either due to size or lighting. That is not as clear cut for speed limit signs. Many of the false detections could be anything, and the missed signs are perfectly fine speed limit signs.

1 Available for download at http://cvrr.ucsd.edu/lisa/lisa-traffic-sign-dataset.html

TABLE III
DETECTION RATE FOR EACH SUPERCLASS. SEE ALSO FIG. 3 FOR FURTHER DETAILS.

                    DR      FPPF   AUC in fig. 3
Stop signs          93.6%   0.09   0.975
Warning signs       89.2%   0.09   0.944
Speed limit signs   47.7%   1.64   0.235

Fig. 3. Precision-recall curves for each superclass. (a) Stop signs, AUC: 0.975, (b) warning signs, AUC: 0.944, (c) speed limit signs, AUC: 0.235.

Fig. 4. Examples of false detections for each superclass. First row, stop signs, second row, warning signs, and third row, speed limit signs.

Fig. 5. Examples of missed detections for each superclass. First row, stop signs, second row, warning signs, and third row, speed limit signs.

This leads us to conclude that while sign detection might be solved for European signs and other “easy” signs, there is still plenty of work to be done for hard cases, such as American speed limit signs. And then we have not even touched the multitude of other American signs with the exact same shape and color as the speed limit signs.

TABLE IV
TRACKING DETECTION RATES.

                         Stop signs   Warning signs
# of physical signs      77           62
Physical signs tracked   76           60
False tracks             2            1
Individual detections    682          597

B. Tracking

The main purpose of our tracking implementation is to make sure the system knows which detections belong to the same physical sign, so each sign is only handled once. A fortunate side effect is that any spurious false detections are also removed, since they do not belong to any track. We have tested it on video sequences of driving taken from the LISA Traffic Sign Dataset Extension. The sequences cover a total of 139 physical signs. The system has been implemented so new tracks show up at the bottom left of the frame as soon as they are established (after 3 consecutive detections). When they become continued tracks in the following frame, they are shown at the bottom right with a green border. Sample output can be seen in fig. 6.

Actual statistics for the tracking performance can be seen in table IV. The system scores well. Nearly all physical signs are seen and tracked, and only 3 false tracks appear. In other words, almost all false detections are successfully suppressed. We did not run this test on speed limit signs, as the raw detection rate is simply too low for those. This clearly shows the tremendous effect tracking can have on any traffic sign detection system.

V. CONCLUSION

This work evaluated a state-of-the-art traffic sign detector on American traffic signs. We implemented ChnFtrs and trained it on three superclasses of traffic signs: stop, warning, and speed limit. The detector is known to perform extremely well on European traffic signs, and our aim was to ascertain whether the challenge of detecting traffic signs is truly solved. Our research showed that while the method performs well for colorful, distinctive traffic signs, such as the superclasses stop and warning, it is not able to detect speed limit signs with any acceptable certainty. As part of the testing, we introduced an extension of the LISA Traffic Sign Dataset which adds high-resolution color images and sequences to the existing dataset.

Fig. 6. Examples of tracked signs. A sign icon to the bottom left means it has just been established as a track. An icon bottom right with a green border shows a continued track. The two shown images are consecutive frames.

Furthermore, we implemented a Kalman filter based tracking system for traffic signs and showed that in real world use, a good tracking system is essential for any traffic sign detection system, and probably more important than achieving the last 2-3 percentage points towards a perfect detection system.

The most important and difficult challenge is undoubtedly to develop a reliable speed limit sign detector. Apart from speed limit signs, the US has a wide range of other signs following the same template, and no system is able to process those at the moment.

ACKNOWLEDGMENT

The authors would like to thank their colleagues at the LISA lab for valuable discussions throughout the project.

REFERENCES

[1] A. Prioletti, A. Mogelmose, P. Grisleri, M. M. Trivedi, A. Broggi, and T. B. Moeslund, “Part-Based Pedestrian Detection and Feature-Based Tracking for Driver Assistance: Real-Time, Robust Algorithms, and Evaluation,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 3, pp. 1346–1359, 2013.

[2] E. Ohn-Bar and M. Trivedi, “Fast and robust object detection using visual subcategories,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 179–184.

[3] B. Morris and M. Trivedi, “Robust classification and tracking of vehicles in traffic video streams,” in Intelligent Transportation Systems Conference, 2006. ITSC ’06. IEEE, Sept 2006, pp. 1078–1083.

[4] S. Sivaraman and M. Trivedi, “Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis,” Intelligent Transportation Systems, IEEE Transactions on, vol. 14, no. 4, pp. 1773–1795, Dec 2013.

[5] R. Satzoda and M. M. Trivedi, “Drive Analysis using Vehicle Dynamics and Vision-based Lane Semantics,” IEEE Transactions on Intelligent Transportation Systems, 2014.

[6] S. Sivaraman and M. Trivedi, “Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach,” Intelligent Transportation Systems, IEEE Transactions on, vol. 14, no. 2, pp. 906–917, June 2013.

[7] A. Møgelmose, M. M. Trivedi, and T. B. Moeslund, “Vision based Traffic Sign Detection and Analysis for Intelligent Driver Assistance Systems: Perspectives and Survey,” IEEE Transactions on Intelligent Transportation Systems, vol. Special Issue on MLFTSR, no. 13.4, Dec 2012.

[8] S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing, and C. Igel, “Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark,” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–8.

[9] M. Mathias, R. Timofte, R. Benenson, and L. Van Gool, “Traffic sign recognition — How far are we from the solution?” in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–8.

[10] A. Ruta, Y. Li, and X. Liu, “Towards real-time traffic sign recognition by class-specific discriminative features,” in Proc. of the 18th British Machine Vision Conference, vol. 1, 2007, pp. 399–408.

[11] N. Barnes and G. Loy, “Real-time regular polygonal sign detection,” in Field and Service Robotics. Springer, 2006, pp. 55–66.

[12] P. Gil Jimenez, S. Bascon, H. Moreno, S. Arroyo, and F. Ferreras, “Traffic sign shape classification and localization based on the normalized FFT of the signature of blobs and 2D homographies,” Signal Processing, vol. 88, no. 12, pp. 2943–2955, 2008.

[13] R. Timofte, K. Zimmermann, and L. Van Gool, “Multi-view traffic sign detection, recognition, and 3d localisation,” in Applications of Computer Vision (WACV), 2009 Workshop on. IEEE, 2009, pp. 1–8.

[14] C. Keller, C. Sprunk, C. Bahlmann, J. Giebel, and G. Baratoff, “Real-time recognition of U.S. speed signs,” in Intelligent Vehicles Symposium, IEEE, June 2008, pp. 518–523.

[15] G. Overett and L. Petersson, “Large scale sign detection using HOG feature variants,” in Intelligent Vehicles Symposium (IV), 2011 IEEE, June 2011, pp. 326–331.

[16] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “The German Traffic Sign Recognition Benchmark: A multi-class classification competition,” in Neural Networks (IJCNN), The 2011 International Joint Conference on. IEEE, 2011, pp. 1453–1460. [Online]. Available: http://benchmark.ini.rub.de/?section=gtsrb

[17] M. Liang, M. Yuan, X. Hu, J. Li, and H. Liu, “Traffic sign detection by ROI extraction and histogram features-based recognition,” in Neural Networks (IJCNN), The 2013 International Joint Conference on, Aug 2013, pp. 1–8.

[18] S. Wang and J. Zhang, “A New Edge Feature For Head-Shoulder Detection,” in ICIP. IEEE, 2013.

[19] P. Dollar, Z. Tu, P. Perona, and S. Belongie, “Integral Channel Features,” in BMVC, vol. 2, no. 3, 2009, p. 5.


[20] J. Abukhait, I. Zyout, and A. M. Mansour, “Speed Sign Recognition using Shape-based Features,” International Journal of Computer Applications, vol. 84, no. 15, pp. 31–37, 2013.

[21] A. Møgelmose, M. M. Trivedi, and T. B. Moeslund, “Learning to detect traffic signs: Comparative evaluation of synthetic and real-world datasets,” in Pattern Recognition (ICPR), International Conference on. IEEE, 2012, pp. 3452–3455.

[22] A. Staudenmaier, U. Klauck, U. Kreßel, F. Lindner, and C. Wohler, “Confidence Measurements for Adaptive Bayes Decision Classifier Cascades and Their Application to US Speed Limit Detection,” in Pattern Recognition, ser. Lecture Notes in Computer Science, A. Pinz, T. Pock, H. Bischof, and F. Leberl, Eds. Springer Berlin Heidelberg, 2012, vol. 7476, pp. 478–487. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-32717-9_48

[23] Resource Optimized Cascaded Perceptron Classifiers using Structure Tensor Features for US Speed Limit Detection, vol. 12, 2011.

[24] F. Moutarde, A. Bargeton, A. Herbin, and L. Chanussot, “Robust on-vehicle real-time visual detection of American and European speed limit signs, with a modular Traffic Signs Recognition system,” in Intelligent Vehicles Symposium. IEEE, 2007, pp. 1122–1126.

[25] S. Lafuente-Arroyo, S. Salcedo-Sanz, S. Maldonado-Bascon, J. A. Portilla-Figueras, and R. J. Lopez-Sastre, “A decision support system for the automatic management of keep-clear signs based on support vector machines and geographic information systems,” Expert Syst. Appl., vol. 37, pp. 767–773, January 2010.

[26] L. Hazelhoff, I. Creusen, and P. de With, “Robust classification system with reliability prediction for semi-automatic traffic-sign inventory systems,” in Applications of Computer Vision (WACV), 2013 IEEE Workshop on. IEEE, 2013, pp. 125–132.

[27] M. Boumediene, J.-P. Lauffenberger, J. Daniel, and C. Cudel, “Coupled Detection, Association and Tracking for Traffic Sign Recognition,” in Intelligent Vehicles Symposium (IV), 2014 IEEE, June 2014, pp. 1402–1407.

[28] State of California, Department of Transportation, “California Manual on Uniform Traffic Control Devices for Streets and Highways.”
