
Quantifying Street-Level Accessibility: Algorithm meets Crowd-Sourcing

Liang He, Kookjin Lee, Aditya Acharya, Heba Aly
Research Report for CMSC798f

Collecting and visualizing accessibility information are long-standing challenges in Human-Computer Interaction (HCI). A team of researchers at the University of Maryland has proposed innovative approaches that combine crowdsourcing, computer vision, and machine learning to detect and visualize how accessible a particular street or sidewalk is. Tohme is one such approach for remotely finding curb ramps in Google Street View scenes. In experiments conducted by the team, the system achieved performance comparable to a conventional manual approach but with a dramatic reduction in time and cost.

Figure 1: A visual analytic tool that allows both citizens and governments to easily assess a city's street-level accessibility.

Why street-level accessibility?

The U.S. Census Bureau reports in Americans with Disabilities: 2010 [1] that 30.6 million adults in the U.S. have physical disabilities that impair their ambulatory activities, and nearly half of them use assistive aids such as wheelchairs. For these populations, street-level accessibility is an important factor in securing their mobility outdoors. In particular, mechanisms that provide information about accessible routes are needed to enhance their mobility (e.g., Figure 1). Assessing street-level accessibility, however, has been labor intensive and costly, and no such aiding tools have existed.

New scalable data collection method and assistive map tools

A team of researchers from the University of Maryland, College Park, led by Prof. Jon Froehlich, has developed new data collection techniques to gather street-level accessibility information, along with new assistive navigation and map tools built on the collected data. They combine crowdsourcing, computer vision (CV), and machine learning with Google Street View (GSV) scenes to efficiently and accurately gather road-accessibility information, such as the locations of curb ramps (Figure 2), which existing public databases have failed to sufficiently provide [3]. Combining the three techniques helps overcome the scalability challenges that affect existing approaches relying on only one of them (i.e., crowdsourcing, computer vision, or machine learning) [4][5].

Tohme: detecting curb ramps in Google Street View

As part of their ongoing efforts, the team has presented Tohme (pronounced toe-may) [3], a semi-automatic, scalable system that finds curb ramps in GSV panoramic scenes (Figure 2). Tohme is operated by four subsystems, sketched in code after the list:

1. A web scraper that downloads street intersection data using the Google Maps API;

2. State-of-the-art computer vision algorithms that automatically detect curb ramps in a given GSV scene;

3. A machine-learning-based controller that predicts the quality of the CV-based detection and then forwards the GSV scene to either a human labeling pipeline or a human verification pipeline according to the prediction result;


Figure 2: The Tohme system for semi-automatically finding curb ramps in Google Street View [3]

4. Human workers: if the controller predicts that the CV detection has likely missed curb ramps (i.e., contains false negatives), the GSV scene is forwarded to the human labeling pipeline, where crowd workers label curb ramps in the scene from scratch. Otherwise, the GSV scene is forwarded to the human verification pipeline, where crowd workers simply verify the detections made by the CV-based algorithms.
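The routing decision made by subsystems 3 and 4 can be summarized in a short sketch. The code below is purely illustrative: the scene identifiers, function names, hard-coded detections, mean-confidence heuristic, and 0.5 threshold are assumptions for exposition, not the actual Tohme implementation, which pairs a learned curb-ramp detector with a trained controller over GSV panoramas.

# Minimal sketch of the Tohme-style routing workflow described above.
# All components here are illustrative stand-ins, not the real system.

from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    x: int        # pixel location of a candidate curb ramp in the GSV scene
    y: int
    score: float  # detector confidence in [0, 1]


def detect_curb_ramps(scene_id: str) -> List[Detection]:
    # Stand-in for subsystem 2 (CV detection); returns hard-coded candidates.
    return [Detection(412, 830, 0.91), Detection(1180, 795, 0.42)]


def predict_cv_quality(scene_id: str, detections: List[Detection]) -> float:
    # Stand-in for subsystem 3 (ML controller): here simply the mean detector
    # confidence; the real controller learns to predict CV detection quality.
    if not detections:
        return 0.0
    return sum(d.score for d in detections) / len(detections)


def route_scene(scene_id: str, threshold: float = 0.5) -> str:
    # Route to the cheaper verification pipeline when the controller trusts
    # the CV output, otherwise fall back to full manual labeling.
    detections = detect_curb_ramps(scene_id)
    if predict_cv_quality(scene_id, detections) >= threshold:
        return "human_verification"  # workers only confirm/reject detections
    return "human_labeling"          # workers label curb ramps from scratch


print(route_scene("intersection_0001"))  # -> human_verification for this stub

The design point the sketch captures is that verification is a much quicker crowd task than from-scratch labeling, so the controller's job is to send a scene to the expensive pipeline only when the CV output cannot be trusted.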

The Tohme system has been extensively tested on GSV image data covering over 1,000 intersections in both dense urban cores and semi-urban residential areas across four U.S. cities. With around 240 labeling tasks and 160 verification tasks performed by crowd-sourced human workers, Tohme provided accuracy comparable to a purely human-powered labeling approach while substantially reducing the overall task time. A demo of the Tohme system in action can be accessed here [6].
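The time savings come from handling a share of the scenes with quick verification instead of from-scratch labeling. The back-of-the-envelope sketch below makes that concrete; the 90-second and 30-second per-task durations are hypothetical placeholders, not measurements from the study, and only the roughly 240/160 task split is taken from the evaluation described above.

# Back-of-the-envelope time comparison: pure manual labeling vs. the hybrid
# Tohme workflow. Per-task durations are assumed values, not study results.

LABEL_TASKS = 240      # scenes routed to the manual labeling pipeline
VERIFY_TASKS = 160     # scenes routed to the quicker verification pipeline

LABEL_SECONDS = 90.0   # assumed time to label one scene from scratch
VERIFY_SECONDS = 30.0  # assumed time to verify CV detections in one scene

pure_labeling = (LABEL_TASKS + VERIFY_TASKS) * LABEL_SECONDS
hybrid = LABEL_TASKS * LABEL_SECONDS + VERIFY_TASKS * VERIFY_SECONDS

print(f"Pure labeling:  {pure_labeling / 3600:.1f} worker-hours")
print(f"Hybrid (Tohme): {hybrid / 3600:.1f} worker-hours")
print(f"Time saved:     {100 * (1 - hybrid / pure_labeling):.0f}%")

Under these assumed durations the hybrid workflow needs about 7.3 worker-hours instead of 10, a saving of roughly a quarter; the real gains depend on how many scenes the controller can safely route to verification.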

Future of road accessibility studies

Tohme has demonstrated the efficiency and accuracy of a semi-automated framework for assessing road accessibility. To foster work in this area, the researchers plan to make their collected accessibility information available and to provide access to it through an API. They hope that this initiative will drive the development of a broad range of accessibility-aware applications and interdisciplinary research areas. For instance, public health researchers and urban planners could study the relationship between neighborhood accessibility and local residents' health using the published data. Further, Prof. Froehlich and his team envision integrating their results into existing services, e.g., searching for restaurants on Yelp based on their level of accessibility, or finding the most accessible routes in Google Maps.

References

[1] U.S. Census Bureau. 2012. Americans with Disabilities: 2010. Household Economic Studies. http://www.census.gov/prod/2012pubs/p70-131.pdf

[2] Hara, K. and Froehlich, J. E. 2015. Characterizing and visualizing physical world accessibility at scale using crowdsourcing, computer vision, and machine learning. ACM SIGACCESS Accessibility and Computing 113, 13-21.

[3] Hara, K., Sun, J., Moore, R., Jacobs, D. and Froehlich, J. 2014. Tohme: detecting curb ramps in Google Street View using crowdsourcing, computer vision, and machine learning. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST '14) (pp. 189-204). ACM.

[4] Hara, K., Le, V. and Froehlich, J. 2013. Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13) (pp. 631-640). ACM.

[5] Xiao, J., Fang, T., Tan, P., Zhao, P., Ofek, E. and Quan, L. 2008. Image-based Façade Modeling. ACM Trans. Graph. 27, 5 (Dec. 2008), 161:1-161:10.

[6] Tohme demo: https://www.youtube.com/watch?v=rtHGfVFbsp8

Contributors

• Liang He ([email protected])

• Kookjin Lee ([email protected])

• Aditya Acharya ([email protected])

• Heba Aly ([email protected])

Written on Monday, February 22, 2016
