
Designing Haptic Assistive Technology for Individuals Who Are Blind or Visually Impaired

Dianne T.V. Pawluk, Member, IEEE, Richard J. Adams, Senior Member, IEEE, and Ryo Kitada

D.T.V. Pawluk is with the Biomedical Engineering Department, Virginia Commonwealth University, Richmond, VA 23284. E-mail: [email protected].
R.J. Adams is with Barron Associates, Inc., Charlottesville, VA 22901. E-mail: [email protected].
R. Kitada is with the National Institute for Physiological Sciences, Okazaki, Japan. E-mail: [email protected].

Manuscript received 15 June 2015; revised 16 Aug. 2015; accepted 18 Aug. 2015. Date of publication 26 Aug. 2015; date of current version 14 Sept. 2015. Recommended for acceptance by L. Jones. Digital Object Identifier no. 10.1109/TOH.2015.2471300

Abstract—This paper considers issues relevant to the design and use of haptic technology for assistive devices for individuals who are blind or visually impaired in some of the major areas of importance: Braille reading, tactile graphics, orientation and mobility. We show that there is a wealth of behavioral research that is highly applicable to assistive technology design. In a few cases, conclusions from behavioral experiments have been directly applied to design with positive results. Differences in brain organization and performance capabilities between individuals who are "early blind" and "late blind" when using the same tactile/haptic accommodations, such as Braille, suggest the importance of training and assessing these groups individually. Practical restrictions on device design, such as performance limitations of the technology and cost, raise questions as to which aspects of these restrictions are truly important to overcome to achieve high performance. In general, this raises the question of what it means to provide functional equivalence as opposed to sensory equivalence.

Index Terms—Blindness, visually impaired, haptic displays for the visually impaired, assistive technology for the visually impaired


1 INTRODUCTION

In developing assistive technology for individuals who are blind or visually impaired (BVI), among the most important considerations are the characteristics of the target population and their interactions with the environment. Improper consideration of these characteristics has, in part, resulted in numerous assistive technology research projects failing to transition to real-world use. In this review of the application of haptics to this subject, we consider the following key elements: the characteristics of the population and their involvement in the design process; current understanding of the behavioral research in key areas; some example insights from behavioral research that were applied to the design of assistive technology; and some examples of relevant assistive technology that has been developed for key tasks. The four task areas which will be the focus of this paper are: Braille reading, tactile graphics, orientation and mobility. A paper by O'Modhrain and her colleagues (in this issue) provides a valuable perspective by experts in the field who also happen to be BVI.

2 DESIGN

In designing assistive technology for individuals who are BVI, many different areas of knowledge should be considered. In this section, we describe issues involving the target population characteristics in terms of the diversity of the target population and some basic issues in sensory substitution. We also describe aspects of the design process particular to assistive technology: the use of participatory action design and the importance of testing devices in the real world.

2.1 Diversity of Population

One of the most important aspects to appreciate in designing assistive technology for individuals who are BVI is the diversity of the population in terms of medical condition, experience, opinions, preferences and motivation. Vision impairments range from low vision, through light detection only, to no vision at all [1]. The parts of the eye affected may also vary; however, this variation is mainly associated with low vision and is of more importance when considering visual enhancement rather than haptic displays. An individual may also have other impairments which could affect design decisions. For example, deaf-blind individuals may also vary in their degree of hearing, which may result in the need for an exclusively haptic (versus audio-haptic) solution [2]. Further, a frequent cause of blindness is diabetes, which can also produce neuropathy in the peripheral extremities, resulting in a lack of sensitivity in the fingertips and toes [3]. These individuals may require a solution that, at the very least, provides haptic feedback closer to the central core of the body (e.g., wrist or torso). Finally, individuals may also have cognitive impairments which, together with issues of information transmission bandwidth (see Section 2.2), can have an impact on design decisions.

Individuals may also vary greatly in terms of their experience, preferences, opinions and motivation. In terms of visual experience, the population can be categorized into those who are congenitally or early blind, and those who are adventitiously or late blind. The primary differentiator between these two populations is whether the individual has had visual experience before becoming blind. Potential key milestones that may lead to later differences are: (1) by eighteen months, infants have experience with reaching and moving in the environment with vision, and (2) at a somewhat later age, they often have experience with visual graphics (including perspective) and text words [4]. Many studies in the behavioral literature have shown differences between these two groups (see Section 3). Neuroimaging studies (e.g., [5]) suggest that there are differences in brain organization between these two groups. It is also important to differentiate these populations from sighted individuals who are blindfolded, who may have very different performance and brain organization.

Other important experiences that can affect the use of haptic/tactile assistive technology include an individual's familiarity with and acceptance of technology, as well as experience with touch. Regarding the first issue, the target population is often divided between the elderly and the young [6] due to the disparity between age groups when it comes to familiarity with various technologies in general. For the second issue, those individuals who have experience exploring with touch may perform better with assistive technology using touch than those who do not. For example, neuroimaging studies have shown an expansion in the cortical area representing the reading finger in extensive Braille users [7]. However, it should be noted, in regard to designing assistive technology, that only 10 percent of individuals who are visually impaired are Braille readers [8].

Motivation, preferences and opinions can also have a great impact on the acceptance of assistive technology by users and on user performance, although their effects have been less studied than medical condition and experience. One example of the effect of motivation on performance comes from an indoor wayfinding task [9]: participants who were motivated with a monetary incentive to complete the task as fast as possible performed significantly faster than the control group, without an increase in errors. Individual preferences can be designed into programmable technology through user settings. Providing a choice is desirable but, as with any good user interface, the choice should be restricted and have a default setting to avoid overwhelming the user. It should also not be used as a substitute for behavioral testing to determine what sensory parameters can lead to high performance.

Sometimes differences of opinion among individuals who are BVI may make it difficult for a designer to make design decisions that accommodate all individuals. For example, audible pedestrian signals, which provide an audible tone to indicate it is safe to cross the street, have both strong advocates and strong opposition [10]. The difference in opinion seems to depend on whether the audible signage will be the only information used by individuals to cross the street or whether they will use the sounds of the cars present to make judgments as well. One consistent opinion of many individuals who are BVI that we have interacted with in focus groups is a strong reluctance to become critically dependent on a piece of technology that may be slow and costly to repair or replace.

2.2 Some Background on Tactile/Haptic Sensory Substitution

One of the earliest considerations of haptics for sensory substitution is the pioneering work of Bach-y-Rita and his colleagues on reading, picture recognition and mobility [11]. For their system, the user manipulated a television camera to view objects. A subsampled tactile "image" was then presented on a 20 × 20 grid of vibrating solenoids, spaced 12 mm apart, on the back of a dentist's chair that the user sat in. Subsequent work has used arrays on the abdomen, back, thigh, forehead and fingertip, as well as electrotactile displays [12]. One interesting aspect of their results is that when the camera was controlled by the user, users reported experiencing images in space, rather than on the skin. This was not reported if the user did not actively control the camera, suggesting the importance of active exploration in the formation of this attribution. Individuals were also able to make perceptual judgments using visual means of interpretation such as perspective, parallax and zooming. Although this technology did allow well-trained individuals to gain a degree of knowledge about simple static scenes, the dynamic and complex nature of real-world environments limited its applicability [13].
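The core rendering step of such a system is easy to state concretely. The following is a minimal sketch (our construction for illustration, not Bach-y-Rita's actual implementation) of how a grayscale camera frame can be subsampled onto a 20 × 20 grid of on/off vibrotactile actuators:

```python
import numpy as np

GRID = 20  # 20 x 20 solenoid array, 12 mm pitch, as described above

def to_tactile_image(camera_frame, threshold=0.5):
    """Downsample a grayscale frame (values in [0, 1]) to a binary
    20 x 20 actuator pattern: True = solenoid vibrating, False = off."""
    h, w = camera_frame.shape
    # Crop so the frame divides evenly, then average each block of pixels.
    blocks = camera_frame[:h - h % GRID, :w - w % GRID].reshape(
        GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    return blocks > threshold  # one bit of "brightness" per actuator

frame = np.random.rand(240, 320)      # stand-in for a camera frame
print(to_tactile_image(frame).shape)  # (20, 20)
```

Note how aggressive the reduction is: a 240 × 320 frame collapses to 400 one-bit tactile elements, one reason the display was limited to simple static scenes.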

2.2.1 Tactile (Mechanical)

One issue in using touch as sensory substitution for vision is the significantly limited spatial resolution of touch (Fig. 1) as compared to vision, due to the low-pass filtering of the cutaneous system [14]. The spatial resolution also depends on the temporal frequency of the stimulation [15] and on body site. On the fingertip, spatial acuity appears best for temporal sinusoidal stimuli in the frequency bands of 1-3 Hz and 18-32 Hz, and shows a decreasing trend with increasing frequency over 50 Hz [15]. Spatial acuity also differs across body sites: for example, the resolution for non-vibrating stimuli is approximately 1.0 mm on the fingertips [16] and 40.0 mm on the back [17]. These numbers can be compared to visual acuity, which is approximately 0.15 mm when looking at an image from a distance of 0.5 m.
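As a worked check of that comparison (our arithmetic, assuming the standard figure of about one arcminute for normal visual acuity), the smallest resolvable detail at a viewing distance of 0.5 m is

```latex
s = d \tan\theta = 500\,\mathrm{mm} \times \tan(1/60^{\circ}) \approx 0.15\,\mathrm{mm},
```

roughly seven times finer than the 1.0 mm fingertip figure and more than two orders of magnitude finer than the 40.0 mm figure for the back.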

Several studies have also found that the spatial resolution on the fingertips of individuals who are early blind and Braille readers is significantly better than that of sighted individuals (e.g., in [18], 1.04 mm compared to 1.46 mm). However, evidence suggests that this difference is due to tactile experience [19] and is eliminated if both groups are given practice [20].

Fig. 1. The Braille characters are visually blurred (as seen below each character) to a degree that produces the same level of recognition as for touch (from [14]).

Another issue is that the field of view (FOV) for vision is considerably larger than that of touch. Most research has examined this issue using raised line drawings explored by the fingers (see Fig. 2, on left) and found that the FOV does not really extend beyond a single finger (e.g., [21]). A modest improvement in performance was found when five fingers were used, although this may have been due to reasons other than an increased perceptual FOV [22]. In addition, the FOV may increase when material properties are involved (see Section 4.1). The importance of the size of the FOV is that it eases top-down processing by providing spatial context, which can greatly aid in perceptual organization and object recognition. For touch, with diagrams larger than a single finger, information must be pieced together serially and retained in memory (which is both memory intensive and time consuming) to allow top-down processing.

Although we are not aware of any studies examining the FOV of other body sites, the back and the abdomen have generally been treated successfully as continuous surfaces. However, as the spatial resolution is more limited on these body sites than on the fingers, the larger FOV does not imply that a larger amount of spatial information can be conveyed on these sites compared to the fingers.

Other important considerations for sensory substitution are behavioral similarities and differences between the visual and haptic sensory systems (see Section 3), as well as consideration of the neuroplasticity of the brain.

2.2.2 Electrotactile

Electrotactile displays are a possible alternative to haptic displays providing mechanical stimulation. The intention of these displays is to stimulate the nerve fibers of the mechanoreceptors directly. Their advantages are that they are significantly more compact, easier to fabricate and have lower power consumption. However, the haptic sensations generated have not been studied as thoroughly as mechanical stimulation. The sensations produced have been described by subjects as feeling like a vibration, light buzzing or pulsing, although with a "slightly electrical nature" [23]. One potential limitation of electrotactile stimulation is that it is difficult to stimulate the deeper receptors (the Pacinian corpuscles) that produce the response of the tactile system at higher frequencies [24]. In addition, changes in perception can occur due to variations in the electrical impedance of the skin and subcutaneous tissue over time, such as through sweating.

As with the spatial acuity of mechanical stimulation, the acuity of electrical stimulation applied to the surface of the skin is expected to be affected by both neural properties and properties of the skin and subcutaneous tissue. Research results suggest that the spatial acuity of the tactile system to electrical stimulation is in the range of 5-10 mm on the back, compared to 40.0 mm for mechanical stimulation [25]. On the tongue, spatial discrimination of electrical stimulation is in the range of 1.6-2.7 mm [26] as compared to 0.5 mm for mechanical stimulation [16]. However, the spatial discrimination methods used on the tongue were different for the different stimulation types, and it is possible that the acuity may be more comparable between the two. No studies have examined the FOV, although we would expect the FOV to be the same for the two types of stimulation.

Although our focus in Section 5, describing current haptic assistive technology for individuals who are BVI, is on systems providing mechanical stimulation, assistive technology displays have been developed that provide electrical stimulation to the fingertip, back, abdomen, tongue and forehead (e.g., [27], [28]). The search for the optimal target location on the human body for electrotactile stimuli has led to a seemingly unlikely destination: the tongue. The advantage of using the tongue is that it provides an environment that is relatively constant and low in electrical impedance, due to its thin cutaneous layer and continuous saliva bath [29].

Bach-y-Rita and his colleagues, in their pioneering work, also investigated the use of electrotactile displays. One of the first assistive technology systems they developed employed a 12 × 12 Cartesian grid of electrodes on the tongue, with 2.34 mm center-to-center spacing, to display images acquired by a head-mounted camera [28]. The system is currently being commercialized as the BrainPort by Wicab, Inc. (Middleton, WI). The BrainPort V100 has a 400-element electrode array that connects to a pair of glasses that also supports a small camera. The grayscale images from the camera are mapped onto the display and are rendered such that white pixels result in higher levels of electrotactile stimulation while black results in no stimulation [30].
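That rendering rule amounts to a simple intensity mapping. Below is a minimal sketch (our illustration; the number of distinguishable stimulation levels is a hypothetical parameter, and the actual BrainPort pipeline is not public) of mapping grayscale pixels to discrete electrotactile levels:

```python
import numpy as np

N_LEVELS = 8  # hypothetical count of distinguishable stimulation intensities

def gray_to_stimulation(gray_image):
    """Map grayscale values in [0, 1] to integer stimulation levels
    0..N_LEVELS-1: 0 = no stimulation (black), N_LEVELS-1 = maximal (white)."""
    levels = np.clip(gray_image, 0.0, 1.0) * (N_LEVELS - 1)
    return np.rint(levels).astype(int)

# A 20 x 20 frame matching the 400-element tongue array.
frame = np.linspace(0.0, 1.0, 400).reshape(20, 20)
out = gray_to_stimulation(frame)
print(out[0, 0], out[-1, -1])  # 0 7
```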

2.3 Participatory Action Design

User centered design practices are now quite commonplace in the development of products. The process includes: determination of users' needs through focus groups, ethnography studies, preference surveys, attribute experiments and other data gathering methods [31]; the consideration of universal design issues [32]; and usability testing of prototypes and the final product (e.g., [33], [34]). Participatory Action Design takes user centered design a step further by involving stakeholders throughout the design process [35]. This may include users, care providers, and other stakeholders participating in activities such as storyboarding, low technology prototyping and real-world testing of end products.

2.4 Real-World versus Laboratory Testing

Assessing the performance of assistive technology in real-world environments rather than in isolated laboratory testing is also an important step towards the successful development of usable technology. One reason is that real-world environments (such as navigating busy streets) are expected to be noisier and more distracting than laboratory environments. Alternatively, technology such as headphones covering the ears may provide undesirable isolation in a real-world environment, affecting safety or social interaction.

Additionally, multiple tasks often need to be performed at the same time in the real world. For example, going from one location to another can involve both obstacle avoidance and navigation technology. This creates two potential differences from laboratory testing. First, there are fewer attentional resources available than when considering one of these tasks in isolation in the laboratory. Second, multiple devices may need to be carried and used at the same time for the different tasks, in addition to items for day-to-day use such as briefcases, knapsacks or shopping bags. Even in an office environment, a user may need to switch between the use of Braille and tactile graphics frequently, influencing the design of both.

Fig. 2. On the left is a typical raised line drawing. On the right is a typical hand-made drawing (from [48]).

Finally, wearable technology which cannot easily be removed or turned off by the user may be tolerable in short-term test environments but not as much in everyday environments. This is in contrast to non-wearable visual or haptic technology, where users know they can easily look away/close their eyes or remove their hand from a display if they do not want to experience the feedback.

3 BEHAVIORAL AND NEUROIMAGING RESEARCH

Behavioral research is very important for the development of useful assistive devices: it objectively describes differences and similarities between the visual and haptic systems, which can be used to guide the transformation of information from one domain to the other. Understanding differences in perceptual processing between congenitally blind, late blind and sighted individuals is also important in addressing the controversial issue of using sighted individuals for part of the testing, and of introspection, whether conscious or unconscious, by sighted designers. Furthermore, neuroimaging research highlights the plasticity of the somatosensory cortex with training, which is useful to consider in technology use. In addition, it reveals fundamental differences in cortical organization between congenitally blind, late blind and sighted individuals, particularly with regard to the use of the visual cortex (e.g., [36]).

This section focuses on haptic behavioral research and imaging studies related to the four areas of assistive technology this review considers: Braille displays, tactile graphics displays, orientation and mobility. Here we review research on the use of Braille, tactile graphics and haptic locomotor spatial perception by individuals who are BVI. In addition, we consider the relatively new area of affective touch and its potential utility for assistive technology design.

3.1 Braille

In Braille, each letter is represented by a configuration of dots within a 3 × 2 dot cell, with dots spaced approximately 2.3 mm apart [37]. Although by sight each Braille character is typically distinguished by its explicit dot pattern within a cell, this is not true for tactual reading (Fig. 1), which is often described as being perceptually determined by "overall shape" (or, more specifically, the lower frequency content, which is all that is retained tactually). Evidence suggests that Braille's tactual effectiveness compared to embossed alphabet letters is due to its greater distinctiveness between characters within this lower frequency band [38].
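The cell geometry is compact enough to state in a few lines. The sketch below (our illustrative helper, not from [37]; only a few letters are included) encodes characters as sets of raised dots, numbered 1-3 down the left column and 4-6 down the right, and computes their physical positions:

```python
DOT_SPACING_MM = 2.3  # approximate spacing between adjacent dots [37]

# Raised-dot patterns for a few letters of standard Braille.
LETTERS = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "l": {1, 2, 3}}

def dot_positions_mm(letter):
    """Return (x, y) coordinates in mm of the raised dots for a letter,
    with the cell's top-left dot position at the origin."""
    positions = []
    for dot in sorted(LETTERS[letter]):
        col = 0 if dot <= 3 else 1   # dots 1-3: left column; 4-6: right
        row = (dot - 1) % 3          # top to bottom within a column
        positions.append((col * DOT_SPACING_MM, -row * DOT_SPACING_MM))
    return positions

print(dot_positions_mm("c"))  # [(0.0, 0.0), (2.3, 0.0)]
```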

3.1.1 Reading Method

Reading Braille can be performed with one or two hands, scanning from left to right, typically using only the index finger(s), and with regressions to move the finger(s) over previously covered material to help disambiguate content [39]. The use of two hands is known to be superior in speed to the use of one: the movement pattern typically starts with the left hand scanning a line, is taken over by the right hand, with the left hand then simultaneously looking for the next line [40], [41]. There are two possible interpretations of the hand movements: (1) the left hand only finds the beginning of the new line, but does not read it, while the right finishes reading a line [41], or (2) both hands read text simultaneously [40]. However, Bertelson et al. [40] showed that reading was faster the less time the two hands spent on the page together, suggesting the first interpretation. This distinction is important as there may be cheap one-handed refreshable Braille solutions that include an alternate method to find a new line but are unable to substitute for reading with two hands simultaneously.

For touch in general, active touch (i.e., the purposeful exploration of the stimulus) is regarded as superior to passive touch (where the stimulus is either statically applied or scanned across the hand). Heller found, at least for naïve sighted participants, that active touch for Braille reading was significantly superior to passive movement across the finger, and both were greatly superior to passive static contact [42]. Recent work by Russomanno and his colleagues (in this issue), with individuals who are BVI, found that both the proprioceptive information from the hand moving from character to character and the tactile information from sliding contact between the fingertip and Braille character are important for effective reading.

3.1.2 Neuroimaging Studies

Neuroimaging studies have shown that the neural substrates underlying haptic Braille reading critically differ between individuals who are early blind and those who are sighted. For instance, Sadato et al. [43] found that for early blind subjects proficient in Braille, Braille reading and tactile discrimination tasks (including reading embossed English letters) elicit activation in the medial occipital lobe corresponding to the primary visual cortex (V1), as compared to the rest condition where the subject is relaxed and not conducting any task. However, for sighted subjects, the same region showed a decrease of activation (relative to the rest condition) during the tactile discrimination tasks; no Braille reading task was included, presumably because the sighted subjects did not know Braille.

Further support that the early visual cortex is involved in Braille reading by the early blind comes from work with transcranial magnetic stimulation (TMS) applied to the occipital (visual) cortex while performing reading tasks [44]. As compared to TMS delivered in the air, TMS on the occipital cortex increased the error rate in both Braille identification and embossed Roman letter identification by early blind subjects, but not in embossed Roman letter identification by sighted subjects. Additional support for the role of the primary visual cortex comes from the anecdote of a proficient Braille reader blind from birth: upon sustaining bilateral occipital damage from an ischemic stroke, she was no longer able to read Braille [45].

There have been few imaging studies comparing Braille reading between early and late blind individuals. One informative and relevant study [36] compared cortical activation during a tactile discrimination task. The authors found that the primary visual cortex increased its activation (from rest) during the task for subjects who lost their sight before 16 years of age, but decreased its activation (from rest) for those who lost their sight after 16 years of age. This suggests that the massive plastic change in the primary visual cortex occurs when individuals become totally blind at a young age.

These differences between early blind, late blind and sighted individuals may explain why the early blind outperform the other groups in terms of their efficiency in Braille reading [46]. However, it is still unclear whether the absence of early visual experience alone explains the reorganization of the visual cortex for Braille reading. It has been reported that the representation of the reading fingers in the primary somatosensory cortex is enlarged for the early blind as compared to the other groups (e.g., [47]). By analogy, it is possible that the experience of extensive tactile learning may explain such reorganization in the visual cortex as well [7].

3.2 Tactile Graphics

A variety of different methods are used to create physical tactile graphics, all producing different types of diagrams [48]. The most commonly used methods are: hand-made tactile pictures, embossed diagrams, microcapsule paper and vacuum forming. Hand-made tactile pictures (Fig. 2, right), which use textured cloth and paper, string, wood, metals and other materials, are the method most frequently used with young children due to their effectiveness. Embossed displays can be made by hand or, more typically, using modified Braille embossers. Modified Braille embossers "punch" raised dots into thicker, specialized paper, with a typical resolution of 20 dots per inch. Microcapsule paper is a specially coated paper that expands in areas printed with black ink when exposed to heat; an ink printer is usually used to print the black image on the paper. An effective set of textures can be created for microcapsule paper, although the expansion process may not be uniform in some cases. Vacuum forming is used when more than one copy of a tactile diagram is needed: a built-up master tactile diagram is formed using a variety of materials, and the copies are formed by heating plastic over the master with a special machine. The relief amplitude is not restricted to one level, and textures can be created.
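For reference (our arithmetic, not a figure from [48]), the 20 dots per inch of a modified Braille embosser corresponds to a dot pitch of

```latex
\frac{25.4\,\mathrm{mm/in}}{20\,\mathrm{dots/in}} = 1.27\,\mathrm{mm},
```

which sits close to the roughly 1.0 mm fingertip acuity for non-vibrating stimuli noted in Section 2.2.1, so embossed output is near the limit of what the fingertip can resolve.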

All of these methods can be both expensive and time consuming and, hence, with the increased use of graphics to convey information in society, there is a need for effective refreshable display solutions. However, there are several limitations of physical tactile diagrams which, if understood, may lead to more effective display solutions.

3.2.1 Raised Line Drawings

Raised line drawings are simple line drawings of objects, object scenes or graphs in which the lines are raised up from the background surface (Fig. 2, left). Unfortunately, the identification of even common objects is very poor when using raised line drawings, with an average accuracy between 10-40 percent and response times of up to minutes to explore a diagram, even with both hands [21]. This is likely due to the limited field of view of touch (see Section 2.2), which results in the need for significant serial processing (which is slow and cognitively demanding) and limits top-down processing (which aids in disambiguating lines depicting shape from perspective and occlusion lines).

However, the above results were obtained with no context for the diagrams. Typically, tactile diagrams will come with titles, labels and, possibly, an associated text paragraph. In [49], category information was used to provide context. With this context provided, participants greatly improved their accuracy and speed of picture identification. Unfortunately, there are also likely to be many cases where the context is limited and/or difficult to comprehend without familiarity with the material.

There also seem to be differences in performance between sighted, early blind and late blind individuals when using tactile graphics. Late blind individuals seem to perform the best, with sighted and congenitally blind individuals having similar average performance [50]. Heller attributed these results to two factors: (1) the experience of late blind and sighted individuals with pictures (due to visual exposure) is greater than that of early blind individuals; and (2) the tactile skills of the late and early blind are greater than those of the sighted. This is also consistent with results by Lederman and her colleagues comparing sighted to early blind individuals [51], as well as more recent work by Heller et al. [52]. In addition, the latter work showed that individuals with very low vision performed as well as individuals who are late blind.

3.2.2 Textured Diagrams

Although raised line drawings have almost exclusively been used in studying the perception of tactile graphics, in practice hand-made drawings using textures, string and other materials (Fig. 2, right) are strongly preferred due to their greater effectiveness. Objective comparisons between textured and raised line diagrams are considered in Section 4.1 for hand-made [53] and mechanical display methods [54].

3.2.3 Increasing Effectiveness

Several researchers have considered different ways to potentially improve performance with raised line diagrams. Recently, Wijntjes et al. [55] found that recognition of raised line drawings was modestly improved when picture size was increased. This is consistent with practices by teachers of the blind and visually impaired (TVIs), in which areas with small details are scaled up and presented on a separate sheet of paper to improve recognition. However, TVIs usually do this only when the details are small, whereas the previous study suggests that increasing picture size improves performance even when details are already perceptible. Simplification is also an important process TVIs use to make a diagram more accessible. Simplification usually refers to removing "clutter", that is to say, unneeded details. Another important aspect is deciding whether shapes can be simplified in the diagram for easier interpretation.

3.2.4 Perspective

TVIs who develop tactile diagrams for students will normally remove perspective from a picture to help with comprehension. This removes one difficulty in trying to interpret whether lines on a drawing are depicting object shape or some other aspect of the diagram. Frequently, individuals who are BVI will describe objects in 3D (rather than 2D), including drawing a "folded out" version of the object depicted [48]. However, spatial relationships described by perspective are an important part of scholastic testing and, therefore, their effective representation is a significant concern for TVIs and their charges.

Perspective in 2D diagrams is currently very difficult for individuals who are BVI to understand, which puts them at a disadvantage relative to their sighted cohort. However, work by Heller et al. [56] suggests that visual experience, as argued by others, is not necessary for understanding perspective, while exposure to the rules of perspective is. This suggests that, with a notation for indicating perspective that is less easily confused with other aspects of the diagram, and with training, individuals who are BVI may be able to improve their comprehension of 2D pictures.

3.2.5 Neuroimaging Studies

Several neuroimaging studies have revealed that the haptic recognition of 3D common objects involves a distributed network of brain regions beyond the somatosensory cortices in both blind and sighted individuals (e.g., [57]). Activation of regions in and around the primary visual cortex is also frequently observed in individuals who are blind [57]. In contrast, to the best of our knowledge, no study has explicitly reported the brain network of blind individuals involved in the haptic recognition of 2D raised-line drawings (as in Fig. 2). Apart from the difficulty of tactually perceiving 2D graphics, we expect that the haptic processing of 2D and 3D objects involves the same cognitive components, such as representing spatial properties (e.g., shape and orientation) in spatial reference frames (see Section 3.3) and retrieving the associated information from memory (e.g., the name) to identify the object. Therefore, it is reasonable to predict that the haptic processing of spatial properties of 2D and 3D objects shares common brain networks. It is still unclear how the involvement of the primary visual cortex contributes to the performance of haptic object recognition by individuals who are blind.

3.3 Haptic Locomotor Space Perception

Spatial perception can refer to a wide range of perceptual activities (e.g., [58]). Here we will focus on activities related to the important application areas of orientation and mobility, such as those described in the review by Long and Giudice [59]. Although audition plays important roles in the spatial perception of noise-emitting objects out of reach and in echolocation, it is beyond the scope of this paper; the focus here will be on haptics only.

3.3.1 Frames of Reference

There are two general frames of reference considered in spatial perception. The first is an egocentric frame of reference, where objects are referred to with respect to the location of the self. The second is an allocentric frame of reference, where the reference frame is external and references objects directly to one another [59]. In egocentric frameworks, route descriptions are used, which have a sequential organization and in which relative spatial information is conveyed by relationships such as "to the left" or "to the right". In contrast, in allocentric frameworks, survey descriptions are used, which are often hierarchical and in which objects are related in terms of "north", "south", "east" or "west" [60].
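The contrast between the two description styles can be made concrete in a few lines of code. The sketch below (our illustration, not from [59] or [60]) integrates an egocentric route description (turns and distances) into allocentric, compass-frame waypoints on a unit grid:

```python
HEADINGS = ["north", "east", "south", "west"]  # clockwise order
STEP = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

def route_to_survey(route, start_heading="north"):
    """Convert a route description, a list of (turn, distance) pairs with
    turn in {'left', 'right', 'straight'}, into allocentric waypoints."""
    heading = HEADINGS.index(start_heading)
    x, y = 0, 0
    waypoints = [(x, y)]
    for turn, distance in route:
        if turn == "left":
            heading = (heading - 1) % 4
        elif turn == "right":
            heading = (heading + 1) % 4
        dx, dy = STEP[HEADINGS[heading]]
        x, y = x + dx * distance, y + dy * distance
        waypoints.append((x, y))
    return waypoints

# "Go 2 blocks, turn right, go 1 block" ends 2 north and 1 east of the start.
print(route_to_survey([("straight", 2), ("right", 1)]))  # [(0, 0), (0, 2), (1, 2)]
```

Note that the egocentric description alone never mentions compass directions; the allocentric waypoints only become available once the turns are integrated against a known starting heading.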

In sighted individuals, tactile stimuli in the environment are perceived in an egocentric frame and are usually then remapped into an allocentric frame through the modulation of visual inputs. As this mapping also appears to be performed to a comparable degree by individuals who are late blind, the requirement seems to be one of previous exposure to vision rather than currently possessing vision. Individuals who are congenitally blind do not have this exposure and are more likely to use proximal, egocentric information (e.g., [61]). Consequently, they are likely to represent information about a path in route-like rather than survey-like form [62].

3.3.2 Performance

The ability to maintain a stable body reference in egocentric tasks benefits performance in small spaces where objects are within reach (i.e., manual tasks) [63]. For locomotor space, when examining simple, primarily egocentric tasks, such as simple distance and angle estimation, retracing a route, angle completion, finding the closest point of interest (POI) to the home position, and pointing from the home position to a POI in a relatively small and open environment (i.e., a room or a parking lot), similar performance seems to occur for early blind, late blind and blindfolded sighted participants [64], [63].

When considering more allocentric tasks, more varied results have been obtained, although in general individuals who were early blind appeared to have more difficulty than those who were late blind or sighted and blindfolded. Individuals who were late blind actually appeared to do better than the two other groups on a task requiring them to imagine the closest POI to a non-home position, although those who were sighted were much faster than the others [63]. For imagined pointing to a POI from a given location, the condition of blindness did not have an effect on performance [64], [65]. However, the performance of participants who were sighted or late blind greatly improved if they physically walked to the pointing location [65]. Individuals who were early blind also performed more poorly and took longer on tasks involving new route formation or estimating straight-line distances [66]. Finally, although the mental spatial relationships of individuals who were congenitally blind appeared to maintain the metric structure of the real world, the representation was poorer than that of sighted individuals [67].

3.3.3 Cognitive Maps

Cognitive maps are mental representations of a spatial layout in a person's head, which may include distortions, holes and other exaggerations of the real world [59]. Although individuals who are congenitally blind generally use an egocentric representation when walking a path, it appears that using a tactile map can facilitate their development of an allocentric cognitive map (Fig. 3, left). This is supported by the results in [68], which suggest that there is no decrease in the metric structure of mental spatial relationships derived from tactile maps by individuals who are congenitally blind. This is in contrast to the results mentioned in Section 3.3.2 regarding mental spatial relationships derived through locomotion. Using a tactile map to form an allocentric representation also appears to benefit from providing instructions to use an allocentric frame of reference and a physical reference boundary at the edges of the map [69].

For individuals with low vision, the use of non-geometric pathway information (Fig. 3, right), such as doors, signs and lighting, can also improve performance to a degree equivalent to using a map [70].

Fig. 3. Left: Example tactile map (with geometric info only). Right: Non-geometric information shown for the current location on the pathway being explored during virtual exploration (from [70]).

3.3.4 Neuroimaging Studies

A few neuroimaging studies have examined the brain network that underlies haptic locomotor space perception [71], [72]. These studies highlight the importance of the hippocampus and the parahippocampal gyrus in spatial navigation. For instance, in [71], congenitally blind subjects performed a virtual navigation task using a feedback device that translated a visual image of the current, local segment of a route map into electrotactile array stimulation applied to the tongue. The brain regions that are critical for visual space navigation (i.e., the intraparietal sulcus and parahippocampal gyrus) were activated by the recognition of previously learned routes (relative to "scrambled" routes) in the individuals who were blind. When the same task was performed under full vision by individuals who were sighted, the activation pattern strongly resembled that obtained with the individuals who were blind using the feedback device. This result suggests that the same cortical network underlies spatial navigation tasks in blind and sighted subjects.

Moreover, in [72], blind individuals (both early and late blind) and blindfolded sighted individuals performed several locomotor spatial tasks before undergoing MRI scans. These tasks were: retracing a learned maze route; pointing from a given point back to the starting point and to the previous pointing position; and determining which of five small-scale models corresponds to a freely explored spatial layout. Both groups of blind individuals performed better than the sighted individuals in the maze retracing task and the model matching task, and comparably to sighted individuals in the pointing tasks. From the MRI scans, the researchers also found that the volume of the hippocampus was greater for individuals who were blind than for individuals who were sighted, and the volume was found to be correlated with performance. However, it should also be noted that the results for the maze retracing task and the model matching task are not what we would expect (see Section 3.3.2) for egocentric and allocentric tasks, respectively. It is possible that these differences from previous results, and the larger hippocampus volume of individuals who were blind compared to sighted individuals, may be due to significant differences in experience in haptic locomotor space perception.

3.4 Affective Touch (Tactile Pleasantness)

When we touch objects, we not only perceive and discriminate their physical properties, such as roughness (discriminative touch), but also experience associated affective sensations such as pleasantness and unpleasantness (affective touch). The affective touch experiences produced by an assistive technology product may have an impact on the product's acceptance. For example, some surfaces, such as 3D printed plastics, are perceived as unpleasant by individuals who are BVI, and potential users are more reluctant to explore objects built with this material. Designing surfaces that increase the experience of pleasantness during contact could also render users more comfortable when they interact with products. In addition, the dimension of pleasantness/unpleasantness could be used as a display parameter. This may be particularly beneficial for indicating safety warnings to a user, due to the likely correlation between danger and the emotion of unpleasantness. The expected correlation could potentially decrease the cognitive processing time of the user's response, which could be essential in real-time situations. It is also potentially useful to control the degree of unpleasantness to indicate different danger/alert levels.

While affective touch has attracted considerable interest among scientists, most of the research to date has been limited to sighted individuals [73]; we review that work here. Further research with individuals who are BVI is important to ascertain any similarities or differences with sighted individuals.

3.4.1 Object Properties that Produce Pleasantness and Unpleasantness

One of the fundamental questions regarding affective touch is how it relates to discriminative touch. This is useful for the development of assistive technology in determining how levels of unpleasantness can be created and whether unpleasantness can be treated as a dimension independent of discriminative dimensions.

The relationship between discriminative and affective touch has been investigated for material properties such as temperature [74] and roughness (e.g., [75]). These findings suggest that the similarity of behavior between discriminative and affective touch depends on the object property of interest. For temperature, although the perceived magnitude of the environmental/object temperature (discriminative touch) is independent of the participant's body temperature, perceived pleasantness (affective touch) differs depending on the body temperature of the participant [74]. In contrast, perceived roughness and pleasantness/unpleasantness are highly related, except in their relationship to scanning speed [75].



In general, discriminative touch focuses mainly on stable external objects and their various properties, whereas affective touch relates to how physical contact affects the internal state of the individual's body [76]. We expect an individual will use a product more often if it is made with materials that the user is willing to explore haptically, and if the product produces combinations of pleasant and unpleasant stimuli that help the user use it effectively.

3.4.2 The Effect of Body Site

Characteristics of affective touch can differ between the human hand and other body sites involving hairy skin. This is an important consideration in assistive technology design, as sites other than the hands are often used. One significant design consideration is that, unlike glabrous skin, hairy skin uniquely contains unmyelinated afferents (CT afferents) that respond to very low indentation forces and slow velocities, such as gentle stroking with a soft brush [77], [78]. These CT fibers are hypothesized to represent the neurobiological substrate for the affective and rewarding properties of touch [73].

A second issue is related to skin structure and mechanics. The outermost layer of the epidermis is the stratum corneum, which consists of hard keratin that maintains homeostasis within the skin. The stratum corneum is substantially thicker on the hand than on the forearm, presumably serving to protect the hand from injury during the frequent object contact of tactual perception and manipulation [79]. These differences may affect the perception of unpleasantness at different body sites. The neural and anatomical differences could be critical points to consider in designs for assistive technology, depending on the body site to be used.

4 EXAMPLES OF APPLICATION OF DESIGN INSIGHT FROM BEHAVIORAL RESEARCH

In this section, we show the utility of applying results from behavioral research to the design of assistive technology. We describe two examples where results from behavioral theory were explicitly applied to assistive technology design and validated. Further potential design insights and their application will be considered in the Discussion section.

4.1 Material versus Geometric Information Processing

Lederman, Klatzky and their colleagues have argued for the importance of local material properties (roughness, compliance, slipperiness, thermal conductivity) and coarse 3D global shape in identifying objects haptically. These properties can be acquired quickly and efficiently with the hands using the appropriate haptic exploratory procedures [80]. Abrupt 3D discontinuities also appear to play an important role in quickly identifying objects [81]. In contrast, obtaining exact shape information requires contour following, which is a slow and memory-intensive process. This suggests that, if a material property is diagnostic of an object, it may be advantageous to portray it in a haptic display for fast object recognition.

However, at present, it is difficult to convey realistic materials accurately in haptic displays. An alternative, particularly for 2D graphical displays, is to use dimensions of material properties to encode information to help overcome the difficulties of interpreting traditional raised line representations (Fig. 4, left). This possibility was examined by two research groups: one using hand-made paper displays [53] and the other using a haptic display [54].

Both groups proposed using different "textures" to distinguish object parts from each other, to alleviate confusion as to which lines belong to each part. The interpretation of a line can be difficult as some lines are created for perspective or have been occluded by another part. Methods were also proposed to indicate the basic orientation of parts, another difficulty with interpreting raised line representations. In both cases, the ability to identify common objects in 2D diagrams significantly improved with the texturized diagrams as compared to raised line diagrams.

Another important aspect of material properties was determined by Lederman and Klatzky through the use of a search task performed across one or more fingers [83]. In the task, the participant was required to determine which finger had a target (e.g., a rough surface) amongst distracters (e.g., smooth surfaces) applied to the other fingers. They found that material properties and abrupt discontinuities (e.g., edge versus no edge) could be processed in parallel across fingers at the same time. In contrast, spatially encoded dimensions such as relative orientation, which are needed to interpret detailed shape, have to be processed sequentially by focusing on one finger at a time.
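The behavioral signature of this distinction is the slope of response time against the number of fingers stimulated. The toy model below (our illustration with hypothetical timing constants, not data from [83]) captures the expected pattern:

```python
BASE_RT_MS = 400     # hypothetical baseline response time
PER_FINGER_MS = 150  # hypothetical added cost per finger for serial processing

def predicted_rt(n_fingers, parallel):
    """Predicted response time (ms) for a haptic search across n fingers."""
    if parallel:  # material/edge targets: all fingers processed at once
        return BASE_RT_MS
    # Spatial targets: fingers examined one at a time.
    return BASE_RT_MS + PER_FINGER_MS * n_fingers

for n in (1, 3, 5):
    print(n, predicted_rt(n, parallel=True), predicted_rt(n, parallel=False))
# 1 400 550
# 3 400 850
# 5 400 1150
```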

The question arises as to whether parallel processing of textures can aid in a non-search task requiring the integration of information, such as identifying objects in a 2D diagram. The results from [54] indicate that increasing the number of fingers used to explore textured diagrams (Fig. 4, right) significantly increases the ability of individuals who are BVI to identify objects in 2D diagrams and decreases the response time. In contrast, increasing the number of fingers used to explore raised line diagrams (Fig. 4, left) did not improve the user's ability to identify objects or decrease the response time.

Fig. 4. Examples of raised line drawings (on left) versus textured drawings (on right) used in [82]. On the right, different colors indicate different temporal frequencies.

4.2 Distributing Cognitive Load

The bandwidths of both touch and audition are significantly smaller than that of the visual system they are intended to replace in assistive technology for individuals who are BVI. An important question is whether touch and audition can be used together to provide more information than either system alone.

The motivation for multi-modal presentations to convey information is that separate working memory processing resources exist for visual, verbal, spatial, tactile, kinesthetic, tonal and olfactory information [84]. This model suggests that, to avoid overloading one modality, information could be split across modalities to increase the throughput of information and improve performance. However, this does not occur completely in parallel: although overall information capacity increases, individual modal capacity tends to decline during multimodal multitasking [84].

Fig. 4. Examples of raised line drawings (left) versus textured drawings (right) used in [82]. On the right, different colors indicate different temporal frequencies.



Recent work [85] examined the effect of dividing information between the haptic and audio modalities on an audio-tactile graphics display to improve performance (compared to using either modality alone). Individuals who were BVI had to relate two types of diagram properties to each other. In one type of diagram (a geographic diagram), participants had to identify the states of a country (property 1) where fruit is grown (property 2). In the second type of diagram (a building map), participants answered questions relating the layout of train tracks on the first floor of a station (property 1) to the layout of stores on the second floor (property 2). Performance improved significantly when the two types of diagram properties were represented in separate modalities rather than the same modality. Users also found the bimodal method easier to use.

However, the benefit of separating information between the audio and tactile modalities is likely to depend on whether the data is along the same dimension or different dimensions, and how the data needs to be cognitively processed. In addition, cross-modal interactions are known to occur in perception that may affect the strategy of using two different modalities to convey information at the same time (e.g., cross-modal attention, [86]). One caveat in considering the body of research on cross-modal interactions is that recent work suggests that audio-tactile cross-modal interactions may be reduced in individuals who are either early or late blind as compared to those who are sighted.

5 AREAS OF HAPTIC ASSISTIVE TECHNOLOGY

A variety of different types of tactile/haptic feedback have been used either alone or combined with audio feedback in several different areas of assistive technology for individuals who are BVI. In this article, we will focus primarily on systems using only tactile/haptic feedback. The application areas we will consider here, in turn, will be refreshable Braille, refreshable tactile graphics, orientation and mobility.

For both Braille and tactile graphics, pin-type displays are most commonly considered. These displays can be based on a variety of technologies including piezoelectrics, solenoids and motors, smart materials (e.g., shape memory alloy), electrorheological fluids, and pneumatic and thermopneumatic actuators; see [88] for a review. Systems that sense the position of the moving hand and provide vibrational feedback to the fingers are also increasingly being considered. Force feedback devices have been considered for alternate "visualization" techniques, typically for presenting 3D shape, the physics of interaction in terms of force/motion relationships, and/or data; one such application for education is given in Murphy and Darrah (in this issue).

Haptic devices for orientation and mobility have typically used low cost vibrational feedback [89], through single or multiple eccentric rotating mass motors or linear resonant actuators mounted on various areas of the body.

5.1 Braille

It may be tempting, given the small proportion of individuals who are BVI who can read Braille (only 10 percent [8]) and the prevalence of text-to-speech technology, to overlook the importance of providing effective displays for Braille reading. However, it is interesting to note that 90 percent of individuals who are blind and employed read Braille [90]. Two possibly important factors are that Braille reading: (a) is an active process, which improves the retention of information (see Russomanno et al., this issue), and (b) allows individuals to hear (or overhear) colleagues and superiors, enabling them to actively participate in their work environment.

5.1.1 Technology

Commercial refreshable Braille displays use piezoelectric technology that has been in use for many years. The limitation of this technology is its expense, which restricts the size of the display that can be provided in an affordable device. Typical displays are 40 to 80 Braille cells long in a single line (Fig. 5, top left).

It has been argued that the "Holy Grail" of Braille is to provide full-page text to allow navigation through the information using techniques similar to vision, such as feeling for the beginnings of paragraphs. Although screen readers (which convert text on the computer screen to speech) also provide some comparable strategies if content is properly annotated (such as progressing through subsections of a text article), they rely on the writer to properly format the accessibility features, and reading is unavoidably serial in nature.

Braille printers that can provide full-page Braille do exist, but they are time consuming and expensive to use. The resulting product is also very bulky and wears with time. Some of these printers are also able to print graphical content, but still have the same limitations as those that can only print Braille.

Fig. 5. Top left: typical commercial Braille display (from [96]). Top right: microbubble actuator (from [92]). Bottom left: laterotactile display with pantograph device (used in Levesque et al. [94]). Bottom right: Braille cell with graphics tablet (used in Headley et al. [97]).


More recently, researchers have considered alternative technologies that could plausibly be efficiently produced for a full-page display, such as electroactive polymers (e.g., [91]) or pneumatic and thermopneumatic actuators (e.g., Fig. 5, top right, [92]). Another important aspect for production is the required drive electronics. Existing line displays set the Braille pins sequentially, and the pins then remain static while the user reads them. This makes the electronics simple and efficient, but too slow for a full page of Braille. Several groups have developed more effective electronics to address this issue (e.g., [93]).

In contrast to most displays, which consider movement in the normal direction, Levesque et al. [94] created a novel method of producing the sensation of brushing over a Braille dot by lateral skin deformation using deflecting piezoelectric benders. This display was used in combination with a low-friction slider to produce Braille cell rendering (Fig. 5, bottom left). Unfortunately, character recognition was poor, with an average of 57 percent correct, but it is possible that a better understanding of the mechanics of Braille reading could lead to improved algorithms.

5.1.2 “Virtual” Braille Displays

An alternative to developing technology to decrease the cost of Braille cells is to design displays that retain the effectiveness of larger displays while using fewer cells. To achieve this, behavioral studies of Braille reading are important to understand (Section 3.1), although alternate methods of reading may prove effective as well. One idea, developed by a team at the United States National Institute of Standards and Technology, was to produce a low cost line display using a drum to continuously slide Braille cells across the finger in a circular queue. The number of cells needed for their drum was far less than for a 40-80 cell line display [95]. This method allowed passive sliding across the finger, which is important for Braille interpretation, but not active exploration, which is also important (Section 3.1).

Other designs have focused on providing the ability to actively explore a virtual page with a single (or small number of) Braille cells, but without the Braille display physically sliding across the finger (Fig. 5, bottom right). Perhaps the earliest version of this type of design was the Optacon [98], although it differs from many of the more current devices in that it used one hand to provide the kinesthetic input and the other hand for the display. In particular, an optical device was used in the first hand to explore a physical page to read print text, while a multi-pin vibrotactile display in the other hand provided the corresponding tactile feedback. Some users, after training, were able to use the Optacon very effectively; however, many users had difficulty. This was most likely due to the pins being vibrated at 250 Hz, a temporal frequency that has poorer spatial resolution compared to lower frequencies [15]. The separation of the kinesthetic from the tactile information between the two hands also potentially increased the difficulty.

More recent devices ([97], [94]) avoid the problems of the Optacon by not using high frequency vibrations and by providing feedback to the exploring hand itself. However, there is still a potential problem with these devices, since there is no relative motion between the pins and the finger pad as occurs when reading Braille on paper. The typical method of rendering the information, which is based on the current pixel value from the "picture" to be rendered, is problematic for presenting Braille: the contact of the pins with the finger pad is very brief and, therefore, very difficult to interpret. In [94], extending the width of the dots and providing other enhancements still produced poor performance. However, in [97], a spatial window on the virtual "picture", used to maintain a static character on the Braille display, produced very good performance in reading letters (97 percent correct). It is not yet known how performance using static characters on the Braille display changes reading rate compared to the traditional method of sliding a finger across a Braille cell.
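To make the contrast between the two rendering approaches concrete, the following minimal sketch is offered for illustration only; it is not code from [94] or [97], and the cell width, the column data layout, and the finger-tracking input are all assumptions.

```python
# A minimal sketch (illustrative only, not the implementations of [94] or
# [97]). Assumed: a tracked finger position in mm, dot patterns stored as
# tuples, and a fixed horizontal extent per virtual Braille cell.

CELL_WIDTH_MM = 6.0  # assumed width allotted to one virtual Braille cell

def render_per_pixel(pin_columns, finger_x_mm, mm_per_column):
    """Pixel-value rendering: pins reflect only the column currently under
    the finger, so contact with any one dot is very brief."""
    col = int(finger_x_mm / mm_per_column)
    col = max(0, min(col, len(pin_columns) - 1))
    return pin_columns[col]

def render_spatial_window(cells, finger_x_mm):
    """Spatial-window rendering: while the finger stays within one cell's
    extent, the complete dot pattern of that character is held static."""
    index = int(finger_x_mm // CELL_WIDTH_MM)
    index = max(0, min(index, len(cells) - 1))
    return cells[index]  # e.g., a tuple of six dot states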

5.2 Tactile Graphics/Visualization

Traditionally, tactile graphics have been made by hand using material scraps, using thermoform and swell paper techniques, and on Braille printers that provide higher resolution. One technical modification to these diagrams is to provide additional speech information through the use of, for example, a graphics tablet [99] or QR codes [100]. Others have considered the use of physical, tactile overlays to aid with extracting information from a touch screen [101]. Two important advantages of displays that are entirely refreshable are that: (1) they do not require the physical creation of a graphic, which can be slow and expensive, and (2) it is easy to provide dynamic diagram manipulation software to facilitate haptic diagram interpretation, which is typically a slow and difficult process. Two issues that we will consider in this section are software algorithms for zooming and simplification, both of which have been considered important for diagram interpretation (see Section 3.2).

5.2.1 Tactile Graphics Displays

Although it is possible to produce a refreshable tactile graphics display by combining a large number of higher density "Braille cells", the cost is prohibitive for most individuals who are BVI. Most displays for tactile graphics use a similar concept to the small but mobile tactile displays used for Braille. In this issue, Brayda and his colleagues assess the importance of visual experience, gender and emotion in the assessment of an assistive tactile mouse. Systems with a single contact point for each finger have been considered as well. An interesting possibility for a single finger contactor is the use of the built-in vibration of tablet computers based on the location of a finger on the screen (Fig. 6, left; e.g., [102]). Multi-fingered feedback can be provided with external vibrators [54], whether to multiple fingers on the same hand or on both hands. Alternate technology is also available for single finger feedback by modulating the frictional force [103].

Fig. 6. Left: use of a tablet computer's built-in vibration. Right: use of a force feedback device (a Novint Falcon).



Two important aspects in the design of these mobile display systems are accurate position measurement within the virtual display and a fast response time of the tactile pins/vibrator(s), so that the spatial information of the picture is rendered accurately. It should be noted that the position sensing technology of regular mice (often considered or used) is not sufficiently accurate for tactile diagrams for individuals who are BVI [104], although it may appear adequate with vision.

5.2.2 Tactile Rendering

There has been considerable work on tactile rendering in general, through pin-type displays and vibrators, in the haptics community. Here, we will specifically focus on the work of those who have considered tactile graphics for individuals who are BVI. Rendering has typically involved raising and then lowering pins for edges (for pin displays), as well as different types of vibrations, spatially created textures, and/or temporal or spatial modulation of these signals for points, lines or areas (e.g., [54], [105]). Variations in the height of pins [106] or other mechanisms [107] have also been considered to convey information.
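As an illustration of the kind of rendering rule just described, the following hedged sketch assigns a distinct vibration to points, lines and areas; it is in the spirit of, but not taken from, [54] or [105], and the element lookup and all parameter values are assumptions.

```python
import math

# Hypothetical (frequency Hz, amplitude) pairs per element class; these
# values are illustrative assumptions, not parameters from the cited work.
ELEMENT_PARAMS = {
    "point": (250.0, 1.0),  # brief, salient buzz
    "line":  (100.0, 0.8),  # mid-frequency carrier
    "area":  (40.0,  0.5),  # low "texture" distinguishing filled regions
}

def drive_signal(element_kind, t_s):
    """Vibrator drive value at time t_s for the element class under the
    finger; silence when over empty background."""
    if element_kind is None:
        return 0.0
    freq_hz, amp = ELEMENT_PARAMS[element_kind]
    return amp * math.sin(2.0 * math.pi * freq_hz * t_s)
```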

One of the difficulties with small moving displays is that the spatial information is not fine enough to quickly track lines. Rather than smoothly following a line as with a physically raised line diagram, individuals typically need to sweep back and forth across the line to maintain contact while tracking. One alternative is to provide area rather than line information when possible (such as for an object's part), as an area does not "disappear" as easily as a line. Another possibility, proposed by Pietrzak and his colleagues, is to use pin array tactons to indicate the direction the user should move to follow a line [108]. Their group also proposed that pin array tactons could be used to represent whole diagram elements, such as components of an electric circuit (battery, resistor, capacitor, junction, wire, etc.).
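The guidance idea can be sketched as follows; this is only an illustrative quantization of direction, not the tacton designs of [108], and the function names and the eight-direction choice are assumptions.

```python
import math

def guidance_direction(finger_xy, nearest_line_xy, n_directions=8):
    """Quantize the direction from the finger to the nearest point on the
    line into one of n_directions sectors (0 = +x, counterclockwise), each
    of which would be mapped to a directional tacton."""
    dx = nearest_line_xy[0] - finger_xy[0]
    dy = nearest_line_xy[1] - finger_xy[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    sector = 360.0 / n_directions
    return int((angle + sector / 2.0) // sector) % n_directions
```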

5.2.3 Zooming

As the spatial acuity for touch is much poorer than for vision, magnification of parts of a diagram is often needed if details are to be interpreted effectively. Even if details are perceptible in the original diagram, performance is significantly better with larger pictures [55]. For physical diagrams, TVIs will decide which parts of the diagram need magnification ahead of time and provide a separate diagram for them. Providing zooming for refreshable displays makes this more flexible and under the control of the user.

Several research groups have applied visual techniques, such as smooth zooming (albeit with force feedback detents) [109] and linear or logarithmic step zooming [110], [111]. As the appropriateness of visual zoom levels is usually judged by the user's inspection, which can be very tedious with touch (due to the slowness of exploring diagrams), an alternate technique has been proposed based on object (and sub-object) hierarchies of a diagram that are traversed with each zooming level [112]. A comparison of this new method to linear and logarithmic step zooming for answering questions about diagrams by individuals who are BVI found that both the correctness of the answers and the method's usability were significantly better for the new technique.
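For concreteness, the two step-zooming rules can be sketched as follows; the step increments and ratios are illustrative assumptions, not values from [110] or [111], and the hierarchy-based method of [112] would instead index into precomputed object levels.

```python
def linear_zoom(step, delta=0.5):
    """Magnification after `step` zoom-ins, growing by a fixed increment
    (assumed delta); levels are evenly spaced."""
    return 1.0 + delta * step

def logarithmic_zoom(step, ratio=1.5):
    """Magnification growing by a fixed ratio per step (assumed ratio);
    levels are logarithmically spaced."""
    return ratio ** step
```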

5.2.4 Dynamic Simplification

In physically created tactile diagrams, simplification by removing content is a critical part of the process in making a diagram manageable to read by touch. In addition, complex shapes may be replaced by simple polygons if the details are not needed. For physical diagrams, TVIs typically remove all content and details not relevant to a child's lesson plan; however, this requires knowing what will be relevant ahead of time and greatly limits incidental learning. For working adults, what is relevant is not usually known a priori.

Refreshable displays provide an opportunity to allow users to select for themselves what content they would like simplified. Two cases where improved performance was found were: maps with multiple sets of features (e.g., states of a country, topography, weather, industries, etc.) and maps of countries that included state and country boundaries [113]. In the first case, participants who were BVI were asked questions relating features within two feature sets (e.g., "Identify the state with the largest number of coal fields"). They were then allowed to select which feature sets they wanted visible on the refreshable display when determining their answer. In the second case, participants were asked to identify the shape of the country and the number of states in it. They could select between using the original diagram or a diagram made from polygon simplification. In both cases, the number of correct answers increased significantly.

In contrast, no difference in the number of correct answers was found in another task where participants who were BVI were allowed to choose between using a complex diagram or toggling between the complex diagram and a simplified diagram [114]. In this study, participants were asked to find a seat location in a concert hall and were given either the complex diagram alone, consisting of section boundaries and seat locations, or both the complex diagram and a simplified diagram containing only section boundaries. The difference in results compared to the previous study may be due to the questions asked, as not all questions are expected to benefit from simplification.
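At its core, the user-controlled simplification studied in [113] amounts to filtering the rendered elements by user-selected feature sets, as in this minimal sketch; the data layout and the set names are assumptions for illustration.

```python
def visible_elements(elements, enabled_sets):
    """elements: iterable of (feature_set, geometry) pairs; keep only the
    geometry belonging to feature sets the user has toggled on."""
    return [geom for feature_set, geom in elements
            if feature_set in enabled_sets]

# e.g., relating two feature sets while hiding all others (names assumed):
# visible_elements(map_elements, {"state_boundaries", "coal_fields"})
```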

5.2.5 Force Feedback Display Methods

Many research groups developing assistive technology to aid with "visualization" by individuals who are BVI have considered using force feedback devices (Fig. 6, right), sometimes including audio feedback as well (for a review non-specific to individuals who are BVI, see [115]). Some research groups have used haptic rendering techniques similar to those used for sighted individuals in virtual reality environments (e.g., [116]). Unfortunately, without vision, the haptic feedback becomes very difficult to use to determine 3D shape.

Other groups have primarily focused on providing methods to explore graphs using virtual fixtures to reduce the difficulty of finding a line and following it. For example, several groups have created attractive forces that lead a user's hand to a line and then allow the hand free movement along the data line itself (e.g., [117]). Others have mixed this technique, or the similar concept of modeling V-shaped grooves for lines, with audio, speech and vibrational feedback to assist users (e.g., [118]). This method, in conjunction with other concepts, seems to improve performance [118], but further experiments are needed to tease out which factors contribute most.

5.3 Orientation

For the purposes of this paper, we define orientation as: "The knowledge of one's distance and direction relative to things observed or remembered in one's surroundings and the ability to keep track of these spatial relationships as they change during locomotion" [119]. A haptic orientation aid is a device that provides kinesthetic and/or tactile feedback that augments or replaces orientation-related information typically provided through vision. As described previously in Section 3.3.1, orientation can be in terms of an egocentric framework (with respect to oneself) or an allocentric framework (external to oneself). Giudice and Legge [120] provide a review of some technology developed for orientation, as well as mobility, although not necessarily using haptic/tactile feedback exclusively.

5.3.1 Waypoint Navigation

Outdoor waypoint navigation capabilities, enabled by ubiquitous access to Global Positioning System (GPS) signals, have led to a number of developments in mobility support technologies for individuals who are BVI. GPS does not work indoors, but many research groups and companies (large and small) are working on indoor localization technologies for mainstream commercial applications, as well as for aids for individuals who are BVI. Auditory feedback, in the form of speech, is the most common mechanism for providing directions from point to point. However, it does have the disadvantage of potentially obscuring other important sounds. Tactile feedback accesses an otherwise underutilized sensory channel, and thus its use can potentially reduce cognitive load [121].

A straightforward method of providing tactile information for waypoint navigation is the marriage of a GPS receiver with a Braille display/notetaker [122]. These systems allow route planning using stored map data and provide on-the-go directions via Braille. Disadvantages of using a Braille display as a navigation aid include the fact that it occupies at least one of the user's hands during operation and that not all individuals who are BVI can read Braille. These issues, in part, have generated interest in hands-free options (which have considered using the torso, back, tongue, foot and wrist), although some methods being considered continue to use displays for the fingers and hands.

One simple approach to providing direction via non-Braille touch is to generate a vibration somewhere on the body when the user is within a certain tolerance of the correct heading [123]. A slightly more complicated approach is to alternately apply tactile feedback at two body locations to indicate a right or left turn, for example, using actuators on the shoulders [124] or wrists [125]. The desire for more precise information has led to the development of torso-worn tactile belts. For example, Van Erp successfully demonstrated the use of vibration location to convey direction with an eight-tactor linear array [126].
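Two of these cueing rules can be sketched as follows; this is a minimal illustration in which the tolerance, tactor count, and function names are assumptions rather than the actual parameters of [123] or [126].

```python
def heading_error_deg(heading_deg, target_deg):
    """Signed heading error wrapped into [-180, 180)."""
    return (target_deg - heading_deg + 180.0) % 360.0 - 180.0

def simple_alert(heading_deg, target_deg, tol_deg=10.0):
    """Single-vibrator cue (cf. [123]): vibrate while the user's heading
    is within an assumed tolerance of the correct heading."""
    return abs(heading_error_deg(heading_deg, target_deg)) <= tol_deg

def belt_tactor(bearing_deg, heading_deg, n_tactors=8):
    """Belt cue (cf. [126]): index of the tactor whose location around the
    torso corresponds to the target's bearing relative to the body."""
    relative = (bearing_deg - heading_deg) % 360.0
    sector = 360.0 / n_tactors
    return int((relative + sector / 2.0) // sector) % n_tactors
```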

For torso-worn belts, adding additional feedback positions on the body to spatially indicate the direction to move in can enhance the resolution of directional cues. Research indicates that a 10 degree angular discrimination is achievable [127]. A recent study demonstrated the use of a tactile belt to provide rotational (orientation) cues in addition to directional information [128]. In this issue, Flores and his colleagues compare the use of a vibrotactile belt to audio guidance in a wayfinding task. The authors found the belt provided closer path following, but at the cost of reduced average speed in task completion.

5.3.2 Survey Knowledge

Several researchers have demonstrated the effectiveness of synthetic speech in conveying the spatial layout of rooms, hallways, and other building features (e.g., [129]). The same concept has been applied to GPS-based systems that provide "look around" text-to-speech rendering of information on outdoor points of interest such as street intersections, restaurants, shopping, and entertainment [130]. Such audio-only systems are practical but only partially meet the definition of orientation: they lack the elements of relative distance and direction between objects that are most often conveyed via a map.

Map exploration offers a direct means of acquiring an allocentric cognitive representation of spatial layout and content, which is needed for representing survey knowledge. This type of knowledge (as opposed to route knowledge) appears to be critical for formulating short cuts, detours and novel routes, and for recovery from disorientation [131]. For individuals who are congenitally blind, maps have been shown to be necessary for the acquisition of survey knowledge, which is not naturally acquired by these individuals by walking through a space (see Section 3.3.1).

Physical maps have also been combined with electronics to create a Talking Tactile Tablet [99], in which the maps become touch-sensitive and render audio content associated with the region under exploration.

A few groups have considered the creation of refreshable vibrotactile maps of varying complexity on smartphones and/or tablets. Poppinga et al. [132] considered simply turning a vibration on when passing over roads, combined with speech output naming the road. Klatzky et al. [133] considered slightly more complicated vibratory effects that could be generated with the UHL effects by Immersion Corporation, namely two different vibrating lines and pulsing information points, combined with auditory effects using a pitch/vertical association. An assessment of usage with sighted subjects for simple line graphs found no difference in performance when using audio alone, vibration alone, or both together, although users seemed to prefer the auditory information, whether alone or combined. An ongoing project involving Adams and Pawluk considers a wider variety of vibrotactile elements, combined with sonification and speech, which is being assessed for an indoor navigation task by individuals who are BVI.


5.4 Mobility

We define mobility as "The act or ability to move from one's present position to one's desired position in another part of the environment safely, gracefully, and comfortably" [119]. A haptic mobility aid can thus be defined as a device that provides kinesthetic and/or tactile feedback to aid in ambulation and avoiding obstacles in a proximal context.

5.4.1 The White Cane

The most ubiquitous haptic mobility aid has been the long cane, which most people think of as permitting physical contact with obstacles at an approximately 3 foot range, as well as providing a visual warning to others to give way [13]. However, the white cane can provide much more information. Its effectiveness, ease of use, ease of replacement/repair and low cost make it a benchmark for new assistive technologies. On the most basic level, a cane will be used to systematically sweep the environment to detect objects in front of the user, and can be used as a method of trailing parallel to walls, drop offs, texture changes and seams using physical contact [134].

A cane can also be used to facilitate echolocation of objects beyond the reach of the cane (including to the side of a person or overhead), as well as to enable the user to walk parallel to walls without contacting them. Vibrations to the hand provide essential information about ground surfaces that are ahead of the person, delineating such things as sidewalk, grass or driveway. Deaf-blind individuals are encouraged to use canes that are superior in resonance and tactile feedback for mobility, in place of the use of sound [135].

5.4.2 Obstacle Avoidance Systems

Assistive technology for mobility typically uses sonar, laser or video camera(s) to sense the environment while providing audio or tactile feedback to the user, in combination with kinesthetic feedback, in place of vision. A recent review of several devices is given in [136]. Historically, auditory cues have been the preeminent means of feedback in both orientation and mobility aids, due to the substantial information transfer capacity of the sense of hearing. Unfortunately, the use of sound necessarily masks the subtle auditory cues employed by individuals who are blind [13]. Haptics thus provides an alternate channel through which sensory information can be conveyed.

Haptic aids for obstacle avoidance have existed for at least 50 years. Two of the earliest examples involved using an ultrasonic transducer to detect an object in front of a user [137], [138]. These devices were intended to be used in conjunction with a white cane and/or guide dog. The Travel Path Sounder [137], worn around the neck, used audio and tactile cues to provide warnings and a rough indication of distance through three discrete feedback levels. The Mowat sensor, similar to the currently available Miniguide (see Fig. 7), is a hand-held device providing vibratory feedback with a frequency inversely proportional to the distance to an object [138].
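The Mowat-style inverse mapping from distance to vibration rate can be sketched as follows; the gain and bounds are illustrative assumptions, not the actual specification of either device.

```python
def pulse_rate_hz(distance_m, gain=2.0, min_hz=1.0, max_hz=40.0):
    """Vibration pulse rate inversely proportional to obstacle distance:
    the closer the detected object, the faster the vibration (clamped to
    an assumed usable range)."""
    if distance_m <= 0.0:
        return max_hz
    return max(min_hz, min(max_hz, gain / distance_m))
```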

Several groups have proposed using a technology-enhanced cane to provide information about obstacles. Tactile feedback from these devices consists of a set of 1 to 16 vibrators on the handle that convey information either to general points on the hand or to very specific pads of the fingers (e.g., [139]). The feedback provided by these devices varies from a simple alert to more complex information about the distance and direction of an obstacle. In this issue, Kim and her colleagues investigate a range of spatial and temporal feedback patterns for conveying obstacle distance information using a cane-integrated tactile display. One commercially available enhanced cane is the UltraCane [140]. It uses two ultrasonic transducers: one to detect obstacles in the forward direction, the other to detect overhanging objects that would be missed by a traditional cane. Two vibrotactile buttons in the cane's handle provide feedback on the distance to an upcoming obstacle and whether it is low or high with respect to the user.

Other groups have proposed wearable obstacle avoidance devices that, similar to those for navigation, place vibration displays on a variety of parts of the body (e.g., head, hand, wrist, abdomen, back, waist, etc.). Some of these devices are reminiscent of the tactile visual substitution system of Bach-y-Rita (see Section 2.2.1). Some require that the user sweep the environment with a hand-held sensor to detect obstacles. Others embed their devices into shirts or vests.

Unlike the other haptic mobility aids mentioned thus far, which rely on tactile cues, the GuideCane (Fig. 7, right) generates kinesthetic (force) feedback [141]. The design attaches a cane to a two-wheeled robot equipped with an array of 10 ultrasonic transducers. When the robot detects an obstacle, it exercises internal decision logic to steer itself away from the object (by torqueing the steering assembly). The user feels the change in direction through the movement of the attached cane. Once past the obstacle, the GuideCane reacquires the original course. A notable disadvantage of this sophisticated approach is that the blind user becomes a (passive) follower of the robotic system, potentially impeding spatial learning [120].

Another variation on a haptics-augmented white cane, by Gallo and his colleagues, provides both tactile and kinesthetic feedback [143]. The design employs braking of a spinning inertia mechanism to impart torques that imitate the impact of the cane with a real object. Three vertically aligned vibrating motors in the handle convey distance using the sensory illusion of apparent motion.

Fig. 7. Top left: vibrotactile belt that has been used for either navigation or obstacle avoidance, from [128]. Bottom left: using the Miniguide. Right: GuideCane, from [142].


6 DISCUSSION

The significant difference in the way information is processed in the visual and tactile systems suggests that, for assistive technology, one cannot simply map a visual scene onto the skin to convey the information. One must use a smaller subset of information and look toward different methods to facilitate information transmission through the tactile/haptic systems. User testing is also a difficult task, as there are several characteristics of an individual user, such as medical condition, experience, opinions, preferences and motivation, which can affect performance. Some of these variables, such as motivation, have hardly been studied.

In this discussion we focus on the reciprocal benefit between behavioral studies and the development of assistive technology, issues regarding neuroplasticity and learning, and more specific issues in regard to our four major areas of assistive technology: Braille reading, tactile graphics, orientation and mobility.

6.1 Using Behavioral Information

Results from behavioral research with individuals who are BVI in areas needing accommodation, such as reading, accessing graphics, orientation and mobility, provide a rigorously tested conceptual foundation for developing assistive technology. Although most, if not all, developers of assistive technology targeting the BVI population do use information provided by behavioral research in their designs, this is often in terms of applying specific facts, such as the spatial resolution of touch. However, as our examples in Section 4 attempted to show, translating and validating concepts from behavioral research with BVI subjects, and from general behavioral research, provides a foundation on which to construct design methodologies. Focusing on this foundation would allow future designers to build on current work rather than starting "from scratch".

Also, there are some types of stimuli (e.g., electrotactile) and skin target sites (e.g., trunk and arm) that designers have significant interest in using but that have not been well studied behaviorally. For example, despite the fact that the structure of the skin/subcutaneous tissue and the mechanoreceptors present vary significantly across target sites, little work has examined such basic information as the temporal frequency characteristics at sites other than the fingers. For cylindrical surfaces, such as the trunk or arm, we are also unaware of any work examining what happens when spatial tactile patterns are wrapped around the whole "cylinder" rather than presented only on the "front" or "back"; wrapping would potentially provide more space for presenting information. Finally, despite the practical advantages of creating electrotactile arrays, there has been relatively little work in understanding the phenomena that these devices create.

6.2 Validating Assistive Technology

Currently, most assessments of assistive technology are in terms of user performance and user acceptance for the particular prototype developed. Unfortunately, this does little to advance the field, as most researchers use different testing tasks, which makes it very difficult to compare prototypes from different researchers and draw conclusions. A regime of tests to be used by developers of different technology applications could alleviate this problem. However, these prototypes often have multiple differences, which makes it difficult to extract generalized concepts. More studies are needed that cleanly vary different conceptual variables to provide a foundation for future design efforts.

In addition, both psychophysical and neuroimaging techniques can be used to investigate the strategies utilized with assistive technology, to potentially evaluate their ease of translation. For example, Kupers et al. [71] found that the brain activation patterns of early blind individuals performing a virtual navigation task with a tactile display strongly resembled those of sighted individuals with full vision. This suggests that a similar strategy may be used in both cases.

6.3 What Does Neuroplasticity Mean for Assistive Technology?

One of the most interesting findings from the neuroimaging literature is that the primary visual cortex only seems to be recruited for Braille reading and other tactile discrimination tasks in individuals who lost their sight before age sixteen [36]. This was used as a division between individuals who are "early" and "late" blind. However, the division between "early" and "late" blind used in Section 2.1 was based on visual experience rather than plasticity and occurred at a much earlier age. This suggests that a single early/late blind division should not be used in studies, as multiple changes occur at multiple time points as individuals develop. Rather, participants in experiments should be described in terms of their age of onset of blindness, and the results related to the several developmental changes that occur over the early years.

In addition, because the above result appears to show that neuroplasticity is more pronounced in BVI individuals with an onset of blindness before age sixteen, it is recommended that difficult tasks, including those that use assistive technology, be introduced at a young age when possible. However, this cut-off age may not be as strict as implied in [36], as sighted, presumably adult, Mah-Jong experts also showed activation of the primary visual cortex during the tactile discrimination of Mah-Jong tiles and Braille [7]. Further study is needed to determine what type of tactile training at later ages can lead to usage of the primary visual cortex and how this can benefit performance in sensory substitution tasks by individuals who are BVI.

However, it is also not clear how important a role the primary visual cortex plays in task performance. Although Sadato et al. found that task performance positively correlated with the activity of the primary visual cortex in [36], two subjects who lost their sight after the age of sixteen performed as well as those who lost their sight before that age. These two subjects did not exhibit an increase in activity in the primary visual cortex. These results are positive indicators that achieving high performance is not necessarily restricted when neuroplasticity is more limited, although there are possibly significant differences in the cognitive processes involved.

Conversely, equivalent performance between the "early" and "late" blind does not mean that the same cognitive processes are involved. This means that changes to a task method for which equivalent performance was found, such as introducing assistive technology, could actually affect performance very differently between the two groups. Therefore, it is important to assess performance with any proposed assistive technology with users from both groups, to validate its usage under potentially different cortical processing methods.

Tactile stimulation of the relevant skin area and, possibly, continuous performance of a tactile/haptic task over an individual's lifetime may also be necessary to develop and maintain new cortical connections. For example, the size of the primary somatosensory cortex dedicated to the Braille reading fingers is known to be larger in the early blind than in the sighted [47]. This seems consistent with neurophysiological work in owl monkeys showing an expanded cortical representation for skin areas that were repeatedly stimulated [144]. However, subsequent repeated stimulation of other skin areas would again dynamically change the overall primary somatosensory organization, including possible losses in the aforementioned expanded areas if the previous stimulation is no longer repeated.

Higher areas of the visual cortex appear to be multisensory areas with visual, tactile, auditory and possibly other sensory inputs. In congenitally blind cats, the areas for somatosensory and auditory input seem to expand into the visual areas of multisensory cortical regions in the absence of visual inputs [145]. Auditory processing and linguistic processing, in addition to tactile processing, are known to take over the primary visual cortex [145]. This may add further competitors for space in these cortical areas.

6.4 Should Assistive Technology Be Designed to Encourage and Accommodate Learning?

The observed changes to brain organization suggest that learning and practice are an important part of adapting to and effectively using sensory substitution. In addition, some behavioral differences found between individuals who are early blind and those who are sighted or late blind are suggested to be due to lack of exposure (i.e., lack of opportunity to learn) rather than to inherent limitations of the early blind population. For example, Heller and his colleagues [56] argue that it is exposure to the rules of perspective in pictures, rather than exposure to vision, that is important to understanding them. What is the responsibility of developers of assistive technology to teach these rules rather than provide an alternate method of accessing this information?

One example is that, when exploring locomotor space, both sighted and late blind individuals can use an allocentric framework whereas early blind individuals cannot. Individuals who are early blind can, though, construct allocentric frames of reference when using tactile maps. This suggests that assistive technology providing tactile maps is essential for these individuals when performing allocentric tasks in locomotor space. However, the ability of late blind individuals to use an allocentric framework suggests that it is not the presence of vision that is needed but exposure to vision or, perhaps, only exposure to the rules of allocentric frameworks. This suggests that assistive technology may be successfully developed that uses tactile maps and other components to train individuals who are early blind to construct allocentric frames of reference solely from the environment. This would provide these individuals with greater autonomy when tactile maps are not available.

However, what does this mean for issues such as recovering the development costs of the assistive technology? Potential users may be less motivated to purchase a device if it is only needed for a short period of time, assuming it to then be of little value. Tactile maps will likely still be desirable even after individuals are able to form an allocentric representation without a map, as is true of visual maps for sighted users, but what about other technology? Anecdotal evidence from the Visual Impairment Services Outpatient Rehabilitation program at the Hunter Holmes McGuire VA Medical Center suggests that this may be problematic, for example, for obstacle avoidance technology. The few users who have used electronic obstacle avoidance technology in their program always transitioned later to the use of a white cane. The electronic technology may have been critical in the learning process, but it is unclear whether the users and the rehabilitation staff appreciated its value. This has resulted in the staff being less likely to recommend the technology to potential users.

Whether or not encouraging and accommodating this type of learning is "built in" to assistive technology, it may still occur, with potential consequences for the long term viability of the product. Documenting the advantage of a particular assistive technology to the learning process may circumvent problems with user perception of the product's value.

6.5 Print Information: Should Full Page Braille Be the Goal?

Several groups, such as the National Federation of the Blind, consider the ability to read and write Braille an important precursor of success in education and the workplace. Braille is thought to give the same access as the "printed word" by allowing the user to experience the information spatially and actively control his/her focus. Braille is also considered essential for note-taking and helpful for studying math, spelling and foreign languages [146]. However, current refreshable displays are expensive, and fewer than 10 percent of individuals who are BVI in the United States know Braille [8]. In addition, the number of Braille characters that can fit on a page is significantly more limited than "print", which may reduce the advantage of experiencing the information spatially.

Previously, in Section 5.1, we discussed small pin-type displays that were created to move over a page to provide access to a "full page of Braille". However, whether modifying the presentation method can achieve reading and error rates comparable to reading a full page of physical Braille is not yet known. It is also worthwhile to consider another possibility, namely, applying what is known to be effective about Braille reading to electronic speech generation.

Currently, electronic speech is generated in a serial stream with no spatial component and very few active components. The latter may be limited to adjusting the speed of the speech and scrolling through a list of section titles to move to different parts of the text. It should, therefore, not be surprising that listening to text is not as effective as actively reading visually in terms of focus and comprehension [147]. However, speech generation remains a very affordable method of accessing print information. Can current speech generation methods be modified to incorporate the successful aspects of Braille, namely, providing both active and spatial access?

One possibility is that, instead of using physical Braille characters, we could consider a virtual block of space on a page to correspond to a spoken phoneme or word. A touch screen could then be used to physically represent a refreshable version of the page, with the user's fingers being tracked to access the virtual verbal content. The speed of the reading finger could control the rate at which the speech is spoken. Knowledge of hand movements when reading Braille (see Section 3.1) would be essential to ensure that speech would only be produced from the reading finger, particularly when reading is transferred from one finger to the other.
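A minimal sketch of this word-block idea follows. It is entirely hypothetical: the page layout, the tracking input, and the rate mapping are assumptions, and a real system would need the multi-finger handling just discussed.

```python
def word_under_finger(page, finger_x, finger_y, line_height):
    """page: list of lines; each line is a list of (x_start, x_end, word)
    blocks in screen coordinates (all assumed). Returns the word whose
    virtual block the reading finger is currently inside, if any."""
    row = int(finger_y // line_height)
    if not 0 <= row < len(page):
        return None
    for x_start, x_end, word in page[row]:
        if x_start <= finger_x < x_end:
            return word
    return None

def speech_rate_multiplier(finger_speed_mm_s, nominal_mm_s=50.0):
    """Tie the text-to-speech rate to reading-finger speed (an assumed
    linear map, clamped to a usable range)."""
    return max(0.25, min(3.0, finger_speed_mm_s / nominal_mm_s))
```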

One potential difficulty with this method is that no tactile feedback is provided to maintain a user's finger on a line or to find the next line. This could be overcome by using a tactile "overlay" for the touch screen, analogous to those proposed in [101], that would provide a physical guide for the user to follow lines and detect their beginnings and ends. This physical tactile overlay could be combined with vibratory feedback on the screen when the reading finger passes over, for example, found search terms or, in "editor mode", spelling or grammar mistakes. Having an editor mode would also allow text to be written, as well as read, in an accessible spatial layout. Furthermore, gestures could be used to query the spelling of words and other aspects of the symbols used.

Due to the increasing number of elderly individuals in our society, and the resulting increase in the number of individuals who experience visual problems but have little motivation to learn Braille, we believe the exploration of alternate presentation methods for print information will only increase in importance with time.

6.6 Tactile Graphics and Spatial Acuity

The difference between the spatial acuity of touch and vision must be considered when conveying information normally presented visually through touch. Even if fine visual details are applied to the skin, it does not mean that the tactile system will be able to sense this information. However, Millar [148] has suggested that the spatial resolution measured by passive touch, typically used to define spatial acuity, is not necessarily the most appropriate measure for the legibility of Braille patterns. She suggests that movement cues are also extremely important. With movement, the spatial pattern can be recognized by the changing location of each of the points on the pattern as a function of time. This may be particularly advantageous when the tactile spatial resolution is limited [149].

The question then arises: what is the appropriate spacing of the tactile elements in a pin-array type of tactile display? If we consider the minimum distance a point needs to change location to be detected, as would potentially be exploited with a temporal code, we find that the detectable displacement can be less than 0.2 mm on the fingertip [150]. However, this is not a practical spacing for an electromechanical display. One possibility is to provide a combined electrotactile and electromechanical display (e.g., [24]), with the electrotactile component stimulating the densely arrayed receptors near the surface of the skin and the electromechanical component stimulating deeper receptors. Alternatively, studying what element spacing is sufficient for a user to perform the required tasks may yield less demanding requirements than trying to match the maximum performance of the tactile system.

The pin spacing needed is also dependent on the type of display used. Small pin-array tactile displays that are moved about a virtual page can render spatial details of the information (e.g., a graphic) by creating temporal variations on the tactile elements, somewhat analogous to the benefit of moving the exploring hand. Therefore, the movement accuracy and temporal response of the display also contribute to how accurately spatial details can be rendered. In contrast, larger, stationary displays can only provide the information spatially and are thus limited by their pin spacing. Even if the hand moves across such a display, it does not obtain more spatial detail of the information.

However, the ability to track lines or edges easily and quickly on a graphic is also important, as the interpretation of detailed spatial information is primarily a serial process involving contour following. This issue also seems to be dependent on display type. With large, stationary displays, users do not experience difficulty in tracking lines/edges. When the finger is slid along this type of display, the pins stay in contact with the finger, and the temporal response produced by the physical pins allows the user to easily resolve the line/edge orientation for tracking. In contrast, small moving displays with the same spacing usually result in slow and difficult tracking. With the latter type of display, we have observed that most, if not all, users need to scan back and forth across a line/edge in order to track it. This is because there is no physical motion of the pins on the finger and, therefore, only a brief contact is made with any one point on a diagram. This prevents obtaining a temporal response of the movement of the line/edge from which to determine its orientation. The lack of physical motion of the pins is also why it is difficult to read Braille with these devices unless modifications are made to how the Braille is presented (Section 5.1.2).

Further study of static and moving displays is needed to better understand what tactile information they convey, how users process this information, and at what resolution. Both types of displays are fundamentally different from physical paper or plastic methods, which provide a continuous form rather than a discrete one. How to more effectively convey information on these displays, particularly in regard to their weaknesses, is an important consideration for the development of effective access to refreshable tactile graphics. It is possible that a mixed electrotactile-electromechanical display, as described previously, may be beneficial in both cases.

6.7 Other Considerations in Creating Effective Refreshable Tactile Graphics

Spatially exploring a diagram, even with multiple fingers and both hands, is often a slow process compared to perceiving a visual picture "all at once". However, tactile perception also has a very rich temporal component that has been considered for providing alerts, directions, and the locations and distances of obstacles. This component has been applied to a much lesser degree in aiding the interpretation of tactile graphics. Section 4.1 gives an example of successfully using the temporal component of a tactile stimulus to create "textures" that improve the interpretation of tactile graphics. Using different vibration frequencies would also likely be an effective way to distinguish between multiple lines in a graph, in place of dotted and dashed lines, which are difficult to interpret with the existing pin spacing of graphics displays.

Another avenue to consider is motivated by a proposed method to display tactile graphs by "collapsing" the y-dimension of the graph in Cartesian space and representing it by an auditory frequency [133]. As the user physically scans along the x-axis, the auditory frequency gives the y-coordinate while a tactile vibration indicates the symbol or line type. However, another possibility, instead of tactile vibration, is to separate different symbols or line types by auditory timbres that include harmonics. This would allow the natural segregation of the different timbre sounds into streams through auditory grouping [151], which may make the information easier to interpret. Alternatively, using tactile timbre with tactile frequency may open up possibilities of stream segregation in the tactile domain. We are not aware of any research showing that stream segregation is possible in the tactile domain. If feasible, it could provide a novel method of temporally conveying multiple streams of information. It could also reveal that touch shows similarities to both vision and hearing in perceptual organization.
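The collapsed-y rendering can be sketched as follows; this is our illustration of the concept in [133], and the log-spaced pitch mapping, its bounds, and the data layout are assumptions.

```python
def y_to_pitch_hz(y, y_min, y_max, f_low=220.0, f_high=880.0):
    """Map a y-value onto a log-spaced (perceptually more even) pitch
    range; the one-octave-per-half-range bounds are assumed."""
    frac = (y - y_min) / (y_max - y_min)
    return f_low * (f_high / f_low) ** frac

def render_at_x(series, x, y_min=0.0, y_max=1.0):
    """series: dict mapping a name to (y_of_x function, timbre label).
    Returns one (pitch, timbre) pair per data series at scan position x,
    so each line can be heard as its own stream."""
    return [(y_to_pitch_hz(y_of_x(x), y_min, y_max), timbre)
            for y_of_x, timbre in series.values()]
```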

6.8 Consideration of Displays for Orientation and Mobility

One of the most significant concerns when designing and testing displays to aid with orientation and mobility is that they will be used while a user moves from place to place. This means that it is important to consider the cognitive load of the other concurrent tasks when assessing performance on the task under consideration. This is possibly best controlled by using a standard secondary task, such as performing mathematical calculations, to assess cognitive load. It should also be noted that the time scales of usage may be very different for the different tasks and may need to be mimicked. For example, feedback for avoiding obstacles needs to be supplied quickly and continuously. Navigation instructions may be supplied less frequently, as long as notifications are given sufficiently in advance of intersections. Survey map information may be used even more infrequently, and users are likely to stop and remain stationary while they explore a map.

An important consideration for design may be to distribute the cognitive load of the tasks between multiple senses to maximize the amount of information that can be held in working memory (see Section 4.2). The most effective and simplest method may be for one task (e.g., giving navigation instructions) to provide feedback through audition, while the other task (e.g., obstacle avoidance) provides feedback through touch. This design is also less likely to produce unexpected interactions between the feedback for the two tasks. This may be particularly true for individuals who are BVI, as recent work suggests that audio-tactile cross-modal interactions may be reduced in individuals who are blind (e.g., [87]).

7 CONCLUSIONS

The history of the application of touch and haptics to assistive technologies for individuals who are BVI is long and varied. In this review, we have shown that there is a wealth of behavioral research that is highly applicable to assistive technology design. In a few cases, conclusions from behavioral experiments have been directly applied to design with positive results. Further controlled experiments assessing the application of other behavioral research concepts to assistive technology design are needed to provide a methodological foundation for future designers. Neuroplasticity in the brain and training users in cognitive concepts, as opposed to only training a user how to use a device, are also important aspects that need to be considered when designing and assessing assistive technology. Both can potentially result in considerable differences in user behavior over time, and the cognitive processes involved may be dependent on when a user has become blind.

The development of assistive technologies for individuals who are BVI also raises interesting behavioral questions. For example, given practical restrictions on device design, such as performance limitations of the technology and cost, which aspects of these restrictions are truly important to overcome to achieve high performance and which are not? In general, this raises the question of what it means to provide functional equivalence as opposed to sensory equivalence. In addition, when functional equivalence can be provided by a variety of methods, both psychophysical and neuroimaging techniques may be effective in determining the potential ease of learning a given method. The user’s environment, including the use of other assistive technology, will also have an impact on the ease of use.

ACKNOWLEDGMENTS

This work was supported in part by the US National Science Foundation Grant #1218310 to Dianne T.V. Pawluk and the Japan Society for the Promotion of Science Grant-in-Aid for Young Scientists (B) Grant #25871059 to Ryo Kitada.

REFERENCES

[1] (2014). American Optometric Association website [Online]. Available: http://www.aoa.org
[2] (2014). National Center on Deaf-Blindness website [Online]. Available: http://www.nationaldb.org
[3] (2015). American Diabetes Association website [Online]. Available: http://www.diabetes.org
[4] Office of Early Childhood Development, Virginia Department of Social Services. (2008). Milestones of child development [Online]. Available: http://www.earlychildhood.virginia.gov/documents/milestones.pdf
[5] O. Collignon, G. Dormal, G. Albouy, G. Vandewalle, P. Voss, C. Phillips, and F. Lepore, “Impact of blindness onset on the functional organization and the connectivity of the occipital cortex,” Brain, vol. 136, pt. 9, pp. 1–5, 2013.
[6] H. Mollenkopf, “On the potential of new media and new technologies for old age,” in On the Special Needs of Blind and Low Vision Seniors. Amsterdam, The Netherlands: IOS Press, 2001, pp. 335–346.
[7] D. N. Saito, T. Okada, M. Honda, Y. Yonekura, and N. Sadato, “Practice makes perfect: The neural substrates of tactile discrimination by mah-jong experts include the primary visual cortex,” BMC Neurosci., vol. 7, p. 79, 2006.
[8] (2015). Braille readers are leaders. Nat. Fed. Blind [Online]. Available: https://nfb.org/braille-campaign

[9] S. Srinivas and S. C. Hirtle, “The role of affect on expanding indoor spatial knowledge,” in Indoor Wayfinding and Navigation. Boca Raton, FL, USA: CRC Press, 2015, pp. 13–34.
[10] D. L. Harkey, D. L. Carter, J. M. Barlow, and B. L. Bentzen. (2007). NCHRP web-only document 117A: Accessible pedestrian signals: A guide to best practices [Online]. Available: onlinepubs.trb.org/onlinepubs/nchrp/nchrp_w117a.pdf
[11] P. Bach-y-Rita, C. Collins, F. Saunders, B. White, and L. Scadden, “Vision substitution by tactile image projection,” Nature, vol. 221, pp. 963–964, 1969.
[12] P. Bach-y-Rita and S. W. Kercel, “Sensory substitution and the human-machine interface,” Trends Cognitive Sci., vol. 7, no. 12, pp. 541–546, 2003.
[13] J. Brabyn, “New developments in mobility and orientation aids for the blind,” IEEE Trans. Biomed. Eng., vol. BME-29, no. 4, pp. 285–289, Apr. 1982.
[14] J. M. Loomis, R. L. Klatzky, and N. A. Giudice, “Sensory substitution of vision: Importance of perceptual and cognitive processing,” in Assistive Technology for Blindness and Low Vision. Boca Raton, FL, USA: CRC Press, 2012.
[15] K. U. Kyung, M. Ahn, D. S. Kwon, and M. A. Srinivasan, “Perceptual and biomechanical frequency response of human skin: Implications for design of tactile displays,” in Proc. 1st Joint Eurohaptics Conf. Symp. Haptic Interfaces Virtual Environ. Teleoperator Syst., 2005, pp. 96–101.
[16] R. W. Van Boven and K. O. Johnson, “The limit of tactile spatial resolution in humans: Grating orientation discrimination on the lip, tongue and finger,” Neurology, vol. 44, no. 12, pp. 2361–2366, 1994.
[17] S. Weinstein, “Intensive and extensive aspects of tactile sensitivity as a function of body part, sex and laterality,” in The Skin Senses. Springfield, IL, USA: Thomas, 1968, pp. 195–222.
[18] R. W. Van Boven, R. H. Hamilton, T. Kauffman, J. P. Keenan, and A. Pascual-Leone, “Tactile spatial resolution in blind braille readers,” Neurology, vol. 54, pp. 2230–2236, 2000.
[19] M. Wong, V. Gnanakumaran, and D. Goldreich, “Tactile spatial acuity enhancement in blindness: Evidence for experience-dependent mechanisms,” J. Neurosci., vol. 31, no. 19, pp. 7028–7037, 2011.
[20] A. C. Grant, M. C. Thiagarajah, and K. Sathian, “Tactile perception in blind braille readers: A psychophysical study of acuity and hyperacuity using gratings and dot patterns,” Perception Psychophys., vol. 62, no. 2, pp. 301–312, 2000.
[21] J. Loomis, R. Klatzky, and S. Lederman, “Similarity of tactual and visual picture recognition with limited field of view,” Perception, vol. 20, pp. 167–177, 1991.
[22] R. Klatzky, J. Loomis, S. Lederman, H. Wake, and N. Fujita, “Haptic identification of objects and their depictions,” Perception Psychophys., vol. 54, no. 2, pp. 170–178, 1993.
[23] J. P. Warren, L. R. Bobich, M. Santello, and J. D. Sweeney, “Receptive field characteristics under electrotactile stimulation of the fingertip,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 16, no. 4, pp. 410–415, Aug. 2008.
[24] S. Kuroki, H. Kajimoto, H. Nii, N. Kawakami, and S. Tachi, “Proposal for tactile sense presentation that combines electrical and mechanical stimulus,” in Proc. 2nd Joint Eurohaptics Conf. Symp. Haptic Interfaces Virtual Environ. Teleoperator Syst., 2007, pp. 121–126.
[25] M. Solomonow, J. Lyman, and A. Freedy, “An initial study on lip perception of electrotactile array stimulation,” J. Rehabil. Res. Develop., vol. 42, no. 5, pp. 705–714, 2005.
[26] T. Maeyama and K. H. Plattig, “Minimal two-point discrimination in human tongue and palate,” Amer. J. Otolaryngol., vol. 10, no. 5, pp. 342–344, 1989.
[27] H. Kajimoto, Y. Kanno, and S. Tachi, “Forehead retina system,” in Proc. ACM SIGGRAPH, 2006, Art. no. 11.
[28] P. Bach-y-Rita, K. Kaczmarek, and K. Meier, “The tongue as a man-machine interface: A wireless communication system,” in Proc. Int. Symp. Inf. Theory Its Appl., 1998, pp. 79–81.
[29] C. A. Lozano, K. A. Kaczmarek, and M. Santello, “Electrotactile stimulation on the tongue: Intensity perception, discrimination and cross-modality estimation,” Somatosensory Motor Res., vol. 26, no. 2, pp. 50–63, 2009.
[30] A. Arnoldussen and D. Fletcher, “Visual perception for the blind: The BrainPort vision device,” Retinal Physician, vol. 9, no. 1, pp. 32–34, 2012.
[31] The PDMA Handbook of New Product Development, 2nd ed. Hoboken, NJ, USA: Wiley, 2005.
[32] (2015). UniversalDesign.com [Online]. Available: http://www.universaldesign.com
[33] J. Brooke. (1986). SUS - A quick and dirty usability scale. Digital Equipment Corporation.
[34] F. D. Davis, R. P. Bagozzi, and P. R. Warshaw, “User acceptance of computer technology: A comparison of two theoretical models,” Manage. Sci., vol. 35, no. 8, pp. 982–1003, 1989.
[35] R. A. Cooper, “Introduction,” in An Introduction to Rehabilitation Engineering. Boca Raton, FL, USA: CRC Press, 2007, pp. 1–18.
[36] N. Sadato, T. Okada, M. Honda, and Y. Yonekura, “Critical period for cross-modal plasticity in blind humans: A functional MRI study,” Neuroimage, vol. 16, no. 2, pp. 389–400, 2002.
[37] (2014). Size and spacing of braille characters [Online]. Available: www.brailleauthority.org
[38] J. M. Loomis, “On the tangibility of letters and braille,” Perception Psychophys., vol. 29, no. 1, pp. 37–46, 1981.
[39] P. Mousty and P. Bertelson, “Finger movements in braille reading: The effect of local ambiguity,” Cognition, vol. 43, no. 1, pp. 67–84, 1992.
[40] P. Bertelson, P. Mousty, and G. D’Alimonte, “A study of braille reading: 2. Patterns of hand activity in one-handed and two-handed reading,” Quart. J. Exp. Psychol., vol. 37A, pp. 235–256, 1985.
[41] S. Millar, “The perceptual ‘window’ in two-handed braille: Do the left and right hands process text simultaneously?” Cortex, vol. 23, pp. 111–122, 1987.
[42] M. A. Heller, “Active and passive tactile braille recognition,” Bull. Psychonomic Soc., vol. 24, no. 3, pp. 201–202, 1986.
[43] N. Sadato, A. Pascual-Leone, J. Grafman, V. Ibanez, M. P. Deiber, G. Dold, and M. Hallett, “Activation of the primary visual cortex by braille reading in blind subjects,” Nature, vol. 380, pp. 526–528, 1996.
[44] L. G. Cohen, P. Celnik, A. Pascual-Leone, B. Corwell, L. Falz, J. Dambrosia, M. Honda, N. Sadato, C. Gerloff, M. D. Catala, and M. Hallett, “Functional relevance of cross-modal plasticity in blind humans,” Nature, vol. 389, no. 6647, pp. 180–183, 1997.
[45] R. Hamilton, J. P. Keenan, M. Catala, and A. Pascual-Leone, “Alexia for braille following bilateral occipital stroke in an early blind woman,” Neuroreport, vol. 11, no. 2, pp. 237–240, 2000.
[46] A. Bhattacharjee, A. J. Ye, J. A. Lisak, M. G. Vargas, and D. Goldreich, “Vibrotactile masking experiments reveal accelerated somatosensory processing in congenitally blind braille readers,” J. Neurosci., vol. 30, no. 43, pp. 14288–14298, 2010.
[47] H. Burton, R. J. Sinclair, and D. G. McLaren, “Cortical activity to vibrotactile stimulation: An fMRI study in blind and sighted individuals,” Human Brain Mapping, vol. 23, no. 4, pp. 210–228, 2004.
[48] P. K. Edman, Tactile Graphics. New York, NY, USA: AFB Press, 1992.
[49] M. A. Heller, J. A. Calcaterra, L. L. Burson, and L. A. Tyler, “Tactual picture identification by blind and sighted people: Effects of providing categorical information,” Perception Psychophys., vol. 58, no. 2, pp. 310–323, 1996.
[50] M. A. Heller, “Picture and pattern perception in the sighted and the blind: The advantage of the late blind,” Perception, vol. 18, pp. 379–389, 1989.
[51] S. J. Lederman, R. L. Klatzky, C. Chataway, and C. D. Summers, “Visual mediation and the haptic recognition of two-dimensional pictures of common objects,” Perception Psychophys., vol. 47, no. 1, pp. 54–64, 1990.
[52] M. A. Heller, K. Wilson, H. Steffen, and K. Yoneyama, “Superior haptic perceptual selectivity in late-blind and very-low-vision subjects,” Perception, vol. 32, pp. 499–511, 2003.
[53] L. J. Thompson, E. P. Chronicle, and A. F. Collins, “Enhancing 2-D tactile picture design from knowledge of 3-D haptic object recognition,” Eur. Psychologist, vol. 11, no. 2, pp. 110–118, 2006.
[54] D. Burch and D. Pawluk, “Using multiple contacts with texture-enhanced graphics,” in Proc. IEEE World Haptics Conf., 2011, pp. 287–292.
[55] M. Wijntjes, T. Van Lienen, I. Verstijnen, and A. Kappers, “The influence of picture size on recognition and exploratory behavior in raised-line drawings,” Perception, vol. 36, pp. 865–879, 2008.
[56] M. A. Heller, D. D. Brackett, E. Scroggs, H. Steffen, K. Heatherly, and S. Salik, “Tangible pictures: Viewpoint effects and linear perspective in visually impaired people,” Perception, vol. 31, pp. 747–769, 2002.

[57] A. Amedi, N. Raz, H. Azulay, R. Malach, and E. Zohary, “Cortical activity during tactile exploration of objects in blind and sighted humans,” Restor. Neurol. Neurosci., vol. 28, no. 2, pp. 143–156, 2010.
[58] Z. Cattaneo, T. Vecchi, C. Cornoldi, I. Mammarella, D. Bonino, E. Ricciardi, and P. Pietrini, “Imagery and spatial processes in blindness and visual impairment,” Neurosci. Biobehavioral Rev., vol. 32, pp. 1346–1360, 2008.
[59] R. G. Long and N. A. Giudice, “Establishing and maintaining orientation for mobility,” in Foundations of Orientation and Mobility. New York, NY, USA: American Foundation for the Blind, 2010, pp. 45–62.
[60] M. L. Noordzij and A. Postma, “Categorical and metric distance information in mental representations derived from route and survey descriptions,” Psychological Res., vol. 69, pp. 221–232, 2005.
[61] B. Roder, F. Rosler, and C. Spence, “Early vision impairs tactile perception in the blind,” Current Biol., vol. 14, pp. 121–124, 2004.
[62] C. Thinus-Blanc and F. Gaunet, “Representation of space in blind persons: Vision as a spatial sense?” Psychological Bull., vol. 121, no. 1, pp. 20–42, 1997.
[63] T. Iachini, G. Ruggiero, and F. Ruotolo, “Does blindness affect egocentric and allocentric frames of reference in small and large scale spaces?” Behavioral Brain Res., vol. 273, pp. 73–81, 2014.
[64] J. M. Loomis, R. L. Klatzky, R. G. Golledge, J. G. Cicinelli, J. W. Pellegrino, and P. A. Fry, “Nonvisual navigation by blind and sighted: Assessment of path integration ability,” J. Exp. Psychol.: Gen., vol. 122, no. 1, pp. 73–91, 1993.
[65] J. J. Rieser, D. A. Guth, and E. W. Hill, “Sensitivity to perspective structure while walking without vision,” Perception, vol. 15, no. 2, pp. 173–188, 1986.
[66] A. E. Bigelow, “Blind and sighted children’s spatial knowledge of their home environment,” Int. J. Behavioral Develop., vol. 19, no. 4, pp. 797–816, 1996.
[67] T. Iachini and G. Ruggiero, “The role of visual experience in mental scanning of actual pathways: Evidence from blind and sighted people,” Perception, vol. 39, pp. 953–969, 2010.
[68] B. Roder and F. Rosler, “Visual input does not facilitate the scanning of spatial images,” J. Mental Imagery, vol. 22, nos. 3/4, pp. 165–182, 1998.
[69] S. Millar and Z. Al-Attar, “External and body-centered frames of reference in spatial memory: Evidence from touch,” Perception Psychophys., vol. 66, no. 1, pp. 51–59, 2004.
[70] A. A. Kalia, G. E. Legge, and N. A. Giudice, “Learning building layouts with non-geometric visual information: The effects of visual impairment and age,” Perception, vol. 37, pp. 1677–1699, 2008.
[71] R. Kupers, D. R. Chebat, K. H. Madsen, O. B. Paulson, and M. Ptito, “Neural correlates of virtual route recognition in congenital blindness,” Proc. Nat. Acad. Sci., vol. 107, no. 28, pp. 12716–12721, 2010.
[72] M. Fortin, P. Voss, C. Lord, J. Pruessner, D. Saint-Amour, C. Rainville, and F. Lepore, “Wayfinding in the blind: Larger hippocampal volume and supranormal spatial navigation,” Brain, vol. 131, pp. 2995–3005, 2008.
[73] F. McGlone, J. Wessberg, and H. Olausson, “Discriminative and affective touch: Sensing and feeling,” Neuron, vol. 82, pp. 737–755, 2014.
[74] G. D. Mower, “Perceived intensity of peripheral thermal stimuli is independent of internal body temperature,” J. Comparative Physiological Psychol., vol. 90, pp. 1152–1155, 1976.
[75] R. Kitada, N. Sadato, and S. J. Lederman, “Tactile perception of nonpainful unpleasantness in relation to perceived roughness: Effects of inter-element spacing and speed of relative motion of rigid 2-D raised-dot patterns at two body loci,” Perception, vol. 41, pp. 204–220, 2012.
[76] A. D. Craig, “How do you feel? Interoception: The sense of the physiological condition of the body,” Nature Rev. Neurosci., vol. 3, pp. 655–666, 2002.
[77] A. B. Vallbo, H. Olausson, and J. Wessberg, “Unmyelinated afferents constitute a second system of coding tactile stimuli of the human hairy skin,” J. Neurophysiol., vol. 81, pp. 2753–2763, 1999.
[78] L. S. Loken, J. Wessberg, I. Morrison, F. McGlone, and H. Olausson, “Coding of pleasant touch by unmyelinated afferents in humans,” Nature Neurosci., vol. 12, pp. 547–548, 2009.
[79] M. Egawa, T. Hirao, and M. Takahashi, “In vivo estimation of stratum corneum thickness from water concentration profiles obtained with Raman spectroscopy,” Acta Dermato-Venereologica, vol. 87, pp. 4–8, 2007.
[80] S. J. Lederman and R. L. Klatzky, “Extracting object properties through haptic exploration,” Acta Psychologica, vol. 84, pp. 29–40, 1993.
[81] S. Lakatos and L. E. Marks, “Haptic form perception: Relative salience of local and global features,” Perception Psychophys., vol. 61, no. 5, pp. 895–908, 1999.
[82] D. Burch, “Development of a multiple contact haptic display with texture-enhanced graphics,” Ph.D. dissertation, unpublished, 2012.
[83] S. J. Lederman and R. L. Klatzky, “Relative availability of surface and object properties during early haptic processing,” J. Exp. Psychol.: Human Perception Performance, vol. 23, no. 6, pp. 1680–1707, 1997.
[84] S. N. Samman and K. M. Stanney, “Multimodal interaction,” in International Encyclopedia of Ergonomics and Human Factors, vol. 2, 2nd ed. Boca Raton, FL, USA: Taylor & Francis, 2006.
[85] R. Rastogi, “A study towards development of an automated haptic user interface for individuals who are blind or visually impaired,” Ph.D. dissertation, unpublished, 2012.
[86] C. Spence, “Multisensory attention and tactile information-processing,” Behavioral Brain Res., vol. 135, pp. 57–64, 2002.
[87] K. Hotting and B. Roder, “Auditory and auditory-tactile processing in congenitally blind humans,” Hearing Res., vol. 258, pp. 165–174, 2009.
[88] F. Vidal-Verdu and M. Hafez, “Graphical tactile displays for visually-impaired people,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 15, no. 1, pp. 119–130, Mar. 2007.
[89] S. Choi and K. J. Kuchenbecker, “Vibrotactile display: Perception, technology, and applications,” Proc. IEEE, vol. 101, no. 9, pp. 2093–2104, Sep. 2013.
[90] A. R. Rut. (1998). Braille readers are leaders [Online]. Available: http://www.ne.nfb.org/node/358
[91] Y. Kato, T. Sekitani, M. Takamiya, M. Doi, K. Asaka, T. Sakurai, and T. Someya, “Sheet-type braille displays by integrating organic field-effect transistors and polymeric actuators,” IEEE Trans. Electron Devices, vol. 54, no. 2, pp. 202–209, Feb. 2007.
[92] X. Wu, S.-H. Kim, H. Zhu, C.-H. Ji, and M. G. Allen, “A refreshable braille cell based on pneumatic microbubble actuators,” J. Microelectromech. Syst., vol. 21, no. 4, pp. 908–916, 2012.
[93] M. Takamiya, T. Sekitani, Y. Kato, H. Kawaguchi, T. Someya, and T. Sakurai, “An organic FET SRAM with back gate to increase static noise margin and its application to braille sheet display,” IEEE J. Solid-State Circuits, vol. 42, no. 1, pp. 93–100, Jan. 2007.
[94] V. Levesque, J. Pasquero, and V. Hayward, “Braille display by lateral skin deformation with the STReSS2 tactile transducer,” in Proc. World Haptics Conf., 2007, pp. 115–120.
[95] J. Roberts, O. Slattery, D. Kardos, and D. Swope, “New technology enables many-fold reduction in the cost of refreshable braille displays,” in Proc. 4th Int. ACM Conf. Assistive Technol., Arlington, VA, USA, 2000, pp. 42–49.
[96] “Enablemart.com,” School Health, 2015.
[97] P. C. Headley, V. E. Hribar, and D. T. V. Pawluk, “Displaying braille and graphics on a mouse-like tactile display,” in Proc. 13th Int. ACM SIGACCESS Conf. Comput. Accessibility, Dundee, Scotland, 2011, pp. 235–236.
[98] J. C. Bliss, M. H. Katcher, C. H. Rogers, and R. P. Shepard, “Optical-to-tactile image conversion for the blind,” IEEE Trans. Man-Mach. Syst., vol. MMS-11, no. 1, pp. 58–65, Mar. 1970.
[99] Talking Tactile Tablet. (2015). [Online]. Available: http://touchgraphics.com/portfolio/ttt/
[100] C. M. Baker, L. R. Milne, J. Scofield, C. L. Bennett, and R. E. Ladner, “Tactile graphics with a voice: Using QR codes to access text in tactile graphics,” in Proc. 16th Int. ACM SIGACCESS Conf. Comput. Accessibility, Rochester, NY, USA, 2014, pp. 75–82.
[101] S. K. Kane, M. Ringel Morris, and J. O. Wobbrock, “Touchplates: Low-cost tactile overlays for visually impaired touch screen users,” presented at the 15th Int. ACM SIGACCESS Conf. Computers and Accessibility, Bellevue, WA, USA, 2013.
[102] N. Giudice, H. P. Palani, E. Brenner, and K. M. Kramer, “Learning non-visual graphical information using a touch-based vibro-audio interface,” in Proc. 14th Int. ACM SIGACCESS Conf. Comput. Accessibility, Boulder, CO, USA, 2012, pp. 103–110.
[103] C. Xu, A. Israr, I. Poupyrev, O. Bau, and C. Harrison, “Tactile display for the visually impaired using TeslaTouch,” in Proc. Human Factors Comput. Syst., Vancouver, BC, Canada, 2011, pp. 317–322.

[104] R. Rastogi, D. Pawluk, and J. M. Ketchum, “Issues of using tactile mice by individuals who are blind and visually impaired,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 18, no. 3, pp. 311–318, Jun. 2010.
[105] V. Levesque and V. Hayward, “Tactile graphics rendering using three laterotactile drawing primitives,” in Proc. IEEE Haptics Symp. Haptic Interfaces Virtual Environ. Teleoperator Syst., Reno, NV, USA, 2008, pp. 429–436.
[106] P. C. Headley and D. T. V. Pawluk, “A low-cost, variable-amplitude haptic distributed display for persons who are blind and visually impaired,” in Proc. 12th Int. ACM SIGACCESS Conf. Comput. Accessibility, 2010, pp. 227–228.
[107] L. Brayda and C. Campus, “Conveying perceptible virtual tactile maps with a minimalist sensory substitution device,” in Proc. IEEE Int. Workshop Haptic Audio Visual Environ. Games, 2012, pp. 7–12.
[108] T. Pietrzak, A. Crossan, S. A. Brewster, B. Martin, and I. Pecci, “Creating usable pin array tactons for nonvisual information,” IEEE Trans. Haptics, vol. 2, no. 2, pp. 61–72, Apr.–Jun. 2009.
[109] S. Walker and J. K. Salisbury, “Large haptic topographic maps: MarsView and the proxy graph algorithm,” in Proc. Symp. Interactive 3D Graph., 2003, pp. 83–92.
[110] C. Magnusson and K. Rassmus-Grohn, “Non-visual zoom and scrolling operations in a virtual haptic environment,” presented at the 3rd Int. Conf. Eurohaptics, Dublin, Ireland, 2003.
[111] M. Ziat, O. Gapenne, J. Stewart, C. Lenay, and J. Bausse, “Design of haptic zoom: Levels and steps,” in Proc. 2nd Joint Eurohaptics Conf. Symp. Haptic Interfaces Virtual Environ. Teleoperator Syst., 2007, pp. 102–108.
[112] R. Rastogi and D. Pawluk, “Intuitive tactile zooming for graphics accessed by individuals who are blind and visually impaired,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 21, no. 4, pp. 655–663, Jul. 2013.
[113] R. Rastogi and D. Pawluk, “Tactile diagram simplification on refreshable displays,” Assistive Technol., vol. 25, no. 1, pp. 31–38, 2013.
[114] V. Levesque, G. Petit, A. Dufresne, and V. Hayward, “Adaptive level of detail in dynamic, refreshable tactile graphics,” in Proc. IEEE Haptics Symp., Vancouver, BC, Canada, 2012, pp. 1–5.
[115] S. Paneels and J. C. Roberts, “Review of designs for haptic data visualization,” IEEE Trans. Haptics, vol. 3, no. 2, pp. 119–137, Apr.–Jun. 2010.
[116] K. Moustakas, G. Nikolakis, K. Kostopoulos, D. Tzovaras, and M. G. Strintzis, “Haptic rendering of visual data for the visually impaired,” IEEE Multimedia, vol. 14, no. 1, pp. 62–72, Jan.–Mar. 2007.
[117] J. P. Fritz and K. E. Barner, “Design of a haptic data visualization system for people with visual impairments,” IEEE Trans. Rehabil. Eng., vol. 7, no. 3, pp. 372–384, Sep. 1999.
[118] W. Yu and S. A. Brewster, “Evaluation of multimodal graphs for blind people,” J. Universal Access Inf. Soc., vol. 2, no. 2, pp. 105–124, 2003.
[119] R. Long and E. Hill, “Establishing and maintaining orientation and mobility,” in Foundations of Orientation and Mobility, 2nd ed., B. Blasch, W. Wiener, and R. Welsh, Eds. New York, NY, USA: AFB Press, 1997.
[120] N. Giudice and G. Legge, “Blind navigation and the role of technology,” in Engineering Handbook of Smart Technology for Aging, Disability, and Independence, A. Helal, M. Mokhtari, and B. Abdulrazak, Eds. Hoboken, NJ, USA: Wiley, 2008, pp. 479–500.
[121] C. Wickens, “Processing resources in attention,” in Varieties of Attention, R. Parasuraman and D. Davies, Eds. London, U.K.: Academic, 1984, pp. 63–102.
[122] (2015). Sense navigation GPS [Online]. Available: http://www.senderogroup.com/products/shopvsnav.htm
[123] J. R. Marston, J. M. Loomis, R. L. Klatzky, and R. G. Golledge, “Nonvisual route following with guidance from a simple haptic or auditory display,” J. Visual Impairment Blindness, vol. 101, pp. 203–211, Apr. 2007.
[124] M. Altini, E. Farella, E. Pirini, and L. Benini, “A cost-effective indoor vibro-tactile navigation system for blind people,” in Proc. Int. Conf. Health Informat., Rome, Italy, 2011, pp. 477–481.
[125] S. Bosman, B. Groenendaal, J. Findlater, T. Visser, M. De Graaf, and P. Markopoulos, “GentleGuide: An exploration of haptic output for indoors pedestrian guidance,” in Proc. Mobile HCI, Udine, Italy, 2003, pp. 358–362.
[126] J. Van Erp, H. A. Van Veen, and C. Jansen, “Waypoint navigation with a vibrotactile waist belt,” ACM Trans. Appl. Perception, vol. 2, no. 2, pp. 106–117, 2005.
[127] J. Van Erp, “Presenting directions with a vibro-tactile torso display,” Ergonomics, vol. 48, no. 3, pp. 302–313, 2005.
[128] A. Cosgun, E. Sisbot, and H. Christensen, “Evaluation of rotational and directional vibration patterns on a tactile belt for guiding visually impaired people,” in Proc. IEEE Haptics Symp., Houston, TX, USA, 2014, pp. 367–370.
[129] A. A. Kalia, G. E. Legge, R. Roy, and A. Ogale, “Assessment of indoor route-finding technology for people who are visually impaired,” J. Visual Impairment Blindness, vol. 104, pp. 135–147, 2010.
[130] (2011). Sendero GPS LookAround [Online]. Available: https://www.senderogroup.com/support/supportandroid.htm
[131] N. A. Giudice, L. A. Walton, and M. Worboys, “The informatics of indoor and outdoor space: A research agenda,” in Proc. 2nd ACM SIGSPATIAL Int. Workshop Indoor Spatial Awareness, San Jose, CA, USA, 2010, pp. 47–53.
[132] B. Poppinga, C. Magnusson, M. Pielot, and K. Rassmus-Grohn, “TouchOver map: Audio-tactile exploration of interactive maps,” in Proc. ACM 13th Int. Conf. Human Comput. Interaction Mobile Dev. Services, New York, NY, USA, 2011, pp. 545–550.
[133] R. L. Klatzky, N. A. Giudice, C. R. Bennett, and J. M. Loomis, “Touch-screen technology for the dynamic display of 2D spatial information without vision: Promise and progress,” Multisensory Res., vol. 27, pp. 359–378, 2014.
[134] S. J. La Grow and R. G. Long, Orientation and Mobility: Techniques for Independence. Alexandria, VA, USA: AERBVI, 2011.
[135] J. Cutter, Independent Movement and Travel in Blind Children: A Promotion Model. Charlotte, NC, USA: Inf. Age Publishing, 2007.
[136] D. Dakopoulos and N. G. Bourbakis, “Wearable obstacle avoidance electronic travel aids for blind: A survey,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 40, no. 1, pp. 25–35, Jan. 2010.
[137] L. Russell, “Travel path sounder,” presented at the Rotterdam Mobility Res. Conf., New York, NY, USA, 1965.
[138] N. Pressey, “Mowat sensor,” Focus, vol. 11, no. 3, pp. 35–39, 1977.
[139] Y. Wang and K. J. Kuchenbecker, “HALO: Haptic alerts for low-hanging obstacles in white cane navigation,” in Proc. IEEE Haptics Symp., 2012, pp. 527–532.
[140] W. Penrod, M. Corbett, and B. Blasch, “A master trainer class for professionals in teaching the UltraCane electronic travel device,” J. Visual Impairment Blindness, vol. 99, no. 1, pp. 711–714, 2005.
[141] J. Borenstein and I. Ulrich, “The GuideCane—A computerized travel aid for the active guidance of blind pedestrians,” in Proc. IEEE Int. Conf. Robot. Autom., Albuquerque, NM, USA, 1997, vol. 2, pp. 1283–1288.
[142] I. Ulrich and J. Borenstein, “The GuideCane—Applying mobile robot technologies to assist the visually impaired,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 31, no. 2, pp. 131–136, Mar. 2001.
[143] S. Gallo, D. Chapuis, L. Santos-Carreras, Y. Kim, P. Retornaz, H. Bleuler, and R. Gassert, “Augmented white cane with multimodal haptic feedback,” in Proc. 3rd IEEE RAS EMBS Int. Conf. Biomed. Robot. Biomechatronics, Tokyo, Japan, 2010, pp. 149–155.
[144] W. M. Jenkins, M. M. Merzenich, M. T. Ochs, T. Allard, and E. Guic-Robles, “Functional reorganization of primary somatosensory cortex in adult owl monkeys after behaviorally controlled tactile stimulation,” J. Neurophysiol., vol. 63, no. 1, pp. 82–104, 1990.
[145] A. Bubic, E. Striem-Amit, and A. Amedi, “Large-scale brain plasticity following blindness and the use of sensory substitution devices,” in Multisensory Object Perception in the Primate Brain. New York, NY, USA: Springer Science+Business Media, 2010, pp. 351–380.
[146] “Braille - what is it? What does it mean to the blind?” Future Reflections, vol. 1, no. 15, 1996.
[147] T. L. Varao Sousa, J. S. Carriere, and D. Smilek, “The way we encounter reading material influences how frequently we mind wander,” Frontiers Psychol., vol. 4, p. 892, 2013.
[148] S. Millar, Reading by Touch. New York, NY, USA: Routledge, 1997.
[149] J. M. Loomis and S. J. Lederman, “Tactual perception,” in Handbook of Perception and Human Performance. New York, NY, USA: Wiley, 1986.
[150] J. M. Loomis, “An investigation of tactile hyperacuity,” Sensory Processes, vol. 3, pp. 289–302, 1979.
[151] J. M. Wolfe, Sensation and Perception. Sunderland, MA, USA: Sinauer Assoc., 2015.

Dianne T.V. Pawluk received the PhD degree from Harvard. She is an associate professor of biomedical engineering at Virginia Commonwealth University. She teaches courses in the areas of computational methods, assistive technology, and rehabilitation engineering. Her active research areas include the development of haptic assistive devices and methods for individuals who are blind or visually impaired, including devices and algorithms for tactile graphics, orientation and mobility, haptic perceptual organization, and the effective use of multimodal presentations in education. She currently serves as an associate editor for the IEEE Transactions on Haptics. She is a member of the IEEE.

Richard J. Adams received the BS degree as a distinguished graduate from the U.S. Air Force (USAF) Academy in 1989, the MS degree from the University of Washington in 1990, and the PhD degree from the University of Washington in 1999. He served as an active duty commissioned officer in the USAF between 1989 and 2009. He returned to the USAF Academy in 2007 as an assistant professor in the Department of Astronautical Engineering. In 2009, he retired from the Air Force and joined Barron Associates, Inc., where, as a principal research scientist, he currently leads multiple efforts spanning game-based rehabilitation systems, assistive technology for the blind and visually impaired, tactile human-computer interfaces, and spaceborne sensors. He is a senior member of the IEEE.

Ryo Kitada received the PhD degree from Kyoto University. He is an assistant professor in the Division of Cerebral Integration, National Institute for Physiological Sciences, Okazaki, Japan. His research interests include human tactile/haptic perception, cognition, and neuroscience.

" For more information on this or any other computing topic,please visit our Digital Library at www.computer.org/publications/dlib.

278 IEEE TRANSACTIONS ON HAPTICS, VOL. 8, NO. 3, JULY-SEPTEMBER 2015