
Multisensory Research 27 (2014) 379–397 brill.com/msr

The Effect of Extended Sensory Range via the EyeCane Sensory Substitution Device on the Characteristics of Visionless Virtual Navigation

Shachar Maidenbaum 1,*, Shelly Levy-Tzedek 1,2, Daniel Robert Chebat 1,2, Rinat Namer-Furstenberg 1 and Amir Amedi 1,2

1 Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
2 The Edmond and Lily Safra Center for Brain Research, The Hebrew University of Jerusalem, Givat Ram, Jerusalem, Israel

Received 3 July 2014; accepted 2 October 2014

Abstract
Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, offering more information to the blind user, on the underlying premises of these programs, such as navigation patterns?

We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point-distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use, and brings them closer to visual navigation.

Keywords
Orientation and mobility, blind, sensory substitution, virtual environments

* To whom correspondence should be addressed. E-mail: [email protected]

© Koninklijke Brill NV, Leiden, 2014 DOI:10.1163/22134808-00002463


1. Introduction

How does the lack of vision affect the route one takes when crossing an environment? How does this route change when different assistive tools are used, offering additional information?

The challenges involved in independent mobility in unfamiliar environments pose some of the greatest barriers the blind and visually impaired have to face (Douglas et al., 2006; Jacobson, 1993; Quiñones et al., 2011), often causing them to be injured (Manduchi and Kurniawan, 2010, 2011) or lost (White et al., 2008) even when using traditional aids such as the White-Cane. Thus, there is a need to aid them in these tasks, both by developing new devices and by designing new techniques and strategies for navigation.

Unlike visual navigation, in which the sighted can perceive the entire environment, or at least a large part of it, navigating without vision limits the blind or visually impaired person to knowledge of the immediate surroundings within tactile reach. When using a White-Cane the user's reach is extended, but is still limited to the area immediately around them (∼1.2 m). Orientation and mobility programs have been designed, and heavily researched, to help the blind navigate using this limited information, and have been shown to significantly improve the user's mobility skills (Kim et al., 2009; Lahav and Mioduser, 2008).

In recent years, one of the main tools for researching these movement patterns has been the use of virtual environments (Lahav et al., 2012). While there are many differences between navigating virtually and in the real world, virtual environments offer access to the general characteristics of navigation, more control over parameters such as the available sensory stimuli, and more varied and complex environments than are typically available for controlled experiments in the real world. Additionally, as it is well established that spatial information can be transferred, albeit with some limitations (Richardson et al., 1999), between virtual and real-world environments both for the general population (Foreman et al., 2000; Witmer et al., 1996) and for the blind (Lahav and Mioduser, 2004; Merabet et al., 2012), these same environments also hold great potential as a rehabilitation tool in and of themselves.

The research into mobility skills without a device and with the traditional White-Cane has proven beneficial, but how are the results of this research affected by an increase in the device's range beyond what is practical with a physical extension of the White-Cane? Would such an increase be ignored by users, or would it make their paths more similar to those of visual navigation? Finally, would the use of a device with an increased range require a different rehabilitation paradigm, and if so, could this have been one of the factors limiting the adoption of such devices by the blind community?


To explore these questions we used a virtual version of the EyeCane (Buchs et al., 2014), an electronic travel aid for the blind recently developed in our lab as a simple Sensory Substitution Device (SSD). The EyeCane augments the traditional White-Cane with additional range (up to 5 m), and we have previously shown (Maidenbaum et al., in press) that it can be used for tasks such as distance estimation, navigation and obstacle avoidance. We then created a virtual version of this device and showed that it can be used for virtual navigation (Maidenbaum et al., 2013), as well as for other tasks such as virtual shape recognition (Maidenbaum et al., 2012).

Here, we comparatively explore, with groups of blindfolded sighted participants and a single congenitally blind participant, the differences in initial intuitive navigation in virtual environments when using the virtual-EyeCane electronic mobility aid, when using a virtual version of the traditional White-Cane, when navigating without any device at all and when navigating visually. This exploration takes place in a series of virtual environments including corridors, rooms and complex indoor environments.

2. Methods — Experimental Design

2.1. The Tasks

The experiment consisted of 14 different virtual environments (see Fig. 1C–F for images of each one), of which the first two were simple training corridors, four were splitting corridors (task 1, Corridors), five were rooms (task 2, Rooms) and three were complex levels (task 3, Complex). All participants performed these levels in the same order.

2.1.1. Task 1, Corridors
In the Corridor trials of task 1 (Fig. 1D), participants were given a specific instruction on how to navigate towards the exit before starting each trial (for example: 'turn left at the third branch-off corridor') and they had to find the exit by following the instruction.

2.1.2. Task 2, Rooms
In the Room trials of task 2 (Fig. 1E), participants had to find the exit from rooms and navigate out of them.

2.1.3. Task 3, Complex
In the Complex trials of task 3 (Fig. 1F), participants were not given even a general description of the environment and were only told that they had to find the exit.

After completing each level, participants had to describe the level and the route they had taken through it, to demonstrate their spatial perception of it.


Figure 1. The experimental setup: (A) Picture of the experimental setup, including a PC station with standard headphones and keyboard and a blindfolded participant. The screen shows the graphical view of the virtual environment used by the experimenter for tracking and by the visual group. (B) First-person view from within a level. (C–F) Third-person views of the virtual levels used in the experiment; green rectangles and white circles mark the starting areas, and purple rectangles and white stars the end locations: (C) training levels; (D) splitting corridor levels; (E) rooms levels; (F) complex levels. This figure is published in colour in the online version.

We automatically measured three parameters for each experiment. Success was determined by completing the level within the predefined time frame of 3.5 min for each of the levels of tasks 1 and 2 and 7 min for each task 3 level; Distance was measured as the length of the path the participants took in the virtual environments; and Collisions were measured as the number of collisions of the participant's avatar with the walls. It is important to note that we considered any contact the avatar had with the walls as a collision, as such contacts are obtrusive and form one of the main problems the blind report when navigating indoors in the real world (for example, running their hand along a desk and knocking over a cup of coffee, or running their hand into a sharp object) and one of the main stated reasons for users avoiding use of the White-Cane (Pavey et al., 2009).
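As an illustration of this logging, the three parameters could be accumulated per trial roughly as follows (a minimal Python sketch; the class and method names are hypothetical, and only the time limits and parameter definitions come from the text):

import math

# Minimal sketch of per-trial metric logging. Time limits from the text:
# 3.5 min for each level of tasks 1 and 2, 7 min for each task 3 level.
class TrialLog:
    def __init__(self, time_limit_s):
        self.time_limit_s = time_limit_s
        self.path_length_m = 0.0   # 'Distance' parameter
        self.collisions = 0        # 'Collisions' parameter
        self.last_pos = None

    def on_frame(self, pos):
        # Accumulate path length from the avatar's (x, y) position.
        if self.last_pos is not None:
            self.path_length_m += math.hypot(pos[0] - self.last_pos[0],
                                             pos[1] - self.last_pos[1])
        self.last_pos = pos

    def on_collision(self):
        # Any contact of the avatar with a wall counts as a collision.
        self.collisions += 1

    def success(self, reached_exit, elapsed_s):
        # 'Success' requires reaching the exit within the time frame.
        return reached_exit and elapsed_s <= self.time_limit_s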

The participants were divided into four groups, each using a different device. Group 1, VEC, used the virtual-EyeCane (Maidenbaum et al., 2012, 2013), which transforms distance information into auditory cues for objects up to 5 m away, such that the closer the object, the higher the rate of the cues. Group 2, WC, used a virtual representation of the traditional White-Cane. Group 3, NoDev, did not use any device and relied solely upon the sound of collisions of the avatar with the virtual walls. Group 4, Visual, performed the task visually.

The participants of groups 1–3 were blindfolded throughout the experiment.

2.2. Training

Participants completed two training levels (Fig. 1C, identical to those used in Maidenbaum et al., 2013) to familiarize them with the interface and with the device they would be using. Training time was under 7 min.

2.3. Participants

Twenty-eight blindfolded sighted participants took part in all three tasks (16 male, aged 27.9 ± 6.7 years, 25 right-handed), divided into four groups of seven each. One additional congenitally blind participant (EN, female, right-handed) performed the experiment using the virtual-EyeCane.

2.4. Experimental Setup

2.4.1. The Virtual Environments
The virtual environment in this experiment was created using Blender 2.49 and Python 2.6.2. The environment was calibrated so that every 'virtual meter' corresponds to one meter in the real world. Participants were seated comfortably in front of a computer (see Fig. 1A). Navigation was accomplished using the arrow keys on a standard keyboard, and the auditory cues were delivered via standard headphones. On every contact with the environment users heard a pre-recorded collision sound. The virtual-EyeCane auditory cues were a series of sounds whose frequency was determined by the distance of the point the device was pointed at, with anything over 5 m away being silent. The virtual-White-Cane auditory cues consisted of the sound of a collision whenever the device struck a wall. An additional 'end of sweep' cue signified the end of each left-to-right sweep of the assistive device.

2.4.2. The Virtual-EyeCane
The virtual-EyeCane conveys distances of up to 5 virtual meters to the user via a changing frequency of auditory cues, such that 5 m and beyond is silent and the closer the object the device points at, the higher the rate of cues. The cues and distance transformation are completely identical to those of the EyeCane in the real world, where the EyeCane uses a narrow IR beam to detect the distance to the object it is pointed at and converts it to auditory cues with the rate based directly on the distance (see Maidenbaum et al., in press, for more details about the physical device, and Maidenbaum et al., 2012, 2013, for previous examples of its virtual use). By default, the device is held in the virtual user's hand and pointed forward until the user actively decides to perform a sweep from left to right using the space-bar.
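To illustrate the transformation, the mapping from sampled distance to cue rate can be sketched as below; the text specifies only the 5 m cutoff and that closer objects produce a higher cue rate, so the specific intervals and the linear interpolation are assumptions:

MAX_RANGE_M = 5.0       # beyond this the device is silent
MIN_INTERVAL_S = 0.05   # assumed time between cues at point-blank range
MAX_INTERVAL_S = 1.0    # assumed time between cues just under 5 m

def cue_interval(distance_m):
    # Return seconds between auditory cues, or None for silence (>= 5 m).
    # Closer objects -> shorter interval -> higher cue rate.
    if distance_m >= MAX_RANGE_M:
        return None
    frac = distance_m / MAX_RANGE_M  # 0 (near) .. 1 (far)
    return MIN_INTERVAL_S + frac * (MAX_INTERVAL_S - MIN_INTERVAL_S)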

2.4.3. The Virtual-White-Cane
The virtual-White-Cane is a virtual object of the same shape and size (length of 1.2 m) as the traditional White-Cane. Whenever a collision is detected between the cane object and a wall, an auditory collision cue is sounded. By default, the device is held in the virtual user's hand and pointed forward until the user actively decides to perform a sweep from left to right using the space-bar.
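The space-bar sweep shared by both virtual devices can be sketched as follows; the sweep angles and angular step are assumptions, as the text states only that the device sweeps from left to right, emits its cues along the way, and ends with an 'end of sweep' cue:

SWEEP_START_DEG, SWEEP_END_DEG, SWEEP_STEP_DEG = -60, 60, 10  # assumed

def perform_sweep(sample_at_angle, play_end_of_sweep_cue):
    # sample_at_angle(deg) emits the device's cue for that direction:
    # a distance-based cue for the EyeCane, a collision sound for the
    # White-Cane when the cane object strikes a wall.
    for angle_deg in range(SWEEP_START_DEG, SWEEP_END_DEG + 1, SWEEP_STEP_DEG):
        sample_at_angle(angle_deg)
    play_end_of_sweep_cue()
    return 0  # the device returns to its default forward-pointing position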

2.5. Keyboard Controls

The keyboard interface used the arrow keys of a standard keyboard, such that the side arrows rotated participants (30° per click) and the up and down arrows moved them forward and back. The space-bar was used by the White-Cane and EyeCane groups to initiate a left-to-right sweep with the device, whose default position when not in-sweep was directly ahead from the center of the body.
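A minimal sketch of this mapping (the forward/backward step size per key press is not stated in the text and is assumed here):

import math

TURN_DEG = 30  # rotation per side-arrow press, as described above
STEP_M = 0.5   # assumed translation per up/down-arrow press

def handle_key(key, x, y, heading_deg):
    # Returns the updated avatar pose; SPACE triggers a device sweep
    # (see Sections 2.4.2 and 2.4.3) and does not move the avatar.
    if key == "LEFT":
        heading_deg -= TURN_DEG
    elif key == "RIGHT":
        heading_deg += TURN_DEG
    elif key in ("UP", "DOWN"):
        sign = 1 if key == "UP" else -1
        x += sign * STEP_M * math.cos(math.radians(heading_deg))
        y += sign * STEP_M * math.sin(math.radians(heading_deg))
    return x, y, heading_deg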

2.6. Statistics

Unless specified otherwise, the default notation format in the results is 〈Mean〉 ± 〈Standard deviation〉, and the default statistical test is an unpaired, unequal-variance, two-tailed t-test. All statistics are corrected for multiple comparisons.
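The reported test corresponds to Welch's t-test. A minimal sketch in Python follows; the Bonferroni correction is a stand-in, as the text does not name the correction method used, and the data are hypothetical:

from scipy import stats

def compare_groups(a, b, n_comparisons):
    # Unpaired, unequal-variance (Welch's), two-tailed t-test,
    # with a Bonferroni correction for multiple comparisons.
    t, p = stats.ttest_ind(a, b, equal_var=False)
    return t, min(1.0, p * n_comparisons)

# Hypothetical per-participant collision counts for two groups:
vec = [2, 3, 1, 4, 2, 3, 2]
wc = [18, 25, 30, 17, 22, 28, 19]
t, p_corrected = compare_groups(vec, wc, n_comparisons=3)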

2.7. Ethics

The experiment was approved by The Hebrew University's ethics committee, and all participants signed informed consent forms.

3. Results

3.1. Corridors Task

General. The VEC group performed better than the WC and NoDev groups on all three parameters, but this advantage was significant over both groups only for collisions, and over the NoDev group for distance as well. There is a distinct difference in pattern between the four groups. The results of the Visual group were the best on all parameters.

Success. Group 1, VEC, succeeded in 64.2 ± 6.8% (mean ± SD) of the levels. Group 2, WC, succeeded in 57.1 ± 12% of the levels. Group 3, NoDev, succeeded in 53.5 ± 6% of the levels. Group 4, Visual, had a full success rate of 100% (see Fig. 2A). The results of the VEC group were better than those of the WC and NoDev groups, though not significantly.

Distance. Group 1, VEC, had an average path length of 17.9 ± 1.5 m. Group 2, WC, had an average path length of 25.6 ± 5.4 m. Group 3, NoDev, had an average path length of 36.5 ± 2.9 m. Group 4, Visual, had an average path length of 13.9 ± 0.1 m (see Fig. 2B). The results of the VEC group were better than those of the WC group, though not significantly (p = 0.23), and significantly better than those of the NoDev group (p < 2.3e−4).

Collisions. Group 1, VEC, had an average of 2.5 ± 1 collisions. Group 2, WC, had an average of 22.6 ± 6.5 collisions. Group 3, NoDev, had an average of 65 ± 8.4 collisions. Group 4, Visual, had an average of 1 ± 0.3 collisions (see Fig. 2C). The results of the VEC group were significantly better than those of the WC (p < 1.7e−2) and NoDev (p < 1.9e−5) groups.

Time. Group 1, VEC, took an average of 375 ± 42 s. Group 2, WC, took an average of 346 ± 26 s. Group 3, NoDev, took an average of 318 ± 25 s. Group 4, Visual, took an average of 48 ± 3 s. The results of the NoDev, WC and VEC groups were not significantly different.

3.1.1. Common Strategies Observed in these Levels
Group 1, VEC, tended to walk in a straight line forward, with frequent stops for scanning (see Fig. 3A). This group spent much of its time scanning upon locating a junction, but usually did not enter junctions until the correct one was reached. Most collisions occurred when attempting to turn into the final corridors. Participants of group 2, WC, tended to first head for one of the walls and then walk very close to it, with frequent scans. As in group 1, members of this group spent much of their time scanning next to each turnoff corridor to be sure where it started and ended, and usually did not enter these corridors. The participants in group 3, NoDev, tended to walk near the wall, frequently colliding with it to make sure it was there and to locate doors in it. Participants in this group often accidentally entered the turnoff corridors, and tended to have a much weaker perception of their general surroundings. Participants of group 4, Visual, tended to walk in lines that were less straight than those of the other groups, and typically proceeded directly to the final corridor and turned there without delays along the way.

3.2. Rooms Task

General. The VEC group performed significantly better than the WC and NoDev groups on all three parameters. There is a distinct difference in pattern between the four groups. The results of the Visual group were the best on all parameters.

Success. Group 1, VEC, succeeded in 94.2 ± 3.4% of the levels. Group 2, WC, succeeded in 77.1 ± 4.8% of the levels. Group 3, NoDev, succeeded in 77.1 ± 7.4% of the levels. Group 4, Visual, had a full success rate of 100% (see Fig. 2A). The results of the VEC group were significantly better than those of the WC (p < 9.9e−3) and NoDev (p < 3.8e−2) groups.

Figure 2. Results: The comparative results of the four groups and of the blind participant, ordered by parameter: (A) success level; (B) distance travelled; (C) collisions.

Distance. Group 1, VEC, had an average path length of 9 ± 0.4 m. Group 2, WC, had an average path length of 15.8 ± 2.9 m. Group 3, NoDev, had an average path length of 32.3 ± 3.1 m. Group 4, Visual, had an average path length of 6.3 ± 0.1 m (see Fig. 2B). The results of the VEC group were better than those of the WC group, though only marginally significantly (p = 0.05), and significantly better than those of the NoDev group (p < 1.7e−5).

Collisions. Group 1, VEC, had an average of 2.9 ± 0.5 collisions. Group 2, WC, had an average of 23.4 ± 7.3 collisions. Group 3, NoDev, had an average of 49.9 ± 4.1 collisions. Group 4, Visual, had an average of 0.8 ± 0.1 collisions (see Fig. 2C). The results of the VEC group were significantly better than those of the WC (p < 2.4e−2) and NoDev (p < 2.3e−7) groups.

Time. Group 1, VEC, took an average of 208 ± 16 s. Group 2, WC, took an average of 251 ± 24 s. Group 3, NoDev, took an average of 261 ± 45 s. Group 4, Visual, took an average of 38 ± 4 s. The results of the NoDev, WC and VEC groups were not significantly different.

3.2.1. Common Strategies Observed in these Levels
Group 1, VEC, tended to walk in a straight line forward, with frequent stops for scanning, until locating a door and then turning towards it (see Fig. 3B). Participants of group 2, WC, tended to first head for one of the walls and then walk very close to it, with frequent scans. The participants in group 3, NoDev, tended to walk near the wall, frequently colliding with it to make sure it was there and to locate doors in it. Participants of group 4, Visual, tended to walk into the room and then head directly towards the exit. One of the main consequences of these differences is the amount of time spent in the center areas of the room vs. the areas near the walls, as can be seen clearly in the heat maps in Fig. 3B.

3.3. Complex Environment Task

General. The VEC group performed significantly better than the WC and NoDev groups on all three parameters. There is a distinct difference in pattern between the four groups. The results of the Visual group were the best on all parameters.

Success. Group 1, VEC, succeeded in 61.9 ± 10.4% of the levels. Group 2, WC, succeeded in 14.2 ± 6.2% of the levels. Group 3, NoDev, succeeded in 23.8 ± 7.4% of the levels. Group 4, Visual, had a full success rate of 100% (see Fig. 2A). The results of the VEC group were significantly better than those of the WC (p < 1.7e−3) and NoDev (p < 1.2e−2) groups.


Distance. Group 1, VEC, had an average path length of 26.4 ± 2.6 m. Group 2, WC, had an average path length of 46.4 ± 4.8 m. Group 3, NoDev, had an average path length of 83.6 ± 5.5 m. Group 4, Visual, had an average path length of 18.3 ± 1.7 m (see Fig. 2B). The results of the VEC group were significantly better than those of the WC (p < 5.6e−3) and NoDev (p < 1.8e−6) groups.

Collisions. Group 1, VEC, had an average of 16.9 ± 4.2 collisions. Group 2, WC, had an average of 71.3 ± 14.5 collisions. Group 3, NoDev, had an average of 151.1 ± 10.9 collisions. Group 4, Visual, had an average of 5.8 ± 1.2 collisions (see Fig. 2C). The results of the VEC group were significantly better than those of the WC (p < 6.1e−3) and NoDev (p < 1.8e−7) groups.

Time. Group 1, VEC, took an average of 390 ± 93 s. Group 2, WC, took an average of 409 ± 30 s. Group 3, NoDev, took an average of 364 ± 30 s. Group 4, Visual, took an average of 67 ± 9 s. The results of the NoDev, WC and VEC groups were not significantly different.

3.3.1. Common Strategies Observed in these Levels
Group 1, VEC, tended to make wide scans and re-align themselves towards quieter areas, especially when these areas were flanked by walls indicating a doorway or corridor (see Fig. 3C). In large open spaces this often caused participants in this group to head towards distant corners. Participants of group 2, WC, tended to first head for one of the walls and then walk very close to them, with frequent scans. Participants in this group failed to locate the exit on most of their attempts. The participants in group 3, NoDev, tended to walk near the wall, frequently colliding with it, which they reported doing to make sure it was there and to locate doors in it. As in group 2, most attempts at navigating these levels by members of group 3 ended in failure. Participants of group 4, Visual, tended to walk into the room and then head directly towards the exit.

Figure 3. Mobility patterns: The mobility patterns used by the different groups in one level from each level category: (A) corridors; (B) rooms; (C) complex. The top half of each section shows, as an example, the path taken by a single participant through one of the levels in this category. Starting locations are marked with a circle and endpoints with a star. The bottom half shows a heat-map of the paths taken by all participants in this same level. Some special points of interest to note by category: in (A) note that the Visual and VEC groups did not enter the wrong branch-offs, while the NoDev and White-Cane groups did; also note the greater scattering of the locations of the NoDev and White-Cane groups. In (B) note that the NoDev and White-Cane groups spent time near the walls (the White-Cane group mainly there, the NoDev group also in the room's center, but to a lesser extent than by the walls), while the other two groups headed for the center and then towards the door. In (C) note the extra time spent by the NoDev and White-Cane groups in the earlier parts of the path; many of their participants did not reach the end, while the other two groups completed the circle more evenly until its end. This figure is published in colour in the online version.


Table 1.
Confusion table for SVM, averaged over all levels. Rows: the actual group; columns: the group tagged as. Cells are highlighted by the frequency with which they were chosen (darker = more common); answers on the diagonal are correct

        Visual  VEC    WC     NoDev
Vis     1       0      0      0
VEC     0.05    0.86   0.09   0
WC      0       0.32   0.37   0.31
NoDev   0       0      0.15   0.85

Table 2.
Confusion table for k-means, averaged over all levels. Rows: the actual group; columns: the group tagged as. Cells are highlighted by the frequency with which they were chosen (darker = more common); answers on the diagonal are correct

        Visual  VEC    WC     NoDev
Vis     1       0      0      0
VEC     0.19    0.62   0.19   0
WC      0.05    0.38   0.38   0.19
NoDev   0       0.05   0.10   0.86

3.4. Classification and Clustering

Given the differences among the navigation patterns across the groups, we explored whether the four groups were different enough to be classified by automated algorithms.

First we used a supervised learning algorithm, a multi-class support vector machine (SVM) with 100-fold cross-validation, and averaged the results (see Table 1). This classification achieved an average success rate of 81.5% for corridors, 69% for rooms and 80.5% for the complex levels (see confusion matrices in Table 1). Interestingly, the Visual, VEC and NoDev groups achieved high classification rates (>85%), while the White-Cane group was spread between VEC, WC and NoDev. Additionally, in all of the cases where trials from other groups were mistakenly classified as visual, the misclassified trial was from the VEC group.
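A sketch of this classification step, here using scikit-learn (not necessarily the authors' toolchain); the features would be the measured navigation parameters per trial, and the fold count below is illustrative, whereas the study used 100-fold cross-validation:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

def classify_trials(features, labels, n_folds=10):
    # features: (n_trials, n_params) array; labels: group name per trial.
    clf = SVC(kernel="linear")  # multi-class handled one-vs-one by SVC
    predicted = cross_val_predict(clf, features, labels, cv=n_folds)
    cm = confusion_matrix(labels, predicted)
    return cm / cm.sum(axis=1, keepdims=True)  # row-normalized, as in Table 1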

We then used an unsupervised clustering algorithm, k-means (see Table 2). This algorithm divides the data into four clusters, which we compared to the original group tags. k-Means achieved an average success rate of 71.4% for corridors, 75.0% for rooms and 67.8% for the complex levels (see confusion matrices in Table 2). Interestingly, the Visual group was clustered perfectly; NoDev was classified well, with several mistakes towards WC and VEC; WC was spread widely between correct classification, NoDev and VEC; and VEC was classified well, with several mistakes towards WC and Visual. Additionally, in 79% of the cases where trials from other groups were mistakenly clustered as visual, the misclassified trial was from the VEC group.
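A corresponding sketch of the clustering step; matching cluster IDs to group tags by the agreement-maximizing permutation is an assumption, as the text states only that the four clusters were compared to the original tags:

from itertools import permutations
import numpy as np
from sklearn.cluster import KMeans

GROUPS = ("Visual", "VEC", "WC", "NoDev")

def cluster_trials(features, labels):
    # k-means with k=4; cluster IDs are arbitrary, so we relabel them by
    # the permutation that best matches the original group tags.
    clusters = KMeans(n_clusters=4, n_init=10).fit_predict(features)
    y = np.array([GROUPS.index(g) for g in labels])
    best = max(permutations(range(4)),
               key=lambda p: np.mean(np.take(p, clusters) == y))
    return float(np.mean(np.take(best, clusters) == y))  # clustering accuracy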

3.5. The Blind Participant

The results of the single blind participant (EN) using the virtual-EyeCane matched or surpassed those of the VEC group on most measures. EN's success level was higher in the corridors (100% vs. 64.2 ± 6.8%) and rooms (100% vs. 94.2 ± 3.4%) and above average for the complex levels (66% vs. 61 ± 10.9%). EN's average path length was similar to the VEC group's and slightly better for the corridors (14.7 vs. 17.9 ± 1.5 m) and rooms (7.4 vs. 9 ± 0.4 m), but longer for the complex levels (47.4 vs. 26.4 ± 2.6 m). EN's number of collisions was also similar to, though slightly worse than, that of the VEC group for the corridors (7.25 vs. 2.5 ± 1), rooms (3.4 vs. 2.9 ± 0.5) and complex levels (25 vs. 16.9 ± 4.2).

4. Discussion

4.1. General

The results show that users of the virtual-EyeCane were indeed able to use the device to navigate through a variety of virtual environments, strengthening the results shown previously in Maidenbaum et al. (2013), where this device was introduced using much simpler environments.

The results show that there is indeed a difference in navigation patterns between users of the virtual-EyeCane and users of the virtual-White-Cane. Users of the virtual-EyeCane completed more levels successfully, taking shorter paths and with fewer collisions than users of the virtual-White-Cane or of no device. Additionally, the common strategies developed intuitively by members of these groups differed markedly between devices (Fig. 3), and these parameters contained sufficient information for both significant supervised classification and significant unsupervised clustering (Tables 1, 2). For example, users of the virtual-EyeCane in the virtual room levels spent most of their time in the center regions of the room, while White-Cane users spent most of theirs near the walls.

Unsurprisingly, the sighted control group completed all levels with a full success rate, and with significantly better scores on all parameters than the other groups. When comparing the full paths participants commonly took through the environment, the routes of participants using the virtual-EyeCane were similar to those of the sighted group, while those of the White-Cane users were similar to those of the group not using an assistive device (see Fig. 3).

Remarkably, these results were achieved even with only the use of a minimalistic SSD offering information about the single parameter of depth from a single point in the world, and after only several minutes of training.


4.2. Sighted vs. Blind

The results presented here, except those of the single blind user (EN) using the virtual-EyeCane, are from blindfolded sighted participants. It is well established that there are significant differences in the way humans navigate without vision, especially when vision was lacking during important developmental stages, as in the case of the congenitally blind, though the extent and exact nature of these differences is still debated (Millar, 1994; Pasqualotto and Proulx, 2012, 2013; Röder et al., 2008; Strelow, 1985). One of the main questions in this context is the cause of this difference between users who are blind and users who are sighted: does the difference in navigation patterns reflect an inherent internal difference in spatial perception, or simply a lack of information?

While only from a single participant, the results obtained from the congenitally blind participant anecdotally suggest that the blind can potentially perform just as well, or even better, using the virtual-EyeCane, and that the reported difference in result patterns potentially holds for them as well, indicating that the difference is indeed only a consequence of the available information. Thus, the next step will be to run this experiment with a larger group of blind participants, including with the virtual-White-Cane and without any device, and see if these results hold for them as well.

It should be noted that the navigation patterns of the blindfolded sighted White-Cane and no-device users were similar to those previously reported for the blind when using the White-Cane or no device, suggesting as well that this difference is indeed based on the available information, rather than on an inherent limit of the navigation networks in the brain.

4.3. Differences from the Real White-Cane

The virtual version of the White-Cane used in this experiment offers the user information about encountered obstacles within the range of the traditional White-Cane (1.2 m), but does not offer the full range of information offered by it in the real world. For example, it does not offer information about the texture or hardness of the obstacles encountered while navigating, and it offers the obstacle detection information auditorily instead of through a combination of audition and haptics. While this information is not directly relevant to this research, we acknowledge that it may affect the use of the White-Cane even for the tasks performed here. However, as the behavior and strategies of this group are similar to previous results from virtual navigation with versions of the White-Cane using haptic input (Lahav and Mioduser, 2004, 2008), and as the participants had no prior experience with the real White-Cane that would cause them to perceive this information as unnatural, we believe our conclusions about navigation parameters, such as path length, route, collisions and general success, to be valid. It would, however, be interesting in the future to run an additional group using a tactile virtual-White-Cane (such as that used in Lahav and Mioduser, 2004), which would be more similar, though still not identical, to the real White-Cane, through these same levels and compare the results.

4.4. Lack of Vestibular and Proprioceptive Information

One of the main difficulties participants had was the lack of vestibular and proprioceptive information as they navigated through the virtual environments, especially when turning, which often caused confusion. This is a common problem with all virtual simulations of motion. It would be interesting to explore this question in the future using full-body control instead of a keyboard interface and see the effect of such information. It should be noted that without vision such orientation is often harder than one would expect (Klatzky et al., 1998).

4.5. Combination with Other Sensory Substitution Devices

The paths commonly taken here show a marked difference between users of different devices, and a qualitative similarity between navigating using a virtual-EyeCane and using vision, even when only using a minimalistic SSD substituting depth from a single point. As visually navigating users utilized other visual information beyond single-point distance, such as landmarks, we believe that this similarity would grow even stronger when combining the virtual-EyeCane with other mobility aids, such as more complex sensory substitution devices, which would offer the user, via functioning senses, additional visual information such as shape, location and color (Abboud et al., 2014; Bach-y-Rita and Kercel, 2003; Levy-Tzedek et al., 2012; Meijer, 1992), and which have already shown potential for mobility-related tasks (Chebat et al., 2011; Durette et al., 2008; Johnson and Higgins, 2006). For example, in more complex environments which include landmarks, complex SSDs will allow the user to perceive and utilize them, or to recognize and differentiate objects along their path, which we predict will cause an even greater shift away from White-Cane mobility patterns towards those of the sighted.

4.6. Transferring These Results to the Real World

As with any research dealing with experiments in virtual environments, one of the main questions requiring discussion is how well these results will transfer to real-world use of these devices.

We believe these results will indeed transfer to the real world because (a) previous literature has shown transfer both in sighted (Foreman et al., 2000; Witmer et al., 1996) and blind (Lahav and Mioduser, 2004; Merabet et al., 2012) users of virtual environments; (b) previous literature exploring mobility patterns with the White-Cane in real and virtual environments reported virtual results and patterns for the White-Cane similar to those reported here (Lahav and Mioduser, 2008); and (c) results we obtained in a previous experiment with the EyeCane in the real world (Maidenbaum et al., in press; note that this experiment included only a single environment and only part of the parameters reported here) appear to follow similar lines to those reported here, both in terms of the mobility patterns using the EyeCane and White-Cane and in terms of the relative relation between the two (though in absolute numbers that experiment included fewer collisions).

On the other hand, it is clear that there will be significant differences since, as discussed above, the real world is noisier, includes information from other senses, such as proprioceptive feedback when moving and turning and tactile information about the angle of a wall during a collision, and offers many more degrees of freedom for aiming the devices. A dedicated future experiment will be required to explore the exact level and characteristics of this transfer, and this will be the next step of our work. We predict that results will indeed transfer to the real world, but with increased performance (e.g., fewer collisions in absolute numbers, but a similar gap between no device and White-Cane vs. the EyeCane), and that the contrast in navigation patterns (such as avoiding the center of rooms) will be even more pronounced.

Another interesting factor to explore in this future experiment is whether the absolute distance conveyed directly by the EyeCane will cause users to avoid the 'compression' effect reported in previous studies (Frissen et al., 2011; Lappe et al., 2011; Li et al., 2011; Philbeck et al., 1995), in which virtual distances are misjudged.

5. Conclusions

Here, by comparatively exploring the differences in initial intuitive navigation in virtual environments when using different devices, we have shown that the characteristics of virtual-EyeCane usage are significantly different from those of White-Cane usage and from those of navigation without an assistive device. Users of the virtual-EyeCane completed more virtual levels successfully, taking shorter paths and with fewer collisions than users of the White-Cane or of no device, and did so with mobility patterns that were different enough to be differentiated algorithmically. Finally, we have shown anecdotally that navigation with the virtual-EyeCane takes on patterns similar to those of navigating visually.

This has significant implications for mobility rehabilitation for visionless navigation, as it suggests that the availability of an increased range of spatial data is useful, and requires a different orientation and mobility rehabilitation training paradigm. Additionally, it demonstrates the potential of SSDs to induce behavioral patterns closer to those of the sighted even with a minimalistic SSD, suggesting that more complex devices offering even more visual information might have an even stronger effect.

Acknowledgements

This work was supported by a European Research Council grant to A.A. (grant number 310809); The Gatsby Charitable Foundation; The James S. McDonnell Foundation scholar award (to A.A.; grant number 220020284); The Israel Science Foundation (grant number ISF 1684/08); The Azrieli Foundation fellowship award (to D.R.C.); and The Edmond and Lily Safra Center for Brain Sciences (ELSC) fellowship (to S.L. and D.R.C.).

References

Abboud, S., Hanassy, S., Levy-Tzedek, S., Maidenbaum, S. and Amedi, A. (2014). EyeMusic: introducing a "visual" colorful experience for the blind using auditory sensory substitution, Restor. Neurol. Neurosci. 32, 247–257.

Bach-y-Rita, P. and Kercel, S. (2003). Sensory substitution and the human–machine interface, Trends Cogn. Sci. 7, 541–546.

Buchs, G., Maidenbaum, S. and Amedi, A. (in press). Obstacle identification and avoidance using the 'EyeCane', in: EuroHaptics, LNCS 8619, Paris, France, pp. 13–18.

Chebat, D. R., Schneider, F. C., Kupers, R. and Ptito, M. (2011). Navigation with a sensory substitution device in congenitally blind individuals, Neuroreport 22, 342–347.

Douglas, G., Corcoran, C. and Pavey, S. (2006). Network 1000: Opinions and Circumstances of Visually Impaired People in Great Britain: Report Based on over 1000 Interviews. Visual Impairment Centre for Teaching and Research, University of Birmingham, UK.

Durette, B., Louveton, N., Alleysson, D. and Hérault, J. (2008). Visuo-auditory sensory substitution for mobility assistance: testing The VIBE, in: Workshop on Computer Vision Applications for the Visually Impaired, Marseille, France, pp. 1–13.

Foreman, N., Stirk, J., Pohl, J., Mandelkow, L., Lehnung, M., Herzog, A. and Leplow, B. (2000). Spatial information transfer from virtual to real versions of the Kiel locomotor maze, Behav. Brain Res. 112, 53–61.

Frissen, I., Campos, J. L., Souman, J. L. and Ernst, M. O. (2011). Integration of vestibular and proprioceptive signals for spatial updating, Exp. Brain Res. 212, 163–176.

Jacobson, W. H. (1993). The Art and Science of Teaching Orientation and Mobility to Persons with Visual Impairments. American Foundation for the Blind, New York, NY, USA.

Johnson, L. A. and Higgins, C. M. (2006). A navigation aid for the blind using tactile-visual sensory substitution, in: Conf. Proc. IEEE Eng. Med. Biol. Soc. 1, pp. 6289–6292.

Kim, D. S., Emerson, R. W. and Curtis, A. (2009). Drop-off detection with the long cane: effects of different cane techniques on performance, J. Vis. Impair. Blind. 103, 519.

Klatzky, R. L., Loomis, J. M., Beall, A. C., Chance, S. S. and Golledge, R. G. (1998). Spatial updating of self-position and orientation during real, imagined, and virtual locomotion, Psych. Sci. 9, 293–298.


Lahav, O. and Mioduser, D. (2004). Exploration of unknown spaces by people who are blind using a multi-sensory virtual environment, J. Spec. Educ. Technol. 19, 15–24.

Lahav, O. and Mioduser, D. (2008). Haptic-feedback support for cognitive mapping of unknown spaces by people who are blind, Int. J. Hum. Comput. Stud. 66, 23–35.

Lahav, O., Schloerb, D. W. and Srinivasan, M. A. (2012). Newly blind persons using virtual environment system in a traditional orientation and mobility rehabilitation program: a case study, Disabil. Rehabil. Assist. Technol. 7, 420–435.

Lappe, M., Stiels, M., Frenz, H. and Loomis, J. M. (2011). Keeping track of the distance from home by leaky integration along veering paths, Exp. Brain Res. 212, 81–89.

Levy-Tzedek, S., Novick, I., Arbel, R., Abboud, S., Maidenbaum, S., Vaadia, E. and Amedi, A. (2012). Cross-sensory transfer of sensory-motor information: visuomotor learning affects performance on an audiomotor task, using sensory-substitution, Sci. Rep. 2, 949. DOI:10.1038/srep00949.

Li, Z., Phillips, J. and Durgin, F. H. (2011). The underestimation of egocentric distance: evidence from frontal matching tasks, Atten. Percept. Psychophys. 73, 2205–2217.

Maidenbaum, S., Arbel, R., Abboud, S., Chebat, D. R., Levy-Tzedek, S. and Amedi, A. (2012). Virtual 3D shape and orientation discrimination using point distance information, in: Proc. 9th Int. Conf. Disab. Virtual Real. Assoc. Technol., Laval, France, pp. 471–474.

Maidenbaum, S., Levy-Tzedek, S., Chebat, D. R. and Amedi, A. (2013). Increasing accessibility to the blind of virtual environments, using a virtual mobility aid based on the "EyeCane": feasibility study, PLoS One 8, e72555. DOI:10.1371/journal.pone.0072555.

Maidenbaum, S., Hanassy, S., Abboud, S., Buchs, G., Chebat, D. R., Levy-Tzedek, S. and Amedi, A. (in press). The "EyeCane", a new electronic travel aid for the blind: technology, behavior & swift learning, Restor. Neurol. Neurosci.

Manduchi, R. and Kurniawan, S. (2010). Watch your head, mind your step: mobility-related accidents experienced by people with visual impairment, Tech. Rep. UCSC-SOE-10-24, University of California, Santa Cruz, CA, USA.

Manduchi, R. and Kurniawan, S. (2011). Mobility-related accidents experienced by people with visual impairment, AER J. 4(2), 44–54.

Meijer, P. B. L. (1992). An experimental system for auditory image representations, IEEE Trans. Biomed. Eng. 39, 112–121.

Merabet, L. B., Connors, E. C., Halko, M. A. and Sánchez, J. (2012). Teaching the blind to find their way by playing video games, PLoS One 7, e44958. DOI:10.1371/journal.pone.0044958.

Millar, S. (1994). Understanding and Representing Space: Theory and Evidence from Studies with Blind and Sighted Children. Clarendon Press/Oxford University Press, New York, NY, USA.

Pasqualotto, A. and Proulx, M. J. (2012). The role of visual experience for the neural basis of spatial cognition, Neurosci. Biobehav. Rev. 36, 1179–1187.

Pasqualotto, A. and Proulx, M. J. (2013). The study of blindness and technology can reveal the mechanisms of three-dimensional navigation, Behav. Brain Sci. 36, 559–560.

Pavey, S., Dodgson, A., Douglas, G. and Clements, B. (2009). Travel, Transport, and Mobility of People who are Blind and Partially Sighted in the UK — Final Report for the RNIB. University of Birmingham, Birmingham, UK.


Philbeck, J., Loomis, J. and Beall, A. (1995). Measurement of visually perceived egocentric distance under reduced-cue and full-cue conditions, Invest. Ophthalmol. Vis. Sci. 4, 666.

Quiñones, P. A., Greene, T., Yang, R. and Newman, M. (2011). Supporting visually impaired navigation: a needs-finding study, in: Proc. Int. Conf. Hum. Fact. Comput. Syst. CHI 2011, Vancouver, BC, Canada, pp. 1645–1650.

Richardson, A. E., Montello, D. R. and Hegarty, M. (1999). Spatial knowledge acquisition from maps and from navigation in real and virtual environments, Mem. Cogn. 27, 741–750.

Röder, B., Föcker, J., Hötting, K. and Spence, C. (2008). Spatial coordinate systems for tactile spatial attention depend on developmental vision: evidence from event-related potentials in sighted and congenitally blind adult humans, Eur. J. Neurosci. 28, 475–483.

Strelow, E. R. (1985). What is needed for a theory of mobility: direct perceptions and cognitive maps — lessons from the blind, Psychol. Rev. 92, 226.

White, G. R., Fitzpatrick, G. and Mcallister, G. (2008). Toward accessible 3D virtual environments for the blind and visually impaired, in: Proc. 3rd ACM Int. Conf. Digit. Interact. Media Entertain. Arts, ACM, Athens, Greece, pp. 134–141.

Witmer, B. G., Bailey, J. H., Knerr, B. W. and Parsons, K. C. (1996). Virtual spaces and real world places: transfer of route knowledge, Int. J. Hum. Comput. Stud. 45, 413–428.