

Research in Developmental Disabilities 32 (2011) 2084–2091


The impact of vision in spatial coding

Konstantinos Papadopoulos *, Eleni Koustriava

Department of Educational and Social Policy, University of Macedonia, 156 Egnatia St., P.O. Box 1591, 54006 Thessaloniki, Greece

A R T I C L E I N F O

Article history:

Received 23 July 2011

Accepted 25 July 2011

Available online 15 September 2011

Keywords:

Vision

Visual impairments

Spatial coding

Haptic strategies

A B S T R A C T

The aim of this study is to examine performance in coding and representing near-space in relation to vision status (blindness vs. normal vision) and sensory modality (touch vs. vision). Forty-eight children and teenagers participated. Sixteen of the participants were totally blind or had only light perception, 16 were blindfolded sighted individuals, and 16 were non-blindfolded sighted individuals. Participants were given eight different object patterns in different arrays and were asked to code and represent each of them. The results suggest that vision influences performance in spatial coding and spatial representation of near space. However, there was no statistically significant difference between participants with blindness who used the most effective haptic strategy and blindfolded sighted participants. Thus, the significance of haptic strategies is highlighted.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Many behavioral studies reveal the detrimental effect of vision loss on the acquisition of spatial knowledge (see for a review Cattaneo et al., 2008). Mental representation of space is essentially visual in character (Huttenlocher & Presson, 1973). However, there is another point of view, according to which mental representation neither results from nor reflects visual perception (Millar, 1976). Although Millar (1988) accepts that the type of information used by individuals with congenital blindness may cause difficulties in mental spatial reorganization tasks, she states that vision is neither necessary nor sufficient for spatial coding. Along the same lines, Paivio (1986) suggests that imagery could result from any sensory modality.

A number of studies provide evidence of quite similar performance between individuals with blindness and individuals with normal vision when the experimental task concerns visual imagery (see for a review Cattaneo et al., 2008). Different cognitive strategies, such as mental spatial representations or haptic imagery, may intervene, resulting in a performance equivalent to one based on visual imagery (Cattaneo et al., 2008). However, it has been suggested that there may be a distinction between visual imagery and representational spatial imagery; the latter seems to be an amodal and abstract mental representation (Corballis, 1982).

Thus, while visual input is considered by many researchers to be necessary for imagery, others argue that its lack can be compensated for through the development of another sensory modality. For instance, part of the research underlines the effectiveness of touch in specific tasks, which individuals with visual impairments performed as well as or even better than their sighted counterparts (Heller, 1989; Heller, Brackett, Scroggs, & Allen, 2001; Postma, Zuidhoek, Noordzij, & Kappers, 2007). Heller, Wilson, Steffen, and Yoneyama (2003) suggested that haptics might surpass visual experience when haptic selectivity is required. In addition, Fiehler, Reuschel, and Rösler (2009) showed that early induction in orientation and

* Corresponding author. Tel.: +30 2310 891403; fax: +30 2310 891388.

E-mail addresses: [email protected] (K. Papadopoulos), [email protected] (E. Koustriava).

0891-4222/$ – see front matter © 2011 Elsevier Ltd. All rights reserved.

doi:10.1016/j.ridd.2011.07.041


mobility training may procure acuity of spatial perception in individuals with congenital blindness, who performed as well as individuals without visual impairment in the spatial tasks of the study in question.

What happens, though, when individuals with total blindness code and represent near (peripersonal) space? Does vision still have superiority, or does the haptic experience of individuals with blindness compensate for the lack of vision and enable an equal or even better performance? Moreover, what happens when sighted individuals with and without a blindfold code and represent near space? Could touch be as effective as vision? Does visual experience influence performance?

Several studies have examined the ability of people with visual impairments in terms of spatial coding and spatial representation of near space (Hollins & Kelley, 1988; Millar, 1979; Monegato, Cattaneo, Pece, & Vecchi, 2007; Papadopoulos, Koustriava, & Kartasidou, 2009; Papadopoulos, Koustriava, & Kartasidou, 2010; Pasqualotto & Newell, 2007; Postma et al., 2007; Vanlierde & Wanet-Defalque, 2004), with significant findings. The majority of researchers who compared the spatial performance of individuals with and without visual impairments, or the performance of individuals with blindness to that of peers with residual vision, conclude that visual experience decisively influences the management of the spatial environment. Moreover, the role of visual experience in spatial cognition is considered a major one (Cattaneo et al., 2008).

Apart from the influence of visual experience on enabling spatial imagery (Hollins, 1985; Vanlierde & Wanet-Defalque, 2004), visual experience is also considered very important for effective coding and representation of spatial information, as well as for the updating of spatial haptic representations (Pasqualotto & Newell, 2007). However, research has indicated that a person with congenital visual impairment may perform better at spatial tasks than a person who is adventitiously impaired (Monegato et al., 2007).

Previous research on sighted participants has examined near-space coding through various sensory modalities (Newell, Woods, Mernagh, & Bülthoff, 2005), different frames of reference (Kappers, 2007; Waller, Lippa, & Richardson, 2008) and viewpoints (Mou, Fan, McNamara, & Owen, 2008; Mou, McNamara, Valinquette, & Rump, 2004), under blindfolded conditions or conditions of non-informative vision (Newport, Rabb, & Jackson, 2002; Newell et al., 2005). On the other hand, the same procedures cannot be used, nor can the same conclusions be validated, for individuals with blindness. That is because individuals with blindness rely on haptic strategies to code and represent space and tend to apply different, more egocentric, coding systems (Spencer, Blades, & Morsley, 1989; Warren, 1994).

Thinus-Blanc and Gaunet (1997) define strategy as 'the set of functional rules implemented by the participant at the various phases of information processing, from the very first encounter with a new situation until the externalization of the spatial knowledge'. Studying the results of research in large-scale space, they concluded that strategies may be the cause of similar performance levels between participants with blindness, late blindness and sighted participants with a blindfold (Thinus-Blanc & Gaunet, 1997). Egocentric representation expresses the relationship between the position of objects and the viewer (Wang, 2003). It originates in sensory data and can provide a starting point for action in space (Nardini, Burgess, Breckenridge, & Atkinson, 2006; Pick, 2004). In the case of haptic exploration of space, the egocenter – the reference point according to which the distance and/or the orientation of an object in space is encoded – is not the eye or head, and not necessarily the body of the viewer, but could be the shoulder, elbow or wrist (Kappers, 2007). Allocentric representation contains no spatial information relative to a viewer (Wang, 2003) and expresses a location in relation to an external point of reference (Nardini et al., 2006).

Children with blindness have difficulty in changing their frame of reference from egocentric to allocentric (Morsley, Spencer, & Baybutt, 1991; Ochaita & Huertas, 1993; Warren, 1994). They seem to abide by egocentric strategies and to be slow to make the transition to an allocentric encoding system (Ochaita & Huertas, 1993; Warren, 1994). Visual experience seems to be responsible for the adoption of specific haptic strategies (Spencer et al., 1989; Ungar, Blades, & Spencer, 1995), which in turn define the performance of an individual with visual impairments in spatial tasks (Ungar et al., 1995). However, visual experience does not seem to be the only factor that influences the adoption of a specific coding strategy. Since individuals with congenital blindness appear to make use of allocentric coding strategies in research, other factors, such as education in orientation and mobility, may mediate. In a study by Ungar et al. (1995), participants (whether those with congenital blindness or those with residual vision) who used methodical strategies for coding near space – calculating the position of each shape based on the distance between the shapes and the distance of the shapes from the external frame – performed better than participants who used simpler coding strategies.

The results of studies showing that individuals with blindness can perform similarly to individuals with normal vision in spatial tasks are usually discussed through the prism of compensatory mechanisms (Cattaneo et al., 2008). What if near-space coding allows haptics a force approximately equal to that of vision? It has been suggested that if a spatial point of reference is detectable by an individual who is visually impaired, it could lead the person to allocentric coding (Millar, 1979). Millar (1979) designed her research supposing that children with blindness rely on an egocentric system – specifically, on movements – because they fail to internalize external cues that are not present for them. According to Millar, individuals with visual impairments fail to 'internalize' spatial information because the external cues cannot be perceived. In other words, they internalize spatial cues when these are present (Millar, 1979).

Vision facilitates mental combinations of three or more items, whereas haptics leads to a sequential way of receiving information (Cattaneo et al., 2008). For this reason, it is assumed that individuals with blindness face serious difficulties in simultaneously processing a considerable amount of haptic information (Cattaneo et al., 2008). This hypothesis is further supported by the fact that the efficiency of vision in collecting and processing a large amount of spatial information is


decisively reduced when visual exploration is made sequential so as to resemble haptics (Loomis, Klatzky, & Lederman, 1991).

What happens, though, when there are haptic strategies that permit touch to combine a significant amount of spatial information? Is it possible that haptic strategies compensate for the properties of vision? And what really happens when sighted individuals are forced to use haptic strategies?

2. Study

The aim of this study was to examine performance in coding and representing near space in relation to vision status (blindness vs. normal vision) and sensory modality (touch vs. vision). For this purpose, individuals with blindness, individuals with normal vision and blindfolded sighted individuals were compared based on their performance in coding and representing near space. The influence of haptic strategies on the performance of participants with blindness was examined as well.

2.1. Participants

Forty-eight children and teenagers participated in the present study. Sixteen of the participants were totally blind or had only light perception, 16 were blindfolded sighted individuals, and 16 were non-blindfolded sighted individuals. The three groups were matched in terms of gender, age and educational level.

Eight boys and eight girls, aged from seven years and 11 months to 17 (M = 12.58 years, SD = 2.42), comprised the group of participants with blindness (group A). Fourteen of them were congenitally blind, and two of them became blind before the age of six. Concerning their educational level, 11 of the participants were primary school students, 3 were secondary school students and 2 were high school students.

The participants came from two different cities in Greece: the capital, Athens, and the second largest city, Thessaloniki. In the beginning, we compiled a list of students with blindness who studied in special schools for children with visual impairments or in mainstream schools. However, only students whose parents gave their consent and who had no additional disabilities participated.

Eight boys and eight girls, aged from seven years and six months to 17 (M = 12.57 years, SD = 2.96), constituted the group of blindfolded sighted participants (group B). Eight boys and eight girls, aged from seven years and four months to 17 (M = 12.51 years, SD = 2.82), constituted the group of sighted participants without a blindfold (group C).

2.2. Experiment

The experiment examined the ability of each participant in spatial coding and spatial representation of near space. The participants were required to memorize the type, position and orientation of the various shapes which were given to them in turn, and subsequently to place the correct shapes in the right position and correctly orient them. The experiment is similar to tests that have been used by Platsidou (1993) for the evaluation of ability in spatial coding and spatial representation of sighted individuals. Moreover, the same experiment was used in previous research by Papadopoulos et al. (2010) for the evaluation of individuals with visual impairments.

The experiment consisted of two tests. In the first test the shapes were placed in their original orientation; in other words, the shapes were not rotated. In the second test the shapes were rotated (see Fig. 1 for the correct orientation of shapes and Fig. 2 for rotated shapes). Each test included four different sub-tests (2-, 3-, 4- and 5-shapes, respectively). The experiment consisted of eight sub-tests in total.

2.3. Materials and design

Eight surfaces and a base were constructed. The base was wooden, A3 size (42 cm × 29.7 cm), and its edges were defined by a prominent black frame which could be easily identified through touch. Consequently, the participant could use the frame or some of its points (the angles) as reference points when coding and representing the shapes; he/she could therefore use allocentric references for spatial coding. Spatial coding based on allocentric references brings about better results (Nardini et al., 2006; Pick, 2004).

The surfaces were constructed out of white A3-size paper, onto which geometric shapes, made out of black-painted cork, were glued. The height of the shapes was approximately 3 mm. The black color used for the frame of the base and for the shapes was chosen to ensure proper contrast with the white paper. Moreover, a set of loose shapes (not glued to the paper), also built out of black-painted cork and approximately 3 mm high, was used in the representation phase.

2.4. Procedure

Each participant was examined alone in a quiet room. Initially, the researcher explained in detail the procedure to be followed during the test, and this was followed by a short period of time (5 min) for the participant to become familiar with the procedure.


Fig. 1. The four surfaces used in the test, with the shapes in their correct orientation.


The participant sat comfortably in a chair in front of a table. The base was placed on the table, directly in front of the participant. On this base the researcher placed the surface with the glued-on shapes (initial surface) (see Figs. 1 and 2). A specific amount of time was given for the coding of each surface: 30 s for the surface with two shapes, plus an extra 15 s for each additional shape. That is, 45 s were given for three shapes, 60 s for four shapes and 75 s for five shapes. Participants with blindness and blindfolded sighted participants read (coded) the surface through touch; non-blindfolded sighted participants were only able to use their vision. In order to examine the impact of haptic strategies on performance, we observed and recorded the way each participant read/coded each surface.
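The coding-time schedule described above is a simple linear rule; as a sketch (a hypothetical helper, not part of the study's materials), it can be written as:

```python
def coding_time_seconds(num_shapes: int) -> int:
    """Time allowed to code a surface: 30 s for two shapes,
    plus an extra 15 s for each additional shape."""
    if num_shapes < 2:
        raise ValueError("each surface held at least two shapes")
    return 30 + 15 * (num_shapes - 2)
```

So the four sub-tests allow 30, 45, 60 and 75 s respectively.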

When the participant stated that he/she had completed the coding phase, or when the available time ran out, the researcher took the surface with the shapes away from the participant. After a period of 10 s, a blank surface (a piece of A3-size paper) was placed on the base and a set of shapes was placed on the table. The delay was inevitable, as this was the time

Fig. 2. The four surfaces used in the test, with the rotated shapes.


Fig. 3. Specimen of the four shapes correctly orientated (d, distance between the centers of the original shape and the final shape – location error; a, divergence of direction – orientation error).


needed to place the blank surface on the base and the base in the appropriate position in front of the participant, as well as to place the set of shapes on the table, next to the base. In each sub-test, the set of shapes given to the participant numbered twice as many as the shapes on the surface. The participant had to choose the correct shapes (the same ones that had been placed on the initial surface) and place them in the correct position and orientation. For example, for the representation of the two shapes, the participant had to choose the two correct shapes out of the four different ones given to him/her. Under each of the given shapes was a piece of modeling clay, to ensure that the shapes could be securely placed on the paper but also to allow swift corrections of a shape's position. The surfaces were presented to the participant in a specific order: first the surface with two shapes, then the surface with three shapes, and so on. No time limits were applied for the representation of each surface.

When the participant stated that she/he had completed reproducing each surface, the researcher registered the position of each shape by drawing its outline on the surface with a pencil. From this procedure a total of 384 A3 surfaces emerged (8 for each participant), on which the positions of the shapes were drawn as the participants had defined them during the representation procedure (final surfaces).

For the evaluation of the participants' performance, the following measurements were taken: (1) the location error, i.e. the distance (d) between the centers of the original shape (the one the participant read on the initial surface) and the final shape (the one that resulted from the representation procedure) (see Fig. 3); (2) the orientation error, i.e. the divergence angle (a) between the direction of the shape placed by the participant and the initial, correct direction of the shape (see Fig. 3); whenever this angle was more than 15°, the participant's answer was marked as wrong (this measurement was not implemented when, during the selection and placement of the shapes, the participant replaced one shape with another); (3) the number of replacements, i.e. the sum of two errors: object identity errors and object-to-position assignment errors. An object identity error counts a shape that, after the representation procedure, did not match any of the shapes on the initial surface of the sub-test. An object-to-position assignment error counts a shape that appeared on both the initial and final surfaces but in a different position.
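The first two measures can be made concrete with a small sketch (hypothetical helper functions that mirror the definitions above; they are not the authors' scoring software):

```python
import math

def location_error(original_center, final_center):
    """Location error d: Euclidean distance (in cm) between the center of
    the original shape and the center of the shape the participant placed."""
    (x1, y1), (x2, y2) = original_center, final_center
    return math.hypot(x2 - x1, y2 - y1)

def orientation_wrong(correct_deg, placed_deg, threshold_deg=15.0):
    """Orientation error: the answer counts as wrong when the divergence
    angle a exceeds 15 degrees (taking the smaller angle on the circle)."""
    a = abs(placed_deg - correct_deg) % 360.0
    a = min(a, 360.0 - a)
    return a > threshold_deg
```

For instance, a shape placed 3 cm to the right of and 4 cm below its target gives d = 5 cm, and a shape rotated by 20° from its correct direction is scored as an orientation error.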

3. Results

Test scores for each group of participants were calculated in relation to location error, orientation error and replacements (see Table 1). As far as the first test is concerned, a one-way ANOVA revealed statistically significant differences among the three groups regarding the location error (F = 8.510, p < .01) and orientation error (F = 7.997, p < .01). In particular, the individuals of group A (participants with blindness) had a greater location error than the individuals of group B (blindfolded sighted participants) (Bonferroni post hoc test, p < .05) and those of group C (sighted) (p < .01). Moreover, the individuals of group A had a greater orientation error than the individuals of group B (p < .05) and those of group C (p < .01).

Table 1
Mean score of wrong answers of each group in the first and second test.

Group                              First test               Second test
                                   LE     OE    RP          LE     OE    RP
Participants with blindness        73.59  2.44  3.94        74.83  4.44  5.00
Blindfolded sighted participants   56.48  1.13  3.13        48.01  4.63  2.31
Sighted participants               46.59  .75   2.94        41.42  3.50  1.38

LE, location error (cm); OE, orientation error (number of errors); RP, replacements (number of errors).


Table 2
Mean score of the wrong answers of participants with visual impairments in relation to haptic strategy in the first and second test.

Group     First test              Second test
          LE      OE    RP        LE     OE    RP
Group-1   54.27   2.00  1.50      63.50  4.50  3.33
Group-2   76.83   2.00  3.67      65.37  6.00  2.00
Group-3   71.63   1.67  5.00      78.87  5.33  4.33
Group-4   101.60  4.00  7.00      95.88  2.50  10.25

LE, location error (cm); OE, orientation error (number of errors); RP, replacements (number of errors).


Concerning the second test, a one-way ANOVA revealed statistically significant differences among the three groups regarding the location error (F = 15.350, p < .01) and the number of replacements (F = 7.097, p < .01). The individuals of group A had a greater location error than the individuals of group B (Bonferroni post hoc test, p < .01) and those of group C (p < .01). Moreover, the individuals of group A had a greater number of replacements than the individuals of group B (p < .05) and those of group C (p < .01).

On the other hand, in neither the first test nor the second were there any statistically significant differences between blindfolded sighted individuals and sighted individuals with reference to the location error, orientation error and replacements.
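For readers who want to reproduce the form of these comparisons, the one-way ANOVA F statistic reduces to the between-group mean square divided by the within-group mean square. A minimal sketch on synthetic scores (not the study's data; a statistics package would normally supply the p-values and the Bonferroni post hoc tests):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    k, n = len(groups), len(scores)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic example with three small groups of scores:
f_stat = one_way_anova_f([1, 2, 3], [2, 3, 4], [3, 4, 5])
```

The F value is then compared against the F distribution with (k - 1, n - k) degrees of freedom to obtain the significance level.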

As previously mentioned (Section 2.4), the researcher observed and recorded the way each participant read/coded each surface. Depending on the strategy used by each participant, five groups emerged with the following characteristics: (a) the participants of group 1 scanned the surface with their hands and simultaneously measured the distances of the shapes from the points of reference, using as points of reference both the ends of the frame and the nearest other shapes on the surface (resulting in an allocentric – both intrinsic and extrinsic – mental representation); (b) the participants of group 2 used as points of reference only the other shapes (resulting in an intrinsic allocentric mental representation); (c) the participants of group 3 used as points of reference only the ends of the frame (resulting in an extrinsic allocentric mental representation); (d) the participants of group 4 did not use any external points of reference, but simply touched the shapes without measuring distances; and (e) the participants of group 5 used their vision to code the arrangement of the shapes. Of the 16 participants with blindness, six fell into group 1, three into group 2, three into group 3 and four into group 4. Of the 16 blindfolded sighted participants, eight fell into group 1, seven into group 3 and only one into group 4. All the participants who used haptic strategies explored the configurations with both hands; none kept one hand anchored to the frame while exploring with the other. Participants with blindness in group 1, which had the best haptic strategy, appear to have performed better than the other groups in both the first and second tests (see Table 2).

Moreover, a one-way ANOVA was implemented to see whether there were any statistically significant differences between group 1 of participants with blindness and the two groups of sighted participants (blindfolded and non-blindfolded). Concerning the first test, the ANOVA revealed statistically significant differences among the three groups regarding the orientation error (F = 4.157, p < .05). The participants with blindness had a greater orientation error than the sighted participants without a blindfold (Bonferroni post hoc test, p < .05). No statistically significant differences emerged between participants with blindness and blindfolded sighted participants.

Concerning the second test, the ANOVA revealed statistically significant differences among the three groups regarding the location error (F = 4.164, p < .05). Participants with blindness had a greater location error than the sighted participants without a blindfold (Bonferroni post hoc test, p < .05). There were no statistically significant differences between participants with blindness and blindfolded sighted individuals.

4. Discussion

The present study concludes that vision influences performance in spatial coding and spatial representation of near space. The participants with blindness performed worse than the sighted participants, whether the latter used touch to code and represent near space (group B) or used only their vision (group C). Moreover, it seems that this does not derive mainly from the fact that individuals with blindness code and represent near space through touch; otherwise, one would anticipate a statistically significant difference between blindfolded sighted participants and sighted participants without a blindfold. Here, no such result was detected: although sighted participants who used their vision outperformed sighted participants who used only their touch (blindfolded), the differences between the former and the latter were not statistically significant.

Previous studies which examined the ability of people with visual impairments in spatial coding and spatial representation of near space have concluded that vision – even if reduced – is important for coding and representing space (Papadopoulos et al., 2009, 2010; Pasqualotto & Newell, 2007; Ungar et al., 1995). However, individuals with congenital blindness are able to improve their performance by using the proper strategies to explore and code space (Papadopoulos et al., 2010); the proper strategies in this context seem to be the allocentric ones. In the present study, the statistical data concerning the performance of participants with blindness who used the most effective haptic strategy


compared with the performance of blindfolded sighted participants are of great importance. They reveal the significance of another influential factor: the haptic strategies.

It has been suggested that individuals with congenital blindness tend to use self-referent strategies when they code small-scale spaces (Hollins & Kelley, 1988). Papadopoulos et al. (2010) found, however, that only around a third of participants with blindness appeared to use egocentric strategies alone to code near space. Moreover, only a quarter of participants who were congenitally blind used self-referent strategies. Similarly, in the study of Ungar et al. (1995), only a small proportion of the participants with congenital blindness (10% in the three-shape layout and 20% in the five-shape layout) used self-referent strategies to code and represent space. Here, we show not only that individuals with blindness are capable of adopting allocentric strategies but also that their performance in coding space can equal that of blindfolded sighted individuals.

In the research by Postma et al. (2007), no significant differences were observed between the performances of participants with blindness and blindfolded sighted participants in coding near space. However, the vast majority of the participants were over 40 years old. Moreover, participants had the chance to code the locations in previous trials through movement, which according to Millar (1994) is a basic mode of spatial coding. Millar (1979) suggested that in near space (hand) movements intervene in the memorization of the distance and direction of objects. Movements and the proprioceptive information used to code space are by definition egocentric strategies. Thus, one could not argue in favor of an allocentric representation of objects. Although Postma et al. (2007) mention that the descriptions given by participants with blindness reveal an allocentric–intrinsic mental representation of objects, this same representation could be just an indication of serial memory, which seems to be essential for individuals with blindness when generating mental representations (Cattaneo et al., 2008).

Moreover, in the study of Papadopoulos et al. (2010) it also became apparent that the ability for independent movement influences haptic strategy selection. The group of participants with the most efficient haptic strategy appeared to have a greater ability for independent movement. Furthermore, Fiehler et al. (2009) suggested that the early training of children with congenital blindness in orientation and mobility results in acuity of spatial perception as well as in allocentric coding performance similar to the performance of individuals with normal vision.

To summarize, vision is very important for coding and representing near space. The proper allocentric strategy can, however, compensate for the lack of vision. Within the present study we show that individuals with blindness, even those with congenital blindness, are indeed capable of applying allocentric strategies to code and represent space. For the first time it appears that when proper allocentric strategies are used during the coding and representation of near space, the performance of individuals with blindness can equal that of blindfolded sighted individuals. These findings are significant enough to be applied to the orientation and mobility training of individuals with visual impairments, to the field of environmental accessibility, or even to the field of adaptation of the environment. Previous studies with sighted participants have also concluded that there are correlations between spatial abilities in small-scale space and the spatial knowledge of far space (see Hegarty, Montello, Richardson, Ishikawa, & Lovelace, 2006).

References

Cattaneo, Z., Vecchi, T., Cornoldi, C., Mammarella, I., Bonino, D., Ricciardi, E., et al. (2008). Imagery and spatial processes in blindness and visual impairment. Neuroscience and Biobehavioral Reviews, 32, 1346–1360.
Corballis, M. C. (1982). Mental rotation: Anatomy of a paradigm. In M. Potegal (Ed.), Spatial abilities: Developmental and physiological foundations. New York: Academic Press.
Fiehler, K., Reuschel, J., & Rösler, F. (2009). Early non-visual experience influences proprioceptive-spatial discrimination acuity in adulthood. Neuropsychologia, 47, 897–906.
Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial-layout learning. Intelligence, 34, 151–176.
Heller, M. (1989). Picture and pattern perception in the sighted and the blind: The advantage of late blind. Perception, 18, 379–389.
Heller, M. A., Brackett, D. D., Scroggs, E., & Allen, A. C. (2001). Haptic perception of the horizontal by blind and low-vision individuals. Perception, 30, 601–610.
Heller, M. A., Wilson, K., Steffen, H., & Yoneyama, K. (2003). Superior haptic perceptual selectivity in late-blind and very-low-vision subjects. Perception, 32, 499–511.
Hollins, M. (1985). Styles of mental imagery in blind adults. Neuropsychologia, 23, 561–566.
Hollins, M., & Kelley, K. E. (1988). Spatial updating in blind and sighted people. Perception and Psychophysics, 43(4), 380–388.
Huttenlocher, J., & Presson, C. C. (1973). Mental rotation and the perspective problem. Cognitive Psychology, 4, 277–299.
Kappers, A. M. L. (2007). Haptic space processing—Allocentric and egocentric reference frames. Canadian Journal of Experimental Psychology, 61(3), 208–218.
Loomis, J. M., Klatzky, R. L., & Lederman, S. J. (1991). Similarity of tactual and visual picture recognition with limited field of view. Perception, 20, 167–177.
Millar, S. (1976). Spatial representation by blind and sighted children. Journal of Experimental Child Psychology, 21, 460–479.
Millar, S. (1979). The utilization of external and movement cues in simple spatial tasks by blind and sighted children. Perception, 8, 11–20.
Millar, S. (1988). Models of sensory deprivation: The nature/nurture dichotomy and spatial representation in the blind. International Journal of Behavioral Development, 11(1), 69–87.
Millar, S. (1994). Understanding and representing space: Theory and evidence from studies with blind and sighted children. New York: Oxford University Press.
Monegato, M., Cattaneo, Z., Pece, A., & Vecchi, T. (2007). Comparing the effects of congenital and late visual impairments on visuospatial mental abilities. Journal of Visual Impairment and Blindness, 101(5), 278–295.
Morsley, K., Spencer, B., & Baybutt, K. (1991). Is there any relationship between a child's body image and spatial skills? The British Journal of Visual Impairment, 9(2), 41–43.
Mou, W., Fan, Y., McNamara, T. P., & Owen, C. B. (2008). Intrinsic frames of reference and egocentric viewpoints in scene recognition. Cognition, 106, 750–769.
Mou, W., McNamara, T. P., Valiquette, C. M., & Rump, B. (2004). Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(1), 142–157.
Nardini, M., Burgess, N., Breckenridge, K., & Atkinson, J. (2006). Differential developmental trajectories for egocentric, environmental and intrinsic frames of reference in spatial memory. Cognition, 101, 153–172.
Newell, F. N., Woods, A. T., Mernagh, M., & Bülthoff, H. H. (2005). Visual, haptic and crossmodal recognition of scenes. Experimental Brain Research, 161, 233–242.
Newport, R., Rabb, B., & Jackson, S. R. (2002). Noninformative vision improves haptic spatial perception. Current Biology, 12, 1661–1664.
Ochaita, E., & Huertas, J. A. (1993). Spatial representation by persons who are blind: A study. Journal of Visual Impairment and Blindness, 87(2), 37–41.
Paivio, A. (1986). Mental representations: A dual coding approach. Oxford: Oxford University Press.
Papadopoulos, K., Koustriava, E., & Kartasidou, L. (2009). The impact of residual vision in spatial skills of individuals with visual impairments. Journal of Special Education. doi:10.1177/0022466909354339.
Papadopoulos, K., Koustriava, E., & Kartasidou, L. (2010). Spatial coding of individuals with visual impairment. Journal of Special Education. doi:10.1177/0022466910383016.
Pasqualotto, A., & Newell, F. N. (2007). The role of visual experience on the representation and updating of novel haptic scenes. Brain and Cognition, 65, 184–194.
Pick, H. L., Jr. (2004). Mental maps, psychology of. In P. B. Baltes & N. J. Smelser (Eds.), International encyclopedia of the social & behavioral sciences (pp. 9681–9683). Amsterdam, Netherlands: Elsevier.
Platsidou, M. (1993). Information processing system: Structure, development and interaction with specialized cognitive abilities. Doctoral dissertation. Thessaloniki: Aristotle University of Thessaloniki.
Postma, A., Zuidhoek, S., Noordzij, M. L., & Kappers, A. M. L. (2007). Differences between early-blind, late-blind, and blindfolded-sighted people in haptic spatial-configuration learning and resulting memory traces. Perception, 36, 1253–1265.
Spencer, C., Blades, M., & Morsley, K. (1989). The child in the physical environment: The development of spatial knowledge and cognition. Chichester: Wiley.
Thinus-Blanc, C., & Gaunet, F. (1997). Representation of space in blind persons: Vision as a spatial sense? Psychological Bulletin, 121(1), 20–42.
Ungar, S., Blades, M., & Spencer, C. (1995). Mental rotation of a tactile layout by young visually impaired children. Perception, 24(8), 891–900.
Vanlierde, A., & Wanet-Defalque, M. C. (2004). Abilities and strategies of blind and sighted subjects in visuo-spatial imagery. Acta Psychologica, 116, 205–222.
Waller, D., Lippa, Y., & Richardson, A. (2008). Isolating observer-based reference directions in human spatial memory: Head, body, and self-to-array axis. Cognition, 106, 157–183.
Wang, R. F. (2003). Spatial representations and spatial updating. The Psychology of Learning and Motivation, 42, 109–155.
Warren, D. H. (1994). Blindness and children: An individual differences approach. Cambridge: Cambridge University Press.