
Multisource Assessment Programs in Organizations: An Insider’s Perspective

Stéphane Brutus, Mehrdad Derayeh

This study is an overview of multisource assessment (MSA) practices in organizations. As a performance evaluation process, MSA can take various forms and can be complex for an organization to use. Although the literature on MSA is extensive, little information exists on how these programs are perceived by the individuals responsible for their implementation and maintenance. The purpose of this study was twofold: to describe the current MSA practices used in organizations and to assess the issues associated with implementation and management of these practices from the perspective of the individual responsible for managing an MSA program. One hundred one companies located in Canada were surveyed for the study; almost half of these organizations (43 percent) were using MSA. Interviews of managers responsible for MSA in various organizations and some archival data on these organizations were the main source of data for the study. The study revealed that the use of MSA differs widely from one company to another. In addition, results show that, once implemented, MSA requires a number of adjustments. The source of these adjustments centered on employee resistance, lack of strategic purpose for MSA, poor design of the instrument, and problems with the technology used to support MSA. These results are discussed and a proposed research agenda is outlined.

HUMAN RESOURCE DEVELOPMENT QUARTERLY, vol. 13, no. 2, Summer 2002. Copyright © 2002 Wiley Periodicals, Inc.

Note: This study was supported by grant NC2006 from Quebec’s Fonds pour la Formation de Chercheurs et l’Aide à la Recherche (FCAR).

Of the many trends that have swept organizations in the past decade, few have had the impact of multisource assessment (MSA). Romano (1994) estimated that, in 1992 alone, companies spent more than $150 million on developing and implementing MSA programs. Although the origins of MSA can be traced back to the late 1960s (Hedge, Borman, and Birkeland, 2001), its widespread application in organizations is relatively recent. The rise in the number of consulting groups specializing in MSA is an indication that interest in these programs is at an all-time high and shows no sign of leveling off.

Multisource assessment, also known as 360 degree feedback, refers to the process by which performance evaluations are collected from many individuals—supervisors, peers, subordinates, and customers (London and Smither, 1995; Dunnette, 1993; Tornow, 1993). Collection of performance information from multiple sources assumes that different evaluation perspectives offer unique information and thus add incremental validity to assessing individual performance (Borman, 1998). MSA embraces a dynamic and multidimensional view of individual performance, one that is best captured by these multiple perspectives (Borman, 1998; London and Smither, 1995). The shift in evaluation duties from the supervisor, the traditional bearer of evaluative tasks, to all relevant individuals has profound implications for individuals and organizations alike. Accordingly, MSA is a phenomenon that has taken center stage in the practice of and research on performance evaluation.

On a theoretical level, MSA stems from the argument that, in terms of reliability (reduction in measurement error) and validity (greater coverage of the individual performance domain), assessment of individual performance benefits from using multiple evaluators. One could argue that the popularity of MSA is due, in large part, to the intuitive appeal of this argument. However, empirical support for the impact of MSA on the reliability and the validity of performance evaluations has been somewhat mixed (for example, Mount and others, 1998; Greguras and Robie, 1998).

After roughly a decade of widespread adoption of MSA in organizations, the question Does it work? has been on the minds of many, researchers and practitioners alike (Antonioni, 1996; Borman, 1998; McLean, 1997; Nowack, Hartley, and Bradley, 1999; Tornow, 1993). Naturally, the answer to this question can be found at a number of levels. A majority of the empirical work conducted on MSA has focused on individual-level outcomes. This approach has helped clarify the affect, cognitions, and behaviors of the many actors directly involved in the process, mainly raters and ratees. Research has focused on the reactions of raters (London, Wohlers, and Gallagher, 1990), the reactions of ratees (Levy, Cawley, and Foti, 1998; Smither, Wohlers, and London, 1995), and the impact of MSA feedback on subsequent behavior (Atwater and Yammarino, 1997; Smither and others, 1995).

However, the impact of MSA is probably best addressed at the organizational level. Performance evaluation systems are among the most important human resource components of an organization (Judge and Ferris, 1993; Cleveland, Murphy, and Williams, 1989; Smith, Hornsby, and Shirmeyer, 1996); the belief that MSA yields performance information superior to that resulting from traditional performance appraisal systems may explain its popularity. The utility of feedback-based developmental practices and of any administrative decisions is intimately linked to the quality of the individual performance information collected. However, there is more to this appeal than measurement quality. An important consequence of MSA stems from the dissociation of evaluative duties from hierarchical standing. The democratization of performance appraisal is believed to have a substantial impact on organizational culture in terms of promoting participation, increasing the level of trust and satisfaction, and facilitating communication between employees (Hall, Leidecker, and DiMarco, 1996; Waldman and Atwater, 1998). For many, the latter may well be the most important outcome of MSA.

Interestingly, there is an increasing amount of evidence cautioning against an overly optimistic view of the organizational benefits of MSA. Waldman, Atwater, and Antonioni (1998), for example, suggested that some of the motives behind adopting MSA might be misguided. Using institutional theory to support their argument, the authors propose that the desire to imitate competing organizations greatly influences an organization’s decision to implement MSA. Criticism pertaining to the lack of strategic alignment of performance evaluation systems (Schneier, Shaw, and Beatty, 1991) has also been levied against users of MSA. Finally, cultural factors have often been identified as a barrier to successful adoption of MSA (Waldman and Atwater, 1998).

The difficulty with evaluating MSA is that it is not a “categorically unique method” (London and Smither, 1995, p. 804). MSA is a process that can take various shapes. Reliance on multiple raters exacerbates issues such as rater anonymity and participants’ trust; it renders this practice a lot more complex to manage than more traditional performance appraisal systems. Practical issues such as maintenance of anonymity and the method by which raters are selected represent tricky, but unavoidable, issues in the use of MSA.

Probably the most pressing decision for an organization is the choice between using MSA information for developmental or for administrative purposes. Most organizations use MSA as a tool to facilitate employee development (London and Smither, 1995; Waldman and Atwater, 1998). Implementing MSA for developmental purposes (commonly referred to as 360 degree feedback) centers on communicating MSA information to the focal employee. This process is aimed at increasing employees’ self-awareness, helping them set goals, focusing on areas of development and, ultimately, altering their behavior to improve job performance (London and Smither, 1995). As such, MSA processes are typically embedded in a development system that includes feedback mechanisms, development planning, and support.

In addition to being put to developmental use, the same MSA information can also be fed into existing performance appraisal practices and used for job placement, pay decisions, and downsizing (Fleenor and Brutus, 2000). Scholars have warned against dual-purpose performance evaluations (Cleveland, Murphy, and Williams, 1989) and a similar debate has risen in regard to use of MSA (Dalton, 1996; London, 2000). The decision to use MSA for either purpose has been shown to be consequential. For example, research shows that performance ratings differ substantially as a function of the intended use of the process (Bracken, 1994; Timmereck, 1995). In a recent survey, Timmereck and Bracken (1996) reported that 50 percent of those organizations that had adopted MSA for decision-making purposes abandoned it within a year because of poor acceptance by users and rating inflation. The point that we want to make here is not to promote the virtue of a particular use of MSA; rather, it is to highlight the multiple variations of the process available to an organization and the importance of understanding the ramifications of these variations.

Surprisingly, most of the evidence on whether MSA “works” from an organizational standpoint is indirect. For example, the literature is rife with lists of prescriptions and key elements for successful implementation of MSA (Antonioni, 1996; Hall, Leidecker, and DiMarco, 1996; Lepsinger and Lucia, 1998; Nowack, Hartley, and Bradley, 1999; Waldman, Atwater, and Antonioni, 1998). These lists include, among other suggestions, identifying key stakeholders (e.g., Lepsinger and Lucia, 1998), clearly communicating the process to future users (Hall, Leidecker, and DiMarco, 1996), using extensive pilot testing (Waldman, Atwater, and Antonioni, 1998), training all users (Hall, Leidecker, and DiMarco, 1996), identifying measures to evaluate the effectiveness of the program (Nowack, Hartley, and Bradley, 1999), and focusing on appropriate performance dimensions (London and Smither, 1995). These prescriptions are useful but may lack external validity as they are mostly anecdotal in nature and are based on an outsider’s perspective of the process (usually that of the consultant).

As more organizations seem to be increasing their use of MSA, there is an urgent need to go beyond the perspective of the individual user (the rater or ratee) and the consultant and take stock of the significant experiences of the individuals responsible for implementing MSA programs. The goals of this study are (1) to describe current MSA practices used in organizations and (2) to assess the issues associated with implementing and managing these practices from the perspective of the individual responsible for managing them.

Methodology

The following section describes the methodology used in this study.

Population and Sample. The targeted population consisted of all Canadian organizations of more than one thousand employees, a total of 506 organizations at the time of the study. Large organizations were targeted because they are likely to adopt MSA programs (Edwards and Ewen, 1998). These organizations were identified by means of Strategis, an online database provided by the government of Canada to offer comprehensive business and consumer information on Canadian companies. Two hundred six organizations were randomly selected from this list and contacted by a member of the research team. One hundred one individuals (49 percent) agreed to participate in the study.

The results of independent sample t-tests indicated no evidence of sample bias; there was no significant difference in the number of employees or in total sales volume between participating and nonparticipating organizations. The size of the 101 organizations surveyed ranged from 1,010 to 51,000 employees, with a mean of 4,778. Their total sales volume (a categorical variable) was, on average, between CAN$25,000,000 and CAN$49,999,999. The organizations surveyed represented a variety of industries: manufacturing (38 percent), food and agricultural (11 percent), computer (10 percent), mining and forestry (6 percent), consulting (5 percent), education (4 percent), aerospace (4 percent), health (3 percent), energy (3 percent), finance (2 percent), and others (4 percent).

Data Collection. Interviews were conducted with those individuals responsible for MSA processes in their respective organizations. In addition to the interview data, two archival measures were collected for each organization: number of employees and sales volume.

An interview protocol was developed for this study. The content of the interview was based on a review of the available literature on characteristics of MSA. The protocol was revised as a result of a pilot test conducted with three human resource managers currently overseeing MSA programs. The pilot study also permitted calibration of the interviewers as these three interviews were conducted jointly. The final interview protocol contained eighteen questions pertaining to the characteristics of the MSA in place and the issues related to its usage. The interview protocol included open-ended questions (for example, What decisions are based on the results?) and questions that were more restrictive (such as What type of technology supports the MSA process: Web, phone, paper, or other?). If an organization had more than one MSA process in place, the participant was asked to describe the programs separately. Data collection lasted three months and was completed in the second quarter of the year 2000.

Telephone interviews targeted people who oversaw the MSA initiatives in place in their organization. To locate these individuals, a telephone call was made to each organization to identify the person responsible for the human resource function. This individual was then contacted and asked to refer the researcher to the appropriate MSA representative. Of the 101 organizations surveyed, close to half (44, or 43 percent) used MSA at the time of the interview. If MSA was not used, the data collection process was terminated. Otherwise, the individual responsible for MSA was asked to participate in a telephone interview to be scheduled at a convenient time. For various reasons, 17 of the 44 users of MSA could not be interviewed; a total of 27 interviews were conducted. At the beginning of these interviews, the researcher made clear to participants that their anonymity and that of their organization would be protected. Most participants were enthusiastic about the project and appeared to answer the interview questions quite freely and openly.


A majority of the respondents were from the human resource function: sixteen were vice presidents or directors of human resources (59 percent), six were managers of training or of organizational development (22 percent), two were managers of competencies (7 percent), and one was the manager of recruitment and selection (4 percent) while another managed succession planning (4 percent). Only two respondents (7 percent) were general managers who did not hold a formal human resource function.

Data Analysis. Responses collected from the interviews were compared and aggregated over the course of several meetings. Agreement between the researchers as to the meaning of answers was high as the interview questions were quite specific. When present, disagreement between the two researchers was resolved by consensus. Emergent themes were tabulated according to frequency of occurrence.
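To illustrate the tabulation step described above, the following minimal sketch counts coded themes across interviews. It is not the authors’ actual analysis procedure; the theme labels and the data structure are hypothetical, chosen only to show how a frequency-of-occurrence tally of this kind can be produced.

```python
# Hypothetical sketch: tallying emergent themes by frequency of occurrence.
# The interview data and theme labels below are illustrative, not the study's.
from collections import Counter

# One list of consensus-coded themes per interview.
coded_interviews = [
    ["time and effort", "lack of trust"],
    ["time and effort", "design issues"],
    ["lack of strategy", "time and effort"],
]

# Count how many interviews mentioned each theme.
theme_counts = Counter(theme for themes in coded_interviews for theme in themes)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```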

Results

The results of the study are summarized into two broad sections: description of the MSA practices used in organizations and issues involved in their implementation and management.

Description of the MSA Practices Used in Organizations. The first research question focused on the description of the MSA used. The participants were asked to describe, in detail, the MSA program in place in their organization. These results are summarized in Figure 1.

Extent of Use. Forty-three percent of the organizations surveyed used MSA. This adoption rate shows a substantial increase from Antonioni’s research (1996), in which 20 percent of the organizations surveyed used MSA. Worthy of mention is the fact that nine of the organizations that were not using MSA at the time of the interview were planning to implement it in the near future. To investigate organizational trends in adopting MSA, comparison was made between the forty-four organizations using MSA and those fifty-seven currently not using it. Although no difference was found in terms of sales figures, a significant effect was found in terms of the number of employees (t[93] = −3.18, p < .01). Organizations using MSA had significantly more employees (average of 8,001) than those not using it (average of 2,362 employees).
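As a minimal sketch of the comparison reported above, the snippet below runs an independent-samples t-test on two groups of organization sizes. The employee counts are simulated stand-ins (the study’s raw data are not reproduced here), so the resulting t and p values will not match the published statistic; the code only shows the form of the test.

```python
# Illustration with simulated data: independent-samples t-test comparing the
# number of employees between MSA users and non-users (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
msa_users = rng.normal(loc=8001, scale=4000, size=44)   # 44 organizations using MSA
non_users = rng.normal(loc=2362, scale=1200, size=57)   # 57 organizations not using it

t_stat, p_value = stats.ttest_ind(msa_users, non_users)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```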

Number of Programs. A large majority of organizations using MSA had, at the time of the interview, implemented a single program (twenty-four, or 77 percent). Two organizations used two distinct programs; just one had three. These programs involved different instruments and were targeted at different segments of the organization. In total, the survey allowed the research team to investigate the characteristics of thirty-one separate MSA programs.

Age. The “oldest” program described was introduced ten years ago; on average, the programs were initiated 3.5 years ago. These results point to the increasing popularity of MSA.

[Figure 1. Description of MSA Programs. The original figure charts the thirty-one programs across nine categories: Usage, Link with Developmental Support, Facilitator, Link with Plan, Target Employees, Design, Use of Technology, Frequency, and Rater Selection; the counts and percentages for each category are reported in the text that follows. Note: Numbers in parentheses are percentages.]


Developmental or Administrative Purpose. A large majority of the MSA programs investigated claimed to be for developmental purposes only (twenty-three, or 74 percent). However, many respondents mentioned confusion regarding these developmental programs. In certain situations, MSA information collected for a developmental purpose somehow found its way to individuals responsible for making administrative decisions. For example, a majority of these so-called development programs used immediate supervisors to facilitate the link with developmental planning (twelve of the twenty-three, or 52 percent). Many respondents commented on the dilemma of having the same person who facilitates MSA also being the one responsible for the performance management process. In one extreme example, a copy of the “developmental” feedback report was sent directly to the manager and reviewed with the senior management team.

The remaining programs, with the exception of one, were used for both development and administrative purposes (seven, or 23 percent). Two organizations used an original way to skirt this thorny issue, combining developmental and administrative use through two separate sets of questions—one for development (whose results were sent to the employee) and another for administrative purposes (whose results were also sent to the supervisor).

Links with Development Support. The thirty programs that were used, as a whole or in part, for developmental purposes were investigated further in terms of their links with developmental support. Some programs used an external facilitator (separate from the recipient’s immediate work unit) to manage the feedback process (eleven, or 37 percent). Of these facilitators, some were external consultants (eight, or 27 percent) while others were internal HR staff (three, or 10 percent). As stated in the previous section, many organizations used the recipient’s supervisor to facilitate the feedback. The extent to which the feedback was linked to a development plan tended to vary. A majority of programs had a formal link with developmental planning activities (twenty-one, or 70 percent).

Target Employees. The group of employees targeted by the MSA program varied. Although a few processes (five, or 16 percent) were used for every employee in the organization, most were specifically intended for managerial positions (nineteen, or 61 percent). When the program was used for managers, the most common target was the total managerial population (fourteen, or 45 percent) while others focused only on high-potentials (four, or 13 percent); one of these programs was used for corporate trainees only. Finally, some programs (seven, or 23 percent) were reserved exclusively for executives.

Internal Versus External Design and Management. As stated in the introduction, MSA is a complex system to develop and manage. Results show that a majority of MSA programs rely on external consultants (twenty-six, or 83 percent). In some instances, an external group used their own instrument and administered the whole process (eleven, or 35 percent), while in others consultants shaped the process according to the organization’s competency models (fifteen, or 48 percent). Relatively few organizations did not use external assistance for their MSA programs (five, or 16 percent).

Use of Technology. MSA programs still rely on traditional paper administration for data collection (twenty, or 65 percent) but other technologies were also used: Web (four, or 13 percent), e-mail (five, or 16 percent), and computer diskette (two, or 6 percent).

Frequency of Administration. In terms of frequency of administration, a majority of organizations used MSA cyclically: organizations used it annually (twelve, or 38 percent), every two years (three, or 9 percent), or more than once a year (three, or 9 percent). Many used MSA infrequently, as needed (seven, or 24 percent); these organizations used it to develop high-potentials, to address performance problems with some employees, or only if requested by an employee. Interestingly, some organizations have used MSA only once and have no plan to use it in the future (six, or 19 percent).

Rater Selection. A majority of programs gave complete freedom to their employees in choosing who would rate them (twenty-seven, or 87 percent). Two programs had supervisors sanction the chosen raters, in that employees had to submit their list of raters for approval prior to distribution of the surveys. Only one program had the supervisor choose the raters. In an interesting variation on this theme, two programs specified the percentage of raters to be selected by the employee and by the supervisor (for instance, fifty-fifty).

Special Features of Program. Many programs used mechanisms that are worthy of mention because of their uniqueness. One program required employees to select raters one year prior to completing the evaluation. This was aimed at raising raters’ awareness of their evaluative duties and augmenting the quality of their evaluation. In another organization, supervisors managed the whole process; they selected all raters for their subordinates and received the MSA report directly. Moreover, ratings included in the reports were not anonymous (the supervisor knew who provided the ratings) and supervisors could use their discretion to “screen” the information or seek additional data from selected raters.

Issues Involved with MSA Usage. In this section, we focus on the issues that arose as a result of MSA, and on the solutions that were proposed by the various respondents. The strongest theme to emerge from the interviews was the resistance that various constituencies offered to the MSA process. This resistance appears to stem from three sources: time and effort required, lack of trust, and lack of strategy.

Resistance Owing to Time and Effort. Twenty-one (out of twenty-seven) respondents mentioned that the time and effort associated with implementing and managing MSA may be a deterrent to future use. These comments focused on the burden of introducing MSA. As one participant stated, “The process is very expensive in terms of infrastructure; it is also very time-consuming—there is a two-to-three-month time lag between the assessment and the feedback period.” Another said, “It is simply not worth it; it is way too labor-, time-, cost-, and energy-consuming.”

Furthermore, time requirements on the part of the user also arose as a significant issue regarding MSA. Seventeen respondents (63 percent) mentioned this as being a problem. One participant stated that “many employees received a lot of surveys, and the process simply overloaded a few managers.” Another respondent gave the example of a manager who received more than three thousand feedback requests! This issue seems especially problematic for high-level managers; respondents used expressions such as “being bombarded” and “flooded” to describe the volume of evaluation requests submitted to them.

The proposed solutions to rater overload took various forms. Five respondents (18 percent) mentioned the transition from paper-based to Web-based administration as a way to increase efficiency. Four respondents (15 percent) also mentioned using intensive follow-up to get the surveys completed. Some (three, or 11 percent) discussed modifying the survey itself (for example, reducing the number of performance items) while others (three, also 11 percent) lowered the number of raters required for completion of the process. Two organizations tied completion of the process to the compensation of key managers. In both cases, however, these incentives were perceived to be harmful by diminishing the legitimacy of the process. Other solutions to rater overload were less frequent administration or staggered administration of the process, offering MSA to fewer employees (only high-potentials, only “red flag” employees), and allowing a longer time frame to complete the surveys.

Resistance Owing to Lack of Trust. Another emerging theme that was related to participant resistance is the user’s trust (or lack thereof) in the process. Nine respondents (33 percent) commented on the fact that raters viewed MSA suspiciously. These negative attitudes appeared to have a significant impact on the process. Despite the promise of rater anonymity, individuals feared identification by a peer or supervisor. This led to rating inflation or plain refusal to participate in the process. One respondent said, “Employees did not want to evaluate their direct supervisors for fear that their feedback could be traced back to them.”

An additional repercussion of the lack of trust was political maneuvering in rater selection. Four respondents (15 percent) commented on reliance on “wrong” or “safe” raters and use of “I scratch your back, you scratch mine” strategies. Interestingly, use of anonymity appears to also have a downside in that it offers cover for certain counterproductive behaviors. One respondent commented, “Some raters abused the system and used it to stab colleagues.” In three instances, respondents commented that anonymity ran counter to the culture of openness present in their organizations.

The solutions offered to counter the lack of trust focused on the need to communicate, clearly and extensively, the purpose of the process. A respondent commented that “when people do not understand the benefits, either of being critical in evaluating or really trying to use the feedback, they do not buy into it.” Communication efforts seem particularly important before the implementation and when the feedback results are obtained. Prior to implementation, sessions targeting all raters (not simply the ratee) focused on clarifying the purpose of the process, explaining the steps involved, training future raters on how to use the instrument, and putting the emphasis on how anonymity would be protected. Other sessions were conducted at the feedback stage and targeted mostly the ratees and their supervisors. These focused on report interpretation, use of the information, and the future use of the process. Finally, three respondents (11 percent) mentioned the importance of being successful with the first group of participants in gaining the trust of subsequent users. One comment highlights this issue: “The first wave of participants, who were the top echelons of the organization, never really took the process seriously. This had a snowball effect for the rest of the organization.”

Resistance Owing to Lack of Strategy. As stated earlier, the lack of strategic alignment of performance evaluations with organizational objectives is a common weakness (Schneier, Shaw, and Beatty, 1991). The absence of links between MSA and other developmental systems was mentioned by five respondents (19 percent). One participant commented that “we do not have a clear strategy for using MSA. We use it almost randomly. We need to integrate it with the other systems in place.” Note that seventeen organizations surveyed (63 percent) used their own competency model to design their MSA. In other words, this lack of strategy did not refer to the content of the competencies measured; rather, it had more to do with use of MSA and its integration with other systems. The strategic issues focused on participant selection (who gets it), coherent incorporation with formal performance appraisal practices, and the need to follow up with developmental plans and other appropriate developmental support (such as coaching).

Design Issues. The need to anchor an MSA process with well-designed surveys is critical for its success (London and Beatty, 1993). Many respondents questioned the relevance of the MSA instruments used. A few (four, or 15 percent) commented that MSA instruments were not detailed enough or contained poorly phrased items, while others (three, or 7 percent) mentioned the lack of face validity of the instrument as a deterrent for users who failed to see the relevance of many items. This was especially evident when the same instrument was used for employees at varying organizational levels, an issue raised by four respondents. One respondent commented, “If the raters see no link between the questions used in the surveys and our competencies, how are they supposed to take the feedback seriously?”


As a solution to this issue, a few respondents identified the need to use separate instruments for each organizational level. Also, using behavioral anchors for each item was suggested. Two respondents argued for reliance on more qualitative feedback; written statements were considered a good alternative for parts of the instrument’s design.

Technical Issues. Although advances in technology have made implementing and managing MSA a lot easier (Bracken, Summers, and Fleenor, 1998; Coates, 1998), these were also mentioned as the cause of unique issues. Five participants experienced major technical difficulties during administration of MSA. Technology failure was identified as the cause of substantial delays in implementation. Other technological glitches occurred more sporadically throughout the process: feedback reports that were sent to the wrong person, data processing errors, the spread of a virus via diskette administration.

Other Trends. Analysis of the relationship between the perceived success of MSA usage and characteristics of the processes revealed four general themes. Two were made salient by contrasting the results of the twenty organizations that reported meeting all of their objectives through the use of MSA with the five stating that none or only part of their objectives were met. Note that two respondents felt it was too early to evaluate the program in place.

First, there appears to be a relationship between the degree to which MSA relied on external consultants and the perceived utility of the process. Five of the twenty organizations that met their objectives relied on external consultants for design and implementation of MSA (25 percent); four of the five that did not meet their objectives relied on external consultants (80 percent). Second, a majority of those organizations that were successful used feedback facilitation (twelve, or 63 percent). Interestingly, every organization that failed to meet its objectives with MSA did not facilitate the feedback process. In general, these observations are indicative of the importance of planning and strategic use of MSA. Successful companies were less reactive; they had greater initial planning and anticipated potential problems before they arose.

Discussion

Although much has been written on MSA, the perspective of the organization using the process has rarely been considered in the literature. The purpose of this study was to describe MSA programs currently used in organizations and to outline some of the learnings that could be derived from the experience of individuals responsible for these processes. The results of the study confirm the widespread usage of MSA previously reported in the literature (London and Smither, 1995; Romano, 1994; Timmereck and Bracken, 1996) as more than half of the companies surveyed used MSA at the time of the study or were considering using it in the near future. Note that these results are conservative since our criterion for inclusion was fairly restrictive and only those programs that used the full 360 degree evaluations (self, peers, subordinates, and supervisor) were considered. More limited application of multisource assessment, such as relying solely on subordinate or customer evaluations, is also prevalent in organizations (Walker and Smither, 1999).

If anything, this study confirmed the fact that MSA is not a unitary process (London and Smither, 1995). On the contrary, the diversity of MSA practices is considerable. This study demonstrated that MSA practices differed widely in their stated purpose, the group of employees targeted, whether external consultancies were relied upon, the process used for rater selection, and the type of technological platform used. The breadth of this practice places substantial demands on the manager in terms of choosing or designing an equitable program for the organization. In our opinion, selecting the right MSA is critical because, despite the overall positive views held of MSA, our results also showed variations in the perceived success of MSA. Many of the organizations surveyed actually curtailed their use of MSA as a result of poor implementation.

Basically, MSA is an expensive system; it requires a substantial amount of time and resources to implement successfully. Although some may argue that any new HRD initiative requires such effort, what distinguishes MSA from other initiatives may be the energy to sustain the process over time. The number of constituents involved in producing the expected developmental or administrative outcomes is high; it comprises those in top management in charge of communicating the process, the subordinates or peers or supervisors solicited to evaluate each focal employee, the recipients of these evaluations expected to put these results to good use, and all the personnel used to support them in their development (supervisors, coaches, and so on). The need to obtain the commitment of all these constituents is critical to the success of MSA.

The issue of accountability is the key to obtaining the commitment of all these actors. For raters, accountability is in direct opposition to provision of anonymity, since the presence of the latter severely limits the former (London, Smither, and Adsit, 1997). On the one hand, the protection afforded by anonymity is essential in producing valid evaluations (Dalton, 1996). On the other hand, this same protection allows users to skirt their responsibility. Apparently, organizations have yet to produce a satisfying solution to this paradox. Instead, many have blurred the distinction in use of MSA, as is the case when a supervisor handles both the developmental and administrative aspects of the process.

This may well jeopardize the integrity of MSA and its impact. Those who have committed their MSA practices to development and purposefully isolated them from decision-making processes appear to be the most satisfied. Clearly establishing the purpose of the process—and, more specifically, acting consistently with that goal—is important since the success of MSA rests on the tacit trust that exists between users and the organization. Organizational readiness for MSA has been discussed by Waldman and Atwater (1998). They made the distinction between using MSA to instill trust and cooperation in an organization, which they refer to as the “push” phenomenon, and using it to formalize an already existing trusting and collaborative culture (the “pull” phenomenon). HRD professionals need to be cognizant of the maturity of their organization before engaging in MSA.

The lack of strategic alignment of MSA programs with other activities was surprising. Despite its associated costs, many organizations have engaged in this process without considering its alignment with other organizational processes (Waldman and Atwater, 1998). Not surprising was the fact that those who did mention integration of MSA with existing organizational competencies, developmental planning, and targeted employee groups reported the most success.

Although many have attributed the surge of MSA to advances in technology (Coates, 1998; Bracken, Summers, and Fleenor, 1998), this study revealed many problems associated with technological support. It may be true that, without technology, MSA would just be too cumbersome to administer. However, that same technology appears to also create some issues for organizations.

Limitations

In answering the question “Does MSA work?” multiple perspectives have to be considered—a conclusion in the true spirit of MSA! As stated earlier, MSA involves a large number of individuals; the respondents used in this study present only one of many valid perspectives on this process. Although our reliance on those individuals responsible for MSA addresses a perspective yet untapped by the literature, it also creates issues. For one, some programs may have been left undetected. Accordingly, our results may represent a conservative estimate of the adoption of MSA programs. In addition, other “managerial” perspectives, such as that of the supervisor using MSA for his or her employees, were not addressed in this study. Finally, much of the data on which this study is based are subjective in nature (issues related to MSA, success of MSA, and so forth). Relying on a single respondent did not allow us to capture the within-organization variation in perception that was most likely present.

It is important to note that, although human resource practices in Canada closely resemble those in place in the United States, the use of Canadian organizations may limit the generalizability of our findings. Brutus, Leslie, and McDonald (2001) have commented on the multitude of issues involved in applying nontraditional performance appraisal in various cultures. The extent to which MSA is used outside North America certainly deserves further exploration.

Conclusion

This study highlighted the dynamic nature of MSA programs and the need for those responsible for their implementation and administration to be aware of the implications of using such a system. A cursory look at the existing literature on MSA paints an overly optimistic picture of the effectiveness of these programs; the reality of those involved in designing and administering the programs is a more nuanced story. Organizations need to engage in the MSA process with clear and realistic expectations in terms of the time and energy required to bring the process to fruition, the state of readiness of their employees, and the degree of fit between MSA and existing HRD practices.

References

Antonioni, D. (1996). Designing an effective 360 degree appraisal feedback process. Organizational Dynamics, 25, 24–38.

Atwater, L., & Yammarino, F. J. (1997). Antecedents and consequences of self-other rating agreement: A review and model. In G. Ferris (Ed.), Research in Personnel and Human Resources Management. Greenwich, CT: JAI Press.

Borman, W. C. (1998). 360 degree ratings: An analysis of assumptions and a research agenda for evaluating their validity. Human Resource Management Review, 7, 299–315.

Bracken, D. W. (1994, Sept.). Straight talk about multirater feedback. Training and Development, 44–51.

Bracken, D. W., Summers, L., & Fleenor, J. (1998, Aug.). High-tech 360. Training and Development, 42–45.

Brutus, S., Leslie, J., & McDonald, D. M. (2000). Cross-cultural issues in multisource feedback. In D. Bracken, C. W. Timmereck, & A. Church (Eds.), Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Cleveland, J. N., Murphy, K. R., & Williams, R. E. (1989). Multiple uses of performance appraisal: Prevalence and correlates. Journal of Applied Psychology, 74, 130–135.

Coates, D. E. (1998). Breakthrough in multisource feedback software. Human Resources Professional, 6, 7–11.

Dalton, M. (1996). Multirater feedback and conditions for change. Consulting Psychology Journal, 48, 12–16.

Dunnette, M. D. (1993). My hammer or your hammer? Human Resource Management, 32, 373–384.

Edwards, M. R., & Ewen, A. J. (1998). Multisource assessment survey of industry practice. Paper presented at the 360 Degree Feedback Conference, Orlando.

Fleenor, J. W., & Brutus, S. (2000). The use of multisource feedback for succession planning, staffing and severance. In D. Bracken, C. W. Timmereck, & A. Church (Eds.), Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Greguras, G. J., & Robie, C. (1998). A new look at within-source interrater reliability of 360-degree feedback ratings. Journal of Applied Psychology, 6, 960–968.

Hall, J. L., Leidecker, J. K., & DiMarco, C. (1996). What we know about upward appraisal of management: Facilitating the future use of UPAs. Human Resource Development Quarterly, 7, 209–226.

Hedge, J. W., Borman, W. C., & Birkeland, S. A. (2001). History and development of multisource feedback as a methodology. In D. Bracken, C. W. Timmereck, & A. Church (Eds.), Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Judge, T. A., & Ferris, G. R. (1993). Social context of performance evaluation decisions. Academy of Management Journal, 36, 80–105.

Lepsinger, R., & Lucia, A. L. (1998, Feb.). Creating champions for 360 degree feedback. Training and Development, 49–52.

Levy, P. E., Cawley, B. D., & Foti, R. J. (1998). Reactions to appraisal discrepancies: Performance ratings and attributions. Journal of Business and Psychology, 12, 437–455.

London, M. (2000). The great debate: Should 360 be used for administration or development only? In D. Bracken, C. W. Timmereck, & A. Church (Eds.), Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

London, M., & Beatty, R. W. (1993). A feedback approach to management development. Human Resource Management, 32, 353–372.

London, M., & Smither, J. W. (1995). Can multisource feedback change perceptions of goal accomplishment, self-evaluations, and performance-related outcomes? Theory-based applications and direction for research. Personnel Psychology, 48, 803–839.

London, M., Smither, J. W., & Adsit, D. J. (1997). Accountability: The Achilles’ heel of multisource feedback. Group and Organization Management, 22, 162–184.

London, M., Wohlers, A. J., & Gallagher, P. (1990). 360 feedback surveys: A source of feedback to guide managerial development. Journal of Management Development, 9, 17–31.

McLean, G. N. (1997). Multirater 360 feedback. In L. J. Bassi & D. Russ-Eft (Eds.), What works: Assessment, development, and measurement. Alexandria, VA: American Society for Training and Development.

Mount, M., Judge, T., Scullen, S. E., Systma, M. R., & Hezlett, S. A. (1998). Trait, rater, and level effects in 360-degree performance ratings. Personnel Psychology, 51, 557–576.

Nowack, K. M., Hartley, J., & Bradley, W. (1999, Apr.). How to evaluate your 360 feedback efforts. Training and Development, 48–53.

Romano, C. (1994). Conquering the fear of feedback. HR Focus, 71, 9–19.

Schneier, C. E., Shaw, D., & Beatty, R. W. (1991). Performance measurement and management: A new tool for strategy execution. Human Resource Management, 30, 279–301.

Smith, B. N., Hornsby, J. S., & Shirmeyer, R. (1996, Summer). Current trends in performance appraisal: An examination of managerial practice. SAM Advanced Management Journal, 10–15.

Smither, J. W., London, M., Vasilopoulos, N. L., Reilly, R. R., Millsap, R. E., & Salvemini, N. (1995). An examination of the effects of an upward feedback program over time. Personnel Psychology, 48, 1–34.

Smither, J. W., Wohlers, A. J., & London, M. (1995). A field study of reactions to normative versus individualized upward feedback. Group and Organization Management, 20, 61–89.

Timmereck, C. W. (1995). Upward feedback in the trenches: Challenges and realities. Paper presented in May at the Tenth Annual Conference of the Society for Industrial and Organizational Psychology, Orlando.

Timmereck, C. W., & Bracken, D. (1996). Multisource assessment: Reinforcing the preferred “means” to the end. Paper presented in April at the meeting of the Society for Industrial and Organizational Psychology, San Diego.

Tornow, W. W. (1993). Perceptions or reality: Is multi-perspective measurement a means or an end? Human Resource Management, 32, 221–230.

Waldman, D. A., & Atwater, L. E. (1998). The power of 360-degree feedback: How to leverage performance evaluations for top productivity. Houston: Gulf.

Waldman, D. A., Atwater, L. E., & Antonioni, D. (1998). Has 360-degree feedback gone amok? Academy of Management Executive, 12, 86–94.

Walker, A. G., & Smither, J. W. (1999). A five-year study of upward feedback: What managers do with their results matters. Personnel Psychology, 52, 393–423.

Stéphane Brutus is assistant professor at the John Molson School of Business, Concordia University.

Mehrdad Derayeh is a graduate student in industrial-organizational psychology at the University of Waterloo.