
Journal of Applied Psychology, 1998, Vol. 83, No. 4, 615-633

Copyright 1998 by the American Psychological Association, Inc. 0021-9010/98/$3.00

Participation in the Performance Appraisal Process and Employee Reactions: A Meta-Analytic Review of Field Investigations

Brian D. Cawley
SHL Landy Jacobs, Inc.

Lisa M. Keeping and Paul E. Levy
University of Akron

The relationship between participation in the performance appraisal process and various employee reactions was explored through the meta-analysis of 27 studies containing 32 individual samples. The overall relationship (ρ) between participation and employee reactions, corrected for unreliability, was .61. Various conceptualizations and operationalizations of participation and employee reactions also were discussed and analyzed. Overall, appraisal participation was most strongly related to satisfaction, and value-expressive participation (i.e., participation for the sake of having one's "voice" heard) had a stronger relationship with most of the reaction criteria than did instrumental participation (i.e., participation for the purpose of influencing the end result). The results are discussed within the framework of organizational justice.

Performance appraisal is frequently performed in organizations for a variety of purposes, including administrative decisions (e.g., raise, promotion), feedback and development, and personnel research. Thus, performance appraisals are among the most important human resource systems in organizations insofar as they represent critical decisions integral to a variety of human resource actions and outcomes (Judge & Ferris, 1993). Because of its prevalence and importance in organizations, performance appraisal is also one of the most widely researched areas in industrial/organizational psychology (Murphy & Cleveland, 1995).

Of great concern to scientists and practitioners has been the issue of appraisal effectiveness and its measurement. Appraisal effectiveness refers to how well the appraisal system is operating as a tool for the assessment of work

Brian D. Cawley, SHL Landy Jacobs, Inc., Boulder, Colorado; Lisa M. Keeping and Paul E. Levy, Department of Psychology, University of Akron.

We contributed equally to this project, and therefore authorship is listed alphabetically. An earlier version of this article was presented at the Eighth Annual Meeting of the Society for Industrial and Organizational Psychology, San Francisco, April/May 1993. We thank Mike McDaniel, Pete Villanova, Doug Brown, and Russell Cropanzano for their insightful comments on this article; Joelle Elicker for her help in uncovering studies for the meta-analysis; and the many authors who were so cooperative in providing us with the particular coefficients and other data that were needed for our analyses.

Correspondence concerning this article should be addressed to Paul E. Levy, Department of Psychology, University of Akron, Akron, Ohio 44325-4301. Electronic mail may be sent to [email protected].

performance. It is perhaps best regarded as a multidimensional construct or an ultimate criterion (Cascio, 1991) that cannot be directly measured but rather is assessed through the measurement of other subordinate criteria (Cardy & Dobbins, 1994). Cardy and Dobbins suggested that appraisal effectiveness is composed of rater errors, rating accuracy, and qualitative aspects of the appraisal (p. 26). They further suggested that these three specific criteria may not appear to be related to one another and in fact may conflict at times. Thus, research typically has focused on these criteria as separate issues rather than taken a more holistic approach.

Although it is often argued that all three classes of criteria are important in the assessment of appraisal effectiveness, in general, performance appraisal research has been dominated by investigations concerning rating errors and rating accuracy. In contrast, less attention has been focused on qualitative criteria such as subordinates' reactions to appraisals and the factors contributing to these reactions (Cardy & Dobbins, 1994; Murphy & Cleveland, 1995). In fact, the relative lack of research attention directed toward reaction criteria compared with psychometric and accuracy criteria led Murphy and Cleveland to refer to reaction criteria as one class of "neglected criteria" that might be critical in evaluating the success of an appraisal system (p. 310). It seems reasonable to expect that subordinates' reactions to appraisal systems would have just as much impact on the success and effectiveness of an appraisal system as the more technical aspects of the system. In fact, Bernardin and Beatty (1984) suggested that subordinates' reactions are usually better indicators of the overall viability of an appraisal system than are more narrow psychometric indices, such as leniency



or halo. After all, one may develop the most technically sophisticated, accurate appraisal system, but if that system is not accepted and supported by employees, its effectiveness ultimately will be limited (Cardy & Dobbins, 1994; Carroll & Schneier, 1982; Lawler, 1967; Murphy & Cleveland, 1995). Furthermore, Hedge and Borman (1995), in their discussion of the changing nature of performance appraisals, suggested that worker attitudes toward performance appraisal may play an increasingly important role in appraisal processes as the procedures and systems continue to develop.

Despite the relative neglect of reaction criteria, however, there have been several recent studies that have attempted to investigate various appraisal characteristics that elicit or at least contribute to positive employee reactions (e.g., Dobbins, Cardy, & Platz-Vieno, 1990; Evans & McShane, 1988; Klein & Snell, 1994; Pooyan & Eberhardt, 1989; Silverman & Wexley, 1984). One of the most widely researched of these appraisal factors has been employee participation. Participation has been conceptualized and operationalized in many ways, but in general, research suggests that allowing employees to participate in the appraisal process is associated with positive employee reactions toward the appraisal system (e.g., Dipboye & dePontbriand, 1981; Giles & Mossholder, 1990; Korsgaard & Roberson, 1995; Landy, Barnes, & Murphy, 1978).

Although some studies have investigated the relationship between employee participation and subsequent subordinate reactions toward the appraisal, no narrative or quantitative review of this relationship exists. This is somewhat surprising given its potential for providing important information about the utility of participation. The purpose of the present study, therefore, was to systematically review the research in this area through the use of meta-analytic procedures. It was predicted that there would be a strong positive relationship between overall participation and overall appraisal reactions. Before examining the relationship between participation and subordinate reactions, however, it is important to understand the nature of these variables. To this end, we will first review the literature examining reaction criteria and outline the different types of reactions that have most frequently been measured. On the predictor side, employee participation in the performance appraisal process has been conceptualized and operationalized in many different ways, and it is important that these conceptualizations and operationalizations be distinguished. Thus, we will also present an overview of the research examining participation as well as some of the justice literature, which can be used as a framework for defining and classifying different types of participation.

Subordinates' Reactions to Performance Appraisal

As mentioned previously, the reactions of subordinates regarding their performance appraisal can be an important determinant of the ultimate success and effectiveness of the appraisal process. Researchers have assessed many different types of subordinate reactions and have operationalized these reactions in various ways. The most frequently assessed subordinate reaction to performance appraisal has been satisfaction (Giles & Mossholder, 1990). In part, this may be because research has demonstrated that employee satisfaction with the performance appraisal process can affect variables such as productivity, motivation, and organizational commitment (cf. Ilgen, Fisher, & Taylor, 1979; Larson, 1984; Pearce & Porter, 1986; Wexley & Klimoski, 1984). Due to the importance of these outcomes of performance appraisal satisfaction, it follows that organizations ought to attempt to increase this type of employee satisfaction.

Within a performance appraisal context, satisfaction largely has been measured in one of two different ways: satisfaction with the appraisal interview and satisfaction with the appraisal system. Most of the research seems to have focused on satisfaction with the session, whereas less attention has been given to satisfaction with the appraisal system (Giles & Mossholder, 1990; Mount, 1984). This perhaps has been because most of the early work on appraisal satisfaction focused on the appraisal interview (e.g., Burke, Weitzel, & Weir, 1978; Burke & Wilcox, 1969; Greller, 1978). Research that has assessed both system and session satisfaction (e.g., Giles & Mossholder, 1990; Mount, 1984) has found aspects of the appraisal process to be differentially related to these two types of satisfaction. The present study has the potential to shed some light on the differences between system and session satisfaction by examining their relationship with different types of participation. Because participation is usually operationalized as employees' perceptions of the extent to which they were able to participate in the performance appraisal session, it is predicted that a stronger relationship will be uncovered between participation and satisfaction with the appraisal session than will be uncovered between participation and satisfaction with the appraisal system. This is consistent with the work of Ajzen and his colleagues (Ajzen & Fishbein, 1977; Ajzen & Madden, 1986), who argue that relationships among attitudes and behaviors will be enhanced when the attitudes or behaviors involved correspond or match in terms of the target and action.

A second reaction that has been heavily researched in the appraisal literature is the perceived utility of the performance appraisal interview. Greller (1978) conceptualized utility in terms of the appraisal session and operationalized this with four items: "The appraisal helped me learn how I can do my job better," "I learned a lot from the appraisal," "The appraisal helped me understand my mistakes," and "I have a clearer idea of what the boss expects from me because of the appraisal." Many researchers since then have operationalized perceived utility similarly and have used Greller's scale or some modification of it (e.g., Nathan, Mohrman, & Milliman, 1991; Prince & Lawler, 1986). In addition to the foregoing, utility also has been conceptualized with regard to the appraisal system. For example, Dipboye and dePontbriand (1981) measured items such as "The system has helped my supervisor understand my problems." Here, the measure is tapping a different, more global construct than utility of the performance appraisal interview.

The extent to which subordinates feel they were fairly appraised also has been assessed by many researchers. In some of the early research on performance appraisal reactions, fairness was assessed simply by asking employees how fair they felt the appraisal had been (e.g., Landy et al., 1978). With the increased interest and research in the area of organizational justice within the past decade (e.g., Greenberg, 1986; Korsgaard & Roberson, 1995), however, the issue of appraisal fairness has expanded. For example, Greenberg (1986) distinguished between procedural and distributive justice in a performance appraisal context. A factor analysis revealed two distinct constructs. Distributive justice was concerned with the fairness of performance ratings relative to work performed, whereas procedural justice involved perceptions of the appraisal process, such as perceived input on the part of employees.

Other important reactions regarding performance appraisal that have been assessed include how motivated subordinates are to improve their performance on the job subsequent to the appraisal (e.g., Burke & Wilcox, 1969; Burke et al., 1978; Dipboye & dePontbriand, 1981; Nemeroff & Wexley, 1979) and the perceived accuracy of the appraisal (e.g., Klein & Snell, 1994; Korsgaard, Roberson, & Rymph, 1996).

    Employee Participation in Performance Appraisal

Research on employee participation has suffered from a lack of consensus on the meaning of participation and its underlying mechanism (Anderson, 1993; Greenberg & Folger, 1983; Korsgaard & Roberson, 1995; Locke & Schweiger, 1979). This has led to a number of different conceptualizations and operationalizations of participation in the literature. For example, the early research on participation in a performance appraisal context differentiated between four basic forms of participation: who talked the most in the appraisal interview (subordinate or supervisor); who set goals for the future (subordinate or supervisor); whether the subordinate had the opportunity to state his or her side of the issue; and whether the subordinate felt that he or she influenced the appraisal in any way (Burke et al., 1978; Greller, 1975, 1978; Wexley, Singh, & Yukl, 1973). More recent research in the area of employee participation has continued to focus on many of these conceptualizations (e.g., Giles & Mossholder, 1990; Klein & Snell, 1994; Korsgaard & Roberson, 1995).

Other ways that researchers have operationalized participation include the opportunity to self-appraise and the experimental manipulation of actual participation in the appraisal procedures. In terms of self-appraisal, it has been suggested that self-appraisals may increase ratees' participation in the appraisal interview (Farh, Werbel, & Bedeian, 1988; Latham & Wexley, 1981), which may make them more committed to performance goals and more accepting of criticism (Riggio & Cole, 1992). Thus, some researchers have incorporated self-ratings of performance into an appraisal system or compared systems with self-ratings to those without (e.g., Roberson et al., 1993). Other researchers have manipulated participation by comparing a control group without participation to an experimental group with participation. Operationalizations of participation in these types of experimental contexts have included participation in the development of the performance appraisal system (e.g., Silverman & Wexley, 1984) and the introduction of a new, more collaborative type of appraisal system (e.g., French, Kay, & Meyer, 1966; Taylor & Zawacki, 1978). Finally, Anderson (1993) suggested that participation and involvement in the performance appraisal process can be enacted at different stages of the process: (a) the design and implementation phase: input into the design of the system; (b) the preappraisal stage: self-assessments; (c) during the appraisal interview: input and being able to express one's views; and (d) after the appraisal interview: continual meetings, feedback, and input.

In terms of conceptualizing participation for the present meta-analysis, the most pervasive distinction in the literature seems to be between participation that allows an employee to influence the outcomes of the appraisal and participation that allows an employee to voice his or her opinions, without regard to influence (Korsgaard & Roberson, 1995). These two approaches go back to the early literature in the area of appraisal participation. For example, Greller (1978) identified the participation factors of ownership and contributions. Ownership measured whether subordinates' thoughts were welcomed and whether the topics of importance to them were addressed, whereas contributions measured the amount of influence and impact subordinates felt they had during the interview and for the future. At the same time, Burke et al. (1978) distinguished between influence in planning and the opportunity to present ideas and feelings as two separate types of participation. Although a significant proportion of performance appraisal research has distinguished between the measurement of participation for influence versus participation for the sake of voicing one's opinions, the procedural justice literature provides a useful framework with which to better understand why these two forms of participation are valued and how each might affect subordinate reactions.

The idea of allowing individuals who are affected by a decision to present information that they consider relevant to the decision is known in the justice literature as voice (Lind & Tyler, 1988). Research has shown that voice can lead to perceptions of procedural justice (Folger & Greenberg, 1985) as well as to positive reactions such as satisfaction and perceptions of fairness (e.g., Kanfer, Sawyer, Early, & Lind, 1987; Tyler, 1987). Two alternative interpretations of the effect of voice are the value-expressive explanation and the instrumental explanation (Korsgaard & Roberson, 1995; Lind & Tyler, 1988; McFarlin & Sweeney, 1995; Tyler, 1987; Tyler, Rasinski, & Spodick, 1985). The value-expressive explanation suggests that employees perceive the chance for self-expression as procedurally just, regardless of the final decision (Tyler et al., 1985). According to this explanation, attitudes are affected because the opportunity to voice one's opinions is a desired end in itself (Korsgaard & Roberson, 1995); or as stipulated in Tyler and Lind's (1992) relational model, people value voice in its own right because it validates their self-worth and their feelings of belongingness to a valued group. The instrumental explanation, on the other hand, suggests that voice is valued because it increases the potential amount of control (both process and decision control) one has over decisions and, in the long run, will result in more favorable outcomes. In this approach, attitudes toward a decision are affected by voice because employees perceive that they have had an opportunity to indirectly influence the decision (Tyler, 1987). Thus, similar to the participation research in performance appraisal, one key distinction between the value-expressive and instrumental explanations of voice is the centrality of influence in each of these models (Korsgaard & Roberson, 1995; Shapiro, 1993). That is, although the potential to influence external outcomes is integral to the instrumental explanation, it is absent or deemphasized in the value-expressive explanation, which instead emphasizes long-term social relationships to the group or authorities, or both.

A substantial amount of research has attempted to empirically support each interpretation of voice. Although many studies have found support for both interpretations, the evidence remains largely equivocal in terms of whether or not one of these explanations is superior to the other (Shapiro, 1993; Korsgaard & Roberson, 1995). However, Tyler and Lind (1992) concluded that the value-expressive effect involves something beyond instrumental concerns and that the opportunity to state one's case may be valued as much or more for its own sake as for its instrumental influence on decisions.

Related to this issue is the distinction we made earlier regarding participation in the performance appraisal context. Borrowing from the justice literature, in the present study we refer to participation for the sake of voicing one's opinions as value-expressive participation and participation intended to influence the appraisal as instrumental participation. Although both have been examined in terms of their relationships with subordinate reactions toward the appraisal, it remains unknown whether one of these types of participation is more strongly associated with positive subordinate reactions than the other. Thus, there appears to be a debate regarding which type of participation, value-expressive or instrumental, is more highly related to these reactions. The present study attempts to reconcile this debate by examining the extant research for each of these conceptualizations to uncover the true nature of their relationships with subordinate reactions.

    The Present Study

This study has the potential to contribute to the existing literature regarding performance appraisal participation and subordinate reactions in three ways. First, it will provide the best estimate to date of the true relationship between participation in the appraisal process and subordinate reactions toward that process. Second, it distinguishes between different conceptualizations of participation and examines the relationship of each of these with a number of important reaction criteria. Finally, it examines the relationship of both value-expressive participation and instrumental participation with a variety of subordinate reactions. This allows for a comparison of the relationship of each participation type with a number of reaction criteria in a performance appraisal context. This should contribute to the understanding of the differential role each might play in an organizational context.

    Method

    Meta-Analytic Method

The Hunter and Schmidt (1990) psychometric meta-analytic procedure was used to cumulate the results of research investigating the relationship between performance appraisal participation and subordinates' affective reactions. This approach is based on the belief that much of the variability in results across studies is due to statistical and methodological artifacts rather than any real differences in underlying population relationships. In addition, these artifacts may also provide an inaccurate description of population relationships by attenuating correlations or mean differences below their true population values. Given this, the Hunter-Schmidt approach determines the amount of variance attributable to artifacts (e.g., sampling error and unreliability) and then subtracts that amount from the total amount of variation in the observed correlations. The mean of the population distribution is also more accurately estimated by correcting for artifacts that distort the observed mean. This analysis results in an estimate of the mean and variance of the population distribution.

The mean of the observed distribution was corrected for measurement error in both the participation and reaction measures (Hunter & Schmidt, 1990, p. 165), yielding an estimate of the population mean. The observed variance was corrected for sampling error and differences across studies in the reliability of the participation and reaction measures, yielding an estimate of the population variance (Hunter & Schmidt, 1990, pp. 208-210). The computer program that was used to calculate the corrected correlations is described in McDaniel (1986). Additional details on the program are presented in Appendix B of McDaniel, Schmidt, and Hunter (1988).
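In concrete terms (a simplified sketch of the standard Hunter-Schmidt artifact-distribution corrections, using generic notation rather than symbols taken from this article), the corrections work roughly as follows. With mean observed correlation $\bar{r}$, mean square roots of the predictor and criterion reliabilities $\bar{a}$ and $\bar{b}$, and average sample size $\bar{N}$,

$$\hat{\rho} = \frac{\bar{r}}{\bar{a}\,\bar{b}}, \qquad \sigma^{2}_{e} = \frac{(1 - \bar{r}^{2})^{2}}{\bar{N} - 1}, \qquad \hat{\sigma}^{2}_{\rho} \approx \frac{\sigma^{2}_{r} - \sigma^{2}_{e} - \sigma^{2}_{\text{art}}}{\bar{a}^{2}\,\bar{b}^{2}},$$

where $\sigma^{2}_{r}$ is the observed variance of the correlations and $\sigma^{2}_{\text{art}}$ is the variance attributable to differences in reliability across studies. The 95% credibility intervals reported in the tables that follow are then $\hat{\rho} \pm 1.96\,\hat{\sigma}_{\rho}$.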

    Literature Review and Decision Rules

Using the PsycINFO database, we conducted an extensive computer-based search of the major psychological and management publications regarding performance appraisal participation and subordinate reactions from 1967 to the present, which generated a pool of studies to be included in the meta-analysis. Reference lists of various sources were also searched to find appropriate studies. Finally, relevant journals (e.g., Academy of Management Journal, Academy of Management Review, Journal of Applied Psychology, Personnel Psychology, Organizational Behavior and Human Decision Processes) were searched in an attempt to uncover any references that were not identified through the aforementioned procedures. Thus, the pool of studies from which we sampled included published articles, unpublished dissertations, and conference papers.

Next, we identified key criteria for inclusion of studies in the meta-analysis. First, because we were interested in providing a representation of the relationship between appraisal participation and employee reactions in actual work situations, only studies conducted in a field setting with currently employed workers were considered. Second, each study had to include one or more measures of subordinate participation in the performance appraisal process and one or more measures of reactions to the appraisal. Third, the study must have directly reported an effect size measure (r) or results that were sufficient to calculate a measure of effect size, for example, t or F (1 df). If a study appeared to meet the first two criteria but failed to report an effect size measure or results with which we could calculate such a measure, we contacted the authors and requested the additional information, which in most cases was graciously supplied. In terms of the proportion of studies excluded from analyses on the basis of the foregoing criteria, the majority were excluded because they were conducted in a laboratory setting, followed by studies excluded because they measured only participation or reactions but not both. Finally, a smaller number of studies were omitted because of insufficient data with which to calculate an effect size.

Classification of Studies

Using the criteria stated above, 27 studies, containing 32 independent samples, were chosen for the meta-analysis. These studies were then coded as to their participation type and reaction type. Two raters coded the studies independently, with an interrater agreement of .74 as indexed by Cohen's kappa (Cohen, 1960), which measures agreement for nominally scaled data. The raters compared coding results, discussed discrepancies, and then came to a consensus in terms of the final codes that studies received.
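For reference, Cohen's kappa adjusts raw percentage agreement for the agreement expected by chance. With observed proportion of agreement $p_{o}$ and chance agreement $p_{e}$ (computed from the marginal proportions of each coding category),

$$\kappa = \frac{p_{o} - p_{e}}{1 - p_{e}},$$

so the obtained value of .74 reflects agreement well above the chance level implied by the category base rates.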

Participation. Participation was coded as one of the following: instrumental, value expressive, proportion of time talked, self-rating, and "other." To be coded as instrumental, the participation had to involve some element of potential influence on the part of the subordinate (e.g., "To what extent did you influence how your manager evaluates your work?"). To be coded as value expressive, the participation had to be such that it was only with the intention of having one's voice heard, without regard to influencing the final results of the appraisal (e.g., "To what extent did you use the session as an opportunity to share your ideas and feelings?"). If a measure contained both value-expressive and instrumental items, it was coded as instrumental because we did not want the issue of potential influence to contaminate the issue of voice for the sake of voice. In other words, we tried to be conservative in our coding so that if there was even a hint that the manner in which participation was operationalized could be perceived as being instrumental, we coded it as instrumental even though there may have been elements of value-expressive voice as well. Participation was coded as proportion of time talked if it measured the relative amount of time the subordinate and the supervisor talked during the session (e.g., "In your last performance review session, who talked the most?"). Note that although it appears on the surface that time talked could be conceptualized as value-expressive participation, it was placed in a separate category primarily because the researchers who used this measure (e.g., Burke et al., 1978; Greller, 1975) concluded more than 15 years ago that it was a very different construct from the opportunity to express one's ideas or any other operationalization of value-expressive participation (see Greller, 1975, for a more detailed discussion). Finally, if a study used self-ratings as a form of participation, then that participation was coded as self-ratings.

Any other measures of participation that did not easily fit into the just-mentioned categories were coded as other. This category included two different types of studies. First, there were three studies that measured employees' participation in the goal-setting process that were placed in this category. For example, one way Nemeroff and Wexley (1979) measured participation was by asking employees, "Who sets the job performance goals for the next period during the feedback interview?" Second, one study, Greller (1978), conducted simultaneous regressions including various operationalizations of participation and reported two coefficients (from two independent samples) from these analyses. For the purposes of the meta-analysis, only the overall R value could be used to index the relationship between participation and reactions.

Reaction criteria. Subordinate reaction measures also were classified and then coded as one of the following: satisfaction, motivation to improve, utility, fairness, and "other." To be coded as satisfaction, a measure had to clearly address the question of how satisfied or happy subordinates were with the appraisal. Any measures addressing the question of how satisfied subordinates were with the appraisal outcome (i.e., their ratings) were not included because it was felt that these measures were tapping a different construct than satisfaction with the appraisal session or the appraisal system. In addition, measures addressing job satisfaction were not included because we were only interested in satisfaction with the appraisal, not with the job itself. Finally, measures addressing satisfaction with the supervisor were also excluded because these tapped into the leadership domain, which was beyond the scope of this article. Measures coded as satisfaction were further divided into satisfaction with the appraisal session and satisfaction with the appraisal system, where possible. This is consistent with previous research suggesting that some aspects of an appraisal may be differentially related to these two different types of satisfaction (e.g., Giles & Mossholder, 1990; Mount, 1984). Measures coded as satisfaction with the appraisal session only addressed satisfaction with the appraisal interview (e.g., "I felt quite satisfied with the last appraisal session") and did not address the system itself. Similarly, satisfaction with the appraisal system only addressed satisfaction with the system (e.g., "In general, I feel the company has an excellent performance appraisal system") and not satisfaction with the interview.

Reaction measures coded as motivation to improve addressed the question of how motivated subordinates were to improve their performance after their appraisal interview (e.g., "At the end of the interview I really wanted to try and improve my job performance"). Measures coded as utility addressed the question of how valuable subordinates perceived the system to be, or the extent to which they perceived the review to have given them a clearer idea of what was expected of them on the job (e.g., "I learned a lot from the appraisal," "The system has helped me recognize my strengths and weaknesses"). Measures coded as fairness addressed the question of how fair the appraisal session/system was perceived to be (e.g., "How fair do you feel your last performance appraisal session was?"). It should be noted that measures assessing the extent to which subordinates agreed with the ratings assigned by their supervisors were not included because although agreement may influence fairness, it was perceived to be a separate construct. Finally, measures that assessed some combination of the reactions just described were coded as other. These measures consisted of composite scales attempting to tap an overall subordinate reaction measure (e.g., Dobbins et al., 1990; Greller, 1975).

Perceived accuracy was not included because of the paucity of studies in which it was actually measured as a separate construct. In fact, we uncovered only one study, Taylor et al. (1995), that measured it independently from other constructs. This was surprising given that perceived accuracy is often cited as one of the more popular reaction criteria assessed in performance appraisal (e.g., Murphy & Cleveland, 1995). The lack of studies measuring accuracy is misleading, however. Rather than having been neglected, it seems that perceived accuracy has most often been measured in conjunction with perceived fairness. For example, Landy et al. (1978) asked subordinates to respond to the question "Has performance been fairly and accurately evaluated?"

Predictor and Criterion Reliabilities

Table 1 presents the reliabilities for the various types of reaction and participation measures.

Table 1
Reliabilities for Reaction and Participation Measures

Variable                    No. of coefficients      N      Range of values     M     SD

Reaction reliabilities
Session satisfaction
  Reported                          11             2,160       .63-.90         .83    .08
  Estimated                          3               790       .67-.67         .67    .00
System satisfaction
  Reported                           5               853       .60-.90         .79    .11
  Estimated                          2               558       .52-.52         .52    .00
Motivation to improve
  Reported                           3               562       .63-.83         .76    .11
  Estimated                          6             1,368       .49-.69         .52    .08
Utility
  Reported                           8             2,067       .60-.89         .83    .10
  Estimated                          4               808       .59-.83         .68    .11
Fairness
  Reported                           3               797       .68-.93         .82    .13
  Estimated                          6             1,480       .55-.69         .58    .06
Other
  Reported                           5               895       .66-.96         .81    .11

Participation reliabilities
Value expressive
  Reported                          11             2,174       .67-.86         .80    .06
  Estimated                          7             2,439       .52-.71         .55    .08
Instrumental
  Reported                          11             2,498       .38-.91         .74    .13
  Estimated                          3               416       .44-.61         .50    .10
Time talked
  Reported                           1                56       .52-.52         .52    .00
  Estimated                          4               845       .35-.35         .35    .00
Other
  Reported                           3               588       .49-.80         .59    .18
  Estimated                          2               101       .67-.67         .67    .00

Because we had reliability information for most but not all studies, we used artifact distributions rather than correcting each coefficient separately (Hunter & Schmidt, 1990, chap. 4). In some cases, participation, subordinate reactions, or both, were measured with one item, with no reliability reported. To obtain a more representative distribution of reliability values, we computed reliability estimates for these cases. This was done by first identifying the studies in which both the reliability and number of items were reported. Within each predictor and criterion subtype (e.g., value-expressive participation or system satisfaction), we then used the Spearman-Brown prophecy formula to compute the reliability of single-item scales from the reliability for the same construct measured with a multiple-item scale. This was done separately for scales with the same number of items. For example, we calculated the average reliability of all four-item scales for a particular construct and then used the Spearman-Brown formula to determine the reliability of a one-item scale, then we did the same for all three-item scales, and then all two-item scales. We then summed the results for each of these multiple-item levels and took the average as the estimate for a one-item measure of the same construct (Hirsh, Northrop, & Schmidt, 1986). Note that Table 1 contains both the reliabilities reported in individual studies or made available to us from authors (which were mostly measures of internal consistency and are labeled "Reported" in the table) as well as those reliabilities calculated using the Spearman-Brown method described earlier (labeled "Estimated" in the table).

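The Spearman-Brown step can be written explicitly. If a k-item scale measuring a given construct has reliability $r_{kk}$, solving the prophecy formula for the implied reliability of a single item gives

$$r_{kk} = \frac{k\,r_{11}}{1 + (k - 1)\,r_{11}} \quad\Longrightarrow\quad r_{11} = \frac{r_{kk}}{k - (k - 1)\,r_{kk}}.$$

As a purely illustrative example (not a value taken from Table 1), a four-item scale with reliability .80 implies a single-item reliability of $.80 / (4 - 3 \times .80) = .50$.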
    Results

Results are reported in the following order: (a) overall participation related to overall reactions (Table 2); (b) overall participation related to various types of reactions (Table 2); (c) overall reactions related to different forms of participation (Table 2); and (d) different forms of participation broken down specifically for various reactions (Table 3). Table 2 presents the results of the meta-analytic investigation of overall participation and appraisal reactions. The first column describes the distribution being analyzed (i.e., the type of participation and reaction measures analyzed). The next four columns present the total sample size, the number of coefficients analyzed for each distribution, the mean observed correlation for the distribution in question, and the corresponding standard deviation associated with this observed correlation, respectively. The remaining columns provide information about the estimated population distribution. Here, the mean and standard deviation of the population distribution are provided, followed by the lower and upper bounds of the 95% credibility interval.

It should be noted that the nature of the variables investigated led to the situation in which one study often collected data on more than one reaction measure, participation measure, or both. This resulted in the situation in which 92 coefficients were collected from 32 independent samples. The Appendix presents all 92 coefficients and the studies from which they were obtained, along with the type of participation and reaction coded for each coefficient and their respective reliabilities.

The situation in which different effect coefficients are analyzed from the same sample violates the meta-analytic assumption that the effect sizes used are statistically independent of each other, which is guaranteed if the values come from different studies. The formula for sampling error serves to underestimate the variance attributable to sampling error, leading to an undercorrection of sampling error and a population variance estimate that is too large (Hunter & Schmidt, 1990, pp. 452-453). In such situations, Hunter and Schmidt (1990) suggested using the average correlation for measures of the same construct rather than entering each correlation separately. For example, in the overall meta-analysis between participation and appraisal reactions, if one study provided multiple correlations for this relationship based on different reaction measures, the average correlation was entered into the meta-analysis rather than the separate correlations. This ensured the independence of each correlation. As recommended by Hunter and Schmidt (1990, p. 454), the corresponding sample size entered for these average correlations was the sample size of the study rather than the product of the sample size and the number of correlations averaged.
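Stated as a rule (a sketch of the procedure just described, not a formula reproduced from the article): if a single sample of size $n$ contributes correlations $r_{1}, \ldots, r_{m}$ between participation and $m$ measures of the same reaction construct, the value entered in the meta-analysis is the unweighted mean

$$\bar{r}_{\text{sample}} = \frac{1}{m} \sum_{j=1}^{m} r_{j},$$

entered once with sample size $n$ rather than $n \times m$.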

Overall Relationship Between Participation and Subordinate Reactions

As indicated in Table 2, for the general analysis of overall participation by overall reactions, the number of coefficients analyzed was 32 with 6,732 observations.

Table 2
Results for Overall Analyses Between Appraisal Participation and Employee Reactions

Distribution                                  N    No. rs   Mean r   Obs. SD    ρ     SDρ     95% CI

Overall reactions by overall participation  6,732    32       .43      .11     .61    .10    .41-.81

Overall participation by type of reaction
  Satisfaction                              4,529    21       .45      .14     .64    .16    .33-.95
    Session satisfaction                    2,950    14       .46      .15     .64    .17    .31-.97
    System satisfaction                     1,681     8       .44      .13     .64    .13    .39-.89
  Motivation to improve                     1,930     -       .28      .06     .44    .00    .44-.44
  Utility                                   2,875     -       .39      .12     .55    .13    .30-.80
  Fairness                                  2,547     -       .39      .13     .59    .15    .30-.88
  Other                                     1,031     -       .42      .06     .57    .00    .57-.57

Overall reactions by type of participation
  Value expressive                          4,613    18       .47      .11     .65    .11    .43-.87
  Instrumental                              3,116    16       .42      .12     .59    .11    .37-.81
  Time talked                                 901     5       .16      .15     .30    .26   -.21-.81
  Self-appraisals                             556     3       .22      .05     .25    .00    .25-.25
  Other                                       689     5       .47      .15     .70    .18    .35-1.0

Note. Mean r = observed average correlation; Obs. SD = observed standard deviation; ρ = estimated population correlation; SDρ = estimated population standard deviation; 95% CI = 95% credibility interval around the population mean.


Table 3
Results for Analyses Between Appraisal Participation and Employee Reactions Broken Down by Type of Participation and Type of Reactions

Distribution                        N    No. rs   Mean r   Obs. SD    ρ     SDρ     95% CI

Type of reaction by type of participation
Satisfaction by
  Value expressive                3,259    12       .53      .14     .72    .16     .41-1.0
  Instrumental                    2,266    11       .42      .13     .59    .14     .32-.86
  Time talked                       820     3       .15      .16     .27    .28    -.28-.82
  Other                             664     4       .53      .08     .77    .00     .77-.77
Motivation to improve by
  Value expressive                1,779     6       .32      .07     .49    .03     .43-.55
  Instrumental                      429     4       .31      .07     .49    .00     .49-.49
  Time talked                       820     3       .13      .13     .28    .24    -.19-.75
Utility by
  Value expressive                1,984     6       .45      .07     .61    .00     .61-.61
  Instrumental                    1,627     7       .39      .14     .53    .15     .24-.82
  Time talked                       769     3       .16      .20     .29    .35    -.40-.98
  Other                             343     3       .41      .18     .59    .22     .16-1.0
Fairness by
  Value expressive                2,251     -       .43      .11     .64    .11     .42-.86
  Instrumental                      738     -       .34      .16     .51    .21     .10-.92
Other by
  Value expressive                  221     -       .54      .07     .71    .00     .71-.71
  Instrumental                      810     -       .40      .06     .54    .00     .54-.54

Type of satisfaction by type of participation
Satisfaction with session by
  Value expressive                2,515    10       .54      .16     .72    .19     .35-1.0
  Instrumental                    1,503     7       .40      .11     .54    .11     .32-.76
  Other                             394     3       .54      .10     .77    .05     .67-.87
Satisfaction with system by
  Value expressive                  846     3       .50      .09     .70    .05     .60-.80
  Instrumental                      763     4       .47      .15     .67    .16     .36-.98

Note. Mean r = observed average correlation; Obs. SD = observed standard deviation; ρ = estimated population correlation; SDρ = estimated population standard deviation; 95% CI = 95% credibility interval around the population mean.

The overall relationship between performance appraisal participation and the reactions of subordinates to the performance appraisal was, as expected, rather large (ρ = .61).

Overall Participation by Type of Reaction

Table 2 also contains the analyses investigating the relation between the different types of reactions and overall participation. Separate analyses were run for each type of reaction measure. Consistent with the foregoing analysis, the correlations analyzed here were average correlations in cases where multiple measures of the same construct were collected from the same sample. As expected, all five reaction measure categories were positively related to participation, and satisfaction had the strongest relationship with participation (ρ = .64). This was followed by fairness (ρ = .59), the "other" category (ρ = .57), utility (ρ = .55), and motivation to improve (ρ = .44).

As mentioned previously, we further divided the satisfaction measures into satisfaction with the session and satisfaction with the system. These separate analyses were conducted, resulting in a corrected correlation of .64 between both participation and satisfaction with the session and between participation and satisfaction with the system. Thus, contrary to our expectations, at this level of analysis, the relationship between participation and satisfaction did not vary as a function of the type of satisfaction. For session satisfaction, 14 coefficients and 2,950 observations were used in the analysis, whereas only 8 coefficients and 1,681 observations were uncovered for system satisfaction.

Overall Reactions by Type of Participation

To examine whether the type of participation resulted in different size relationships between participation and overall reactions, separate analyses were conducted for each type of participation. Results for these analyses are also presented in Table 2. Of particular interest was the comparison between instrumental participation and value-expressive participation. Eighteen coefficients and 4,613 observations were analyzed for value-expressive participation, and 16 coefficients and 3,116 observations were analyzed for instrumental participation. Results indicated that the relationship between participation and reactions was higher when the participation was value expressive (ρ = .65) than when it was instrumental (ρ = .59) in nature. The relationship between participation coded as "other" and reactions was also high (ρ = .70), although it was based on only 5 coefficients and 689 observations. In contrast, the relationship between proportion of time talked and affective reactions was quite low (ρ = .30), indicating that the mere act of talking during an appraisal session is not highly associated with positive affective reactions to the appraisal. This result was based on only 5 coefficients and 901 observations.

The relationship between self-appraisal as a form of participation and reactions was also quite low (ρ = .25), and again there were a limited number of studies available here (i.e., 3 studies and 556 observations). As can be seen in Table 2, the standard deviation associated with this distribution was zero. This is not an uncommon occurrence in meta-analysis, and it often indicates the existence of second-order sampling error (Hunter & Schmidt, 1990, pp. 411-412). Second-order sampling error refers to the fact that for a meta-analysis based on a small number of studies, the outcome depends in part on properties of the studies that vary randomly across studies. Distributions with a small number of coefficients have greater potential for second-order sampling error, which can distort true validity variance estimates (Hunter & Schmidt, 1990, chap. 9). Alternatively, a standard deviation of zero may indicate that the relationship between two variables is invariant.

It also should be noted that all of the coefficients analyzed for self-appraisals were taken from quasi-experimental studies in which one group engaged in self-ratings and the other did not. In studies of an experimental nature, a reliability of 1.0 is assumed. That is, group membership is a nominal designation representing what the experimenter intends to happen to the participant, but does not necessarily actually happen to the participant (Hunter & Schmidt, 1990). Thus, researchers assume that all participants in the self-rating condition and none of them in the control condition performed self-ratings, resulting in no error. Because of this assumption, most studies do not provide the necessary information regarding the extent of misidentification, and consequently measurement errors in the independent variables typically remain uncorrected in meta-analysis (Hunter & Schmidt, 1990). Thus, for self-appraisals in the present study, unreliability was corrected only for reactions because perfect reliability was assumed for participation.

In sum, very strong relationships were uncovered for value-expressive participation and instrumental participation with overall reactions. Furthermore, the relationship between reactions and participation was somewhat stronger when participation was value expressive in nature rather than instrumental. Time talked and self-appraisals were related to overall reactions, but the magnitude of those effects was much smaller. In addition to the small effect size for time talked, the credibility interval was also quite wide (-.21 to .81). This suggests that this relationship may be affected by moderators and that a second-order sampling error may be present. Together, the wide credibility interval including zero and the small number of coefficients indicate that great caution should be used when interpreting this particular result.

    Type of Reaction by Type of Participation

Further analyses were used in an attempt to better understand the relationships among the participation and reaction variables. That is, we examined the relationship between appraisal participation and employee reactions by analyzing the relationship between the different combinations of reaction and participation types. The results of these analyses are presented in Table 3. We note that some of the results discussed in the following sections are based on smaller numbers of coefficients and observations and should therefore be interpreted with some degree of caution. These meta-analyses should be updated and rerun as more data become available (Hunter & Schmidt, 1990). However, it also should be noted that results are consistent across the various analyses. This consistency leads us to place more confidence in the findings and the conclusions based on these findings, despite the small number of coefficients involved.

Satisfaction. We performed additional analyses looking at the relationship between the different types of participation and satisfaction. For the analyses presented in this section, satisfaction was measured overall by combining both satisfaction with the session and satisfaction with the system. Only those analyses with greater than two coefficients were performed. Again, of primary interest was the comparison between instrumental and value-expressive participation. Similar to the results obtained earlier with overall affective reactions, results here indicated that the correlation between value-expressive participation and satisfaction (ρ = .72) was higher than the relationship between instrumental participation and satisfaction (ρ = .59). The relationship between time talked and satisfaction was only .27 but should be regarded with caution because it was based on only 3 coefficients and 820 observations.

Motivation to improve. For motivation to improve, the coefficient for value-expressive participation (ρ = .49) was the same as that for instrumental participation. This is the only reaction in Table 2 for which value-expressive participation did not emerge as a stronger predictor than instrumental participation. It should be noted, however, that the instrumental coefficient was based on a small number of observations (N = 429) and the standard deviation was zero, which suggests the potential for second-order sampling error.

Utility. For perceived utility of the appraisal, value-expressive participation had a stronger relationship with this type of reaction (ρ = .61) than did instrumental participation (ρ = .53). Overall participation (i.e., the "other" category) also had a fairly high correlation with perceived utility (ρ = .59), whereas time talked had a low correlation with utility (ρ = .29). Both of these correlations were based on a limited number of coefficients as well (i.e., 3).

Fairness. Continuing with the same pattern, value-expressive participation was correlated more highly with fairness (ρ = .64) than was instrumental participation (ρ = .51).

Type of Satisfaction by Type of Participation

Although there was no difference between overall participation and satisfaction with either the system or the session, we were interested in investigating whether the type of participation might interact with the type of satisfaction in different ways. Thus, we performed analyses investigating the relationship between different types of participation and different types of satisfaction. As in the analyses just mentioned, only analyses with more than two coefficients were conducted. Value-expressive participation (ρ = .72) was more highly correlated with satisfaction with the session than was instrumental participation (ρ = .54). Similarly, value-expressive participation (ρ = .70) was more strongly correlated with satisfaction with the system than was instrumental participation (ρ = .67), although the difference was much smaller. In addition, note that the relationship between value-expressive participation and session satisfaction was only slightly stronger (ρ = .72) than was the relationship between value-expressive participation and system satisfaction (ρ = .70). In contrast, instrumental participation was much more strongly related to system satisfaction (ρ = .67) than it was to session satisfaction (ρ = .54).

    Discussion

Overall, results clearly indicate that there is a strong relationship between performance appraisal participation and subordinates' affective reactions (p = .61). This finding underscores what many researchers have suggested for some time: Participation in the appraisal process is directly related to employees' satisfaction with and acceptance of the performance appraisal system (e.g., Dipboye & dePontbriand, 1981; Giles & Mossholder, 1990; Korsgaard & Roberson, 1995; Landy et al., 1978; Murphy & Cleveland, 1995). Considering different types of reactions, participation also was, as we anticipated, most strongly related to satisfaction (p = .64). However, participation also was positively related to many other important reactions, including perceived fairness of the appraisal (p = .59), perceived utility of the appraisal (p = .55), and subordinates' motivation to improve after the appraisal (p = .44). Thus, the present analysis has firmly established that participation in performance appraisal is positively associated with a diverse set of favorable subordinate reactions.

    Type of Participation

Knowing the role of participation is important, but it is also of interest to scientists and practitioners alike to know the types of participation that are associated with positive employee reactions and whether there are differences among these types of reactions as a function of type of participation. From the present results, it appears that the type of participation used is important. Value-expressive participation was strongly related to overall reactions (p = .65), with instrumental participation also being strongly related to overall reactions (p = .59). Two other types of participation, the proportion of time talked by subordinates (p = .30) and self-appraisals (p = .25), also were positively related to overall reactions, although not nearly as strongly as value-expressive and instrumental participation. The last category, other, was strongly related (p = .70) to reactions as well; as was noted earlier, this category consisted of studies that measured the extent to which employees participated in goal setting, as well as composite reaction measures.

Of specific interest in the present study was a comparison of value-expressive and instrumental participation. As mentioned earlier, in the overall analysis, value-expressive participation was more strongly related to reactions than was instrumental participation. If we examine results for value-expressive and instrumental participation for each type of reaction, it is clear that this overall result summarizes a pattern in which the coefficient for value-expressive participation, with one exception (i.e., motivation to improve), was always higher than that for instrumental participation. However, it is important to note that although the existence of this difference was consistent across the analyses, the difference was, at times, fairly small. The consistency can be seen by comparing value-expressive (V-E) with instrumental (I) coefficients for satisfaction with the appraisal overall (V-E = .72; I = .59), satisfaction with the appraisal session (V-E = .72; I = .54), satisfaction with the appraisal system (V-E = .70; I = .67), perceived utility of the appraisal (V-E = .61; I = .53), and perceived fairness of the appraisal (V-E = .64; I = .51).

The consistent finding that value-expressive participation was more strongly related to positive reactions than instrumental participation seemed to us, at first blush, to be counterintuitive. However, this pattern of results is consistent with a growing body of research in the justice literature suggesting that the opportunity to voice one's opinions regarding a decision increases the perceived fairness of the process, even if one does not influence the decision (Folger, 1977; Tyler et al., 1985). Moreover, a more careful examination of the justice literature reveals that these results are consistent with theoretical and empirical work in this area.

With respect to empirical research, there are prior studies that corroborate our findings. Tyler (1987) reviewed some of the research that has demonstrated a value-expressive effect of voice on outcomes that is independent of the instrumental effect. For instance, Lind, Lissak, and Conlon (1983), in a dispute-resolution context, found that satisfaction was related to value-expressive voice but not to instrumental voice (or what they called decision control). Similarly, Tyler et al. (1985) demonstrated strong effects of value-expressive voice even at low levels of decision control. Tyler (1987) replicated the independent effects of process control on reactions such as satisfaction and affect in two studies but also demonstrated that the process control (or value-expressive) effect was stronger in magnitude than was the decision control (or instrumental) effect; these results are consistent with the results of our meta-analysis. Korsgaard and Roberson (1995) also identified unique effects of both value-expressive voice and instrumental voice on affective reactions. In addition, they found that only value-expressive voice was related to "trust in manager." Finally, consistent with the relational model (Tyler & Lind, 1992), Tyler (1994) demonstrated that relational issues (neutrality, trustworthiness, and standing) shape judgments of procedural justice. He concluded that value-expressive voice dominates affective reactions rather than resource- or outcome-based perspectives. So, it appears that this finding of the unique, and often stronger, effect of value-expressive voice on affective reactions is not specific to performance appraisal, as demonstrated in our meta-analysis. On the contrary, this finding has been uncovered in other more traditional organizational justice areas, such as dispute resolution.

Although the empirical research in organizational justice contains findings that are consistent with the results of our meta-analysis, most of these studies provide little in terms of a theoretical explanation for the effects. However, a careful review of the justice literature does reveal theoretical frameworks that are consistent with our findings. As we have suggested earlier, Tyler and Lind's (1992) relational model argues that individuals want to maintain membership in what they perceive to be important groups, and it is this desire for group membership and being valued by the group that is at the forefront of their theory. Lind (1988, as cited in Greenberg, 1993) explained it best:

To the extent that procedures provide signs that the perceiver has full status in the group, the procedures will be seen as fair. Thus, procedural justice will be high when the procedure emphasizes the importance of the person and the importance the group accords to the person's concerns and rights [italics added]. The model therefore predicts that procedural fairness will be high for procedures that promote respectful or polite treatment or that dignify the concerns of people involved, because such treatment symbolizes full status in the group. (p. 14)

Clearly, this theory predicts that value-expressive voice is integral to individuals' views about the fairness of procedures, and it follows that this type of voice should be strongly related to affective reactions, as was consistently demonstrated in the present analyses.

The relational model seems to emphasize individuals' social or emotional goals (e.g., esteem maintenance), whereas the instrumental explanation for voice seems more focused on having a direct economic or outcome-related effect on future behaviors or rewards (e.g., pay raise). Tyler and Dawes (1993) argued that individuals' behaviors and attitudes are not always determined by egoistical motives. Furthermore, they suggested that in situations without strong social bonds, people may tend to be egoistical in that they are self-interested in maximizing their own personal gains but that once a group identity is created, people become more responsive to group-centered motives.

We suggest that when individuals are involved in their performance review or interview, they may become more focused on these group-centered motives and the social bonds as a result of the social context in which they find themselves (e.g., a face-to-face encounter with a supervisor) than they are on the more distal long-term, egoistic, economic goals. This may explain the consistent pattern in the data regarding the somewhat stronger relationships between value-expressive voice and reactions versus instrumental voice and reactions. We should note that the distal economic goals are important as well, and this is corroborated by the strong relationship between instrumental participation and affective reactions in these data.

An even more recent theoretical approach to workplace fairness has been articulated by Cropanzano and Prehar (in press). The essence of this approach is that justice or fairness results from a psychological contract between two people or between a person and the organization. It is argued that employees have different expectations about the process as a result of that psychological contract. Employees may have greater expectations regarding their right to provide input into the appraisal process but lesser expectations regarding their right to influence the outcome. This being said, we suggest that the violation of value-expressive voice is a more severe violation for employees than is the violation of instrumental voice. Simply, employees expect the opportunity to participate but are not as likely to presume an opportunity to affect the outcome, and thus stronger reactions result from having this opportunity for participation thwarted than result from being precluded from affecting the outcome.

The finding that the proportion of time talked was not highly related to positive subordinate reactions and exhibited the widest credibility intervals is not surprising given the results of individual studies using this measure. For example, Greller (1975) found that the invitation to participate carried much more weight in determining satisfaction with the appraisal than did the relative amount of time the subordinate spoke versus his or her supervisor. As Greller suggested, the opportunity to express one's opinions seems to be what is important, regardless of whether and the extent to which subordinates choose to seize this opportunity.

The relatively weak relationship between self-ratings and positive subordinate reactions is somewhat surprising considering that one of the most frequently cited advantages to including self-ratings as part of an appraisal system is increased satisfaction on the part of employees (Bassett & Meyer, 1968; Farh et al., 1988). However, the resulting coefficient associated with the self-ratings analysis is potentially misleading and must be interpreted within its proper context.

First, recall that the analysis of the relationship between self-ratings and employee reactions included only three coefficients and 556 observations. A second, related consideration is the paucity of field experiments investigating self-ratings. This may appear surprising on the surface given both the recent literature on multisource appraisals (e.g., Tornow, 1993) and the more traditional research that has investigated self-ratings and their relationship to ratings obtained from other sources (e.g., Harris & Schaubroeck, 1988; Williams & Levy, 1992). However, a review of the literature on self-ratings reveals that the majority of these studies used self-ratings with no control or comparison group (e.g., Farh et al., 1988; Somers & Birnbaum, 1991). Thus, in most studies that have examined self-ratings, all participants completed self-ratings. Because these studies lacked a "no self-rating" control group, no comparison or correlation could be computed, and therefore the studies could not be included in our analyses.

Third, the three studies that were included in the present analysis were quasi-experimental in nature, in which one group completed self-ratings and the control group did not. As mentioned previously, in such experiments, perfect reliability is assumed. However, it is possible that in some of these studies individuals in the experimental conditions chose not to self-rate whereas those in the control conditions opted to self-rate. This is likely to result in some degree of measurement error that was not corrected. Thus, the small sample size as well as the methodological problems associated with some of the studies included warrants a cautious interpretation of the present findings concerning self-ratings. In addition, given that all the self-rating studies were quasi-experimental in design, it is conceivable that the results are method bound rather than reflecting the true relationship between self-ratings and appraisal reactions.
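To make concrete what correcting for unreliability involves, the following minimal sketch illustrates the standard attenuation correction used in Hunter and Schmidt (1990)-type analyses; the reliability values in the example are hypothetical and are not taken from any of the studies analyzed here.

```python
import math

def correct_for_attenuation(r_observed, rel_x, rel_y):
    """Disattenuate an observed correlation given the reliabilities of the
    two measures (the individual correction described by Hunter & Schmidt, 1990)."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical example: an observed participation-reaction correlation of .45,
# with reliabilities of .80 (participation) and .85 (reaction), corrects to about .55.
# Assuming perfect reliability (1.0, 1.0), as in the quasi-experiments discussed
# above, leaves the observed correlation unchanged.
print(round(correct_for_attenuation(0.45, 0.80, 0.85), 2))
```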

    Type of Reaction

Among the most consistent results was the relationship between participation and satisfaction. The overall coefficient of .64 with an associated credibility interval of .33 to .95 indicates that there is a strong, stable relationship between participation in the appraisal process and appraisal satisfaction. Of interest is that at this overall level of analysis, the relationship was not affected by the type of satisfaction measured (session p = .64 and system p = .64). Because most of the studies that measured or manipulated participation did so with respect to participation in the appraisal interview, we predicted that participation would be more strongly related to session (or interview) satisfaction than it would be to system satisfaction. Our rationale was simply that participation, as measured in the majority of studies, was more proximally related to session satisfaction and more distally related to system satisfaction. However, our overall results do not support this perspective.
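For readers less familiar with credibility intervals, the following sketch shows how such an interval is typically formed from the estimated mean corrected correlation and the estimated standard deviation of true correlations (Hunter & Schmidt, 1990). The 95% coverage and the standard deviation of .16 used here are illustrative assumptions chosen only to reproduce an interval of roughly .33 to .95; they are not the exact values underlying Table 2.

```python
from statistics import NormalDist

def credibility_interval(mean_rho, sd_rho, coverage=0.95):
    """Credibility interval around the estimated mean true-score correlation,
    built from the estimated SD of true correlations (not the SE of the mean)."""
    z = NormalDist().inv_cdf(0.5 + coverage / 2.0)
    return mean_rho - z * sd_rho, mean_rho + z * sd_rho

# Illustrative values only: a mean corrected correlation of .64 with an assumed
# SD of true correlations of about .16 yields an interval near .33 to .95.
low, high = credibility_interval(0.64, 0.16)
print(round(low, 2), round(high, 2))
```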

Other findings regarding satisfaction, however, are more supportive of our original ideas. For example, the relationship between value-expressive participation and session satisfaction was slightly stronger than was the relationship between value-expressive participation and system satisfaction. Although this difference was small, the difference makes intuitive sense because relational processes should be most salient during the appraisal session, thus bringing value-expressive participation to the forefront when evaluating the appraisal session. On the other hand, instrumental participation was much more strongly related to system satisfaction than it was to session satisfaction. This also makes intuitive sense because when considering the appraisal system, the relational or group elements of the system are generally less salient, and thus instrumental participation becomes more important. Of course, future research would be necessary to experimentally test these intuitive interpretations of the findings.

Even a quick look at Tables 2 and 3 provides very clear evidence that appraisal participation is related to much more than satisfaction. In fact, participation was consistently related to motivation to improve, utility, fairness, and "other" reactions. These relationships appear to be of moderate to large magnitude (Cohen, 1988) and quite stable. In addition, value-expressive participation was more strongly related to three of the four reactions (with motivation to improve being the lone exception) than was instrumental participation. This also is consistent with our interpretation of the relational model (Tyler & Lind, 1992) and the more recent psychological-contract approach to justice (Cropanzano & Prehar, in press). Employees who value group membership will respond favorably when allowed to participate and have input into the process, and these same employees who expect to be allowed to participate will react negatively when this psychological contract is violated. The social-psychological nature of these interactions seems more important in determining affective reactions than actual control over the decision. This theorizing explains the results across most of the affective reactions included in our analyses.

It is interesting to note that this social-psychological approach to justice, although very consistent with our results, is also consistent with newer approaches to performance appraisal that emphasize the social context of performance appraisal (Murphy & Cleveland, 1995). In particular, researchers need to remember that performance appraisal takes place within a social context and that the failure to consider those social and situational influences on the performance appraisal process while emphasizing only cognitive process issues or outcomes is a costly error (cf. Ferris, Judge, Rowland, & Fitzgibbons, 1994; Levy & Steelman, 1997). A second related perspective of performance appraisal has developed out of the organizational justice literature. Folger, Konovsky, and Cropanzano (1992) have applied organizational justice issues to the study of performance appraisal, resulting in what they call a "due process metaphor." These researchers maintain that performance appraisal traditionally has been viewed as a type of test, and therefore the emphasis has been on the psychometrics of scales and ratings. They advocate approaching performance appraisal from a "due process metaphor," which emphasizes the importance of providing employees with adequate notice (e.g., employee input in the development of these objectives), a fair hearing (e.g., self-appraisal), and judgment based on evidence (e.g., principles of fairness). The point made by these researchers and echoed here is that performance appraisal research should begin taking into account the variables and processes that we typically think of as being "organizational" in nature rather than "industrial." It is our contention that we are in the early stages of this second shift because, over the last 10 years, empirical work has begun to examine performance appraisal from new perspectives (e.g., Dobbins, Cardy, & Platz-Vieno, 1990; Ferris et al., 1994; Giles & Mossholder, 1990; Judge & Ferris, 1993; Williams & Levy, 1992). The results of our meta-analyses support this view because the social context in which the appraisal takes place seems very pertinent to appraisal reactions.

Implications for Future Research

The results of the current review suggest the need for research in a number of important areas. First, although it is clear that different types of participation seem to be differentially related to employee reactions, it is not completely clear why these differences exist. Specifically, the present analysis provides consistent evidence that value-expressive participation is more strongly related to a number of employee reactions to the appraisal than is instrumental participation. From the justice literature, we presented two frameworks that are consistent with the data. Future research should attempt to explore these ideas more formally. Although researchers in the justice field have begun to examine this issue, we encourage performance appraisal researchers to investigate this as well. The results of the present analysis clearly indicate that justice is an important component in the performance appraisal process, representing both an antecedent and a consequence of employee reactions. Moreover, a recent study of 295 legal cases related to performance appraisal demonstrated that issues of fairness and due process were the variables most strongly related to case outcomes, more strongly related than accuracy or validation evidence (Werner & Bolino, 1997). In other words, when rendering their decisions, the courts examine whether or not the appraisal process was fairly conducted. Certainly, this is very strong evidence for the role played by justice and participation in the performance appraisal process.

Second, the current analysis suggests a need for research assessing satisfaction with the appraisal system as well as the appraisal interview. Although overall there were no differences between the participation-session satisfaction and participation-system satisfaction relationships, there were differences when breaking participation down into the two major types. However, given the small sample sizes present in these analyses, we recommend that future research investigate both system and session satisfaction and compare the effects of different forms of participation on these two measures.

Third, there appears to be a need for experimental studies investigating the reactions of employees to the incorporation of self-ratings into appraisal systems. Although most of the extant literature posits increased satisfaction and fairness as a result of including self-appraisals, this has yet to be substantiated by experimental research. Given the current trend in the performance appraisal and feedback literatures of using 360-degree feedback systems, experimental examination of the role of self-appraisals seems warranted, and the processes and variables involved need careful experimental investigation.

Fourth, although the current analysis focused on certain types of participation, namely value-expressive, instrumental, proportion of time talked, and self-ratings, it should be noted that participation can take other forms that may be equally relevant to employee reactions. For example, the "other" category in the present study contained research on participation in the appraisal goal-setting process. Although fewer studies with these operationalizations exist, given the significant size of the relationships for the "other" category, future research should investigate the effects of these types of participation on employees' reactions as well.

Fifth, perceived accuracy seems to be a potentially important reaction criterion that has received very little individual research attention. As mentioned previously, most of the existing performance appraisal research that has measured perceived accuracy has done so by confounding it with perceived fairness. That is, employees are generally asked, "How fair and accurate do you think the appraisal was?" Although additional research has examined fairness as a separate criterion, perceived accuracy seems to have been relatively ignored. Future research should examine perceived accuracy as a separate criterion to investigate its importance. Furthermore, if researchers believe that fairness and accuracy are synonymous constructs, this needs to be tested empirically rather than assumed.

    Limitations

Although the present study uncovered many consistent relationships between appraisal participation and positive employee reactions, the limitations of the study also must be noted and the results considered in light of these limitations. The major limitation, which was noted periodically throughout the text, is the issue of second-order sampling error. Analyses for many of the variables contained only a small number of studies, and therefore the results should be regarded as tentative. Analyses should be rerun as more results become available. Still, the fact that we uncovered consistent patterns in the data and that these patterns corresponded with both theory and empirical research lends considerable support to our findings.

A second limitation of the study was the fact that the 92 correlations uncovered originated from only 32 distinct samples. As mentioned previously, the use of multiple correlations from the same study violates the assumption that samples are independent and leads to a population variance estimate that is too large. In situations where multiple correlations were reported in the present study, the average correlation was used. Although this avoids the violation of the assumption of independent samples, the question of the appropriate sample size for this average correlation remains an issue. That is, what should be entered as the sample size for the average correlation, the sample size of the study or the product of the sample size times the number of correlations averaged from the study? Hunter and Schmidt (1990) suggested that there is much less error in using the study sample size, which is what was used in the present analyses. However, it should be noted that using the study sample size leads to an overestimation of the variance attributable to sampling error because the average correlations will have less sampling error than a single correlation. This overestimation then leads to an overcorrection of sampling error and an underestimate of the variance associated with the relationship between participation and reactions.
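As a concrete illustration of the procedure just described, the following sketch averages non-independent correlations within each study, weights each study's averaged correlation by the study sample size, and partitions the observed variance into sampling error and residual components in the manner of a bare-bones Hunter and Schmidt (1990) analysis. The study data are hypothetical, and the code is a simplified sketch rather than the analysis actually reported here.

```python
# Hypothetical studies: each reports one or more (non-independent) correlations
# between participation and a reaction measure, plus a single sample size.
studies = [
    {"n": 120, "rs": [0.45, 0.52]},          # two correlations from the same sample
    {"n": 300, "rs": [0.38]},
    {"n": 85,  "rs": [0.60, 0.55, 0.50]},
]

# Step 1: average correlations within each study so that samples remain
# independent, and (per Hunter & Schmidt's recommendation) keep the study n.
avg = [(s["n"], sum(s["rs"]) / len(s["rs"])) for s in studies]

# Step 2: sample-size-weighted mean correlation across studies.
total_n = sum(n for n, _ in avg)
r_bar = sum(n * r for n, r in avg) / total_n

# Step 3: observed (weighted) variance of correlations across studies.
var_obs = sum(n * (r - r_bar) ** 2 for n, r in avg) / total_n

# Step 4: variance expected from sampling error alone for the average study;
# using the study n here (rather than n times the number of correlations)
# is the choice discussed above.
n_bar = total_n / len(avg)
var_error = (1 - r_bar ** 2) ** 2 / (n_bar - 1)

# Residual variance attributed to real variation in the population correlation.
var_rho = max(var_obs - var_error, 0.0)
print(round(r_bar, 3), round(var_obs, 4), round(var_error, 4), round(var_rho, 4))
```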

    Conclusion

Current trends in performance appraisal (e.g., 360-degree systems) seem to indicate that, as Meyer (1991) has suggested, the traditional top-down approach is no longer consistent with organizations that are moving toward involvement-oriented climates. The current review highlights some of the benefits associated with allowing performance appraisal systems to become more involvement-oriented through employee participation in the process. Specifically, employee participation was positively related to employee satisfaction with the appraisal session, the appraisal system, perceived utility of the appraisal, motivation of employees to improve performance, and perceived fairness of the system. Although by no means exhaustive, we have empirically identified at least five ways in which employees can become more involved in the appraisal process: (a) offering them the opportunity to voice their opinions (i.e., value-expressive participation); (b) allowing them to influence the appraisal through voicing their opinions (i.e., instrumental participation); (c) allowing them to perform self-appraisals; (d) allowing them to participate in the development of the appraisal system; and (e) allowing them to participate in goal setting in the appraisal process. As uncovered in the present study, some of these operationalizations of participation are more strongly related to positive employee reactions than others, most notably value-expressive participation and instrumental participation. Ultimately, however, the type of participation an organization uses is dependent on and must be consistent with the larger organizational context within which the performance appraisal system exists.

    References

References marked with an asterisk indicate studies included in the meta-analysis.


Ajzen, I., & Fishbein, M. (1977). Attitude-behavior relations: A theoretical analysis and review of empirical research. Psychological Bulletin, 84, 888-918.

Ajzen, I., & Madden, T. J. (1986). Prediction of goal-directed behavior: Attitudes, intentions, and perceived behavioral control. Journal of Experimental Social Psychology, 22, 453-474.

Anderson, G. C. (1993). Managing performance appraisal systems. Cambridge, MA: Blackwell.

Bassett, G. A., & Meyer, H. H. (1968). Performance appraisal based on self-review. Personnel Psychology, 21, 421-430.

Bernardin, H. J., & Beatty, R. W. (1984). Performance appraisal: Assessing human performance at work. Boston: Kent.

*Burke, R. J. (1970). Characteristics of effective performance appraisal interviews: I. Open communication and acceptance of subordinate disagreements. Training and Development Journal, 24, 9-12.

*Burke, R. J., Weitzel, W., & Weir, T. (1978). Characteristics of effective employee performance review and development interviews: Replication and extension. Personnel Psychology, 31, 903-919.

*Burke, R. J., & Wilcox, D. S. (1969). Characteristics of effective employee performance review and development interviews. Personnel Psychology, 22, 291-305.

*Bustamante, C., & Dickinson, T. L. (1996). Some determinants of employee attitudes about the performance appraisal process. Paper presented at the 11th Annual Conference of the Society for Industrial and Organizational Psychology, San Diego.

Cardy, R. L., & Dobbins, G. H. (1994). Performance appraisal: Alternative perspectives. Cincinnati, OH: South-Western Publishing.

Carroll, S. J., & Schneier, C. E. (1982). Performance appraisal and review systems: The identification, measurement and development of performance in organizations. Glenview, IL: Scott, Foresman.

Cascio, W. F. (1991). Applied psychology in personnel management (4th ed.). Englewood Cliffs, NJ: Prentice Hall.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York: Academic Press.

Cropanzano, R., & Prehar, C. A. (in press). Emerging justice concerns in an era of changing psychological contracts. In R. Cropanzano (Ed.), Justice in the workplace (Vol. 2): From theory to practice. Mahwah, NJ: Erlbaum.

*Dipboye, R. L., & dePontbriand, R. (1981). Correlates of employee reactions to performance appraisals and appraisal systems. Journal of Applied Psychology, 66, 248-251.

*Dobbins, G. H., Cardy, R. L., & Platz-Vieno, S. J. (1990). A contingency approach to appraisal satisfaction: An initial investigation of the joint effects of organizational variables and appraisal characteristics. Journal of Management, 16, 619-632.

*Evans, E. M., & McShane, S. L. (1988). Employee perceptions of performance appraisal fairness in two organizations. Canadian Journal of Behavioral Science, 20, 177-191.

Farh, J. L., Werbel, J. D., & Bedeian, A. G. (1988). An empirical investigation of self-appraisal-based evaluation. Personnel Psychology, 41, 141-156.

Ferris, G. R., Judge, T. A., Rowland, K. M., & Fitzgibbons, D. E. (1994). Subordinate influence and the performance evaluation process: Test of a model. Organizational Behavior and Human Decision Processes, 58, 101-135.

Folger, R. (1977). Distributive and procedural justice: Combined impact of "voice" and improvement on experienced inequity. Journal of Personality and Social Psychology, 35, 108-119.

Folger, R., & Greenberg, J. (1985). Procedural justice: An interpretive analysis of personnel systems. In K. Rowland & G. Ferris (Eds.), Research in personnel and human resources management (Vol. 3, pp. 141-183). Greenwich, CT: JAI Press.

Folger, R., Konovsky, M., & Cropanzano, R. (1992). A due process metaphor for performance appraisal. In B. Staw & L. Cummings (Eds.), Research in organizational behavior (Vol. 14, pp. 127-148). Greenwich, CT: JAI Press.

*French, J. P., Kay, E., & Meyer, H. H. (1966). Participation and the appraisal system. Human Relations, 19, 3-19.

*Giles, W. F., & Mossholder, K. W. (1990). Employee reactions to contextual and session components of performance appraisal. Journal of Applied Psychology, 75, 371-377.

Greenberg, J. (1986). Determinants of perceived fairness of performance evaluations. Journal of Applied Psychology, 71, 340-342.

Greenberg, J. (1993). The social side of fairness: Interpersonal and informational classes of organizational justice. In R. Cropanzano (Ed.), Justice in the workplace: Approaching fairness in human resource management (pp. 79-103). Hillsdale, NJ: Erlbaum.

Greenberg, J., & Folger, R. (1983). Procedural justice, participation and the fair process effect in groups and organizations. In P. Paulus (Ed.), Basic group processes (pp. 235-266). New York: Springer-Verlag.

*Greller, M. M. (1975). Subordinate participation and reactions to the interview. Journal of Applied Psychology, 60, 554-559.

*Greller, M. M. (1978). The nature of subordinate participation in the appraisal interview. Academy of Management Journal, 21, 646-658.

Harris, M. M., & Schaubroeck, J. (1988). A meta-analysis of self-supervisor, self-peer, and peer-supervisor ratings. Personnel Psychology, 41, 43-62.

Hedge, J. W., & Borman, W. C. (1995). Changing conceptions and practices in performance appraisal. In A. Howard (Ed.), The changing nature of work (pp. 451-481). San Francisco: Jossey-Bass.

Hirsh, H. R., Northrop, L. C., & Schmidt, F. L. (1986). Validity generalization results for law enforcement occupations. Personnel Psychology, 39, 399-420.

Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.

Ilgen, D. R., Fisher, C. D., & Taylor, S. M. (1979). Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64, 347-371.

*Ilgen, D. R., Peterson, R. B., Martin, B. A., & Boeschen, D. A. (1981). Supervisor and subordinate reactions to performance appraisal sessions. Organizational Behavior and Human Performance, 28, 311-330.

*Inderrieden, E. J., Allen, R. E., & Keaveny, T. J. (1992). An investigation of the antecedents and consequences of voluntary self-ratings in a performance appraisal system. Paper presented at the Annual Conference of the Academy of Management, Las Vegas.

*Inderrieden, E. J., Keaveny, T. J., & Allen, R. E. (1988). Predictors of employee satisfaction with the performance appraisal process. Journal of Business and Psychology, 2, 306-310.

Judge, T. A., & Ferris, G. R. (1993). Social context of performance evaluation decisions. Academy of Management Journal, 36, 80-105.

Kanfer, R., Sawyer, J., Earley, P. C., & Lind, E. A. (1987). Fairness and participation in evaluation procedures: Effects on task attitudes and performance. Social Justice Research, 1, 245-249.

*Klein, H. J., & Snell, S. A. (1994). The impact of interview process and context on performance appraisal interview effectiveness. Journal of Managerial Issues, 6, 160-175.

*Korsgaard, M. A., & Roberson, L. (1995). Procedural justice in performance evaluation: The role of instrumental and non-instrumental voice in performance appraisal discussions. Journal of Management, 21, 657-669.

*Korsgaard, M. A., Roberson, L., & Rymph, D. (1996). Promoting fairness through subordinate training: The impact of subordinate communication style on manager's fairness. Paper presented at the 11th Annual Conference of the Society for Industrial and Organizational Psychology, San Diego.

*Landy, F. J., Barnes, J., & Murphy, K. (1978). Correlates of perceived fairness and accuracy of performance appraisals. Journal of Applied Psychology, 63, 751-754.

Larson, J. R. (1984). The performance feedback process: A preliminary model. Organizational Behavior and Human Performance, 33, 42-76.

Latham, G. P., & Wexley, K. N. (1981). Increasing productivity through performance appraisal. Reading, MA: Addison-Wesley.

Lawler, E. E. (1967). The multitrait-multirater approach to measuring managerial job performance. Journal of Applied Psychology, 51, 369-381.

Levy, P. E., & Steelman, L. A. (1997). Performance appraisal for team-based organizations: A prototypical multiple rater system. In M. Beyerlein, D. Johnson, & S. Beyerlein (Eds.), Advances in interdisciplinary studies of work teams: Team implementation issues (Vol. 4, pp. 141-165). Greenwich, CT: JAI Press.

Lind, E. A., Lissak, R. E., & Conlon, D. E. (1983). Decision control and process control effects on procedural fairness judgments. Journal of Applied Social Psychology, 4, 338-350.

Lind, E. A., & Tyler, T. R. (1988). The social psychology of procedural justice. New York: Plenum.

Locke, E. A., & Schweiger, D. M. (1979). Participation in decision-making: One more look. In B. M. Staw (Ed.), Research in organizational behavior (Vol. 1, pp. 265-339). Greenwich, CT: JAI Press.

Meyer, H. H. (1991). A solution to the performance appraisal feedback enigma. Academy of Management Executive, 5, 68-76.

    McDaniel, M. A.