

Letters are selected for their pertinence to material published in Evaluation Practice or because they discuss issues or problems of general interest to practicing evaluators. Letters pertaining to material published in EP may criticize articles or reports; correct errors; provide support or agreement; or offer different points of view, clarifications, or additional information. Depending on the nature of the discussion, authors of articles in question may be given an opportunity to reply in the same issue. Letters accepted for publication may be edited and shortened.

ACADEMIC VS CLIENT CRITIQUE: A RESPONSE TO LIPSEY

To the Editor: In his letter commenting on our article “Good Organizational Reasons for Bad Evaluation Research” (Evaluation Practice, November 1989), Dr. Mark Lipsey characterized our piece as “a zealous paroxysm of academic bashing” replete with “self-serving myths” that justify “compromises in quality as a necessary adaptation to the vagaries of client organizations” and “hack work designed to line the pockets of evaluators.” At the risk of further irritating Dr. Lipsey’s sensibilities, permit us to reiterate briefly our position for the benefit of those who may want to attack the substance of the paper rather than our moral fiber or intellectual credentials. In the paper, we identified two lines of argument - the “academic critique” and the “client critique” - that represent opposite ends of the spectrum of opinion regarding evaluator effectiveness and what should be done to improve it.

Since Dr. Lipsey’s paper is typical of the first sort of criticism, we used it as an example, although we also cited other exemplars. In our view, the academic critique, which stresses adherence to strict methodological protocols based on experimentation, arises from the assumption that client organizations use evaluation information in a means-end or “rational” decision-making environment. We believe that this is rarely the case and that organizations frequently operate using strictly “non-rational” decision-making models in which information about the outcome of programs is irrelevant or threatening. Under these conditions, the evaluator providing information about the results of operations is often criticized by clients for not producing information that can be used to accomplish important organizational objectives, sometimes wholly unrelated to assessing or improving program effectiveness.

The contradiction between the client’s desire for information that is useful under conditions of non-rational decision making and the evaluator’s formal training is quite real, and it cannot be resolved through the means suggested by Dr. Lipsey and others (higher professional standards, better training, more honesty, etc.). In the best case, this approach will lead the evaluator to produce information that clients view as even more irrelevant; in the worst, clients may view such evaluation efforts as threatening the survival of the organization. Thus, we recommended that the organization’s decision-making style be identified, using the rules we noted, and that the evaluator adapt his or her professional role to the realities of the organization. Because we did not discuss the nature of that adaptation in great detail, whether it necessarily constitutes a sellout to “hack work” is uncertain, although we did grant the possibility of a sellout in footnote two, while Dr. Lipsey assumes the worst.

Neither in his article nor in his letter did Dr. Lipsey address why there is so much “malpractice,” although we consider this the really interesting question and the major focus of our paper. In addition, he completely ignores the client critique, perhaps because it appears to him to have no rational, empirical, or intellectual justification, whereas we proposed that a fruitful line of inquiry would be to analyze organizational decision-making methods and to determine how the evaluation enterprise is (or can be) usefully incorporated into this process. Apparently Dr. Lipsey believes he can remedy the patient without first understanding the nature of the complaints or the sources of the ailment. We think this is nonsense and reflects just the sort of attitude that motivated us to write the article in the first place.

MICHAEL HENNESSY
Prevention Research Center

MICHAEL J. SULLIVAN
Freeman, Sullivan and Company
2532 Durant Avenue
Berkeley, CA 94704