
Guest Editorial by Danny G. Langdon

Are We Doing What We Say?

I have often wondered about the extent to which we actually follow the basic tenets of our systematic approach to performance improvement. Do we typically do careful performance needs analyses? Do we all employ a systems approach to selecting interventions? Do we evaluate for business, group, and individual impact? Sadly, although I don’t have definitive data, I am inclined to believe that not all of us really practice our technology as easily as we espouse it.

For instance, I see and often read about what could be characterized as informal assessments. These are often conducted around surveys based on what a group thinks should be measured to reveal the performance gaps. Rarely is the assessment based on a careful definition of the intended performance. Although some technologists define performance objectives and then base measurement on those objectives, which is certainly better than mere brainstorming, as suggested in the October issue of PI, I maintain that objectives provide an incomplete definition of performance.

Don’t get me wrong. What results from brainstorming and objectives is not all that bad. It is not like the analysis that is based on only one person’s logic. The problem is that these analyses are often partially inaccurate and incomplete definitions of the performance needs. We end up defining only part of the performance gap. This is because any version of brainstorming about what to measure is collective guessing, which makes the objectives incomplete definitions of performance.

I am encouraged, however, that this issue’s authors give us more complete models of performance analyses. There is hope for the future as we find better and better ways to define performance, work, and behavior. The authors suggest how to do the analyses quickly, a fact that should not be overlooked because one of the more common complaints about performance technologists is that we always want to do an analysis, which is a good idea, and take too long to do it, which is a bad idea.

My second concern about the use of performance technology centers on selecting and using interventions. Although we have made advances in performance analysis, we have not made similar strides in identifying, selecting, and implementing interventions. Rather, we are still primarily using those interventions we personally happen to know best. As Joe Harless suggests, we may know that certain other interventions are appropriate, but we often ignore them because we assume the organization would not let us use them. We often just give up or look the other way. We have far to go in knowing how to select and implement interventions and in persuading organizations to let us use other interventions as necessary. Making sets of interventions visible to clients and implementing them well are steps toward success.

We may be at our weakest on the evaluation side of our technology. I did a better job of measurement back when I wrote programmed instruction than I do today. I may have been evaluating an inappropriate intervention (i.e., programmed instruction) for the need, but I did a better job of pre- and post-evaluation.

It seems that we fall short particularly in evaluating our intervention’s impact on the business. Recently I have done a bit better with this because I returned to the basic tenets of evaluation I learned from Bill Deterline years ago. I ignore the statistical rigmarole the measurement folk espouse. I am not even sure I fully understand how each level of evaluation is done. Rather, I measure output and consequences first, and then measure for cause. This prevents over-measuring and keeps us focused on what is most critical: the performance.

Every time we think we have progressed in performance technology, we find that we have so much more to achieve. This special issue helps, but I encourage you to find better models for executing performance technology. The starting point is better analysis because, without a proper framework, no amount of superb work can make it better.

Respond to the ideas presented in the editorial by either contacting Danny Langdon at (310) 453-8440 or e-mail: [email protected], or by sending a letter to the editor c/o ISPI, 1300 L Street NW, Suite 1250, Washington, DC 20005, or e-mail: [email protected].

If you have an article idea you’d like to discuss or would like to contribute to Performance Improvement, please contact Martha Dean, Editor, at (610) 269-2689 (phone), (610) 873-9717 (fax), or [email protected]. Articles must be prepared in accordance with the guidelines provided in this publication. Send all manuscripts and letters to the editor directly to the Director of Periodical Publications, ISPI, 1300 L Street NW, Suite 1250, Washington, DC 20005.

Performance Improvement / Vol. 36, No. 10