

Industrial and Organizational Psychology, 4 (2011), 3–22. Copyright © 2011 Society for Industrial and Organizational Psychology. 1754-9426/11

FOCAL ARTICLE

Evidence-Based I–O Psychology: Not There Yet

ROB B. BRINER
University of London

DENISE M. ROUSSEAU
Carnegie Mellon University

Abstract

Evidence-based practice is now well established in several fields including medicine, nursing, and social policy. This article seeks to promote discussion of whether the practice of industrial–organizational (I–O) psychologists is evidence based and what is needed to make I–O psychology an evidence-based discipline. It first reviews the emergence of the concept of evidence-based practice. Second, it considers the definitions and features of evidence-based practice, including evidence-based management. It then assesses whether I–O psychology is itself an evidence-based discipline by identifying key characteristics of evidence-based practice and judging the extent these characterize I–O psychology. Fourth, some key strategies for promoting the use of evidence in I–O psychology are considered: practice-oriented research and systematic reviews. Fifth, barriers to practicing evidence-based I–O psychology are identified along with suggestions for overcoming them. Last is a look to the future of an evidence-based I–O psychology that plays an important role in helping consultants, in-house I–O psychologists, managers, and organizations become more evidence based.

Most industrial–organizational (I–O) psychologists would confidently claim that their practice is based on evidence—in particular, psychological research findings. And, in a general sense, that's more or less true. The founders of our field, including James McKeen Cattell and C. S. Myers, originated the application of systematic research to workplace issues and in doing so established psychological science itself in both the United States and Britain. Most of what we do in I–O psychology draws on or is at least informed by some type of evidence, whether that be research published in journal articles or workforce facts and metrics collected through practice.

Correspondence concerning this article should be addressed to Rob B. Briner. E-mail: [email protected]

Address: Department of Organizational Psychology, Birkbeck College, University of London, Malet Street, London WC1E 7HX, United Kingdom.

We are very grateful to three anonymous reviewers for their comments and suggestions.

Look a bit deeper, however, and things aren't quite so straightforward. Practitioners of all stripes—from the snake-oil salesperson with a cure-all remedy to the family physician or the personal trainer to the psychic who communicates with the dead—claim to have evidence for what they do. Claims to use evidence are therefore meaningless in themselves. However, the hallmark of any profession is the existence of an agreed-upon core of knowledge and means for its continued generation and refinement (Freidson, 1986). Unlike common sense or general knowledge claims, it's the nature and quality of the particular evidence a profession's practitioners use that distinguishes them from the layperson—or the snake-oil salesperson. In the case of professions such as I–O psychology, medicine, education, or engineering, much of their value and legitimacy depends on the extent a scientific evidence base informs and is used in practice.

In many areas of I–O psychology, large bodies of reasonable quality evidence are continually refined and critically evaluated. The expanded use of meta-analysis in I–O psychology indicates the value it places on systematic evidence, cumulative research findings, and critical analyses (e.g., Judge, Heller, & Mount, 2002; Judge & Ilies, 2002). Similarly, the value of the structured interview over its unstructured counterpart has been affirmed repeatedly over decades (McDaniel, Whetzel, Schmidt, & Maurer, 1994); the I–O psychology practitioner is probably one of the more consistent voices in industry to buck the populist tide that continues to favor unstructured techniques. On the other hand, I–O psychology does not always resist more ‘‘pop-psychology’’ topics such as emotional intelligence. Although introduced to social science through work by Mayer and Salovey (1997), the type of claims made in Daniel Goleman's (1998) popular book and other popular accounts of the topic are clearly questionable (e.g., Cherniss, 2010; Locke, 2005).

In other areas, the evidence familiar to I–O psychologists may appear more static, taking the form of received wisdom. For example, consider the notion that accurate feedback is good (i.e., at least if task related; cf. Kluger & DeNisi, 1996). Still, uncritical reliance on received wisdom is problematic: Cognitive and educational research demonstrates that feedback too early in the learning process can undermine the mental processes needed to acquire certain skills (Merrill, Reiser, Merrill, & Landes, 1995; VanLehn, Siler, Murray, Yamauchi, & Baggett, 2003). Given that new information replaces or refines existing information, the evidence base we use is inevitably a work in progress.

I–O psychologists have expressed concerns about the existence, relevance, and use of evidence in the profession. A recent survey of SIOP members (Silzer, Cober, Erickson, & Robinson, 2008) revealed that a majority of respondents (who were largely practitioners) held the opinion that practice is ahead of research in 14 content areas, including coaching, talent management, and employee relations, whereas science was ahead in only two (measurement and statistics, and job analysis). In five areas, practitioners and nonpractitioners held opposite views. Practitioners saw practice ahead and nonpractitioners saw science ahead in recruitment, performance management, organizational culture, training and development, and employee engagement. A plurality shared the view that science is behind practice in its choice of research topics and that practice-relevant research was sparse. Finding perceived gaps on both sides is not so surprising. In I–O psychology, scientists and practitioners each prize their own knowledge sources over the other's, raising concern regarding the quality of the interface between the two. In turn, these findings raise questions about the extent the science and practice of I–O psychology is synergistic in a fashion that would promote evidence-based practice.

Another expression of this concern can be found in many of this journal's focal articles, as they are precisely about examining the concepts and evidence underlying our practice in a range of areas including the identification of discrimination in workplace evaluations (Landy, 2008), executive coaching (McKenna & Davis, 2009), job performance ratings (Murphy, 2008), employee engagement (Macey & Schneider, 2008), executive selection (Hollenbeck, 2009), and leadership development (McCall, 2010). These articles have created lively, important, and healthy debate.

Many professions have expressed such concerns and pursued ways that evidence can better inform practice. One of the most recent and widespread ideas used to frame solutions to this problem is the subject of this focal article: evidence-based practice. For some years, evidence-based practice has evolved as a way of identifying and developing techniques and processes that practitioners can use to incorporate evidence from various sources into their everyday work. Our purpose here is to promote discussion and further development of the practice of I–O psychology as an evidence-based discipline.

This article is structured as follows. It reviews the emergence of evidence-based practice in medicine, social policy, and more recently management, and considers the definitions and features of evidence-based practice, including that of evidence-based management (EBMgt). It assesses whether I–O psychology is itself an evidence-based discipline by identifying key characteristics of evidence-based practice and gauging the extent these characterize the practice of I–O psychology. By identifying the key contributions of practice-oriented research and systematic reviews to evidence-based practice, this article details how these applications promote evidence use in practice, and describes barriers to evidence-based approaches to I–O psychology practice along with suggestions to overcome them. Last, we look to the future and consider the prospects for an evidence-based I–O psychology that helps managers and organizations themselves become more evidence based.

Emergence of Evidence-Based Practice and Management

The notion of using scientific evidence to inform professional practice is not new. At the same time, neither is the observation that from medicine to management much practice is not related to evidence in any significant way. This tension found expression in a British Medical Journal editorial reporting, ‘‘only about 15% of medical interventions are supported by solid scientific evidence’’ (Smith, 1991, p. 798).1 This article marked a turning point in the development of evidence-based medicine. More than other professions, medicine and its allied disciplines, such as nursing, have engaged the challenge of using scientific evidence to better inform practice.

1. It may be possible and potentially useful to make a similar estimate of the percentage of I–O psychology practices that are based on solid scientific evidence.

In medicine and nursing, the notion of being evidence based is now well established. It forms part of the basic training of nurses, physicians, and other professions allied to medicine. Medical research and its funders, in addition to producing new basic science, also put considerable resources into research on effective practice, including how best to treat specific types of patients, as well as in conducting systematic reviews of research literature to answer practice-relevant questions. Systematic reviews are essentially literature reviews that address a very specific review question using an explicit, systematic methodology to identify, select, and pull together findings from a range of studies to draw conclusions about what is known and not known about the question. (Meta-analyses are one type of systematic review.) Systematic reviews are in essence pieces of research on existing research.

The findings of both patient-oriented studies and systematic reviews are then translated into tools and decision-making aids such as checklists and patient-care protocols used by medical clinicians. Later, we will talk about how the ‘‘functional I–O psychology equivalents’’ of these can promote our own evidence-based practice.

In the late 1990s, the term ‘‘evidence based’’ became paired with other nonmedical disciplines and practice areas, including education, social work, criminology, and government policy making. Over a decade ago, Briner (1998) made what appears to be the first attempt to suggest that some ideas from evidence-based medicine could be adapted to the practice of organizational psychology. These same ideas have been applied more recently to the practice of management. In 2006, two key publications (Pfeffer & Sutton, 2006; Rousseau, 2006) stimulated discussions of EBMgt and how it might be used in business school teaching (Rousseau & McCarthy, 2007), its limitations and potential dangers (Learmonth & Harding, 2006; Morrell, 2008), the evidence for EBMgt (Reay, Berta, & Kohn, 2009), and its meaning and definition (Briner, Denyer, & Rousseau, 2009).

In the last 2 decades, considerable soul searching by management scholars over the research–practice ‘‘gap’’ or ‘‘divide’’ has raised difficult questions about why the gap exists, how to bridge it, and the value and purpose of management research itself (Hambrick, 1994; Rynes, Brown, & Colbert, 2002). Although I–O psychology is not immune to such concerns, in other respects I–O psychology would seem to have avoided the criticisms levied on other social sciences (Anderson, Herriot, & Hodgkinson, 2001). Anderson and colleagues used the label ‘‘pedantic science’’ to describe those research domains driven by theoretical concerns and fastidious analytics while ignoring real-world issues.

I–O psychology might seem an unlikely candidate for the pedantic label. Even when conducted by university-based scholars, I–O psychology research is rooted in issues and problems arising in organizational settings. Given that its graduates work in industry as consultants or in-house I–O psychologists, our discipline would seem to be an exemplar of ‘‘evidence-based’’ practice. The research-intensive training of I–O psychologists and our often close-to-practice research should make reliance on research evidence almost second nature. Indeed, it is probably no coincidence that many advocates of EBMgt (including the present authors) are also I–O psychologists (e.g., Wayne Cascio, Edwin Locke, and Ed Lawler).

So how can we explore the truth of I–O psychology's claim to engage in evidence-based practice? On one hand, I–O psychology seems to embody Anderson et al.'s (2001) ideal of a ‘‘pragmatic science,’’ addressing questions of theoretical and practical relevance in a methodologically sound manner. Yet, if we use the nature of evidence-based practice as the starting point in specifying our criteria, we would draw a different conclusion.

What Is Evidence-Based Practice?

As mentioned, all practitioners claim to have evidence for their practice. Nonetheless, evidence-based practice is a particular approach, or more accurately a set of approaches, to incorporating evidence into practice decisions. In medicine, for example, this means ‘‘integrating individual clinical expertise with the best available external clinical evidence from systematic research’’ (Sackett, Richardson, Rosenberg, & Haynes, 1997, p. 2) in making decisions about patient care. Three aspects of this definition need to be highlighted.

First, evidence-based practice integrates the practitioner's expertise and external evidence from research. Both sources of knowledge are vital. Second, it is about trying to obtain and use the best available evidence, even if that evidence is ultimately determined to be inconsistent or rejected as irrelevant. Using evidence does not mean slavishly following it, acting only when there is good evidence, or doing nothing if there is none. Rather, it is a more active process of examining and evaluating the best of what is there and applying it, along with other sources of information such as situational facts, to decision making. Some research evidence may be more readily converted into practical knowledge, although both basic scholarly and more applied evidence can contribute to evidence-based practice. Third, it uses systematic reviews to assess all available and relevant evidence rather than relying on single studies.

EBMgt has been defined in several different ways, but most definitions draw on ideas of evidence-based practice found in medicine and elsewhere. One recent definition of EBMgt (Briner et al., 2009, p. 19) is as follows:

Evidence-based management is about making decisions through the conscientious, explicit, and judicious use of four sources of information: practitioner expertise and judgment, evidence from the local context, a critical evaluation of the best available research evidence, and the perspectives of those people who might be affected by the decision.

The conscientious use of the four sources of information means that an EBMgt approach involves paying careful and sustained attention to sources of what can be potentially different, conflicting, and sometimes difficult-to-interpret information. Being explicit means using information from each source in a clear, conscious, and methodical way such that the roles played by all the information in the final decision are understood. And being judicious involves using reflective judgment to evaluate the validity and relevance of the information from each source. Evidence and information is critically evaluated in relation to the practice context and problem.

Take the example of a senior HRM practitioner in a large firm who believes the organization has a problem with high absence levels and wants to intervene to reduce the absence rate. Table 1 provides examples of the sorts of information that the practitioner may compile and critically evaluate to decide on what kind of intervention, if any, is likely to be most effective. After this information and evidence is compiled, the next stage is to integrate the different sources of evidence. The decision takes place at the intersection of these four information sources. Exactly how these sources of information are integrated depends on the context and who is making the decision. It is likely that they will vary in several ways.

First, varying quantities of information are available from each source. For example, little may be known about the perspectives of those who may be affected by the intervention, but the practitioners involved may have much expertise in and experience with the problem. Very strong asymmetries may lead to decisions biased toward those sources of information of which there simply are more. A second way in which these sources of information will vary is in relation to their quality, validity, and reliability. Although there may be plenty of information from one of these sources, it may be judged to be of poor quality and therefore not that usable or relevant. Third, even where the quantity and quality of information is relatively balanced across sources, it may be that one highly pertinent piece ‘‘trumps’’ others. In a safety-critical organization, for example, even where nearly all the information suggests that a particular intervention may be effective, a very small piece of information implying that it also increases errors may be enough to push the decision away from what most of the evidence would suggest.
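To make the integration logic concrete, the following minimal Python sketch shows one way the four sources might be combined: each source's support for intervening is weighted by its judged quality, and a single highly credible red flag can veto an otherwise supported decision. Everything here (the EvidenceSource structure, the scores, the weights, and the trump threshold) is an illustrative assumption of ours, not an established EBMgt procedure.

# Hypothetical sketch of integrating the four evidence sources for the
# absence-intervention decision described above. All names and numbers
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class EvidenceSource:
    name: str
    support: float   # -1.0 (argues against intervening) to +1.0 (argues for it)
    quality: float   # 0.0 (unusable) to 1.0 (highly valid and reliable)

def integrate(sources, trump_threshold=-0.9):
    """Combine sources, weighting each by its judged quality, and let one
    highly credible red flag veto the decision."""
    # A single high-quality, strongly negative piece of evidence "trumps"
    # the rest, e.g., a safety-critical signal that errors increase.
    for s in sources:
        if s.quality > 0.8 and s.support <= trump_threshold:
            return f"Do not proceed: '{s.name}' raises a credible red flag."
    # Otherwise weight each source's support by its quality, so that
    # abundant-but-weak information does not dominate the decision.
    total = sum(s.quality for s in sources)
    score = sum(s.support * s.quality for s in sources) / total
    return f"Weighted support for intervening: {score:+.2f}"

print(integrate([
    EvidenceSource("practitioner expertise", support=0.6, quality=0.7),
    EvidenceSource("local context data", support=0.4, quality=0.5),
    EvidenceSource("systematic review evidence", support=0.7, quality=0.9),
    EvidenceSource("stakeholder perspectives", support=0.2, quality=0.4),
]))

The point of the sketch is the shape of the reasoning rather than the numbers: quality, not sheer quantity, carries the weight, and an explicit rule encodes the kind of ‘‘trumping’’ described above.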

Evidence-based practice across various fields uses similar approaches intended to improve the process and consequences of decision making by collecting and critically analyzing evidence from a number of sources and then integrating it to make a practical decision or solve a problem (e.g., Gough, Kiwan, Sutcliffe, Simpson, & Houghton, 2003; Soukup, 2000).

How Evidence-Based Is I–O Psychology?

So, how evidence based are we? To be asked this question can feel a bit unnerving or even insulting. I–O psychology is a science after all. We all know how to read journals and do research. We understand scientific principles and can distinguish good research from bad. We can interpret and apply findings from studies. We evaluate, measure, and assess what we do as we go along. Although all these statements may be broadly true, they don't reveal how evidence based we actually are as professionals. Instead, these statements express some of our principles and aspirations. Such statements may be our expectations, but they can differ from what we actually do.

We now turn to examine the extent to which characteristics of evidence-based practice identified above are present in the practice of I–O psychology. Note that no systematic study exists on the actual practice of I–O psychologists, that is, the consultants, in-house psychologists, and others working in industry who hold I–O psychology degrees at either the master's or doctoral level. So, our starting point is this question: If I–O psychology were strongly evidence based in the ways defined and described above, what would we expect to observe?


Table 1. Examples of Information From Each of the Four Sources Relevant to Intervening in the Presenting Problem of High Absence

Practitioner expertise and judgment
• Have I seen this before?
• What happened?
• What are my beliefs about the causes of absence?
• What's worked in the past and why?
• What are my hunches?
• What do I think are the causes and possible solutions?
• Is this situation occurring elsewhere?
• How relevant and applicable is my experience?

Critical evaluation of best available research evidence
• What are the average rates of absence in my sector and location—is the absence rate here ‘‘high’’?
• What does systematically reviewed research evidence suggest to be the major causes of absence?
• How relevant and applicable is that evidence here?
• What does research evidence from systematic reviews suggest as effective interventions?
• How well might the interventions the research describes work here?

Evidence from the local context
• What actually is the absence rate?
• What type of absences and where?
• What are local explanations for absence?
• Internal research (e.g., surveys)
• What absence management is currently in place, and is it working?
• What do managers think is going on?
• What are the possible costs and benefits of interventions? Is it worth intervening here?
• What is happening, or what is going to happen, in the organization or outside it that might be affecting absence?

Perspectives of those who may be affected by the intervention decision
• How do employees feel about the proposed interventions?
• Do they see downsides or unintended negative consequences?
• How do managers feel about these interventions?
• How practical or workable do those responsible for implementing the interventions feel they are?
• What alternative explanations and proposed solutions do others have?


Table 2 describes some of the characteristics we would expect in an area of practice if it were also evidence based per the above definitions. It describes our current judgment of the extent these characteristics are observable in I–O psychology. Others will have different experiences and observations, which we hope will be expressed in the responses to this focal article. Our judgments are based on observations from a number of sources, including our recent participation in the first evidence-based practice workshop at SIOP (attended by dozens of practitioners to obtain continuing education credit), our interpretations of results from the recent surveys of the I–O psychology profession and practitioner–research connections discussed briefly above (Silzer et al., 2008), and also decades of teaching in I–O psychology programs. Our assessments of I–O psychology practice are discussed below, each accompanied by an explanation.

Table 2. Some Key Characteristics of Evidence-Based Practice and an Estimate of the Extent to Which They Are Found in I–O Psychology

1. The term ‘‘evidence based’’ is used or well known. Given evidence-based practice exists in many other fields and the term is widely used, it is unlikely that any field adopting an evidence-based approach would not know of and use the term, even though definitions might vary across practice fields.
Found in I–O psychology? To a very limited extent?

2. The latest research findings and research summaries are accessible. It is difficult to do evidence-based practice without access to evidence in research journals and research summaries. A fundamental principle of evidence-based practice is that systematic reviews of the best available external evidence need to be available.
Found in I–O psychology? To a very limited extent?

3. Articles reporting primary research and traditional literature reviews are accessible to practitioners. For many reasons systematic reviews may not be available or produced in an area of interest. When this is the case, primary research and traditional reviews published in journals can be used.
Found in I–O psychology? To some extent?

4. ‘‘Cutting-edge’’ practices, panaceas, and fashionable new ideas are treated with healthy skepticism. One characteristic of areas of practice that are not particularly evidence based is the constant search for and promotion of the newest solution or cure. This characteristic is found in popular books on topics such as dieting, self-help, and indeed management.
Found in I–O psychology? To some extent?

5. There is a demand for evidence-based practice from clients and customers. If the clients or customers of a particular practice do not want or even reject evidence-based practice, then it is simply impossible to practice in an evidence-based way.
Found in I–O psychology? To some extent?

6. Practice decisions are integrative and draw on the four sources of information and evidence described above. As emphasized, evidence-based decision making is more than looking at external published evidence. Rather, it is about combining evaluated external evidence, the perspectives of those who may be affected by the decision, information from the local context, and practitioner experience and expertise.
Found in I–O psychology? To some extent? (Difficult to judge.)

7. Initial training and continuing professional development (CPD) adopt evidence-based approaches. From an evidence-based perspective, initial training and CPD focus on developing evidence-based approaches to practice. This involves training practitioners to identify and critically evaluate external and contextual evidence relevant to a specific practice problem to help inform a practice decision. This approach creates an active need to obtain and use relevant evidence, as it is being used directly to help solve a problem.
Found in I–O psychology? To a very limited extent?

1. The term ‘‘evidence based’’ is used or known. Although the general notion of using evidence is well established, the specific term ‘‘evidence based’’ and what it entails is not widely used or well known. It is rare to find the term ‘‘evidence based’’ paired with I–O psychology or organizational psychology. However, the 2009 SIOP annual conference did have EBMgt as one of its themes.

2. Systematic reviews are produced and made accessible. Although we have plenty of traditional reviews and meta-analyses, there are very few systematic reviews in I–O psychology. SIOP is currently developing its Science You Can Use series, which will contain reviews that are not systematic as such but will go some of the way to summarizing research findings that I–O practitioners can use.

3. Articles reporting primary research and traditional literature reviews are accessible to practitioners. We have found that this is a difficulty for many I–O psychology practitioners unless they are still attached in some way to a university. Although abstracts are easily accessible, purchasing single articles from publishers can be costly. In addition, to build up even limited knowledge of the evidence in a particular area can require access to dozens of articles, which may be prohibitively expensive. It also appears to be the case that not all I–O psychology practitioners, depending somewhat on where they trained, are highly skilled in reading and digesting (sometimes rather indigestible) journal articles.

4. ‘‘Cutting-edge’’ practices, panaceas, and fashionable ‘‘new’’ ideas are treated with healthy skepticism. As a consequence of our training as psychologists, we should be inclined to be quite skeptical or at least are inclined to ask about evidence and empirical support. At the same time, however, we are also somewhat drawn to what might be called management fads and fashions. Some of the recent focal articles in this journal demonstrate that we are sometimes inclined to pick up and run with the Next Big Thing even where evidence does not yet exist or is questionable. In addition, next time you attend the SIOP annual conference, check out the products and services for sale in the exhibit hall. In our experience, many of these feel more like fads than evidence-based interventions (and often no supporting evidence is presented). One reason we pay attention to fads is that clients often demand the latest thing, and if we don't deliver it, then someone else will. However, as I–O psychologists, we may at the same time try to rework the fad into something closer to our own practice and to established and evidence-based techniques.2

5. There is a demand for evidence-based practice from clients and customers. Many of our major clients are those working at mid to senior levels in HRM. HRM is not a field that has embraced the notion of evidence-based practice in any significant way. Although, of course, managers do not actively seek to purchase ineffective I–O psychology products or services, they are under pressure to meet certain shorter term goals. They may therefore come to depend on a general impression that particular products or techniques ‘‘work’’ rather than whether they will work in their specific context given the problem they are trying to solve. HRM departments may also lean on benchmarking or mimicry by adopting the same I–O psychology practices already used by their more successful competitors. The authors have also heard many times from I–O psychologists who say they wish to practice in an evidence-based way but that clients have often already decided what they want (e.g., assessment centers, training programs, and employee attitude surveys) and are asking the I–O psychologist as a technical specialist to deliver it. This situation suggests that our clients are not demanding an evidence-based approach in the sense discussed here, although they are of course interested in adopting practices they believe to be effective.

6. Practice decisions are integrative and draw on the four sources of information and evidence described above. This is the most difficult characteristic of evidence-based practice to assess without access to numerous observations of what practicing I–O psychologists actually do. In addition, it may be that, as discussed above, we are not the decision makers; instead, we play an advisory role, providing information and interpretations to the decision makers. However, taking each source of information in turn, I–O psychologists do, as discussed above, draw to some extent on evaluated external evidence when making decisions, even though few systematic reviews are available and access to primary research can be difficult. The perspectives of those who may be affected by the decision are likely to be taken into account at least to some extent because of the APA's Ethics Code (2002; assuming we are complying with the code)3 and also because of the broader awareness we should have as psychologists about our responsibilities to organizational and individual clients. We are likely to look for and use evidence from the local context and attempt some initial assessment of the problem or situation and seek out organizational data that might help with problem diagnosis. The use of practitioner experience and judgment seems highly likely, particularly if the problem or technique is one we have encountered frequently before.

7. Initial training and continuing professional development (CPD) in evidence-based approaches. Training in I–O psychology master's degrees and doctorates tends to be of the fairly traditional academic variety, where students are expected in a relatively passive way to learn and retain information. We note that in both the United States and Britain, the majority of practicing I–O psychologists have terminal master's degrees. The typical master's program in the field has no required supervised internships and does not train its students to conduct literature searches on practice topics, let alone systematic reviews. The forms of CPD used by SIOP and other I–O psychology professional bodies also tend to be fairly traditional. In fact, some of these forms of CPD reverse the approach adopted by evidence-based practice in that they present participants with recent research findings, evidence, or new techniques and then discuss how they might be used in practice, rather than starting with practice problems and then searching for and evaluating the evidence that may help solve the problem.

2. We thank an anonymous reviewer for these observations.

3. Ethical I–O psychology and evidence-based I–O psychology are similar in other respects, including the focus on being concerned and explicit about the evidence for the benefits of interventions and the evidence that interventions are not harmful.

So, what is it possible to conclude from this analysis of the extent to which I–O psychology shares some of the characteristics of evidence-based practice? First, we suggest that I–O psychology is not strongly evidence based in the sense that the term is used in other professions. But, we can say with some confidence that we are as a profession extremely well positioned to adopt, should we wish to do so, many of these characteristics. Next, I–O psychologists in many instances are not the key decision makers but, rather, sources of information and advice to managers making the decision (see below). Last, it is clear that there are many barriers to the adoption of evidence-based practice, some within and others outside our control. Having evaluated I–O psychology as falling somewhat short on evidence-based practice and supports for it, we turn to two important means for bridging I–O psychology's own research–practice gap: practice-oriented evidence and systematic reviews.


Key Strategies for Promoting Evidence-Based Practice

I–O psychologists are not one seamless community of comembers of the same discipline. Practitioners and scholars in I–O psychology are largely distinct communities of interest, knowledge, and social ties. In promoting evidence-based practice, it is advantageous in such circumstances to design ways of communicating and sharing ideas that serve the interests of both. Although evidence-based practice involves the better use and integration of evidence and information from all four sources described above, we focus here on improving the use of critically evaluated research evidence.

Between the research and practice domains of I–O psychology, we need devices for translating information and knowledge back and forth, promoting better quality communication and learning. We note that in the SIOP practitioner survey described above, a frequent practitioner request to SIOP was for clarification of standards for I–O practice and better ways of differentiating I–O psychologists from other practitioners in the marketplace. Such clarification and professional differentiation can come from creating the evidence-oriented products and associated processes proposed here. Such products can simultaneously meet the needs of I–O psychology's practitioners and scholars, adding value to both. The two products we suggest as critical to evidence-based practice in I–O psychology are practice-oriented evidence and systematic reviews.

Practice-Oriented Evidence

Most research published in I–O psychology's premier journals, including the Journal of Applied Psychology (JAP), Personnel Psychology, and the Journal of Occupational and Organizational Psychology, consists of theory-oriented investigations authored by academy-based I–O psychologists answering questions of interest to other academics. This was not always the case.

Anderson et al. (2001) noted that between 1949 and 1965, practitioners authored a full 36% of JAP articles (31% by practitioners alone). From 1990 to 2000 (the terminal year of their survey), practitioners authored only 4% of JAP articles (1% by practitioners alone). The other I–O journals manifest a similar decline. Many factors may account for this decline in practitioner research publication in our field's journals, including a shift in journal focus to more academic topics of rigor, greater corporate concern for protecting intellectual property, as well as ramped-up global competition and its accompanying time and resource crunch, which in turn limited practitioner opportunity for research, let alone publication. One conclusion is apparent: I–O psychology's academics and practitioners are not mingling with each other in our journals. Regardless of the underlying reasons, there is one serious consequence of the decline in practitioner-conducted research publication: Academics are the ones asking the research questions and interpreting the answers.

If the gap between research and practice in I–O psychology is at least partly attributable to lower participation by practitioners in research, the problem may be exacerbated by the omission in current research of the kinds of complex problems in complex settings faced by practitioners in their work. An antidote to the latter has been suggested in the form of engaged scholarship (Van de Ven, 2007) and scholar–practitioner collaboration (Lawler & Mohrman, in press), where academics and practitioners work together to formulate research questions, investigate them, and draw conclusions.

If indeed I–O psychology research is now academic centric, the gap between research and practice entails problems in knowledge transfer. It takes two to tango, and difficulties transferring knowledge can be because of communication issues on both sides (what's readable, accessible, understandable, and interesting?). Barriers to transfer may also reside in the nature of the knowledge itself. As a case in point, meta-analysis and literature reviews in I–O psychology have led to the formulation of general knowledge principles based on scientific evidence (The Handbook of Principles of Organizational Behavior: Indispensable Knowledge for Evidence-Based Management; Locke, 2009). The Handbook has over 30 chapters summarizing several hundred evidence-based principles, the vast majority of them derived from I–O psychology research and all intended to guide the practice of current and future managers and other practitioners. This book provides many examples and is written in plain English. It represents, perhaps at its best, knowledge (principles) based on what might be called ‘‘researcher-oriented evidence,’’ that is, evidence from rigorous tests of theory, replicated and found to be relatively generalizable over time and context.

So, what's wrong with that? In many ways, these research-based principles achieve our ideal as scientists: to understand the world well and disseminate this knowledge. The downside is this: It is not always obvious to practitioners, certainly not to the least experienced or less reflective, how exactly to apply the principles identified in such research. Let's take the classic example of the finding that general mental ability (GMA) is positively related to individual performance (Schmidt, 2009). A well-established finding over several decades, one practice implication is that if an organization seeks to improve the quality of its workforce and the performance of individual members, it should select on intelligence. For a host of reasons, this principle is not widely promoted by I–O psychology practitioners and is soundly rejected by even experienced HR managers (Rynes et al., 2002). Practitioners think about the educated idiot who is book smart, tests fantastically well, and can't match socks. Managers fear being labeled elitist and perhaps wonder whether they would have gotten their own jobs if their company used IQ tests. Or, they use a test of GMA and find that it doesn't improve performance levels over the biographical information they already rely on, like grade point average and college reputation. Another reason for caution in using tests of GMA is concern over adverse impact, even though some methods have relatively low adverse impact (Schmidt, 2009).

This debacle may be tied to what Van de Ven and Johnson (2006) refer to as a knowledge production problem, arguing that the research–practice gap is best bridged by producing practice-oriented scientific knowledge via research approaches engaging academics and practitioners collaboratively. This approach calls for combining the knowledge of practitioners and the knowledge of academics at all stages of the research process. This knowledge production problem has been encountered, and addressed, in other evidence-based practice fields.

Medical researchers and clinicians distinguish between two kinds of research evidence: disease oriented and patient oriented. ‘‘Disease-oriented evidence’’ (DOE) focuses on the causes of disease, providing evidence of pathology and ways of altering the condition (e.g., drugs and surgery). In I–O psychology, our version of DOEs might take the form of phenomena-oriented evidence, such as the origins of job stress or job satisfaction. The second kind of research evidence in medicine is ‘‘patient-oriented evidence that matters’’ (POEMs), evidence gathered from studies of real patients about issues such as mortality, morbidity, and quality of life. An example of a POEM is a study comparing one antihypertensive drug to another to determine which reduced mortality from hypertension over a 10- to 20-year period. In I–O psychology, our version of POEMs might take the form of studies contrasting two interventions to reduce job stress that assess the types of individuals, work settings, and job strains they best ameliorate.

There is a growing trend in the practice of medicine to value patient-oriented data more highly than DOE. However, because practice-oriented evidence does not yet exist to inform every clinical need, practitioners must use other ways of making decisions too, including relying on their knowledge of basic physiological processes. We expect much the same alternative forms of decision making in an evidence-informed practice of I–O psychology.

An example of practice-oriented evidence in I–O psychology is the series of reports by Robert Pritchard and his team developing and investigating the use of the Productivity Measurement and Enhancement System (ProMES) for job analysis and strategic planning (e.g., Pritchard, Harrell, DiazGranados, & Guzman, 2008). Differences identified between studies in how thoroughly ProMES was applied suggested that several implementation-related factors, including the extent users adhered to the ProMES process and the quality of the feedback provided, affected the overall productivity gains associated with ProMES. Pritchard and colleagues then address the circumstances under which there are differences in implementation or compliance with standard practices and the sensitivity of outcomes to these variations (see Pritchard et al., 2008). In the context of widespread variation in organizational and management practice (from performance appraisals to quality programs) as well as in individual implementers, evidence regarding the effects of such variability on outcomes has considerable practical and scholarly value.

Consider what practice-oriented evidence might mean for some of the stickier problems in I–O psychology. We know that GMA is predictive of individual performance, but organizations are reluctant to accept or act on this knowledge (Rynes et al., 2002), often preferring intuitive selection methods (Highhouse, 2008). Practice-oriented evidence could be developed from investigations into conditions making the use of GMA as a selection criterion more readily useful. Looking into conditions of use could identify, for example, whether the backlash is to written IQ-type tests, where structured interview questions that tapped GMA would be more acceptable, or how the concerns over adverse impact could be better balanced with the predictive validity of GMA. Practice-oriented research could look into whether the performance criteria in use affected the value and usefulness practitioners attach to indicators of GMA. Perhaps settings where innovation and creativity are important performance metrics place more value on mental ability than those where more routine performance is involved. Academically oriented evidence indicates that GMA is likely to predict performance in either case. Practitioners may only find GMA useful where mental ability is an organizationally valued contributor to performance.

Systematic Reviews

Systematic reviews are fundamental to evidence-based practice. As such, much is written about them in other fields (Gough et al., 2003; Soukup, 2000; Tranfield, Denyer, & Smart, 2003). Experience indicates that it is impossible to fully engage in evidence-based practice without them: Such reviews provide one of the four sources of information required when making evidence-based decisions. Applied to the example in Table 1, a systematic review's purpose would be to search for, collect, critically appraise, and pull together research evidence relevant to the causes of and possible solutions to the problem of high absence. I–O psychologists are avid producers and consumers of literature reviews. However, systematic reviews are very different from nearly all those published in I–O psychology.

Systematic reviews are literature reviews that adhere closely to a set of scientific methods that explicitly aim to limit systematic error (bias), mainly by attempting to identify, appraise and synthesize all relevant studies (of whatever design) in order to answer a particular question (or set of questions). In carrying out this task they set out their methods in advance, and in detail, as one would for any piece of social research. In this respect . . . they are quite unlike most ‘‘traditional’’ narrative reviews (Petticrew & Roberts, 2006, pp. 9–10).

Systematic reviews are, essentially, a way of analyzing existing research using explicit and replicable methods, allowing conclusions to be drawn about what is known and what is not known in relation to the review question (and within the limitations of the method). Similar, but not identical, to the traditional meta-analysis, systematic reviews are studies of studies. Meta-analyses are a type of systematic review but one that uses only quantitative data and statistical synthesis and focuses on a question repeatedly addressed in the same way by researchers rather than a practice question or problem. As with meta-analyses, systematic reviews are conducted out of recognition that single empirical studies, although useful and sometimes informative, should not be emphasized because their biases and limitations cannot be fully accounted for. Looking at all relevant studies, systematically gathered, constitutes more reliable evidence.

Thus, in the context of evidence-based practice, neither traditional literature reviews nor meta-analyses are especially useful. First, traditional literature reviews are open to many forms of bias. For example, reviewers do not make clear how they have selected the studies they have included, do not critically appraise them in an explicit or systematic way, and do not usually pull them together or synthesize findings across studies. Second, traditional reviews do not usually focus on a specific research or practice question or problem. It is this latter point that also differentiates a systematic review from the quantitative meta-analysis used traditionally in I–O psychology. The process of making evidence-based decisions requires more focused and tailored reviews of evidence where both a practice question or problem and the conditions to which the evidence might be applied are taken into account. Returning to the case of high absence in Table 1, a systematic review would attempt to find evidence about the relative effectiveness of different forms of absence management interventions given the current and desired absence rates and taking into account as much as possible aspects of the context such as the type of employees involved, the sector, and the existing work arrangements and absence policies. In the context of evidence-based practice, systematic reviews can take forms akin to phenomena-oriented evidence or practice-oriented evidence, depending on the review questions and their intended use as well as the kinds of research available. In evidence-based fields, an important result of systematic reviews is guidelines for practice.

Systematic reviews can be useful for purely academic research purposes too. We may, for example, be interested in collecting all available evidence about absence-management interventions to provide a more general overview about what is known, and not known, about the efficacy of such interventions. In this respect, a systematic review might differ from the traditional meta-analysis in that it would also consider qualitative information and descriptions, not being limited to effect sizes alone, depending on the review question (Rousseau, Manning, & Denyer, 2008). All systematic reviews follow a process of clearly specified stages. One example of this process (adapted from Petticrew & Roberts, 2006) contains seven stages; a small code sketch of the screening and appraisal stages follows the list.

1. Identify and clearly define the question the review will address. The question needs to be sufficiently specific so that it is clear, in principle, what types of data would be relevant. Aspects of the context (e.g., population, sector, and organizational type), the interventions (what qualifies as a relevant intervention?), the mechanisms linking intervention to outcomes (e.g., processes, mediators, and moderators), and the outcomes themselves (which data are the outcomes of interest?) are also clearly specified.


2. Determine the types of studies and data that will answer the question. The criteria used to decide which studies will be selected or excluded from the review are identified. For example, a review that addresses a causal question might exclude studies with cross-sectional designs. The aim is to increase the chances that all relevant studies are included and all those that are irrelevant are excluded.

3. Search the literature to locate relevant studies. A clear search strategy is used specifying, for example, key words, the databases to be searched, and how and whether unpublished data will be found and included.

4. Sift through all the retrieved studies to identify those that meet the inclusion criteria (and need to be examined further) and those that do not and should be excluded. Each study is examined, usually by two review team members, and checked against the inclusion and exclusion criteria. Where agreement is not possible, a third reviewer assesses the study. At this stage, it is not uncommon to find that only a fraction of the initial pool of studies can be included.

5. Critically appraise the studies by assessing study quality in relation to the review question. The quality of each study is critically appraised or evaluated in relation to the review question, as research quality can only be judged in relation to the question. Even where studies meet the inclusion criteria, they are still likely to vary in terms of quality. Assessing quality allows the review conclusions to clearly state how many of the included studies were, for example, of very high, medium, and low quality.

6. Synthesize the findings from the studies. A key part of any systematic review is the pulling together of findings from across the studies to represent what is known and not known. Synthesis may be quantitative, qualitative, or both. Review findings are often described in terms of the overall number of studies found, the quality profile of this group of studies, and the number of studies that obtained particular results.

7. Disseminate the review findings. A full report of a systematic review can be quite large. In addition, shorter, journal-article-length versions or even shorter summaries may be produced. Dissemination is often planned at the outset of a systematic review given the aim is often to inform practice.
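As noted above the list, here is a minimal, hypothetical Python sketch of stages 4 and 5: two reviewers independently check each retrieved study against the inclusion criteria, a third reviewer settles disagreements, and each included study then receives a quality appraisal. The study records, the criteria, and the one-line rating rule are invented for illustration; a real review protocol specifies far richer versions of each.

# Hypothetical dual-screening sketch for stages 4-5 of a systematic review.
def reviewer_a(study):
    # Applies the stage-2 criteria strictly: the question is causal, so
    # cross-sectional designs are out, and an absence outcome is required.
    return study["design"] != "cross-sectional" and "absence" in study["outcomes"]

def reviewer_b(study):
    # A second, independent reading of the same criteria; it differs only
    # in also accepting "sickness absence" as the outcome label.
    return (study["design"] != "cross-sectional"
            and ("absence" in study["outcomes"]
                 or "sickness absence" in study["outcomes"]))

def reviewer_c(study):
    # Tie-breaker, consulted only when reviewers A and B disagree.
    return "absence" in study["outcomes"]

def screen(study):
    a, b = reviewer_a(study), reviewer_b(study)
    return a if a == b else reviewer_c(study)

retrieved = [
    {"id": 1, "design": "RCT", "outcomes": ["absence", "job strain"]},
    {"id": 2, "design": "cross-sectional", "outcomes": ["absence"]},
    {"id": 3, "design": "CBA", "outcomes": ["sickness absence"]},
]

included = [s for s in retrieved if screen(s)]

# Stage 5: appraise quality in relation to the review question; the rule
# below is a placeholder for a proper appraisal checklist.
appraisal = {s["id"]: ("high" if s["design"] == "RCT" else "medium")
             for s in included}
print(appraisal)  # {1: 'high'} -- study 3 went to reviewer C and was excluded

Even this toy version shows why only a fraction of the initial pool typically survives: study 2 fails the design criterion outright, and study 3 is excluded only after adjudication.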

Although systematic review is rare in I–O psychology at present, I–O psychologists are certainly familiar with its general approach. The underlying logic of systematic review resembles that of many psychological research methods, including meta-analysis. What sets meta-analysis apart is its exclusive use of quantitative data and of statistical rather than other forms of synthesis; a systematic review often draws on different types of data and different forms of synthesis. In addition, meta-analyses can only address questions that have been examined many times in more or less the same way by researchers (e.g., the correlation between job satisfaction and performance), rather than questions that arise from practice problems, where an array of data types may be required to formulate an answer.
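To illustrate the kind of purely statistical synthesis that distinguishes meta-analysis, here is a minimal fixed-effect, inverse-variance pooling of effect sizes; the effect sizes and variances are invented for illustration.

```python
# Minimal fixed-effect meta-analysis: inverse-variance weighting.
def pool_fixed_effect(effects, variances):
    """Return the pooled effect and its variance, weighting each
    study by the inverse of its sampling variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Invented example: three correlations between job satisfaction
# and performance, with their sampling variances.
effects = [0.25, 0.30, 0.18]
variances = [0.004, 0.009, 0.006]
est, var = pool_fixed_effect(effects, variances)
print(f"pooled r = {est:.3f}, SE = {var ** 0.5:.3f}")
```

Nothing in a systematic review requires this kind of pooling; where the included studies are heterogeneous, as in the Joyce et al. (2010) example below, a narrative synthesis is used instead.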

An example of a structured abstract from a systematic review particularly relevant to I–O psychology is presented in Table 3. It demonstrates much of the underlying methodology and shows the explicit and systematic nature of the method. Joyce, Pabayo, Critchley, and Bambra's (2010) review clearly states the review objectives, search strategy, criteria for including studies, method of analysis (in this case, a narrative synthesis, as the studies were dissimilar), the number of studies found, and the findings of each.


Table 3. Example of a Systematic Review Abstract

Flexible working conditions and their effects on employee health and well-being (Joyce et al., 2010)

Background: Flexible working conditions are increasingly popular in developed countries, but the effects on employee health and well-being are largely unknown

Objectives: To evaluate the effects (benefits and harms) of flexible working interventions on the physical, mental, and general health and well-being of employees and their families

Search strategy: Our searches (July 2009) covered 12 databases including the Cochrane Public Health Group Specialized Register, CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, Social Science Citation Index, ASSIA, IBSS, Sociological Abstracts, and ABI/Inform. We also searched relevant Web sites, hand searched key journals, searched bibliographies, and contacted study authors and key experts

Selection criteria: Randomized controlled trials, interrupted time series, and controlled before and after studies (CBA), which examined the effects of flexible working interventions on employee health and well-being. We excluded studies assessing outcomes for less than 6 months and extracted outcomes relating to physical, mental, and general health/ill-health measured using a validated instrument. We also extracted secondary outcomes (including sickness absence, health service usage, behavioral changes, accidents, work–life balance, quality of life, health and well-being of children, family members, and coworkers) if reported alongside at least one primary outcome

Data collection and analysis: Two experienced review authors conducted data extraction and quality appraisal. We undertook a narrative synthesis as there was substantial heterogeneity between studies

Main results: Ten studies fulfilled the inclusion criteria. Six CBA studies reported on interventions relating to temporal flexibility: self-scheduling of shift work (n = 4), flexitime (n = 1), and overtime (n = 1). The remaining four CBA studies evaluated a form of contractual flexibility: partial/gradual retirement (n = 2), involuntary part-time work (n = 1), and fixed-term contract (n = 1). The studies retrieved had a number of methodological limitations, including short follow-up periods, risk of selection bias, and reliance on largely self-reported outcome data. Four CBA studies on self-scheduling of shifts and one CBA study on gradual/partial retirement reported statistically significant improvements in either primary outcomes (including systolic blood pressure and heart rate; tiredness; mental health, sleep duration, sleep quality, and alertness; and self-rated health status) or secondary health outcomes (coworker social support and sense of community), and no ill-health effects were reported. Flexitime was shown not to have significant effects on self-reported physiological and psychological health outcomes. Similarly, when comparing individuals working overtime with those who did not, the odds of ill-health effects were not significantly higher in the intervention group at follow-up. The effects of contractual flexibility on self-reported health (with the exception of gradual/partial retirement, which when controlled by employees improved health outcomes) were either equivocal or negative. No studies differentiated results by socioeconomic status, although one study did compare findings by gender but found no differential effect on self-reported health outcomes

Authors' conclusions: The findings of this review tentatively suggest that flexible working interventions that increase worker control and choice (such as self-scheduling or gradual/partial retirement) are likely to have a positive effect on health outcomes. In contrast, interventions that were motivated or dictated by organizational interests, such as fixed-term contracts and involuntary part-time employment, found equivocal or negative health effects. Given the partial and methodologically limited evidence base, these findings should be interpreted with caution. Moreover, well-designed intervention studies are needed to delineate the impact of flexible working conditions on health, well-being, and health inequalities


A few aspects of systematic reviews and their differences from traditional I–O psychology reviews are worth noting. First, the exact methods used by the reviewers to find, select, and exclude studies are open to examination and scrutiny. Readers are thus able to reach their own judgments about the efficacy and appropriateness of the method and also how much confidence they can place in the findings.

Second, whether we are I–O practitioners, researchers, or both, as we have little access to systematic reviews, we tend to develop an implicit though somewhat inaccurate sense of the quantity and quality of research about a particular technique or intervention. We tend to assume that the research is "out there" somewhere and have vague recollections of particular studies and their findings. As mentioned, one common finding from systematic reviews is that they reveal far fewer studies directly relevant to a given question than commonly assumed. In the case of the flexible working review by Joyce et al. (2010), the large number of possibly relevant studies initially identified was substantially reduced as the review team applied their inclusion criteria:

• 11,954 articles initially identified as possibly relevant from key word searches on databases, citation searches, hand searching, and contacting experts;

• 11,740 articles then excluded on the basis of title and/or abstract;

• 214 full-text articles obtained and then screened using inclusion criteria; and

• 10 studies included in the final review.
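The arithmetic of such a screening funnel is easy to report explicitly; the throwaway sketch below simply tallies the counts listed above.

```python
# Screening funnel counts from the Joyce et al. (2010) review, as above.
initial, full_text, included = 11954, 214, 10
print(f"excluded on title/abstract: {initial - full_text}")  # 11,740
print(f"full texts screened: {full_text} "
      f"({100 * full_text / initial:.1f}% of initial pool)")
print(f"included in final review: {included} "
      f"({100 * included / initial:.2f}% of initial pool)")
```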

As we will discuss further below, one hallmark of being an evidence-based practitioner or researcher is having a quite well-developed, specific, and explicit awareness of what evidence is "out there," the quality of that evidence, and what, when taken as a whole, it might mean.

Third, unlike traditional reviews, the processes through which reviewers reach their conclusions are explicit and transparent. Whether the synthesis is statistical or narrative (as in Joyce et al., 2010), the basis on which the data were summarized or pulled together is clear to the reader, as the findings of each study are extracted and presented.

Fourth, and again unlike traditional reviews, systematic reviews also allow us to identify the quantity and quality of studies and many aspects of study heterogeneity (method, population, design, findings, etc.). This information is essential if we want to draw conclusions about what is known and not known in relation to the review question, the basis of these claims, and the confidence with which they can be made.

In addition to full-blown systematic reviews of the sort described in Table 3, there are other quicker and more focused ways of doing systematic reviews that share many of their qualities. Although they may not be as thorough, and therefore not as informative, as full systematic reviews, they can still provide important information and practice insights, especially where time is of the essence. These go by various names, including rapid evidence assessment and best evidence topics, and can be completed more quickly by restricting the parameters in various ways, such as using fewer search terms, using a smaller date range, and searching in fewer databases or across fewer journals.
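One way to picture how a rapid evidence assessment restricts a full review's search protocol is as an explicit, reportable set of parameters. The sketch below is hypothetical; the field names and example values are ours and do not represent any standard or actual protocol.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SearchStrategy:
    """Hypothetical parameters of a review's search protocol (stage 3)."""
    keywords: tuple
    databases: tuple
    year_from: int
    include_unpublished: bool

full_review = SearchStrategy(
    keywords=("flexible working", "flexitime", "shift work", "part-time"),
    databases=("MEDLINE", "EMBASE", "PsycINFO", "ABI/Inform"),
    year_from=1980,
    include_unpublished=True,
)

# A rapid evidence assessment keeps the same explicit structure but
# narrows the parameters: fewer terms, fewer databases, shorter range.
rapid_assessment = replace(
    full_review,
    keywords=full_review.keywords[:2],
    databases=("MEDLINE", "PsycINFO"),
    year_from=2000,
    include_unpublished=False,
)
print(rapid_assessment)
```

Because the restrictions are recorded rather than left implicit, readers of a rapid assessment can still judge what the search did and did not cover.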

We believe that systematic reviews can and will become an essential part of the I–O psychologist's toolkit. Not only do they allow practitioners to provide more evidence-based advice and to share with their clients and customers the basis of that advice, they also allow researchers to identify important gaps in knowledge in a more structured way. Just as importantly, systematic reviews can highlight where conducting more research on the same question using a particular method is unlikely to yield any new information.

Barriers to Evidence-Based Practice in I–O Psychology

There are numerous barriers to the adoption of evidence-based practice and numerous ways in which they can be overcome. Here, we focus on just a few. First is the apparent lack of demand from our clients for evidence-based I–O psychology. It is readily apparent from the rapid adoption of some new and "cutting-edge" practices that such decisions are not made in an evidence-based way. If individual managers are mostly rewarded for achieving short-term goals as fast as possible rather than for doing what works in the longer term, why would they be interested in evidence-based practice? Perhaps the only way to overcome this barrier is by working with organizations to demonstrate that approaching problems in an evidence-based way is more likely to produce effective and sustainable solutions. It is also important to emphasize that evidence-based practice constitutes a family of approaches to making decisions; it is not intended to provide the answer to every problem but rather to improve the process and outcome of decision making.

A second barrier is the predominance of master's-level practitioners who have learned to practice I–O psychology in unsupervised ways. Because of the level of these programs, such practitioners have a limited understanding of research, a limited capacity to access new evidence, and lack the skills to conduct their own systematic reviews, let alone primary research. Collectively, we do not currently have enough of the necessary skills to undertake evidence-based practice widely, even though our background as psychologists gives us a strong foundation on which to build. Continuing professional development (CPD) that enables evidence-based practice and helping I–O practitioners to access research evidence are two possible ways to overcome this barrier. It may also help to increase the opportunities for those qualified at the master's level to go on to complete doctorates.

A third barrier stems from our desire to market I–O psychology: we may be reluctant to acknowledge the limitations of our knowledge where evidence is mixed or where there are grounds for uncertainty. It can be difficult to strike a balance between promoting a profession and acknowledging its limitations, as clients may find such acknowledgments unnerving or see them as a sign of incompetence. I–O psychologists are especially challenged because other organizational consultants outside the discipline can be extremely bullish about their products and services despite their own absence of evidence.

As skepticism is a key ingredient of evidence-based practice, its limited popularity in I–O psychology is itself something of a barrier. One way this can be overcome is to remind ourselves that skepticism is fundamental to scientific inquiry and to any area of practice based on science. It also has the potential to clearly differentiate us from other organizational practitioners and consultants, particularly if the discipline supports practice through systematic reviews, evidence-based guidelines, and practice-oriented research that contribute to more effective practice.

A fourth barrier concerns the politics of evidence in organizations. Power and politics are fundamental to decision making and also surround the identification and use of evidence in organizations. Senior leaders may feel they have the right, or even the responsibility, to make decisions based on their experience and judgment that seem to fly in the face of the available evidence. The need to be explicit in evidence-based decision making means that those with vested interests in a particular course of action may find it more difficult to hide such interests. In general, an evidence-based approach may prove particularly challenging in organizations with highly political cultures. Although it is impossible to remove politics from evidence and decision making, evidence-based approaches do at least offer the possibility of making clearer distinctions among politics, values, interests, and other forms of information such as research evidence. The more decision makers are held accountable for their decisions, the more likely they are to welcome such distinctions.

Prospects for Evidence-Based I–O Psychology

This article concludes that I–O psychology cannot yet claim to be fully evidence based. Our analysis suggests that we have some way to go before we can reasonably claim to be an evidence-based area of research and practice in the sense in which that term is used in other fields. At the same time, I–O psychologists are uniquely qualified to undertake the key activities required by evidence-based practice, such as evaluating external research evidence and collecting and using internal evidence for organizational diagnosis. As trainers, consultants, and in-house management and staff, I–O psychologists are positioned to enable an array of evidence-based approaches. These include, in various combinations as warranted, using evidence-based processes for making decisions and giving feedback, as well as incorporating evidence-based content, that is, research findings, into decisions and practices (Briner et al., 2009).

Beyond making our own practice more evidence based, we also have an important part to play in helping organizations and those who manage them become more evidence based. We envisage this could happen in several ways. First, we could provide systematic review services (or their briefer and quicker versions) to managers and organizations. These might be seen as particularly useful where the organization is trying to decide whether to invest a large sum of money in a particular intervention or program. For a fraction of the proposed program budget, we would be able to provide an objective and reasonably comprehensive review of what the published and (where available) unpublished research tells us about how effective the intervention is known to be and whether it might work here.

Second, we have the skills to help organizations either make sense of their existing internal data or collect new data that might diagnose problems or show what is working and why. We believe it is not unusual, even where organizations hold large quantities of data about, for example, employee behavior, performance, and attitudes, for those data to remain largely unanalyzed or subject only to the simplest and least informative analyses, such as cross tabs and zero-order correlations. I–O psychologists should be able to work with organizations to help them get the most out of the data they already have and, where appropriate, suggest additional data collection to develop a fuller picture of what is happening in the organization.
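As a small illustration of what a more informative analysis can mean, the sketch below contrasts a zero-order correlation with a regression that adjusts for a shared driver. The data and variable names are invented for illustration; the only dependency is numpy.

```python
import numpy as np

# Invented data: satisfaction and performance both partly driven by tenure.
rng = np.random.default_rng(0)
tenure = rng.normal(5, 2, 200)                        # years in post
satisfaction = 0.5 * tenure + rng.normal(0, 1, 200)
performance = 0.5 * tenure + rng.normal(0, 1, 200)

# The zero-order correlation looks substantial...
r = np.corrcoef(satisfaction, performance)[0, 1]
print(f"zero-order r(satisfaction, performance) = {r:.2f}")

# ...but a regression adjusting for tenure shows that satisfaction adds
# little once the shared driver is taken into account.
X = np.column_stack([np.ones(200), satisfaction, tenure])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(f"satisfaction coefficient, adjusting for tenure = {beta[1]:.2f}")
```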

Our roles as organizational knowledge brokers, as well as our own evidence-based practice, would be facilitated by the use of systematic reviews on practice questions. Systematic reviews therefore need to be part of the professional training of I–O psychologists. As a professional activity accessible to both master's-prepared and doctoral-level I–O psychologists, systematic reviews and their briefer versions are important ways of helping would-be practitioners learn to gather and use evidence while at the same time developing their skills in formulating questions, structuring reviews, and synthesizing findings.

Third, I–O psychologists have the background to work as facilitators and coaches for managers and management teams seeking to engage in evidence-based management, as well as to help organizations collect the external and internal evidence they may need (described above). We can also help collect information about the perspectives of those who may be affected by a decision and help make explicit managers' own expertise and experience and how these are shaping a decision. In effect, we can support organizations in making decisions in a conscientious, explicit, and judicious way; in short, we can help organizations to practice EBMgt.

Evidence-based practice is not about perfection. Rather, it is about the ongoing pursuit of mindful improvement in our uptake and use of both scholarly research and practice-related facts and assessments. We hope this article and the responses that follow help move I–O psychology in an evidence-based direction.

References

Anderson, N., Herriot, P., & Hodgkinson, G. P. (2001). The practitioner-researcher divide in industrial, work and organizational (IWO) psychology: Where are we now and where do we go from here? Journal of Occupational and Organizational Psychology, 74, 391–411.

Briner, R. B. (1998). What is an evidence-based approach to practice and why do we need one in occupational psychology? Proceedings of the 1998 British Psychological Society Occupational Psychology Conference, pp. 39–44.

Briner, R. B., Denyer, D., & Rousseau, D. M. (2009). Evidence-based management: Concept clean-up time? Academy of Management Perspectives, 23, 19–32.

Cherniss, C. (2010). Emotional intelligence: Toward clarification of a concept. Industrial and Organizational Psychology: Perspectives on Science and Practice, 3, 110–126.

Friedson, E. (1986). Professional powers: A study of the institutionalization of formal knowledge. Chicago: University of Chicago Press.

Goleman, D. (1998). Working with emotional intelligence. New York: Bantam Books.

Gough, D. A., Kiwan, D., Sutcliffe, S., Simpson, D., & Houghton, N. (2003). A systematic map and synthesis review of the effectiveness of personal development planning for improved student learning. Retrieved from http://eppi.ioe.ac.uk/EPPIWebContent/reel/review_groups/EPPI/LTSN/LTSN_June03.pdf

Hambrick, D. C. (1994). What if the academy actually mattered? Academy of Management Review, 19, 11–16.

Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employment selection. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 333–342.

Hollenbeck, G. (2009). Executive selection—What's right . . . and what's wrong. Industrial and Organizational Psychology: Perspectives on Science and Practice, 2, 130–143.

Joyce, K., Pabayo, R., Critchley, J. A., & Bambra, C. (2010). Flexible working conditions and their effects on employee health and wellbeing. Cochrane Database of Systematic Reviews, Art. No. CD008009. doi: 10.1002/14651858.CD008009.pub2.

Judge, T. A., Heller, D., & Mount, M. K. (2002). Five-factor model of personality and job satisfaction: A meta-analysis. Journal of Applied Psychology, 87, 530–541.

Judge, T. A., & Ilies, R. (2002). Relationship of personality to performance motivation: A meta-analytic review. Journal of Applied Psychology, 87, 797–807.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284.

Landy, F. J. (2008). Stereotypes, bias, and personnel decisions: Strange and stranger. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 379–392.

Lawler, E. E., & Mohrman, S. A. (in press). Doing research that is useful for theory and practice–25 years later (2nd ed.). San Francisco: Berrett-Koehler.

Learmonth, M., & Harding, N. (2006). Evidence-based management: The very idea. Public Administration, 84, 245–266.

Locke, E. A. (2005). Why emotional intelligence is an invalid concept. Journal of Organizational Behavior, 26, 425–431.

Locke, E. A. (2009). Handbook of principles of organizational behavior: Indispensable knowledge for evidence-based management (2nd ed.). New York: Wiley.

Macey, W. H., & Schneider, B. (2008). The meaning of employee engagement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 3–30.

Mayer, J. D., & Salovey, P. (1997). What is emotional intelligence? In P. Salovey & D. Sluyter (Eds.), Emotional development and emotional intelligence: Educational implications (pp. 3–34). New York: Basic Books.

McCall, M. W. (2010). Recasting leadership development. Industrial and Organizational Psychology: Perspectives on Science and Practice, 3, 3–19.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599–616.

McKenna, D. D., & Davis, S. L. (2009). Hidden in plain sight: The active ingredients of executive coaching. Industrial and Organizational Psychology: Perspectives on Science and Practice, 2, 244–260.

Merrill, D. C., Reiser, B. J., Merrill, S. K., & Landes, S. (1995). Tutoring: Guided learning by doing. Cognition and Instruction, 13, 315–372.

Morrell, K. (2008). The narrative of "evidence based" management: A polemic. Journal of Management Studies, 45, 613–635.

Murphy, K. R. (2008). Explaining the weak relationship between job performance and ratings of job performance. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 148–160.

Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Oxford: Blackwell.

Pfeffer, J., & Sutton, R. I. (2006). Evidence-based management. Harvard Business Review, 84, 62–74.

Pritchard, R. D., Harrell, M. M., DiazGranados, D., & Guzman, M. J. (2008). The productivity measurement and enhancement system: A meta-analysis. Journal of Applied Psychology, 93, 540–567.

Reay, T., Berta, W., & Kohn, M. (2009). What's the evidence on evidence-based management? Academy of Management Perspectives, 23, 5–18.

Rousseau, D. M. (2006). Is there such a thing as evidence-based management? Academy of Management Review, 31, 256–269.

Rousseau, D. M., Manning, J., & Denyer, D. (2008). Evidence in management and organizational science: Assembling the field's full weight of scientific knowledge through reflective reviews. Annals of the Academy of Management, 2, 475–515.

Rousseau, D. M., & McCarthy, S. (2007). Evidence-based management: Educating managers from an evidence-based perspective. Academy of Management Learning and Education, 6, 94–101.


Rynes, S. L., Brown, K. G., & Colbert, A. E. (2002). Seven common misconceptions about human resource practices: Research findings versus practitioner beliefs. Academy of Management Executive, 16, 92–103.

Sackett, D. L., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (1997). Evidence-based medicine: How to practice and teach EBM. London: Churchill Livingstone.

Schmidt, F. L. (2009). Select on intelligence. In E. A. Locke (Ed.), Handbook of principles of organizational behavior: Indispensable knowledge for evidence-based management (2nd ed., pp. 3–18). New York: Wiley.

Silzer, R., Cober, R., Erickson, A., & Robinson, G. (2008). Practitioner needs survey: Final survey report. SIOP Professional Practice Committee. Retrieved from www.siop.org/Practitioner%20Needs%20Survey.pdf

Smith, R. (1991). Where is the wisdom . . .? The poverty of medical evidence. British Medical Journal, 303, 798–799.

Soukup, S. M. (2000). The center for advanced nursing practice evidence-based practice model. Nursing Clinics of North America, 35, 301–309.

Tranfield, D., Denyer, D., & Smart, P. (2003). Toward a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14, 207–222.

VanLehn, K., Siler, S., Murray, C., Yamauchi, T., & Bagget, W. B. (2003). Why do only some events cause learning during human tutoring? Cognition and Instruction, 21, 209–249.

Van de Ven, A. H. (2007). Engaged scholarship: A guide for organizational and social research. New York: Oxford University Press.

Van de Ven, A. H., & Johnson, P. E. (2006). Knowledge for theory and practice. Academy of Management Review, 31, 802–821.