
The University of Notre Dame Australia

ResearchOnline@ND

Theses

2013

Knowledge translation intervention to improve evidence-based practice behaviour of allied health professionals: A cluster randomised controlled trial and 2-year follow-up study

Lanie Campbell University of Notre Dame Australia

Follow this and additional works at: https://researchonline.nd.edu.au/theses

Part of the Medicine and Health Sciences Commons

COMMONWEALTH OF AUSTRALIA Copyright Regulations 1969

WARNING

The material in this communication may be subject to copyright under the Act. Any further copying or communication of this material by you may be the subject of copyright protection under the Act.

Do not remove this notice.

Publication Details
Campbell, L. (2013). Knowledge translation intervention to improve evidence-based practice behaviour of allied health professionals: A cluster randomised controlled trial and 2-year follow-up study (Doctor of Philosophy (PhD)). University of Notre Dame Australia. https://researchonline.nd.edu.au/theses/89

This dissertation/thesis is brought to you by ResearchOnline@ND. It has been accepted for inclusion in Theses by an authorized administrator of ResearchOnline@ND. For more information, please contact [email protected].


Knowledge Translation Intervention to

Improve Evidence-Based Practice Behaviour

of Allied Health Professionals

A cluster randomised controlled trial

and 2-year follow-up study

Lanie Campbell

Bachelor of Applied Science Speech Pathology (1991)

A thesis submitted for the degree of

Doctor of Philosophy at

University of Notre Dame Australia

November 2013

Declaration

This thesis is composed of my original work, and contains no material

previously published or written by another person except where due

reference has been made in the text.

The content of my thesis is the result of work I have carried out since the

commencement of my research higher degree candidature and does not

include a substantial part of work that has been submitted to qualify for the

award of any other degree or diploma in any university or other tertiary

institution. No part of my thesis has been submitted to qualify for another

award.

____________________

Signature

_______________

Date

Acknowledgements

Thank you to:

My principal supervisor Associate Professor Iona Novak for providing me

with the opportunity to pursue such an interesting and rewarding doctoral

degree. Your strategic thinking, work ethic and dedication to cerebral palsy

research are admirable. You have diligently mentored me, inspired me and

remained engaged with my PhD journey. I am very grateful.

My co-supervisor Sarah McIntyre for your thoroughness, careful

consideration and encouragement to persevere. Your example of rigour set

the bar high, and you have supported my path towards that goal. I am

deeply appreciative of all that you have contributed.

Cerebral Palsy Alliance management team and consultants who endorsed

and supported this project from its inception. I would especially like to thank

the clinical consultants who provided valuable expertise and support.

Staff at Research Institute Cerebral Palsy Alliance for your input to the

project design, development of the online Evidence Alert System and for

your ongoing encouragement.

Sarah Lord for invaluable assistance with statistical analysis of the cluster

randomised controlled trial.

Allied health professionals at Cerebral Palsy Alliance who participated in

the research studies. Without your participation, none of this would have

been possible.

My family and friends who provided practical and emotional support,

especially my parents and sister Cheryl.

Miles, Finn, Henry and Dudley – you have kept me grounded and helped

me remember what is important. Thanks for your belief in me and for

coming on this adventure with me.

Abstract

Background: It is difficult to foster the use of research findings among allied

health professionals (AHPs). Tailored, multifaceted knowledge translation

(KT) strategies are now recommended but are resource intensive to

implement. Employers need effective KT solutions, but little is known about:

(a) the impact and viability of multifaceted KT strategies using an online KT

tool, (b) their immediate and longer-term effectiveness with AHPs, and (c)

their effect on evidence-based practice (EBP) decision-making behaviour. The

aim of this project was to measure the effectiveness of a multifaceted KT

intervention including a customised KT tool, to change EBP behaviour,

knowledge and attitudes of AHPs over an 8-week period and at 2-years.

Methods: The first study was an evaluator-blinded, cluster randomised

controlled trial (RCT) conducted in a community-based cerebral palsy

service. AHPs (135 physiotherapists, occupational therapists, speech

pathologists, psychologists and social workers) from 4 regions were cluster

randomised (n = 4) to either the KT intervention group (n = 73) or the

control group (n = 62), using computer-generated random numbers,

concealed in opaque envelopes, by an independent officer. The KT

intervention included a 3-day skills training workshop and workplace

support to redress barriers (paid EBP time, mentoring, system changes and

access to an online research synthesis tool). Primary RCT outcome (self- and

peer-rated EBP behaviour) was measured using the Goal Attainment Scale

(individual level). Secondary RCT outcomes (knowledge and attitudes) were

measured using exams and the Evidence Based Practice Attitude Scale.

The second study was a follow-up study 2-years after the completion of the

RCT using an online survey. The survey included: (a) questions based on

Goal Attainment Scale, and (b) questions relating to the utilisation and

usefulness of an evidence alert system.


Results: RCT – the intervention group's primary outcome scores improved relative to the control group; however, when clustering was taken into account, the findings were non-significant: self-rated EBP behaviour [effect size 4.97; 95% confidence interval (CI) -10.47, 20.41; p = 0.52]; peer-rated EBP behaviour (effect size 5.86; 95% CI -17.77, 29.50; p = 0.62). Statistically significant improvements in EBP knowledge were detected (effect size 2.97; 95% CI 1.97, 3.97; p < 0.0001). Change in EBP attitudes was not statistically significant. Two-year follow-up study – AHPs' KT strategy GAS T-scores improved (GAS T-score change from RCT to 2 years = 29.58; 95% CI 12.66, 46.52; p = 0.02).

Conclusions: The two studies suggest meaningful gains in EBP behaviour, with consistent GAS peer-ratings and self-ratings in the RCT, along with an overall increase in GAS T-scores in the 2-year follow-up study. This cannot be stated with certainty, however, because of methodological issues arising from pragmatic constraints. The large variability in behaviour observed between clusters suggests that barrier assessments and subsequent KT interventions may need to target subgroups within an organisation.

Table of Contents

Declaration ...................................................................................................... ii

Acknowledgements ........................................................................................ iii

Abstract .......................................................................................................... iv

Table of Contents ........................................................................................... vi

List of Tables................................................................................................... x

List of Figures ................................................................................................ xi

List of Appendices ......................................................................................... xii

Abbreviations ............................................................................................... xiii

CHAPTER 1 INTRODUCTION ....................................................................... 1

Overview ......................................................................................................... 1

Background ..................................................................................................... 1

Statement of the problem ................................................................................ 2

Research aims and methods .......................................................................... 3

Research questions .......................................................................................... 4

Thesis outline .................................................................................................. 4

CHAPTER 2 LITERATURE REVIEW ............................................................. 6

Overview ......................................................................................................... 6

Evidence-based practice ................................................................................. 6

Definition of evidence-based medicine .............................................................. 6

History of evidence-based medicine .................................................................. 7

Why use EBM? ................................................................................................. 8

EBP in the allied health professions .................................................................. 8

The research–practice gap ............................................................................. 9

Knowledge translation ................................................................................... 10

Theories and models underpinning knowledge translation ........................... 10

Conceptual KT models .................................................................................... 11

KT theories ..................................................................................................... 16

Barriers to EBP implementation .................................................................... 23

Support/resource barriers ............................................................................... 24

Cognitive/behavioural barriers ......................................................................... 25

Attitudinal/rational-emotive barriers ................................................................. 25

Clinical practice guideline/evidence barriers .................................................... 25

Client barriers ................................................................................................. 26

Health care professional/physician barriers ..................................................... 27

System/process barriers ................................................................................. 27

Strategies aiming to change health professionals’ EBP behaviour ............... 28

Face-to-face educational meetings ................................................................. 29

Retrieval of electronic health information......................................................... 30

Printed educational materials .......................................................................... 30


Outreach visits (mentoring) ............................................................................. 31

Opinion leaders ............................................................................................... 31

Audit and feedback ......................................................................................... 32

Journal clubs ................................................................................................... 32

Financial Incentives ........................................................................................ 33

Organisational change – strategic planning, management training.................. 33

Tailored interventions ...................................................................................... 34

Multifaceted KT strategies ............................................................................... 34

Knowledge translation in the allied health professions .................................. 37

Measuring the outcomes of multifaceted KT strategies ................................ 40

Domains of evaluation ..................................................................................... 40

Gaps in the literature ..................................................................................... 44

1. No RCTs with an evidence-based information resource as a key element of a KT strategy ............................................................................. 45

2. No studies involving AHPs have attempted to measure a wide range of EBP behaviour ............................................................................................ 45

3. No RCTs sampling a range of professional groups ..................................... 46

4. No RCTs with AHPs that have used a strong KT theoretical framework ...... 46

Rationale for the studies ............................................................................... 46

Rationale for the randomised controlled trial ................................................... 46

Rationale for the follow-up study ..................................................................... 47

Synopsis ....................................................................................................... 47

CHAPTER 3 RANDOMISED CONTROLLED TRIAL METHODS ................ 48

Overview ....................................................................................................... 48

Aim and hypotheses ..................................................................................... 48

Trial design ................................................................................................... 49

Setting ........................................................................................................... 50

Ethics ............................................................................................................ 51

Eligibility ........................................................................................................ 51

Blinding ......................................................................................................... 52

Randomisation .............................................................................................. 52

Intervention ................................................................................................... 53

Assessment of barriers and facilitators ............................................................ 53

Development of multifaceted intervention ........................................................ 53

KT intervention group ...................................................................................... 54

Control group .................................................................................................. 60

Primary outcomes ......................................................................................... 62

Goal attainment scaling ................................................................................... 62

Secondary study outcomes ........................................................................... 63

Open-ended exam questions .......................................................................... 63

Evidence based practice attitude scale ........................................................... 63

Use of the cerebral palsy evidence alert system ............................................. 64


Procedures and data collection ..................................................................... 67

Data cleaning ................................................................................................ 70

Sample size and power ................................................................................. 70

Statistical methods ........................................................................................ 71

Synopsis ....................................................................................................... 72

CHAPTER 4 RANDOMISED CONTROLLED TRIAL RESULTS ................. 73

Overview ....................................................................................................... 73

Baseline characteristics ................................................................................ 73

Professional background ................................................................................. 76

Grade level ..................................................................................................... 76

Years at Cerebral Palsy Alliance and experience in the disability field ............ 77

English as first language ................................................................................. 78

Previous continuing education in EBP ............................................................. 79

Participant flow................................................................................................ 80

Missing Data ................................................................................................. 80

Clustering effect ............................................................................................ 81

Effectiveness of KT strategy ......................................................................... 81

Primary outcome – EBP practice behaviours .................................................. 81

Secondary outcomes – knowledge, attitudes and EAS ................................... 82

Additional analyses ......................................................................................... 82

Synopsis ....................................................................................................... 83

CHAPTER 5 2-YEAR FOLLOW-UP STUDY METHODS ............................. 87

Overview ....................................................................................................... 87

Background ................................................................................................... 87

Aims and hypothesis ..................................................................................... 88

Trial design ................................................................................................... 89

Survey Design ............................................................................................... 89

Pilot testing ..................................................................................................... 90

Eligibility ........................................................................................................ 90

Ethics ............................................................................................................ 91

Procedures.................................................................................................... 91

Statistical analysis ......................................................................................... 91

Calculating change in GAS T-scores ............................................................... 93

Missing data .................................................................................................... 93

Synopsis ....................................................................................................... 93

CHAPTER 6 2-YEAR FOLLOW-UP STUDY RESULTS .............................. 94

Overview ....................................................................................................... 94

Survey results – all survey participants ......................................................... 94

Participant flow & baseline characteristics ....................................................... 94

Results relating to Evidence Alert System .................................................... 96


RCT follow-up study ...................................................................................... 97

Participant flow .............................................................................................. 97

Long-term effectiveness of KT strategy ......................................................... 101

Evidence-based practice behaviours of survey participants according to cluster ....................................................................................................... 102

Synopsis ..................................................................................................... 102

CHAPTER 7 DISCUSSION ........................................................................ 103

Overview ..................................................................................................... 103

Key findings ................................................................................................ 103

Evidence-based practice behaviour .............................................................. 104

Evidence-based practice knowledge ............................................................. 107

Evidence-based practice attitudes................................................................. 108

Use of the evidence alert system .................................................................. 109

Strength and limitations .............................................................................. 109

Strengths ...................................................................................................... 109

Limitations ..................................................................................................... 110

Recommendations ...................................................................................... 112

Future research ............................................................................................. 112

Recommendations for organisations ............................................................. 112

Conclusion .................................................................................................. 114

REFERENCES ........................................................................................... 116

List of Tables

Table 1: Underpinning theories of KT ...................................................................... 22

Table 2: Systematic review evidence for the effectiveness of KT strategies ............ 35

Table 3: Evidence table – KT strategies in the allied health professions ................. 38

Table 4: Theoretical basis and strategies to address modifiable barriers ................ 56

Table 5: KT strategy with corresponding KTA phases ............................................. 61

Table 6: Hypotheses matched to domain and measurement .................................. 65

Table 7: RCT study procedures .............................................................................. 68

Table 8: Baseline characteristics of participants ..................................................... 74

Table 9: Primary and secondary outcomes - RCT ................................................... 84

Table 10: Mean outcome scores for each cluster .................................................... 85

Table 11: Data Analysed – follow-up study ............................................................. 92

Table 12: Survey participants’ baseline characteristics (n = 65) .............................. 95

Table 13: Survey respondents’ professional backgrounds ...................................... 96

Table 14: Evidence Alert System survey question results (n = 65) .......................... 97

Table 15: Participant characteristics (RCT participants) – follow-up study ............ 100

Table 16: GAS T-score 8-week to 2-year comparison (n = 19) ............................. 101

Table 17: GAS T-score comparison based on attendance at original EBP training .................................................................................................. 102

Table 18: GAS T-score according to original cluster ............................................. 102

Table 19: Key findings at a glance ........................................................................ 103

List of Figures

Figure 1: The evidence-based medicine triad ........................................................... 7

Figure 2: Knowledge-to-Action (KTA) process ........................................................ 12

Figure 3: The 5S pyramid model of evidence-based information resources ............ 14

Figure 4: RCT trial design ....................................................................................... 50

Figure 5: EAS infogram........................................................................................... 55

Figure 6: Study structure and measures ................................................................. 67

Figure 7: Percentage of participants in various professional backgrounds in intervention and control groups ............................................................... 76

Figure 8: Percentage of participants for AHP grade levels in intervention and control groups ......................................................................................... 77

Figure 9: Percentage of participants according to number of years employed at Cerebral Palsy Alliance in intervention and control groups ...................... 77

Figure 10: Percentage of participants according to number of years working in disability in intervention and control groups ............................................. 78

Figure 11: Percentage of participants whose first language was English in intervention and control groups ............................................................... 79

Figure 12: Percentage of participants who had previous continuing education in EBP in intervention and control groups ................................................... 79

Figure 13: Participant flow diagram for RCT – from randomisation to primary analysis ................................................................................................... 80

Figure 14: Participant flow throughout entire study ................................................. 98

List of Appendices

APPENDIX 1 Oxford Centre for Evidence-based Medicine ........................ 131

APPENDIX 2 National Ethics Application approval letters .......................... 139

APPENDIX 3 Evidence Alert System .......................................................... 141

APPENDIX 4 Information Sheet for Staff Participants ................................ 153

APPENDIX 5 Self-evaluation Form ............................................................. 157

APPENDIX 6 Peer Evaluation Form ........................................................... 166

APPENDIX 7 Marking Criteria for Exam ..................................................... 172

APPENDIX 8 2-year Follow-up Survey ....................................................... 174

APPENDIX 9 Journal paper accepted for publication by Implementation Science ................................................................................ 188

APPENDIX 10 Conference presentations and posters ............................... 228

Abbreviations

AHP allied health professional

CI confidence interval

EAS evidence alert system

EBM evidence-based medicine

EBP evidence-based practice

EBPAS evidence-based practice attitude scale

GAS goal attainment scaling

ICC intra-cluster correlation coefficient

KT knowledge translation

KTA knowledge-to-action

NEAF national ethics application form

OTseeker occupational therapy systematic evaluation of evidence

PEDro physiotherapy evidence database

RCT randomised controlled trial

sd standard deviation

SpeechBITE speech pathology database for best interventions and

treatment efficacy

CHAPTER 1 INTRODUCTION

Overview

The focus of this doctoral research programme was to measure whether the

evidence-based practice (EBP) behaviours of allied health professionals

(AHPs) working with people who have cerebral palsy in a community-based

organisation could be changed using knowledge translation (KT) techniques.

This chapter introduces the topic by providing:

1) Background information about EBP and KT

2) Background information about cerebral palsy

3) Background information about AHPs’ role in cerebral palsy treatment

4) Statement of the problem and rationale for the studies

5) Research aims and questions

6) Overview of the thesis contents.

Background

Evidence-based practice involves using the best available research evidence

to inform clinical decisions. Although there is strong support for EBP, there

is a significant gap between what research evidence suggests and what

health professionals do in most areas of healthcare.1 The reason this gap

exists is complex as there are many factors that may hinder or facilitate

evidence from becoming a part of everyday practice.2 There is a growing

body of research that seeks to understand and measure the best strategies to

change health professionals’ behaviour, and therefore narrow the research-

practice gap known as knowledge translation.3 The ultimate purpose of KT is

to increase the use of evidence-based interventions to optimise clinical

outcomes. KT strategies include face-to-face workshops,4 mentoring,5 clinical

guidelines or a combination of strategies known as multifaceted KT


strategies. Systematic review literature suggests that most KT strategies lead

to small–moderate changes in EBP behaviour. A KT strategy that is based on

a strong theoretical model and designed to overcome context specific barriers

is most likely to induce behaviour change.6 There are unique challenges in

the field of cerebral palsy that need careful consideration prior to designing a

KT strategy.

AHPs play a key role in assessing and treating people with cerebral palsy.

The AHPs discussed throughout this thesis are physiotherapists,

occupational therapists, speech pathologists, psychologists and social

workers. Although AHPs endorse EBP, lack of time,7 lack of searching and research appraisal skills,8,9 and lack of access to databases are barriers to new

knowledge being translated in a timely and efficient way.10

Statement of the problem

Survey data suggest that the research–practice gap exists in the field of

cerebral palsy11,12 despite quality research being available. In addition to the

barriers mentioned above, AHPs working with people with cerebral palsy

face specific EBP challenges including complex clinical decision-making due

to the complicated nature of cerebral palsy, and the rapid expansion of the

cerebral palsy evidence base in the last two decades, making it hard for

clinicians to keep up to date.10 For example, a MEDLINE search for cerebral

palsy studies during 2012 retrieved 887 articles, compared to 407 studies in

2002, and 218 studies in 1992.

The most common strategy chosen to influence AHPs' EBP behaviour to date has been teaching searching and critical appraisal skills. This technique, however, may not be feasible longer term given the ever-increasing volume of published literature.13 Additionally, research evaluating the effectiveness of teaching critical appraisal skills suggests it does not lead to an improvement in EBP behaviour.14,15 Leaders in the knowledge translation field therefore

recommend that future KT strategies should pursue the development of


evidence-based information resources (such as research summaries) that are

embedded into health professionals' workflow.13 The idea here is that evidence embedded in the workflow will prompt adoption because it is easier and less time-consuming to use than strategies that necessitate an interruption to the workflow for skilled and time-consuming searching.

Despite this, no studies with AHPs have investigated the effectiveness of KT

strategies that have revolved around the development of evidence-based

information resources. More broadly, the KT evidence base in the allied

health professions is scant.16 There have been no RCTs measuring the

effectiveness of KT strategies that have: (1) included a wide range of AHPs,

(2) been done in the field of cerebral palsy, or (3) measured a wide range of

EBP behaviours.

Research aims and methods

The aim of this research was to measure the effectiveness of a KT strategy

(that centred around an evidence-based information resource) to change

AHPs’ EBP behaviour. The secondary aims were to measure the effect of the

KT strategy on EBP knowledge and attitudes. We conducted a cluster

randomised controlled trial (RCT) in 2009 with a follow-up study 2 years later

to test the effectiveness of the KT strategy. The KT strategy was based on a

theoretical model called the Knowledge-to-Action (KTA) process and was

developed after a comprehensive, informal barriers/facilitators assessment.

Barriers identified were: lack of time, skill and knowledge, restricted access

to databases, negative attitudes towards EBP and evidence not always being

clinically relevant (see Table 4 for details). The KT strategy therefore

included an online evidence-based information resource that summarised

cerebral palsy research, called the Evidence Alert System (EAS); a 3-day

workshop; paid protected EBP time; mentoring; and mandatory use of

outcome measures, included in client documentation. The following

research questions were formulated to address these aims.


Research questions

Over an 8-week period, does a multifaceted KT strategy:

• improve AHPs' EBP behaviour

• improve AHPs' EBP attitudes

• improve AHPs' EBP knowledge

• lead to increased use of the EAS?

And further, does a multifaceted KT strategy improve AHPs' EBP behaviour

over a 2-year period?

The RCT findings have been accepted for publication in the peer-reviewed

journal, Implementation Science, which is the leading journal on KT. A copy

of the article proofs can be found in Appendix 9.

Thesis outline

This doctoral thesis presents a cluster RCT and 2-year follow-up study

seeking to answer the above 5 research questions. It is presented in the

following order.

Chapter 1 – Introduction

Chapter 1 introduces the thesis topic by providing background information

and the rationale for the studies. This is followed by research aims, an

overview of the methods used, and an outline of the thesis.

Chapter 2 – Literature Review

Chapter 2 provides an overview of the theoretical and empirical background

of EBP and KT. The key theories that the studies were based upon are

highlighted along with an overview of KT strategies and KT research in the

allied health professions. The chapter finishes with a detailed rationale for

conducting the RCT and 2-year follow-up study.


Chapter 3 – Randomised Controlled Trial Methods

Chapter 3 describes the steps that were undertaken to address the

hypotheses and aims. The reporting of the RCT methods complies with the

CONSORT statement17 for cluster RCTs. The theoretical framework and

development of the KT strategy are described in detail, applying the

literature summarised in Chapter 2 to the specific context of the RCT.

Chapter 4 – Randomised Controlled Trial Results

Chapter 4 presents a statistical analysis of the data obtained from the RCT.

Participant flow through the study and results for the primary and

secondary outcomes are detailed.

Chapter 5 – 2-year Follow-up Study Methods

Chapter 5 begins by describing the relationship between the RCT and the

follow-up study, and the flow of participants throughout the 2-year period.

The survey methods and process undertaken to address the hypotheses and

research questions are detailed.

Chapter 6 – 2-year Follow-up Study Results

Chapter 6 presents the survey results from the follow-up study according to the

study hypotheses. Interpretation of these results is provided in Chapter 7.

Chapter 7 – Discussion

Chapter 7 provides interpretation and implications of the RCT and follow-up

study, and describes how these studies have contributed to the KT evidence

base. Strengths and limitations of each study are detailed. The chapter

finishes by providing recommendations for organisations wanting to

implement KT strategies, and future research directions.

CHAPTER 2 LITERATURE REVIEW

Overview

This chapter reviews the published literature and has six components:

1) Definition of EBP and KT

2) Summary of the theories and models underpinning EBP behaviour

change

3) Consideration of the barriers to EBP use

4) Summary of the effectiveness of KT strategies to change behaviour

5) Ways to measure EBP behaviour

6) Rationale for the research.

A number of systematic reviews related to knowledge translation were already available; as a result, this chapter provides a broad overview of the available literature rather than being a systematic review itself.

Evidence-based practice

The term ‘evidence-based practice’ is more commonly used than ‘evidence-

based medicine’ (EBM) in the allied health professions. EBP has its roots in

EBM, and the terms are often used interchangeably.18,19 This section will

therefore begin with a definition and history of EBM; however, the term EBP will be used from the section 'EBP in the allied health professions' onwards.

Definition of evidence-based medicine

Evidence-based medicine is the “conscientious, explicit and judicious use of

current best evidence in making decisions about the care of individual

clients. The practice of EBM means integrating individual clinical expertise

with the best available external clinical evidence from systematic research”.20


Figure 1 illustrates the integration of clinical expertise, client values and the

best evidence into the decision-making process for client care.

Figure 1: The evidence-based medicine triad

Source: Florida State University, College of Medicine. <http://med.fsu.edu/index.cfm?page=medicalinformatics.ebmTutorial> Retrieved 10.12.11

History of evidence-based medicine

The philosophy of EBM dates back to the 19th century; however, Gordon

Guyatt first used the term ‘evidence-based medicine’ in 1992 for the JAMA

user guides.10,21 These guides were designed to integrate research findings

into bedside clinical decision-making. Inclusion of research papers in

discussing client care was integrated into the ward round system at

McMaster teaching hospitals in the early 1990s.10 By the late 1990s

information technology had improved and Sackett and Straus22 described the

usefulness of an “evidence cart” used on ward rounds at John Radcliffe

Hospital in Oxford. It is now almost 20 years on and EBM has become

accepted as best practice with few practitioners debating the need to base

clinical decisions on the best available evidence.13,23-25 In some ways, the

focus has changed from whether research should be included in clinical

decision-making to the ways that this can best occur. At a fundamental level

EBM has changed the way in which health professionals approach clinical

questions and has changed the landscape of our health care system.


“Perhaps one of the most important contributions of EBM has been to drive

us from ‘do this do that’ medicine to the justification of ‘why to do this or

that’ ”.26

Why use EBM?

There is little doubt that EBM has become the new paradigm in health care,

as Sackett et al.20 suggested it would. Although there is widespread support

of EBM in the Australian health-care system, there are researchers and health

professionals who maintain that our health-care system should not rely on

the principles of EBM.27 The need to use EBM in our health-care setting is

however driven by compelling medical ethics. First, there is an ethical

obligation to do no harm by providing clients with treatment options that

research suggests will be most likely to work.28 Second, with increasing

demands on our health-care system, policy makers need to ensure that

funding is allocated to effective treatments that have a strong evidence base,

and that funds are not directed to those that have been proven to be

ineffective.2,29-32 Physicians, nursing staff, AHPs, managers and policy makers therefore have an ethical obligation to embrace EBM.33

EBP in the allied health professions

The term ‘evidence-based practice’ was coined to accommodate the wide

range of services that AHPs provide (apart from medical interventions).

Considering its origins in hospital-based medicine, there has been ongoing

discussion in the literature about how the allied health professions can

appropriately apply the principles of EBP to their professions.33-36 Some

authors question whether the conceptual and philosophical framework is

suitable for the allied health professions, however most AHPs are supportive

of the underlying principles.34,37-39 The way in which each professional group

interprets and applies EBP varies greatly.10,25 This is due in part to the fact

that each profession has unique EBP implementation challenges. For

example, Reilly40 noted that in speech pathology literature, there are few

RCTs — the gold standard for measuring effectiveness. This is particularly


true in sub-specialties where the client groups are often heterogeneous and

case series design is often a more realistic methodology even though it is

considered a lower level of evidence.36,39 Reilly40 argued that rather than

being a reason not to engage with EBP, it is simply a challenge to researchers

(to produce the highest quality evidence possible), clinicians (to access the

highest quality evidence and use valid outcome measures) and professional

bodies (to educate and create clinical guidelines).

The research–practice gap

The implementation of research findings into practice is often haphazard

and delayed.1,37,41,42 This problem is referred to as the research-to-practice

gap43 or the gap between “what is known” and “what is currently done”.2

For example, two areas of medicine where the research–practice gap has

been quantified are hypertension management and respiratory care. Each

year, 68,000 deaths from hypertension in the USA have been deemed

preventable.44 Furthermore, people with hypertension received only 64.7%

of the optimal care recommended by national and hospital guidelines.

Mularski et al.44 examined the medical records of 260 asthma clients and 169

clients with obstructive lung disease. Alarmingly, asthma clients received

only 53.5% of recommended care, and clients with obstructive lung disease

only 58% of recommended care when the quality of care provided was

compared to national evidence-based guidelines.

The research–practice gap is worldwide. Widespread variation exists in the

use of non-steroidal anti-inflammatory drugs in Europe, the United States

and Canada,45 despite clear, consistent guidelines regarding their best

use.46,47 The research–practice gap also exists in allied health such as speech

pathology,15,40,48 physiotherapy9,42,49 and occupational therapy.50

The need to redress the research–practice gap has given rise to a growing

body of research focusing on the processes of how to move research findings


into clinical care as quickly, accurately and sustainably as possible. This new

research field is most commonly known as knowledge translation.

Knowledge translation

As strategies to narrow the research–practice gap have evolved and changed,

so too has the terminology used to describe this field.51 In Europe the terms

implementation science and research utilisation have been used, whereas in

the United States knowledge transfer, dissemination and uptake have been

more commonly used. The term knowledge translation originated in Canada

and is now more widely used. The Canadian Institutes of Health Research

(the federal agency that funds health research) described KT as “a dynamic

and iterative process that includes the synthesis, dissemination, exchange

and ethically sound application of knowledge to improve Canadians’ health,

provide more effective health services and products, and strengthen the

health care system”.3 KT ultimately aims to improve client outcomes via

smoothing the transition of EBP into clinical practice. This process is

achieved by strategies such as continuing medical education, organisational

change and guideline implementation.46

The term knowledge translation will be used from this point forward to

describe a range of activities, including research utilisation, innovation

diffusion, knowledge transfer, research implementation, research uptake and

evidence-based decision-making.51,52 The term also suggests a dynamic, two-

way process as opposed to a top-down, one-way process.46,52

Theories and models underpinning knowledge translation

KT theories are grounded in theories of behaviour change.43 The theoretical

underpinnings of KT are important as they can help to test, modify and inform whether change is possible, and highlight the complexities of

attempting to induce change. Literature suggests that theoretical


perspectives should be carefully considered prior to developing an

educational intervention as different theoretical assumptions lead to different

intervention strategies.53 Theoretical models and approaches are often

selected from potentially biased beliefs about human behaviour and

change.54 A systematic approach to considering underlying theoretical

assumptions can reduce this bias and generate testable hypotheses.

However, authors rarely document the role of theory underpinning their KT

strategies,55 making it difficult for others to replicate successful strategies and

build evidence supporting or refuting the effectiveness of strategies aligned

with a given theory. KT theories and models draw on theories in other areas

such as public health,56 organisational change,57 business58 and mental

health.59 The body of theoretical literature regarding KT is extensive and

complex,60 however there are some helpful models that synthesise a range of

theories and have been adapted for KT in health settings.

A theoretical-informed approach offers the advantage of a generalizable

framework to: inform the development and delivery of interventions;

guide evaluation; explore moderating factors and causal mechanisms; and

facilitate a better understanding of the generalizability and replicability of

implementation interventions.16

Conceptual KT models

A number of KT models have been proposed that incorporate key theories suited to various target settings and professional groups.51,52,61-63

Knowledge-to-Action process

The Knowledge-to-Action (KTA) process model selected for the present

study provides a guideline on how to implement change.64 The KTA model51

was developed to assist research implementation and is particularly well

suited for community-based organisations such as the study site in the

present study. It provides a comprehensive and cohesive basis to underpin

the multifaceted KT strategy described in this thesis.


Graham et al.51 reviewed thirty-one planned-action theories resulting in the

development of the KTA process. As outlined in Figure 2 the KTA process

has two distinct but interacting components:

1) Knowledge creation is at the centre of the model and includes 3 phases: knowledge inquiry, knowledge synthesis and knowledge

tools/products. It involves gathering and synthesising research

information leading to tools that are to be used by health

professionals. The inverted cone shape represents the distillation of

knowledge tailored to the knowledge users. The circle of arrows

represents the ongoing process of knowledge creation.

2) The action cycle has 7 steps and revolves around activities that

may be needed for knowledge application. The phases are not linear

but rather dynamic and interact with the knowledge-creation funnel

at the centre of the model.

Figure 2: Knowledge-to-Action (KTA) process

Used with permission: Graham et al., 2006.51


5S model for seeking evidence-based information

Central to the KTA process is knowledge creation, involving inquiry,

synthesis and tools. This process involves tailoring knowledge (evidence-

based information) for a group of users and is a cornerstone of any KT

strategy.13,65 Evidence-based information may take the form of systematic

reviews, research summaries, clinical guidelines or clinical decision-making

tools.

Straus and Haynes13 described a hierarchy of evidence-based information

resources in the 5S model (Figure 3). The model is depicted by a pyramid

with 5 levels (studies, syntheses, synopses, summaries, systems) that aim to

be increasingly readable, reliable and relevant as one moves up the pyramid.

Straus and Haynes recommend a top-down approach for answering clinical

questions. According to the top-down approach, when faced with a clinical

question, an AHP would ideally be able to rely on clinical decision-making

support systems linked to client data and the process of care (Level 5). In the

absence of decision support systems, the next level of evidence-based

information resource would be sought (customised summaries), and so on.

Levels 4 and 5 could also be referred to as KT tools. Figure 3 provides

examples of evidence-based information resources available to AHPs at each

level of the pyramid.
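To make the top-down approach concrete, the short sketch below (illustrative only, not from the thesis; the resource objects and their find_answer method are hypothetical) walks a clinical question down the 5S levels, from systems to studies, and stops at the highest level that can supply an answer.

    # Illustrative sketch of the 5S top-down search (hypothetical helper names).
    FIVE_S_LEVELS = ["systems", "summaries", "synopses", "syntheses", "studies"]

    def answer_clinical_question(question, resources_by_level):
        """Check each 5S level from the top down; stop at the first level that answers."""
        for level in FIVE_S_LEVELS:
            for resource in resources_by_level.get(level, []):
                answer = resource.find_answer(question)  # hypothetical method
                if answer is not None:
                    return level, answer
        return None, None  # no resource at any level could answer the question

In the setting described in this thesis, such a search would in practice begin at Level 4 (a customised summary resource such as the EAS), because no Level 5 systems exist for AHPs or for cerebral palsy.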

The top-down approach to answering clinical questions is in stark contrast to

the bottom-up approach commonly used in EBP education of AHPs.66 An

example of the bottom-up approach is workshops that aim to teach AHPs the

stepwise process of EBP involving: (1) developing an answerable clinical

question, (2) searching for relevant information using databases and journals,

(3) appraising articles, and (4) synthesising the information gathered in

appraised articles. The reasons EBP has been taught using the bottom-up approach may partly be: (1) the lack of availability of information resources such as evidence summaries and clinical decision support systems; (2) the fact that AHPs report that they lack confidence and skill in searching and appraising research, so education has aimed to overcome this barrier; and (3) historical reasons. This approach may have been more feasible in the past,

when there were vastly smaller numbers of original studies to synthesise.

Figure 3: The 5S pyramid model of evidence-based information resources

A description of each level of the 5S pyramid and its application to AHPs

follows.

Level 1 – Studies

Level 1 encompasses all primary studies. Within primary studies, there is a

hierarchy of the levels of evidence (refer to Appendix 1 – Oxford Levels of

Evidence) relating to the evidence quality of published research. AHPs

report that they lack confidence and skill in appraising primary studies.7,9

Level 2 – Syntheses

Level 2 includes primary studies that are synthesised in the form of

systematic reviews answering a specific clinical question. AHPs prefer

systematic reviews over individual studies,67 however they still report that


systematic reviews do not always answer their clinical questions.13,68

Additionally, systematic review literature may not always be interpreted

correctly.69 A study conducted by Lai and colleagues70 found that only 30%

of health professionals were able to correctly identify both the direction of

effect and strength of recommendation from four systematic reviews.

Level 3 – Synopses

Synopses provide brief critical appraisal of studies or topic areas. In the

allied health professions the available synopses are discipline-based. These

include free sites such as PEDro (The Physiotherapy Evidence Database,

http://www.pedro.org), OTseeker (Occupational Therapy Systematic

Evaluation of Evidence, http://www.otseeker.com) and SpeechBITE (Speech

Pathology Database for Best Interventions and Treatment Efficacy,

http://www.speechbite.com). Each of these resources includes searchable

databases (according to keyword or topic area) and contains the highest level

of research evidence available. All RCTs on the sites are rated for evidence

quality (e.g. the PEDro resource uses the PEDro scale, and SpeechBITE uses a modified version called the PEDro-P). These are invaluable resources; however, it is outside their scope to provide clinically useful summaries and recommendations for specific interventions within every

diagnostic area. There are no known resources at this level (level 3, synopses)

that pertain specifically to cerebral palsy.

Level 4 – Summaries

Summaries collate the information from the lower levels (studies, syntheses

and synopses). This would normally be presented according to a clinical

problem such as upper limb spasticity. No summary-level resources are known in the allied health professions. Examples in medicine include

Dynamed (www.ebscohost.com/dynamed) and ClinicalEvidence

(http://clinicalevidence.bmj.com/ceweb/index.jsp). A key component of

our study involved the development of a level 4 evidence-based information

resource (the EAS).


Level 5 – Systems

At this level, electronic health information/clinical data would be linked to

relevant evidence and incorporate a decision-making aid. These are rare (and none exist for AHPs or for cerebral palsy), so the recommended top-down approach normally begins at Level 4.13,65,71,72

KT theories

KT is primarily concerned with changing what health professionals do, with

the ultimate aim of improving outcomes for clients and the wider

community. The KTA process51 defines a number of stages in the KT process.

Each phase in this process draws on different theoretical assumptions as the

factors and outcomes for each stage are different.73 For this reason it is

necessary to consider the many theories in more detail as each theory has

relevance for different aspects of the stages of the KT strategy in the present

study.53,73 The focus of the KT strategy may be directed towards any

combination of the following: the individual health professional, the social

context, the organisational context, or the political context.73 The theories

below are summarised according to those domains and are all relevant to the

KT strategies applied in this doctoral program of research. Table 4 describes

the way in which each theory influenced the choice of KT strategies in the

present study.

Theories related to individual professionals

Educational theories

Educational approaches include adult learning theories such as problem-based learning74 and learning styles. The underlying assumption of these

theories is that change occurs as a result of an individual striving for

competence. The emphasis is therefore less on cognitive or rational processes

and more on the motivation to learn.54 These theories are relevant for

consideration in the action cycle component of the KTA process. The

resultant interventions and educational strategies include strategies such as

small group interactive learning, problem-based learning and a bottom-up


approach. These strategies are often used in medical education74 such as

workshops and seminars. There is low-level evidence for the effectiveness of

strategies such as problem-based learning, self-directed learning and

portfolio learning,75 and the impact of educational theory remains largely untested.54,76 These theories suggest that a KT strategy needs to focus on attitudes, on the idea that motivation to change is crucial to success, and on the idea that people change as a result of real problems they have experienced.

Cognitive theories

Cognitive theories focus on human rational processes and the choices that

result. These theories consider the provision of accurate, convincing

information as a cornerstone to change.54,73 Other theories applied within epidemiological approaches describe how rational thinking may be impeded; they rest on the belief that people make choices based on context and previous experience, or to fit the individual's beliefs, needs and behaviour.53 Confirmation bias is an example: the human tendency to look for evidence that supports the hypotheses we personally favour and to consciously or unconsciously disregard evidence that contradicts them.77 Although there is limited

evidence that this group of theories is effective in isolation, it is possible that

they have contributed to the push towards high quality, accurate and

rigorous research summaries. The strategies that have evolved from these

theories include evidence-based guidelines, journals, and other research

dissemination channels.

Motivational theories

Motivation theories have been primarily used in the field of health

promotion and suggest that implementation of change needs to focus on

health professionals’ attitudes, perceptions and intentions.78 According to

these theories, EBP behaviours, such as using outcome measures, are determined by AHPs' attitudes and the perceived positive or negative


consequences from using outcome measures. Strategies resulting from

motivation theory can be incorporated into different stages of the KT

strategy. For example, convincing managers, AHPs and clinical seniors of the

importance of using outcome measures and developing a positive culture

may increase desired performance.

Theories related to social context

Communication theories

Communication theories regard effective communication as being important

to change an individual’s attitudes, beliefs and behaviour. Both the

credibility of the source of the message and the recipient are key factors in

the extent to which an individual may change. Repetition of information,

novelty, adaptability to an individual’s context, personal relevance and

perceived validity are factors dictating the relative success of a

communication interaction.79 Communication theories can be applied to

many phases of a KT strategy. Ensuring that messages are clear, presented

multiple times, are clinically relevant and from a credible source may

maximise the success of a KT strategy.

Social learning theory

Bandura developed social cognitive theory as an extension to classic

behavioural theories in the mid-1980s. Social learning theory suggests that

there is a dynamic interplay between personal behaviour and context-related

factors that reinforce and inhibit behaviour change in an ongoing way.80

Important context-related factors include modelling and reinforcement. For

example, certain behaviour may be reinforced by material rewards, or non-material rewards such as positive feedback from a clinical senior.

Encouraging senior staff to model EBP behaviour, such as checking levels of

evidence for client treatment, or overtly using an outcome measure is an

example of modelling.


Many of the strategies that have strong evidence to increase EBP use (even if

to a small extent) are related to social learning theory. Examples include

outreach visits,5 opinion leaders,81 and small group support — all of which

draw on social networks within an organisation. Strong professional

relationships are a key feature of these theories and often the strategies that

result focus on creating and strengthening networks within an organisation

or professional body.

Social network theories

Diffusion of innovations theory82,83 considers the networks between individuals, and how these affect dissemination of information and ideas.

Network characteristics that influence knowledge dissemination include the

strength of the networks between individuals, the proportion of the group

who have already adopted an innovation and the differences between

individuals within the network.84 Network theories recommend studying

local team interaction and influencing identified opinion leaders (who may

or may not be senior staff).

Professional development theories

Professional development theories are about development of specific

disciplines and professionals, and how this influences behaviour. Health

professionals have expertise in their fields, and their identities and loyalties

are often tied to their professions as opposed to their workplaces.85

Professional bodies can influence behaviour by introducing clinical

guidelines and standards, and by discipline specific training that they offer

both at undergraduate and postgraduate levels. KT strategies that are

consistent with a professional group are more likely to be successful.

Tapping into professional pride and loyalty can be an effective way of inducing behaviour change.

Leadership theories

Effective leaders, either formal or informal, can promote or block a new innovation. Leaders may be managers; however, they can also be respected


for their professional expertise, or may be respected socially within a

network. Different types of leaders are useful for bringing about different

types of change.86 For KT strategies to be successful, education and ‘buy-in’

from formal and informal leaders can be key factors.

Theories related to organisational context

Organisational theories do not focus on the individual but rather on

changing the environment to be conducive for change. Key theories relevant

to the present study are summarised below.

Marketing approaches

Marketing approaches assume that different groups have different needs,

goals and barriers to success.52 The focus is on producing an attractive

product or message that will appeal to the target group and spread it

through numerous channels (for example media, or networks). These

approaches lead to KT strategies based on needs assessment and emphasise a

number of channels for dissemination, using a stepwise approach. Elements

of the marketing approach have been incorporated into a number of recent

conceptual models of change, such as the KTA process, which assesses individual and group needs and continually reviews and adapts the intervention so that it is highly customised.

Total quality management theory

Total quality management (TQM) theory emphasises the importance of

continuous improvement in multidisciplinary processes to improve client

care.87 Substandard client care is viewed as a failure of the systems and

processes rather than the individual. Important aspects of this theory include

identifying leaders, building strong teams and influencing workplace

culture. TQM is a client centred, whole organisation model encouraging

periods of implementing change followed by periods of relative stability.

TQM encourages a long-term view of changing health professionals’

behaviours, and elements of this theory can be the backbone of a KT strategy.


Organisational learning theory

Organisational learning theory views the interaction between the organisation and the individual as a symbiotic relationship in which each learns from the other. Ortenblad88 described the process as individuals learning as agents for an organisation, with that knowledge then being stored as embedded routines in the

organisation. Organisational change theory says that in order for an

organisation to learn and change, the individuals within the context must be

willing to change. The concept of a climate for optimal learning is therefore

important in organisational theory54 and includes leadership theories.89

Theories related to political context

Theories included in this category are reimbursement theories, contracting

theories, and accreditation and licensing theories. Reimbursement theories

focus on how health care is paid for at a political level. A number of reviews

have looked at the effect of different payment methods for client care with

mixed results.90 Although an organisation rarely has direct control over these

aspects, it can still be important to consider them in the whole system when

developing an intervention for changing behaviour.

Summary of theories

Critical analyses and syntheses of KT theories43,53,91 reported that there is

little evidence to suggest the superiority of one theory over another; rather, it is the choice of KT strategies tailored to overcome the local barriers that matters. Some types of theories lend themselves towards specific contexts and

interventions. For example, cognitive theories are particularly useful to

change simple, routine behaviour in highly structured environments (for

example, hand washing).53 Organisational theories are often useful in chronic

care or community settings. To assist in planning effective KT strategies,

there are a number of conceptual models combining elements of different

theories. Table 1 summarises potential application of the different theories to


the present study context. The KTA process combines aspects of a number of

the theories summarised above. When using the KTA process, the combination of theories, and the extent to which one theory is utilised over another, depend on what the specific barriers to EBP are in the given setting. Using a theory-informed approach in response to context-specific barriers results in a highly tailored, targeted intervention.

Table 1: Underpinning theories of KT (theory: potential interventions for the present KT study)

Individual professionals
• Educational: Involve AHPs in the problem-solving process during workshops and mentoring sessions; provide mentoring to set customised personal goals.
• Cognitive: Provide accurate, easily accessible research evidence on cerebral palsy assessment and treatment.
• Motivational: Convince AHPs of the need for EBP in cerebral palsy treatment via workshop, mentoring and the online KT tool.

Social context
• Communication: Credible staff to facilitate EBP workshops and provide mentoring; cohesive, convincing EBP message based on the online KT tool.
• Social learning: Ensure that clinical seniors and managers are modelling target EBP behaviours (management training, strategic planning, system changes to support this).
• Professional development: Use professional pride to motivate EBP use within specific disciplines via workshop, mentoring, clinical seniors and specific interventions targeting professional groups on the online KT tool.
• Leadership: Management 'buy-in' and endorsement from executive to support changes throughout the organisation.

Organisational context
• Marketing: Produce an appealing product and disseminate the information regarding the product in a variety of ways (intranet, workshop, supervision, written guidelines, memos and reminders).
• Total quality management: Reorganise client documentation and work processes to support clinical decision-making; introduce a standard, organisation-wide process and monitor/adapt as necessary.
• Organisational learning: Ensure that all staff members at every level of the organisation have access to current cerebral palsy evidence and ensure exchange of information via team meetings and mentoring sessions.

Political and economic context
• Reimbursement: Provide paid, protected time for AHPs to engage in EBP activities.
• Contracting: Modify job descriptions to reflect engagement in EBP activities.


Barriers to EBP implementation

Real and perceived barriers hinder evidence being embedded into clinical

practice.73,92 A complex interplay exists between the numerous barriers and

this will affect whether or not a health professional uses research evidence in

their planning, decision-making and treatment. Literature suggests that

clinicians have a high level of awareness of EBP value93-96 and believe that

clinical decision-making should be evidence-based.93,96,97 The process of

identifying and categorising barriers is considered to be an important phase

in developing tailored, effective interventions.7,51,73,98

Seven categories of barriers to KT have been proposed in systematic review

literature.1

• Support/resource barriers
  - Time
  - Resource barriers
  - Support
  - Costs/funding issues
• Cognitive/behavioural barriers
  - Knowledge
  - Awareness
  - Skill/expertise
• Attitudinal/rational-emotive barriers
  - Efficacy/perceived competence
  - Accurate self-assessment
• Clinical practice guideline/evidence barriers
  - Clinical usefulness
  - Evidence/disagreement with content
  - Access
• Client barriers
  - Client characteristics/factors
  - Client adherence
• Health care professional/physician barriers
  - Characteristics
  - Professional boundaries
  - Gender
  - Inertia
• System/process barriers
  - Organisational
  - System
  - Workload
  - Referral process.

Literature about barriers to EBP use has research methodology limitations.1

The studies are mostly of survey design, few are based on any existing framework or model, and they mostly use closed questions. Nevertheless,

stronger methodology is emerging, reflecting the complexity of KT, and

reported barriers in the existing literature can assist in developing effective

interventions.

Support/resource barriers

Lack of time is the most commonly perceived barrier concerning the use of

EBP for occupational therapists,8,50,99 speech pathologists,7,96 physiotherapists9,24,93 and physicians.100,101 Lack of time may have multiple

dimensions, and can overlap with issues related to workplace support for

paid EBP time and extra time being required due to low skill level. Some

studies report that the issue is lack of paid, protected time for EBP,9,102 with only 8% of participants in one study having paid time for EBP activities.9 Speech

pathologists in one survey reported that they did not have enough time to

read literature and implement research findings.7 Perceived lack of time can

also be a proxy for other issues such as difficulty synthesising information

or lack of clear, quality evidence summaries.93,100,101 A study by Young and

Ward,101 using a questionnaire along with in-depth interviews with GPs, found that questionnaire responses indicated that lack of time was the major barrier to EBP use. However, when the participants were interviewed,

it became clear that lack of time was obscuring more complex barriers. The

barriers that emerged were lack of skill to quickly understand and synthesise


research studies and a lack of time to carefully consider a client’s demands

for non-evidence-based treatments. This finding may be specific to

physicians and it is difficult to say whether this study can be generalised to

other professional groups.

Cognitive/behavioural barriers

Most health professionals report inadequate skill levels to search, critically

appraise, synthesise and implement research findings as a significant barrier

to EBP implementation.7-9,50,97,100,101,103-105 This is not surprising as performing

these tasks requires a complex skill set, even for academic researchers,96 and

is borne out in research studies that have found educating health

professionals to perform these skills increases knowledge but does not carry

over to changing practice.14,15 The degree to which lack of knowledge and

skill level are barriers may be related to professional discipline,106 and varies

between studies.

Attitudinal/rational-emotive barriers

Attitudes to EBP are often considered to be a key barrier — a finding that is

supported by systematic reviews in the literature.94,104,107 The most recent

systematic review looking at individual determinants of research use in

allied health found that overcoming negative attitudes toward EBP may be

important in reducing the research–practice gap. Attitudes to EBP and

feelings of confidence appear to vary according to profession and

background.108 This may be important in a workplace whose staff have

vastly different levels of background training and are a mix of professional

groups. Different strategies for different professions and level of training

may be necessary to induce change.

Clinical practice guideline/evidence barriers

Although Internet and library access have been major barriers to EBP use in

the past, access to computers and Internet resources has increased

significantly in recent years.109 Ten years ago, Internet access rates for doctors


in developed countries were reported to be between 13–17%100,101 compared

to 60–70% in 2008.110 There seem to be considerable differences in ease of

access between rural and metropolitan areas111,112 and between different

organisations and professional groups.113 Internet access available at key

clinical decision-making points in time could be a factor in whether or not

client care is evidence-based. Mixed results from studies may reflect the

trend towards better access to the Internet in health care — some studies

reported adequate access to research93,96 and other studies reported access as

a barrier to EBP.8

Internet access does not however imply full access to journals, systematic

reviews, evidence-based guidelines or research summaries. Even when an

AHP does have adequate access, the enormous quantity of research studies

that are published114-116 means that searching and appraising research

evidence can be time consuming. Additionally, AHPs believe that research

does not always translate well into practice50,99 and that methodological

inadequacies are a barrier.7,8,40 Despite recent efforts by professional

organisations to customise research evidence with tools such as PEDro for

physiotherapists,117 there is consensus that it is still difficult to access reliable,

easy to read summaries.9,40,97,100,118

Client barriers

Clients are now far more likely to research their own health-care needs using

the Internet.119 This has changed the client-health professional relationship in

terms of EBP, since clients have access to a range of health information, not all of which is reliable.120 This may result in increased use of research in practice; however, it can also create an EBP barrier. Some studies have

reported that client demands for treatments that may not be evidence-based

are a barrier to EBP use.101 Family-centred practice is considered best practice

in disability organisations, and the interplay between family-centred practice

and EBP is complex.121 This complexity is also reflected in the EBP triad (see


Figure 1) with client preferences representing one of the three overlapping

circles.

Health care professional/physician barriers

Qualification and years of experience are known to influence EBP use.94

Seniority of qualification is positively correlated with self-reported uptake of

research findings.94,106 In other words, health professionals with a university

degree are more likely than colleagues without a degree to use research

evidence in their clinical decision-making. Conversely, more years of clinical

experience are negatively correlated with EBP use.50,96 Health professionals

who have been practising for more than 10 years report lower skill, confidence and implementation rates.107,108 McEvoy et al.108 reported

that males had a higher level of confidence towards EBP than females, and

females had more positive attitudes towards EBP than males. The other

professional boundary reported in the literature is health professionals’ belief

that searching and synthesising research findings should not be a part of

their professional role.9,101 This view is supported by Vallino-Napoli48 who

encouraged academics to publish systematic reviews on topics of high

clinical relevance. The present study sought to address this barrier by

creating customised topic summaries based on the best available research

evidence, avoiding the need for AHPs to search for research evidence.

System/process barriers

Workplace factors such as systems and organisational culture can

significantly facilitate or hinder EBP use,23,122 and are commonly reported

barriers.1 Even if quality evidence is available, systems and processes in a

workplace may halt the dissemination of research evidence and prevent it

from flowing on to benefit clients. In fact, lack of organisational, system,

referral, work or team structures or processes has been reported in 62

studies as the primary reason that guidelines and evidence are not

implemented.123 Specific barriers may include information not being


available quickly, at the right time23 or systems may not be in place to remind

and support evidence-based clinical decisions.106

The culture of an organisation and interactions between staff can either foster

EBP use or inhibit it.124 A recent systematic review found that medical

residents cited lack of support from other staff members along with a belief

that there was a low possibility for change, as major barriers to EBP use.104

More experienced staff have lower rates of EBP use50,96 and may intentionally

or unintentionally be hindering implementation of research evidence.

Strategies aiming to change health

professionals’ EBP behaviour

The following sections present findings from systematic review and meta-analysis literature on the effectiveness of key KT strategies, in the following order: face-to-face educational meetings, retrieval

of electronic health information, printed educational materials, outreach

visits, opinion leaders, audit and feedback, journal clubs, financial incentives,

organisational change, tailored interventions, and multifaceted interventions.

Table 2 provides a summary of information presented in the research

literature along with estimated effect sizes.

It is difficult to compare the relative effect of one KT strategy to another due

to research studies having different outcomes, varying degrees of

methodological quality of studies, and poorly reported interventions.16,65

That said, the effect sizes for many interventions have been calculated by

meta-analysis (see Table 2) and reveal similar absolute median effect sizes

across KT strategies.65 This could indicate that the choice of KT components

is not important but rather that any intervention is better than no

intervention. Grimshaw and colleagues,125 however, do not believe this is the case: many KT studies are cluster RCTs powered to detect an improvement of 10 to 20 per cent, so the similarity of absolute effect sizes is unsurprising; and although the absolute median effect sizes are


remarkably similar, the range is wide both within and between KT

strategies. For example, on-screen point of care computerised reminders had

a range in improvement scores of +0.8% to +18.8%. This may suggest that

some KT strategies are indeed more effective than others, and the relative

effectiveness may be related to whether or not a KT strategy is tailored to

overcome a specific barrier.65 Considering the similarity in effect sizes

between KT strategies along with an incomplete evidence base, current

research literature is unable to provide information about whether one KT

strategy is more effective than another.16,65 Personnel involved in planning

KT strategies therefore need to design the intervention in response to a

barriers assessment and use professional judgement.65
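To make the powering point above concrete, the following is a minimal sketch (in Python) of how such a trial might be sized: a conventional two-proportion sample-size calculation inflated by a cluster design effect. All numbers used (baseline proportion, target improvement, cluster size and intracluster correlation) are illustrative assumptions only, not figures from the studies reviewed here or from the present trial.

from math import ceil
from scipy.stats import norm

def clusters_per_arm(p_control, p_intervention, cluster_size, icc,
                     alpha=0.05, power=0.80):
    """Approximate clusters per arm for a two-arm cluster RCT comparing
    proportions; illustrative sketch only."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    # Per-arm sample size under individual randomisation (two proportions)
    n_individual = ((z_alpha + z_beta) ** 2 *
                    (p_control * (1 - p_control) +
                     p_intervention * (1 - p_intervention)) /
                    (p_control - p_intervention) ** 2)
    # Inflate for clustering: design effect = 1 + (m - 1) * ICC
    design_effect = 1 + (cluster_size - 1) * icc
    return ceil(n_individual * design_effect / cluster_size)

# Illustrative scenario: 40% baseline adherence, detecting a 15 percentage-point
# improvement, teams of 10 AHPs, assumed intracluster correlation of 0.05
print(clusters_per_arm(0.40, 0.55, cluster_size=10, icc=0.05))  # roughly 25 clusters per arm

Under these assumed values roughly 25 teams per arm would be needed; powering the same trial for a 5 percentage-point improvement would require several times as many clusters, which is one reason KT trials are typically powered only for improvements in the 10 to 20 per cent range.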

Details about the barriers assessment and KT strategies that were chosen in

response to the specific EBP barriers in our context are in Chapter 3.

Face-to-face educational meetings

Face-to-face educational meetings include lectures, courses and workshops

in various formats with the number of participants, intensity, frequency and

content being highly variable in nature. Educational meetings have been

heavily adopted as a strategy for improving health professionals' EBP

knowledge, awareness and skills. Systematic review evidence4 showed that

educational meetings have a small to moderate benefit in improving health professionals' EBP behaviour. The lessons learned from this review of 81 EBP implementation intervention trials were that a mixture of didactic and interactive styles was more effective than either alone, and that targeting simple behaviour led to the greatest behaviour change, with the magnitude of change lessening as the target behaviour increased in complexity. The authors concluded that although educational meetings had

an effect on behaviour (either alone or in combination with other

approaches), educational meetings alone were unlikely to change complex

EBP behaviour. Educational interventions are most likely to be effective as a

component of a multifaceted KT strategy, targeting context-specific EBP


barriers (such as lack of knowledge), although relying solely on face-to-face

education is unlikely to result in complex behaviour change.

The multifaceted KT strategy in the RCT in this thesis included a 3-day face-to-face workshop.

Retrieval of electronic health information

Electronic health information refers to using a computer with an Internet

connection to read research articles, evidence-based guidelines or other

material. Health professionals need to have access to health information to

ensure that their clinical decision-making is evidence based. A systematic

review examining whether retrieval of electronic health information had an

impact on practices or client care was inconclusive and recommended that

further research be conducted.126 Only two studies met eligibility and

“neither study found evidence that electronic retrieval of health-care

information changed professional behaviour; one study found that

knowledge was improved”.126 A RCT detected no difference between paper-

based and electronic forms but suggested that “other factors should be

considered when choosing the method of presentation of guidelines, such as

information-seeking time, ease of use during the consultation, ability to

update, production costs, and the physicians’ own preferences”.127

The present study utilised intranet-based clinical algorithms or pathways,

and a highly customised evidence-based information resource (as one part of

a multifaceted strategy) in an attempt to change AHPs’ EBP behaviour.

Printed educational materials

Educational materials refer to printed, hard copy information and may

include clinical guidelines, position papers and peer-reviewed journals.

Educational materials are one of the most frequently used passive

dissemination strategies.128,129 Systematic review evidence suggested that

printed educational materials can change health professionals’ behaviour,

with active strategies being more effective than passive strategies.130 There


are many factors that influence whether printed educational materials may

lead to a change in knowledge, attitudes or behaviour of health professionals.

These include clinical applicability of the information, the health

professional’s perceptions about the importance of the information and

readiness to adopt and apply new information.130

The present study chose to provide educational material with active support,

integrated into the health professional’s workflow.

Outreach visits (mentoring)

Educational outreach visits (also referred to as academic detailing) are

defined as a face-to-face meeting where trained people provide health

professionals with information and strategies about how they can change

their practice.5 Systematic review data suggested that outreach visits

consistently lead to small effects on prescribing patterns whereas the effect

sizes for changes in other aspects of professional practice are more variable.5

The small to moderate effect size was considered to be similar to other types

of continuing medical education on behaviour change, for example, audit

and feedback or educational outreach visits.

A form of outreach visits (referred to as mentoring in our study) was

employed as a KT strategy in the present study.

Opinion leaders

Opinion leaders are defined in systematic review literature as people who

are influential, likeable and respected amongst colleagues. Opinion leaders

may hold a senior management or clinical role, however any health

professional may be an informal opinion leader. According to systematic

review data, opinion leaders may promote EBP, although the best techniques

to utilise opinion leaders remain unclear.131 Studies included in the

systematic reviews rarely described the role of the opinion leader, and

studies varied in terms of type of intervention and outcomes measured.


Opinion leaders were chosen to facilitate the 3-day workshops that formed a

part of the KT strategy in the RCT reported in this thesis.

Audit and feedback

Audit and feedback involve providing direct feedback to health

professionals regarding their practice as compared to peers and evidence-

based guidelines. Audit and feedback can have a small to moderate effect on

behaviour.132 The change is likely to be greater when the baseline practices

are low and feedback is more intensive. It is unclear whether certain audit

and feedback techniques are more effective than others.132 Audit and

feedback are potentially useful tools in monitoring professional performance

and may be helpful in planning when efforts to change practice are

needed.132

Audit and feedback were not used as a KT strategy in the RCT in our study,

due to pragmatic constraints of data collection across a wide geographical

area.

Journal clubs

Journal clubs are defined as “a group of individuals who meet regularly to

discuss the clinical applicability of articles in current medical journals”.133

Although journal clubs are a frequently used interactive research

dissemination tool there is no firm evidence supporting or refuting their

effectiveness to change clinical decision making.134 A systematic review was

unable to pool results due to heterogeneity of interventions.134 That said,

some studies report improvements in health professionals’ reading

behaviour and increased confidence in critically appraising research;

however there is no evidence suggesting that this reading behaviour

translates into EBP behaviour change.

Journal clubs were therefore not included in our multifaceted KT strategy.


Financial Incentives

Financial incentives are “an extrinsic source of motivation and exist when an

individual can expect monetary transfer which is made conditional on acting

in a particular way”.90 In health care, financial incentives can be used to

stimulate behaviour change, thus facilitating the transfer of evidence into

practice. Systematic review data found that financial incentives may change

behaviour; however, the findings are difficult to generalise due to

methodological shortcomings. Rigorous evaluation of the effect of an

intervention including financial incentives is recommended, as the evidence

supporting or refuting its effectiveness is limited.90

In our RCT participants were provided with paid, protected time for EBP

activities. This could be considered to be an indirect form of financial

incentive.

Organisational change – strategic planning, management

training

Organisational culture refers to shared characteristics (beliefs, values,

routines, traditions) of those in the same social or organisational group.

There is increasing emphasis placed on the importance of organisational

culture to improve health-care performance. Although workplace culture

may change as a flow-on effect from other KT strategies, no rigorous

evidence exists to support interventions aimed directly at changing culture

within an organisation.135 Even if change was induced, there is no evidence

that links improvement in workplace culture to improved client

outcomes.53,136

In our study, meetings with researchers, knowledge brokers, policy makers and managers were held in the year preceding the RCT; management training, along with policy changes, formed part of the KT strategy.


Tailored interventions

Tailored interventions are defined as interventions that are developed

following investigation into current practices and factors that may be

blocking a new innovation. A recent systematic review conducted a meta-

analysis of 26 studies that tailored interventions to prospectively identified

barriers to change. The review found that tailored interventions were more

likely to improve professional practice than no intervention or dissemination

of guidelines.123 Although optimal methods for conducting barriers

assessments and designing interventions remain unclear, tailoring

interventions to overcome known barriers is increasingly considered to be an

integral first step in a KT strategy. In our study a comprehensive assessment

of barriers was done as a part of the RCT, and KT strategies were designed in

response to the identified barriers. See Chapter 3 for details.

Multifaceted KT strategies

Multifaceted interventions involve “a combination of methods including two

or more interventions”.137 There is no firm evidence that multifaceted

strategies are more or less effective than KT strategies with only one

component. Additionally, the effect size of a multifaceted intervention does not seem to increase with the number of components.138,139 It is, however, theoretically plausible that a multifaceted

KT strategy designed in response to a thorough barriers assessment would

be more effective than a single intervention.125 A systematic review (without

meta-analysis) examining the benefits of multifaceted KT strategies amongst

physiotherapists and occupational therapists concluded that active

multifaceted KT strategies may lead to improved self-reported knowledge

and EBP behaviour.6

A multifaceted KT strategy was the chosen approach in the present study as

a number of KT strategies were required to adequately address the identified

EBP barriers. Chapter 3 provides more detail regarding the barriers

assessment and selection of the components of the multifaceted KT strategy.


Table 2: Systematic review evidence for the effectiveness of KT strategies (effect sizes are median absolute improvements unless otherwise stated)

Face-to-face educational meetings (workshops, seminars, lectures, symposia)
References: Forsetlund et al. (2009)4; Flores-Mateo & Argimon (2007)140
Effect size: 6.0% (range 1.8% to 15.3%)
Number of studies: 81 RCTs
Comments: Median absolute improvement similar to other KT strategies. Greater effect sizes with mixed interactive/didactic sessions, higher attendance and interactive sessions. Impact on more complex behaviours is less certain.

Retrieval of electronic health information (including research articles and summaries)
Reference: McGowan et al. (2009)126
Effect size: Meta-analysis unable to be performed
Number of studies: 2 RCTs
Comments: No improvement in practices was detected in either study.

Printed educational materials (research articles in journals, evidence-based guidelines)
References: Farmer et al. (2008)130; Francke et al. (2008)141; Giguère et al. (2012)142
Effect size: 4.3% (range -8.0% to +9.6%) for process outcomes (e.g. ordering x-rays, prescribing); median absolute risk difference 0.13 compared to no treatment (range -0.16 to +0.36)
Number of studies: 12 RCTs and 11 non-randomised studies; 45 studies (14 RCTs and 31 time series)

Outreach visits (mentoring)
Reference: O'Brien et al. (2007)5
Effect size: Prescribing behaviour 4.8% (range 3.0% to 6.5%); other behaviour 6.0% (range 3.6% to 16.0%)
Number of studies: 17 RCTs (prescribing); 17 RCTs (other behaviour)
Comments: Effects on more complex behaviours not certain.

Journal clubs
Reference: Harris et al. (2011)134
Effect size: No meta-analysis due to heterogeneity of interventions
Number of studies: 18 studies (no RCTs)
Comments: No firm evidence supporting or refuting effectiveness of journal clubs.

Financial incentives
Reference: Flodgren et al. (2011)90
Effect size: Meta-analysis unable to be performed
Number of studies: 32 studies
Comments: Very low level evidence with serious methodological issues.

Organisational change
Reference: Parmelli et al. (2011)135
Effect size: Meta-analysis unable to be performed
Number of studies: No studies met inclusion criteria
Comments: No evidence to support or refute the effectiveness of changing organisational culture.

Tailored interventions
References: Baker et al. (2010)123; Cheater et al. (2005)143
Effect size: Meta-regression (12 RCTs); pooled odds ratio 1.52 (95% CI 1.27, 1.82; p < 0.001)
Number of studies: 26 RCTs
Comments: More likely to improve professional practice than no intervention or dissemination of guidelines.

Reminders (a) Computer-generated reminders delivered on paper
Reference: Arditi et al. (2012)144
Effect size: 7.0% (+3.9% to +16.4%)
Number of studies: 32 RCTs
Comments: Two features associated with greater effect size were providing space for a response on the form, and providing an explanation for the content or advice.

Reminders (b) On-screen, point of care computerised reminders
Reference: Shojania et al. (2009)145
Effect size: 4.2% (+0.8% to +18.8%)
Number of studies: 28 RCTs
Comments: Most studies have investigated the effect of simple reminders. Impact on more complex systems, such as decision support for clinical decision making, is less certain, with some studies showing no change.

Multifaceted interventions
Reference: Menon et al. (2009)6
Effect size: Meta-analysis not attempted
Number of studies: 12 studies (4 RCTs)
Comments: Improvements in knowledge, skill and behaviour. No change in attitudes.


Knowledge translation in the allied health

professions

The majority of KT research has occurred in the fields of medicine and

nursing.16 A recent systematic review examining the effect of KT strategies

on the allied health professions identified only five RCTs. Four of these were

in physiotherapy and one in speech pathology. No RCTs were found in the

fields of occupational therapy, social work or psychology. A description and

findings of these studies summarised from the research literature are

detailed in Table 3. A systematic review conducted by Menon and

colleagues6 suggested that multifaceted KT strategies may change EBP

behaviour. The more recent and comprehensive systematic review by Scott

and colleagues16 however concluded that no clear inferences can be made

about the effectiveness of KT strategies in the allied health professions due to

low methodological quality, reporting bias and equivocal results. The

majority of KT strategies relied solely on educational approaches (n = 23/32

included studies in the systematic review), a trend which is mirrored in

nursing146 and medicine.138 Scott et al.16 suggested that for EBP behaviour to

change, the KT intervention needs to be based on a solid theoretical

framework, to target multiple levels (AHPs, decision makers), and to have

significant resources to support the change.


Table 3: Evidence table – KT strategies in the allied health professions

Bekkering et al. (2005)147
Study design: RCT; 113 physiotherapists (500 clients) individually randomised to receive a passive KT strategy (guidelines by mail) or an active multifaceted intervention
Area: Low back pain
Intervention (EPOC): 1. Educational materials 2. Educational meetings
Specific intervention: Multifaceted KT strategy – education, discussion, role playing, feedback and reminders
Outcomes measured: Adherence to guidelines measured by patient forms recording treatments; number of treatment sessions, goals, interventions and patient education were recorded
Outcomes and comments: Moderate improvement in adherence to guidelines

Hoeijenbox et al. (2005)148
Study design: RCT; 113 physiotherapists
Area: Low back pain
Intervention (EPOC): 1. Educational materials 2. Educational meetings
Specific intervention: Multifaceted KT strategy – education, discussion, role playing, feedback and reminders
Outcomes measured: Cost of care; direct medical costs, productivity costs and quality of life were calculated
Outcomes and comments: Passive strategy more cost-effective than active strategy

Rebbeck et al. (2006)149
Study design: Cluster RCT; 27 physiotherapists
Area: Acute whiplash
Intervention (EPOC): 1. Educational meetings 2. Educational outreach visits 3. Educational materials
Specific intervention: Multifaceted KT strategy – education by opinion leaders, 1-day workshop, educational materials (guidelines & algorithms) and 2-hr follow-up visit
Outcomes measured: Adherence to guidelines (self-report and file audit); knowledge of guidelines (exams); patient outcomes (Functional Rating Index); cost of care
Outcomes and comments: Experimental group adhered to guidelines more (small–moderate effect) and increased knowledge; no difference between groups for patient outcomes or cost of care

Stevenson et al. (2006)150
Study design: Cluster RCT (2 clusters); 30 physiotherapists
Area: Low back pain
Intervention (EPOC): 1. Educational meetings 2. Local opinion leaders
Specific intervention: Educational meeting led by local opinion leader (5 hrs)
Outcomes measured: Treatments offered to clients; data collected from a discharge summary where participants self-reported the various treatments that were offered
Outcomes and comments: No significant differences between groups

Pennington et al. (2005)15
Study design: Cluster RCT (17 clusters); 34 speech pathologists
Area: Swallowing post-stroke
Intervention (EPOC): 1. Educational meetings
Specific intervention: Group A – 2.5-day workshop on critical appraisal compared to Group B – 5-day workshop on critical appraisal + change management
Outcomes measured: Adherence to clinical guidelines, and engagement in research activities via audit tool + file audit
Outcomes and comments: Group B engaged in more research-related activity, but 6 months later no discernible difference between groups with regard to clinical practice was detected. Differences between departments were clear – the advice is to use local opinion leaders and a more customised, individualised approach


Measuring the outcomes of multifaceted KT

strategies

Measuring outcomes of KT strategies is a complex, multidimensional

process.151 In a multifaceted KT strategy, the measurement tools depend

upon each target outcome. For example, measuring change in health

professional behaviour, skill and knowledge, organisational responsiveness

to change or client outcomes will each require different techniques.

Domains of evaluation

Shaneyfelt et al.152 conducted a systematic review and categorised evaluation

domains into:

1) EBP knowledge

2) EBP skills

3) EBP attitudes

4) EBP behaviours

5) Client outcomes.

These categories overlap with Kirkpatrick’s153 four levels of training

evaluation:

Level 1 – Reaction

• Satisfaction and opinions

• Often practical aspects, e.g. venue, food, basic course content

Level 2 – Learning, measuring changes in:

• Knowledge

• Skills

• Attitudes

Level 3 – Transfer

• Lasting behaviour change

• Did the change in knowledge, skills or attitudes carry over to another

setting (work)?


Level 4 – Effect

• Client outcomes

• Costs

• Organisational benefits.

Measurement of outcomes can occur at the client level, health professional

level and organisational level.154 Although Shaneyfelt et al.152 found that

there were some evaluation tools with strong psychometric properties, only

20% of the studies included in the systematic review reported on reliability

or validity of the instrument used.152 Additionally, these evaluation tools

only measured knowledge, skill or satisfaction (Kirkpatrick Levels 1 and 2)

and most others measured compliance with guidelines. Shaneyfelt et al.152

emphasise the need for future studies to use valid, reliable outcome

measurement tools, ideally measuring how EBP skills are used in actual

practice (Kirkpatrick Level 3). The present study aimed to measure change in

behaviour, knowledge and attitudes.

Behaviour

Audit tools with proven validity and reliability were used by Straus et al.155

and Lucas et al.156 to measure EBP behaviour/practices (Kirkpatrick Level 3).

These tools, however, only measured a narrow domain of context-specific

EBP practice behaviour — rating of evidence levels supporting interventions

by hospital doctors. There are no evaluation tools that comply with all of the

following points:

1) Designed to measure EBP behaviour

2) Strong psychometric properties

3) Developed for AHPs

4) Flexible enough to be customised to specific contexts

5) Measure a broad range of EBP behaviour and domains.157


For this reason a flexible, adaptable and individualised measurement tool

was selected to measure change in practice behaviour.

Goal attainment scale

The measurement we undertook in this study was aimed at Kirkpatrick

Levels 2, 3 and 4. Our primary aim was to change EBP behaviour. GAS is an

individualised outcome measurement tool that measures individual progress

towards pre-defined goals. These goals may pertain to client outcomes,

service outcomes or health professional outcomes. Its most common use now

is as an individualised tool to evaluate client outcomes, although it was

initially developed to measure change in community mental health programs

and has been used in a wide variety of areas.158-160 GAS has been used to

evaluate the outcomes of educational programs, although it has not been

tested for psychometric properties in these contexts.161,162 It is designed to

evaluate whether pre-established goals have been attained. GAS measures

change in a target behaviour using a 5-point ordinal scale describing 5

different potential outcomes. More detail about GAS is found in Chapter 3.
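Although the scoring procedure is detailed in Chapter 3, the commonly cited T-score formulation of GAS (originating with Kiresuk and Sherman) is noted here for orientation only; the symbols are generic and do not represent values from the present study:

\[
T = 50 + \frac{10 \sum_i w_i x_i}{\sqrt{(1-\rho)\sum_i w_i^{2} + \rho \left(\sum_i w_i\right)^{2}}}
\]

where x_i is the attainment rating for goal i on the 5-point scale (-2 to +2), w_i is the weight assigned to that goal, and rho is the assumed correlation between goal scores (conventionally 0.3). A T-score of 50 indicates that, on average, goals were attained exactly as expected, with higher scores reflecting better-than-expected attainment.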

Psychometric properties

GAS was chosen as the primary outcome measurement tool for the following

reasons:

1) Responsivity – GAS has established validity, reliability, and high

responsivity to change, whereas systematic review evidence indicated

that for nearly all valid and reliable EBP instruments, test responsivity

is unknown152

2) Tailoring – GAS is an individualised measure of change, and so

progress towards any target behaviour (including health professional

behaviours)163 could be validly, reliably and sensitively measured,

including tailored EBP behaviours unique to the study site, such as

notifications to the Cerebral Palsy Register

3) Comprehensive measurement – GAS is an individualised measure of

change, and so we could comprehensively measure all desired EBP


behaviours, whereas systematic review evidence indicated that other

psychometrically sound EBP instruments measure knowledge instead

of behaviour, or are limited because they only measure one discrete

aspect of EBP behaviour152,155,156,164,165

4) Lack of gold standard tool – Accurate, flawless measurement of EBP

behaviour is not yet established in the literature.166 Even though direct

observation of EBP behaviour (such as simulated patients,

video/audio recordings of practice) is perceived as methodologically

preferable to indirect (proxy) reports of EBP behaviour (such as chart

audit, patient report, self-report, or peer-report), systematic review

evidence indicated that direct measures often fail validity testing.166

This could have introduced other flaws to our clinical trial. Moreover,

collecting direct measures throughout NSW, being a state-wide

service, would have introduced prohibitive trial costs (NSW’s

landmass is 3.25 times larger than the United Kingdom, and is larger

than California and New Mexico combined), when the cost-benefit of

a potentially invalid measure is weighed up. Even though self-report

proxy measures are an imperfect measure of actual behaviour,167

leading KT agencies, such as the Canadian Institutes of Health

Research, advocate for self-report because the process of self-reflection

plays a critical role in initiating behavioural changes within

organisations.

In light of current EBP behaviour measurement limitations, GAS offered the

best way forward since it was psychometrically sound, it comprehensively

measured EBP behaviour, was practical across an entire state and could be

tailored to the study site.

Knowledge and skill

Although there are EBP evaluation tools that measure knowledge and skill,168,169 we only needed to measure knowledge. A key component in our KT

strategy was the development of an evidence-based resource that


summarised cerebral palsy research and this bypassed the need for EBP

skills. We therefore developed an exam with correct/incorrect answers that

was specific to the knowledge and skill base required for the participants.

Attitudes

Evidence-based practice attitude scale

The evidence-based practice attitude scale (EBPAS) is a tool developed by

Gregory Aarons.124 Aarons developed this tool for mental health

professionals working in community settings. It has strong validity and

reliability and has published normative data.170 Allied health professionals

(working in mental health or social services) formed part of the normative

sampling; however, only social workers (40.7%) and psychologists (32%)

were explicitly mentioned. The EBPAS has been used to measure change in

EBP attitudes in other areas such as autism.171 The EBPAS was chosen in our

RCT as a secondary outcome measurement. It is designed to measure change

in attitudes towards EBP across four main domains:

1) Requirements for the use of EBP by government, management

2) Appeal (item examples: makes sense, intuitively appealing, colleagues

like it)

3) Openness to change (item examples: would follow guidelines,

research use is OK, like trying new things)

4) Divergence of EBP with usual practice (item examples: research not

useful, clinical expertise more important than research).

Gaps in the literature

Although there is a growing body of research studying the effectiveness of

KT strategies, there are still a number of knowledge gaps in the evidence

base and these will now be described.137


1. No RCTs with an evidence-based information resource

as a key element of a KT strategy

Research has indicated that synthesising research in an evidence-based

information resource (such as the EAS) should result in increased access.172

Gülmezoglu et al. conducted a cluster RCT with doctors, midwives and

students working in obstetrics to measure the impact of a multifaceted

intervention including an evidence-based information resource. Participants’

use of the evidence-based information resource increased; however, the intervention did not affect the 10 target obstetric practices. There have been no RCTs to date centred on an evidence-based information resource as the key component of a KT strategy. This research program aims to fill this gap in the literature by ensuring that the KT strategy is the result of careful design according to the KTA process, with 'knowledge creation' as an essential component. The result is an RCT that tests the effectiveness of a KT strategy centred around a highly customised information resource.

2. No studies involving AHPs have attempted to measure

a wide range of EBP behaviour

The RCT and 2-year follow-up study aimed to measure a range of EBP

behaviour considered to represent the activities of an evidence-based

practitioner. Previous studies have either used self-developed

measures147,149,150 or have only measured a narrow domain of EBP

behaviour.168,169 Previous studies have targeted simple behaviour by either:

1) Measuring one specific intervention area, e.g. whiplash149 or low back

pain.147,150

2) Measuring more interventions, but only measuring adherence to

guidelines, e.g. obstetrics,172 speech pathology.15

This research program used a measurement tool with strong psychometric

properties, and applied this tool to the study context with the aim of

measuring the broader, more complex behaviours that lead to EBP.


3. No RCTs sampling a range of professional groups

The majority of KT research has involved physicians, with AHPs forming a

much smaller portion.137 Multifaceted KT strategies have been tested in RCTs with speech pathologists15 and physiotherapists;147,150 however, there are no RCTs sampling occupational therapists,6 social workers or psychologists.

Both studies conducted as a part of this doctoral programme sampled speech

pathologists, physiotherapists, occupational therapists, social workers and

psychologists.

4. No RCTs with AHPs that have used a strong KT

theoretical framework

Very few theories have been tested in robust research53 and those that have

been tested have had mixed results. It is therefore recommended that a

combination of different theoretical perspectives be considered to develop a

sound plan.2 Interventions that are solidly based on theoretical frameworks

or conceptual models are needed.43,53 This doctoral programme used the

KTA process as a framework to develop the KT strategy. In accordance with the KTA process, a range of theories underpinned the choice of strategies

employed (see Table 4).

Rationale for the studies

Rationale for the randomised controlled trial

The effect of a multifaceted KT strategy on a range of EBP behaviours,

involving a number of AHP groups6 is yet to be quantified in a rigorous

study. In the first study, an 8-week RCT was designed to evaluate the

effectiveness of a multifaceted KT strategy comprising a 3-day workshop,

access to the EAS and policy changes (paid EBP time, mentoring, mandatory

use of outcome measures and changes in documentation) to improve AHPs’

EBP behaviour. The secondary aims were to measure the effect on EBP

attitudes and knowledge.


This study sought to change EBP behaviour, targeting a range of clinical behaviours across an array of intervention areas, and thus novel and unique approaches were required. In previous studies, components of KT strategies

used with health practitioners have included workshops, mentoring,

reminder systems, opinion leaders, outreach visits and journal clubs. The

unique and key component of the present study was the EAS that

summarised cerebral palsy research evidence with supporting clinical

algorithms (decision-making flowcharts).

Rationale for the follow-up study

Some types of EBP behaviour may take time to develop,4,173 and behaviour

change needs to be measured over a longer period to investigate the long-

term intervention effectiveness. This may be especially true considering the

types of organisational change initiatives that are a part of the KT strategy.

For example, system changes to the documentation of client goals, and mentoring, are intervention areas whose effects, if any, may only emerge over the medium to long term. The second study, a 2-year follow-up study, was

therefore conducted to measure the long-term effectiveness of the KT

strategy to change EBP behaviour.

Synopsis

This chapter provided background research literature related to EBP, KT and

the allied health professions. Definitions of EBP and KT were provided along

with a brief background and history of EBP. A range of theories and models

that underpin EBP behaviour change were then described. The major barriers

to EBP were detailed along with a summary of the effectiveness of a range of

KT strategies. Tools measuring EBP knowledge, behaviour and attitudes

were outlined and rationale for conducting the research studies was

presented.

Chapter 3 details the methods for the cluster RCT that measured the

effectiveness of a KT strategy aiming to change AHPs’ EBP behaviour.

Chapter 3 – Randomised Controlled Trial Methods

Overview

This chapter presents the methods of a cluster RCT that investigated the

effectiveness of a KT strategy with a range of AHPs by describing:

1) Aim and hypotheses

2) Trial design

3) Ethical approval

4) A description of the eligibility criteria and exclusion criteria for the

study

5) Methods of blinding

6) Methods and rationale of cluster randomisation

7) Development and theoretical background of the KT intervention

8) The interventions that the KT intervention and control groups

received

9) Details of the primary and secondary outcome measures

10) Procedures for the RCT

11) Information regarding data cleaning, sample size calculations and

statistical analysis.

Aim and hypotheses

The primary aim of this study was to measure the effectiveness of a KT

strategy to change EBP behaviours, knowledge and attitudes of AHPs. The

following hypotheses were devised for testing.


EBP behaviour

At the primary end-point:

1) Allied health professionals that participate in an 8-week KT strategy

will have behaviourally meaningful and statistically significantly higher self-reported EBP behaviours measured by GAS T-scores than

the control group.

2) Allied health professionals that participate in an 8-week KT strategy

will have statistically significantly higher peer-reported EBP

behaviours measured by GAS T-scores than the control group.

3) Allied health professionals that participate in an 8-week KT strategy

will have statistically significantly higher per-person web hits on the EAS, measured by web statistics, than the control group.

EBP knowledge

4) Allied health professionals that participate in an 8-week KT strategy

will have statistically significantly higher EBP knowledge exam scores

than the control group.

EBP attitudes

5) Allied health professionals that participate in an 8-week KT strategy

will have statistically significantly higher EBP attitude scores on the

EBPAS than the control group.

Trial design

A multi-site single-blinded, cluster RCT was conducted with AHPs at the

Cerebral Palsy Alliance. RCTs are considered the gold standard design to

determine whether a given intervention is effective.174,175 Figure 4

summarises the basic trial design.


Figure 4: RCT trial design

Setting

Cerebral Palsy Alliance is a not-for-profit organisation providing a range of

community-based interventions to people with cerebral palsy in New South

Wales (NSW), Australia. NSW is the most populous state in Australia with

approximately 7.25 million people (32% of Australia’s total population).

Cerebral Palsy Alliance had 16 localities across NSW, organised into 4

geographically distinct regions where AHP services were provided. Each

region had centralised management for the sites within its boundaries

including clinical seniors, professional development activities and

mentoring, and thus were considered natural cluster groupings. Regions

were de-identified by assigning a number to each region to ensure

confidentiality. The four regions will be referred to as cluster 1, cluster 2,

cluster 3 and cluster 4 from this point onwards in this thesis. Staff members

within these clusters provided direct client services including physiotherapy,

speech pathology, occupational therapy, psychology and social work.


Ethics

The project was approved by the National Health and Medical Research Council Human Research Ethics Committee at Cerebral Palsy Alliance of NSW on 6 May 2009 (Approval number: 2009-05-01), and by the University of Notre Dame Ethics Committee on 9 September 2009 (see Appendix 2 for the National Ethics Application). The study was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12611000529943) on 23 May 2011.

An adverse event log was not required because the intervention was

educational in nature and therefore posed no risk.

Eligibility

Inclusion criteria for clusters were:

1) work sites of the study organisation where AHPs were employed

2) work sites where AHPs provided direct client services to people with

cerebral palsy.

Exclusion criteria for clusters were:

1) worksites where direct client services were not provided, e.g. head

office.

Inclusion criteria for participants within the clusters were:

1) qualified AHPs

2) employed at the study site

3) providers of direct clinical services to people with cerebral palsy and

their families.

Exclusion criteria for participants within the clusters were:

1) managers (staff without any clinical caseload)

2) staff members without a formal allied health university qualification,

such as project officers or welfare workers

3) staff who did not attend work on the days of the study intervention,

e.g. annual leave taken.


Blinding

Blinding was judiciously applied wherever pragmatically possible, resulting

in a single-blinded trial. This included: (1) independent evaluator-blinding to

group allocation and phase of the trial when scoring outcome data, (2) partial

participant and facilitator blinding to the specific EBP behaviour of interest

to the investigators. Participants and workshop facilitators were clearly aware of the content of the workshops; however, they were not aware of which

intervention (KT intervention or communication skills) was of specific

interest to the researchers. Fidelity of the evaluator blinding was not formally

investigated.

Although the RCT employed the gold standard design to measure a cause-

effect relationship, pragmatic constraints inherent in any educational

intervention prevented double-blinding.176-178

Randomisation

An independent officer not associated with the trial used Microsoft Excel to generate random allocation numbers and create 4 opaque envelopes, based upon simple randomisation without restrictions.179 The independent officer

randomly allocated the four geographically distinct clusters to either the KT

intervention or control group using the opaque envelopes. Cluster

randomisation according to the multiple worksites was chosen for two

reasons. First, cluster randomisation reduced the risk of contamination that may have occurred if participants working at the same site had been randomised to different interventions. Second, the workshops were

optimally suited to be delivered to whole clusters (for pragmatic and

professional reasons). Cluster randomisation occurred before participants

were recruited for pragmatic reasons, but group allocation notification was

withheld from participants until all clusters were randomised.
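For illustration only, the allocation step could be reproduced along the following lines. This is a minimal sketch in Python, not the procedure actually used (the trial used random numbers generated in Microsoft Excel and sealed opaque envelopes); the 2:2 split and the ranking rule shown here are assumptions.

    import random

    clusters = ["cluster 1", "cluster 2", "cluster 3", "cluster 4"]

    # Draw one random number per cluster, analogous to the numbers the
    # independent officer generated in Excel for the sealed envelopes.
    draws = {c: random.random() for c in clusters}

    # One plausible allocation rule (an assumption): the two clusters with
    # the lowest draws form the KT intervention group, the rest the control group.
    ranked = sorted(clusters, key=lambda c: draws[c])
    allocation = {c: ("KT intervention" if ranked.index(c) < 2 else "control")
                  for c in clusters}
    print(allocation)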


Intervention

Assessment of barriers and facilitators

A comprehensive assessment of barriers and facilitators was done over a

one-year period. This took the form of meetings between managers, policy

makers, researchers, practicing senior clinicians and knowledge brokers; and

observation of clinical staff. The barriers assessment, although

comprehensive, was informal in nature. The barriers selected were determined by consensus among those involved in meetings throughout the year. As there is no firm evidence regarding the superiority of one KT strategy over another,65 researchers and knowledge brokers jointly designed

the KT strategy based on whether or not the barrier was modifiable by a

pragmatically feasible intervention. Modifiable barriers included lack of skill,

time, and knowledge. Partially modifiable or non-modifiable barriers were:

1) evidence that was considered not clinically relevant

2) staff who did not have access to full electronic databases

3) some staff had negative attitudes towards EBP.

Modifiable barriers, theoretical underpinnings and strategies for the KT

strategy are detailed in Table 4. Details of how the components of our

multifaceted intervention correspond to the KTA process are in Table 5.

Development of multifaceted intervention

Strategic planning meetings were held every 6-weeks in the year leading up

to the RCT and included researchers, knowledge brokers, policy makers and

managers. Knowledge brokers were senior staff with allied health

backgrounds (one per discipline, each employed in the most senior role for that discipline). Policy makers were the senior executive staff and managers, who

were involved in direct management of AHPs in the organisation. Goals

around EBP behaviours were set and strategies to achieve these goals were

jointly selected based on barriers identified in the literature and assessment

of the study site.


The EAS formed the basis of our KT strategy and was developed by research

staff and knowledge brokers using freely available software (MediaWiki)

(see Figure 5 and Appendix 3). Figure 5 outlines the workflow of the steps

invovled from the AHP’s inquiry to the information delivery. The EAS

included succinct summaries of all the cerebral palsy research evidence

about intervention, prognosis and outcome measurement. Intervention

evidence was labeled using the traffic light system102,180 where each

intervention was given a traffic light color with an actionable message

attached. Green = ’Go’ if high quality evidence supports the effectiveness of

this intervention, Yellow = ’measure’ where low quality or conflicting

evidence supports the effectiveness of this intervention, therefore measure

the outcomes of the intervention to ensure the goal is met, and RED = ’stop’

where high quality evidence demonstrates intervention is ineffective or

harmful, therefore do not use this approach. LC co-authored a journal paper

that used the traffic light system as a KT tool to communicate systematic

review finding for 63 cerebral palsy interventions.180 Decision-making

algorithms with embedded evidence summaries were also available on the

EAS. Each section of the EAS included abstracts of research articles,

descriptions of the intervention/assessment and a hyperlink to the article.

KT intervention group

The KT intervention group received a KT strategy that included: (1) access to

the EAS, (2) a 3-day workshop to receive user training, divided into 2 parts 8-

weeks apart, and (3) policy/organisational changes designed to overcome

EBP barriers (quarantined EBP time, mentoring, compulsory use of outcome

measures and documentation changes including reminder systems) made

available during the 8-week study period. The KT strategy was both at the

cluster level and at the individual level. See Table 5 for details of

intervention.


Figure 5: EAS infogram


Table 4: Theoretical basis and strategies to address modifiable barriers

Each entry lists the KT strategy, the underpinning theory or group of theories that the intervention relates to (in parentheses), and the strategy/rationale.

Barrier: Lack of confidence/skill in searching, appraising and synthesising research evidence
- Workshop (problem-based learning, learning styles): Workshops used a problem-based learning approach and a variety of approaches to ensure that different learning styles were catered for, maximising the likelihood of increased confidence and skill levels.
- EAS (cognitive): Accurate, relevant research evidence on cerebral palsy assessment and treatment was provided via the EAS, building skill by modelling synthesis and summary of treatment areas. The EAS bypassed the need for high-level appraisal skills.
- Mentoring (educational): AHPs were included in the problem-solving process during mentoring sessions, aiming to increase confidence and build the skill base.

Barrier: Lack of time
- EAS (cognitive): The provision of accurate, relevant research evidence bypassed the need for extensive time spent searching and appraising research via databases and journals.
- Paid EBP time in policy (reimbursement, leadership): Paid, protected time for AHPs to engage in EBP activities was provided. Changing policy suggested management 'buy in' and endorsement to support changes throughout the organisation (leadership theory).
- Documentation changes including a reminder system (total quality management): Patient documentation and work processes were reorganised to support clinical decision making and save time (reminder systems, checklists and directing participants to the EAS).

Barrier: Evidence considered as not clinically relevant
- Workshop teaching the EAS (educational, motivational): AHPs were involved in the problem-solving process, so that they 'owned' and were a part of the process and could see the applicability of the EAS. The 8-week period in between workshops allowed independent learning and time to apply the EAS information to a real client. Facilitators aimed to convince AHPs of the relevance of research in their area by exploring the EAS through clinical examples and role playing.
- EAS (marketing): An appealing product (the EAS) was developed and disseminated in a variety of ways (workshop, mentoring, documentation changes).

Barrier: No access to full articles and research databases
- EAS (organisational learning): All staff members at every level of the organisation had access to current cerebral palsy evidence, and exchange of information via mentoring sessions and team meetings was promoted.

Barrier: Some staff with negative attitudes towards EBP
- Workshop (social): Credible staff facilitated workshops, modelled positive attitudes and emphasised 'buy in' from decision-makers in the organisation.
- Mentoring (social): Mentors were selected with positive attitudes towards EBP so that target behaviour was modelled.


Access to the Evidence Alert System

The EAS was the cornerstone for all other interventions, representing the

central funnel on the KTA.51 KT intervention group participants were

informed about the EAS and educated in using it in the workshop. The EAS

was available on the Cerebral Palsy Alliance intranet.

3-day workshop

Workshop – Part 1

Part 1 (2-days) of the workshop provided training to the participants to

apply the EAS to decision-making within their daily clinical work.

A series of clinical examples was explored using the interface of the EAS, with training about evidence levels, clinical decision-making algorithms and the use of two psychometrically sound, cross-disciplinary outcome measures. Training was delivered based on recommendations from the systematic review literature that: (1) used a mix of instructional techniques including didactic and interactive styles,4,181 (2) encouraged collaboration within and between professional groups,182 (3) used multiple media including video, simulated clinical scenarios, slideshows and written information,181 and (4) ensured multiple exposures to content throughout the entire KT intervention period via different modalities in the workshops, mentoring and the EAS.181

The training content of the workshops provided:

1) Research evidence for: (a) goal-setting, (b) prognosis, (c) interventions, (d) modes of service delivery and (e) outcome measurement
2) Resources to assist with clinical decision-making, including: (a) the cerebral palsy EAS and (b) algorithms/decision aids
3) Staff supports, including: (a) a flowchart describing the service

delivery decision-making process, (b) clearly defined staff

expectations, (c) position papers to define service parameters, (d)


pathways defining service responsibilities and (e) a searchable wiki with evidence summaries
4) Skills training with practice in: (a) developing measurable goals, (b)

using goal-setting measures, (c) selecting relevant prognostic

messages, (d) selecting evidence and (e) selecting relevant outcome

measures.

Workshop – Part 2

Part 2 (1-day) of the workshop 8-weeks later involved participants

presenting a case study detailing how they used the EAS to inform their

clinical decision-making with a real client.183 This was followed by discussion

with a small group of colleagues designed to help participants demonstrate

the integration of their learning into their own clinical work.184 Investigators

and senior clinicians led the workshops using knowledge brokering

strategies.185

Policy changes

Policy changes that were implemented during the 8-week study period

included: (1) paid, protected EBP time, (2) at least 1 scheduled mentoring

session with EBP trained knowledge brokers,5,102,186 and informal mentoring

upon request, (3) mandatory use of outcome measures, (4) changes to

documentation reminding AHPs to use outcome measures and record the

level of evidence for a given intervention. The 8-week implementation period

allowed the participants to experience the revised workplace EBP

expectations, practice using the EAS with clients, prepare their real world

case study for part two of the 3-day workshop and reflect on their changes to

practice.23 The KT strategy was directed at the cluster level (3-day workshop,

access to the EAS and policy changes) and individual level (mentoring and 3-

day workshop part 2).


Control group

The control group received an equal-intensity intervention about communication skills, delivered using KT strategies but with no EBP content and no use of the EAS. The intervention included: (1) a 3-day workshop about AHP-client

communication skills and (2) policy changes (mentoring and quarantined

time for communication skills) about communication skills. Health

professional-client communication skill training was considered a valuable

use of staff time, and is reported to be effective in improving communication

skills.187,188 The content of the control group workshops was entirely different from that of the KT intervention group, minimising contamination. To further

minimise the risk of co-intervention and contamination, the control group

was not informed about the EAS, paid EBP time, knowledge brokers or EBP

mentoring until the end of the trial. The changes to documentation were not

implemented in the control group clusters until the end of the RCT.


Table 5: KT strategy with corresponding KTA phases

The KTA cycle phases considered were: creating knowledge, localising knowledge, identifying barriers, redressing barriers and maintaining use. Each entry notes how many of these phases the strategy impacted and who implemented it.

Before RCT
- Strategic planning meetings: impacted four of the five KTA phases; implemented by managers, Human Resources, knowledge brokers and policy makers.
- Policy changes (policies developed but not implemented until the RCT), comprising provision of paid, dedicated EBP time; provision of a policy-endorsed EBP mentoring program; and mandated, compulsory use of psychometrically sound outcome measures with all clients, embedded in workflow (e.g. included within mandatory Individual Family Service Plans): impacted two KTA phases; implemented by managers, Human Resources, knowledge brokers and policy makers.
- Evidence Alert System development: impacted one KTA phase; implemented by research investigators.

During RCT (8 weeks; June – Aug 2009)
- Skills training workshops (3 days): impacted four of the five KTA phases; implemented by peers, knowledge brokers and research investigators.
- Paid EBP time, mentoring, compulsory use of outcome measures (see policy changes above) and documentation changes including reminder systems: impacted three KTA phases; implemented by managers, Human Resources, knowledge brokers and policy makers.


Primary outcomes

The primary endpoint was change in self-reported and peer-reported EBP

behaviour from baseline to 8-weeks measured by Goal Attainment Scaling.

Study outcomes were measured at the individual level and cluster level and

are detailed with corresponding hypotheses in Table 6.

Goal attainment scaling

Procedure for goal attainment scaling

Participants rated themselves against the self-GAS scales, and then to limit

measurement bias, in a separate environment, a well-acquainted peer rated

their performance on the peer-GAS scales. The steps involved in setting GAS

goals are:

1) devising goals/target behaviours that are measurable

2) defining a continuum of possible outcomes — worst expected

outcome (-2), less than expected outcome (-1), expected outcome (0),

more than expected outcome (+1), and best expected outcome (+2)

3) specifying the criteria for scoring at each level

4) determining current or initial performance

5) intervening for a specified period

6) determining performance attained on each objective

7) evaluating extent of attainment.189,190

The goals in our study were devised by a multidisciplinary panel of experts,

familiar with practice behaviours of AHPs. Twenty-five goal scales were

developed, half relating to EBP behaviours and the other half relating to

communication behaviour as per the controlled comparison intervention (see

self-evaluation form in Appendix 5). The questions covered goal-setting

behaviour, use of outcome measures and cerebral palsy classification

systems, interactions with clients and their families, use of the EAS and


support of research (in our case the Cerebral Palsy Register). The

traditionally used 5-point scale (-2 to +2) was expressed on the evaluation

form as a percentage of time to reflect how often self- and peer-reported

behaviour occurred. These equated to: never and 1–5% of the time (-2),

5–24% of the time (-1), 25–49% of the time (0), 50–74% of the time (+1), 75–

99% of the time and always (+2). To obtain the standard raw GAS score, the

percentage intervals were directly transposed back into the -2 through to +2

scores as per GAS scoring conventions. Raw GAS scores were then converted

to T-scores, enabling inferential statistical analysis of continuous data.
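As an illustration of this conversion, a minimal sketch of a GAS T-score calculation is shown below. It assumes the conventional Kiresuk and Sherman formula with equal goal weights and the customary inter-goal correlation of 0.3; the function name and the example inputs are hypothetical, not the study's scoring spreadsheet.

    import math

    def gas_t_score(raw_scores, rho=0.3):
        """Convert raw GAS scores (-2 to +2) to a T-score (mean 50, sd 10).

        Assumes the conventional Kiresuk and Sherman formula with equal goal
        weights; rho is the assumed correlation between goal scales.
        """
        k = len(raw_scores)
        return 50 + (10 * sum(raw_scores)) / math.sqrt((1 - rho) * k + rho * k ** 2)

    # Eleven goal scales all rated at the expected level (0) give T = 50;
    # ratings above the expected level push the T-score above 50.
    print(gas_t_score([0] * 11))   # 50.0
    print(gas_t_score([1] * 11))   # approximately 66.6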

Using a measurement tool that had strong psychometric properties was one

of the strengths of our study. That said, the application of GAS in an

educational context using percentage intervals to reflect the regularity of a

specific behaviour is novel. Systematic reviews reveal a need for educational

outcomes to be measured with validated tools.152,191,192 One criticism of GAS

is that despite users’ best efforts, the intervals between GAS levels are not

always exactly equal161,193,194 making statistical analysis problematic. We

overcame this limitation by using percentage intervals within scale

descriptors, increasing the rigour of the measurement tool.161

Secondary study outcomes

Open-ended exam questions

Changes in EBP knowledge were measured by open-ended exam questions

with pre-set answers based on published evidence. The marking schedule

was pre-defined by the multidisciplinary panel of experts and was fully

supported by published evidence in cerebral palsy (see Appendix 5 and 7).

Evidence based practice attitude scale

Self- and peer-reported changes in attitudes to EBP were measured using

subsets 3 and 4 (with permission from Aarons), as subsets 1 and 2 were not

relevant for the context of our study (see Appendix 5).


Use of the cerebral palsy evidence alert system

EAS utilisation was measured by number of web page hits collected via a

software program that tracked cluster-specific IP addresses in batches. Web

hit data collection was concealed from participants, minimising the

likelihood of observer bias affecting EAS use.


Table 6: Hypotheses matched to domain and measurement

Hypothesis 1: Allied health professionals that participate in an 8-week KT strategy will have statistically significantly higher self-reported EBP behaviours measured by GAS T-scores than the control group.
- Domain: EBP behaviours (self-report)
- Instrument: GAS1 (valid: yes; reliable: yes; sensitive to change: yes)
- Measurement: the KT intervention group scores on the self-report evaluation form (GAS EBP, sum of questions 1, 3, 5, 7, 9, 11, 15, 17, 19, 21, 23) converted into a T-score
- Data: primary outcome measure; analysis by inferential statistics

Hypothesis 2: Allied health professionals that participate in an 8-week KT strategy will have statistically significantly higher peer-reported EBP behaviours measured by GAS T-scores than the control group.
- Domain: EBP behaviours (peer-report)
- Instrument: GAS (valid: yes; reliable: yes; sensitive to change: yes)
- Measurement: the KT intervention group scores on the peer GAS EBP questions 1, 3, 5, 7, 9, 11, 15, 17, 19, 21, 23, converted to a T-score
- Data: primary outcome measure; analysis by inferential statistics

Hypothesis 3: Allied health professionals that participate in an 8-week KT strategy will have statistically significantly higher per-person web hits on the EAS, measured by web statistics, than the control group.
- Domain: EBP behaviours
- Instrument: frequency of use measured by web hits per person (psychometric properties: N/A)
- Measurement: the KT intervention group will have more page hits on the wiki than the control group at 8 weeks post intervention
- Data: secondary outcome measure; analysis by descriptive and inferential statistics

Hypothesis 4: Allied health professionals that participate in an 8-week KT strategy will have statistically significantly higher EBP knowledge exam scores than the control group.
- Domain: EBP knowledge
- Instrument: exam questions (psychometric properties: N/A)
- Measurement: the KT intervention group scores on open-ended questions 1, 2, 5 and 6 will significantly improve but there will be no change in the control group
- Data: secondary outcome measure; analysis by descriptive and inferential statistics

Hypothesis 5: Allied health professionals that participate in an 8-week KT strategy will have statistically significantly higher EBP attitude scores on the EBPAS than the control group.
- Domain: EBP attitudes
- Instrument: EBPAS2 (valid: yes; reliable: yes; sensitive to change: unknown)
- Measurement: the KT intervention group's EBPAS score (subset 3 + subset 4) will significantly improve but there will be no change in the control group
- Data: secondary outcome measure; analysis by descriptive and inferential statistics

Notes:
1 GAS = goal attainment scaling
2 EBPAS = evidence based practice attitude scale


Procedures and data collection

LC collected data between June 2009 and August 2009 (see Figure 4). The

workshops were held at the participants' worksites or at nearby venues with

educational facilities large enough to host the entire cluster. The structure

and measures of the study are summarised in Figure 6. The procedures and

time line for the study are detailed in Table 7.

Figure 6: Study structure and measures


Table 7: RCT study procedures

Date Procedure

March 2009 Randomisation — clusters randomised to KT intervention group or control group

April 2009 Information sheet — sent to potential all participants via email (Appendix 4)

KT intervention group Control group

June 2009 – Aug 2009

RCT – EBP workshop Part 1 (days 1 and 2)

• eligible participants invited to participate in study

• first author (LC) carried out coordination of

voluntary consent

• consent forms signed

• baseline data collected:

- participants nominated a codename

- participants completed self-GAS, EBPAS and exam questions (Appendix 5)

- participants then nominated a colleague (peer) and told them their codename

- colleagues (peers) moved to another part of the room to complete the peer-GAS form and EBPAS (Appendix 6)

- GAS and EBPAS forms (baseline data) collected

- participants attended part 1 of EBP workshop (see Table 5)

RCT – Communication skills workshop Part 1 (days 1 and 2)

• eligible participants invited to participate in study

• first author (LC) carried out coordination of voluntary consent

• consent forms signed

• baseline data collected:

- participants nominated a codename

- participants completed self-GAS, EBPAS and exam questions* (Appendix 5).

- participants then nominated a colleague (peer) and told them their codename

- colleagues (peers) moved to another part of the room to complete the peer-GAS form and EBPAS (Appendix 6)

- GAS and EBPAS forms (baseline data) collected

- participants attended part 1 of the communication skills workshop (see Table 5)

RCT – Access to EAS and policy changes (8-week period) (see Table 5)

RCT – Policy changes (8-week period):

• mentoring by knowledge brokers

• quarantined time for communication skills planning and reflection


RCT – Workshop Part 2 (day 3)

• participant attended EBP workshop part 2

• end of study data collected:

- participants nominated a codename

- participants completed self-GAS, EBPAS and exam questions (Appendix 5)

- participants then nominated a colleague (peer) and told them their codename

- colleagues (peers) moved to another part of the room to complete the peer-GAS form and EBPAS (Appendix 6)

- GAS and EBPAS forms (8-week data) collected

- participants attended part 2 of EBP workshop (see Table 5)

RCT – Workshop Part 2 (day 3)

• participant attended communication skills workshop part 2

• end of study data collected:

- participants nominated a codename

- participants completed self-GAS, EBPAS and exam questions (Appendix 5).

- participants then nominated a colleague (peer) and told them their codename

- colleagues (peers) moved to another part of the room to complete the peer-GAS form and EBPAS (Appendix 6)

- GAS and EBPAS forms (8-week data) collected

- participants attended part 2 communication skills workshop (see Table 5)

Nov 2011 Long-term follow-up data-point (see Chapters 5 and 6)


Data cleaning

All items on the self- and peer-reported GAS, exam and EBPAS forms were

scored using two different methods and then compared to identify and thus

correct scoring errors, ensuring the final score was accurate.

Scoring Method 1: All forms were scored manually and entered onto a

single, hard copy summary sheet. The total scores were added up by

calculator and then entered into Microsoft (MS) Excel by data entry

personnel. Error formulas in MS Excel were created to ensure that the correct

numbers of items were entered within an expected range of scores. Data

entry personnel were trained by myself to enter data and provided with

information sheets to ensure consistency of data entry. I conducted spot

checks for accuracy for 10% of participants. Two data entry errors were

found and each of these episodes involved the correct scores being entered in

the incorrect phase of treatment.

Scoring Method 2: I entered each item score individually into MS Excel. To ensure intra-rater reliability, 10% of evaluation forms were re-

scored. No entry errors were found. MS Excel formulas were created to

calculate total scores and GAS T-scores.

There were no discrepancies between the scores entered via methods 1 and 2.
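A minimal sketch of this double-entry comparison is shown below, purely for illustration. The checking described above was done in MS Excel; the file names and column layout here are hypothetical.

    import pandas as pd

    # Hypothetical exports of the two scoring methods, one row per participant
    # codename and trial phase, one column per summed outcome score.
    method1 = pd.read_csv("scores_method1.csv", index_col=["codename", "phase"])
    method2 = pd.read_csv("scores_method2.csv", index_col=["codename", "phase"])

    # Any cell where the two independently entered totals disagree is flagged,
    # so the original paper forms can be re-checked and the error corrected.
    discrepancies = method1.compare(method2)
    print(discrepancies)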

Sample size and power

The methodological decision to test the efficacy of an organisational KT

strategy within one agency imposed pragmatic limitations on the obtainable

sample frame. We successfully recruited 88% of the available sampling frame; however, the total number of employees at the agency was less than

the number of participants required to reach statistical power if correlation of

outcome variables within sites was observed (intra-cluster correlation). A

sample size calculation was based on detecting an effect size of 1 at an alpha level of 0.05 (one-tailed) with a power of 90%. For Goal


Attainment Scaling [mean T-score = 50, standard deviation (sd) = 10], an improvement of 10 points or more in the KT intervention group relative to the control group was sought (an improvement of 1 sd). The expert panel agreed that a 10-point increase in GAS T-scores equated to significant clinical improvement in EBP behaviours. The calculation assumed a 20% non-consent rate and a 20% attrition rate, indicating a sample size requirement of

72 (38 per group) for a non-cluster trial. We enrolled 135 professionals (n = 73

interventions and n = 62 controls) at 4 sites. Based on an estimated intra-cluster correlation coefficient (ICC) of 0.1, we calculated that the study was underpowered to demonstrate an improvement of 10 points between groups if a cluster effect of this size was observed (variance inflation factor = 4.3).
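The variance inflation quoted above follows from the standard design effect formula, DEFF = 1 + (m - 1) x ICC, where m is the average cluster size. A minimal worked sketch using the figures above (the 4-cluster average size is an approximation, since cluster sizes varied):

    n_enrolled = 135          # participants across the 4 clusters
    n_clusters = 4
    icc = 0.1                 # assumed intra-cluster correlation coefficient

    m = n_enrolled / n_clusters            # average cluster size (about 34)
    deff = 1 + (m - 1) * icc               # design effect (variance inflation)
    effective_n = n_enrolled / deff        # sample size after the cluster penalty

    print(round(deff, 1))                  # about 4.3, matching the figure reported
    print(round(effective_n))              # about 32, well short of the 72 required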

Statistical methods

All statistical analyses were carried out with individual participants as the unit of analysis on an intention-to-treat basis, using SPSS for Windows 19.0.0 (SPSS Inc, Chicago, IL) and SAS 9.3 (SAS Institute, Cary, NC).

We conducted generalised linear regression analysis for the primary and secondary endpoints, using the post-intervention GAS T-score as the outcome variable and adjusting for potential confounding variables (baseline GAS T-score, profession, group allocation, grade level and years in the disability field). Effect sizes with 95% confidence intervals (CIs) were calculated and significance was set at 0.05. These estimates would underestimate the standard errors and confidence intervals for the effect size if participant outcomes were correlated within cluster sites; thus, mixed effects models with cluster included as a random effect were used to adjust for a cluster effect when calculating the effect size for each outcome.195 The ICC was calculated from the mixed effects model and bootstrapping (1000 samples generated) was performed to calculate 95% CIs for the ICC.
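The thesis analyses were run in SPSS and SAS. Purely to illustrate the same idea, a mixed effects model with cluster as a random intercept, and the ICC derived from it, could be specified in Python as follows (the file name, variable names and covariate set are hypothetical, not the study's actual syntax):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per participant with baseline and 8-week GAS
    # T-scores, group allocation, covariates and a cluster identifier.
    df = pd.read_csv("rct_outcomes.csv")

    # Random intercept for cluster adjusts the group effect for clustering.
    model = smf.mixedlm("gas_t_8wk ~ gas_t_base + group + profession",
                        data=df, groups=df["cluster"])
    fit = model.fit()
    print(fit.summary())

    # ICC = between-cluster variance / (between-cluster + residual variance).
    var_between = fit.cov_re.iloc[0, 0]
    icc = var_between / (var_between + fit.scale)
    print(icc)

    # A bootstrap over resampled data (e.g. 1000 replicates, not shown) can then
    # be used to obtain a 95% confidence interval for the ICC.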


Synopsis

This chapter reported the methods of a cluster RCT by describing the

hypotheses to be tested, trial design, study eligibility, blinding and

randomisation. Details of the development and final KT intervention, along

with the intervention that the control group received were then presented.

Outcome measurement, procedures for the study, data cleaning, sample size

and statistical analyses were detailed. The next chapter presents the results

from the RCT.

Chapter 4 – Randomised Controlled Trial Results

Overview

This chapter presents the results of the cluster RCT including;

1) Baseline characteristics of the AHPs included in the study, including

profession, grade level, years of employment at Cerebral Palsy

Alliance, years of experience in the disability field and whether or not

the participant had previously attended EBP training.

2) Details about missing data

3) Statistical consideration of the clustering effect due to the method of

randomisation

4) Results of the effectiveness of the KT strategy for primary and

secondary outcomes.

Baseline characteristics

One hundred and thirty-five AHPs (n = 73 interventions and n = 62 controls)

meeting eligibility criteria agreed to participate in the study. Descriptive

statistics were used to describe participant characteristics. For detailed

results see Table 8.


Table 8: Baseline characteristics of participants

Values are n (%**). Columns, in order: KT intervention group – Cluster 1 (n = 40), Cluster 2 (n = 33), Total (n = 73); Control group – Cluster 3 (n = 32), Cluster 4 (n = 30), Total (n = 62); p value*

Professional background
Occupational Therapy: 11 (27.5) | 12 (36.4) | 23 (31) | 12 (37.5) | 14 (46.7) | 26 (42) | p = 0.060
Physiotherapy: 11 (27.5) | 5 (15.1) | 16 (22) | 9 (28.1) | 7 (23.3) | 16 (25.8) | p = 0.596
Speech Pathology: 9 (22.5) | 11 (33.3) | 20 (27) | 8 (25) | 8 (26.7) | 16 (25.8) | p = 0.835
Psychology: 5 (12.5) | 2 (6.1) | 7 (10) | 1 (3.1) | 0 (0) | 1 (1.6) | p = 0.060
Social Work: 4 (10) | 3 (9.1) | 7 (10) | 2 (6.3) | 1 (3.3) | 3 (4.8) | p = 0.294
Missing: 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)

Grade level
Level 1: 9 (22.5) | 10 (30.3) | 19 (26) | 5 (15.6) | 9 (30) | 14 (22.6) | p = 0.647
Level 2 (clinical specialist): 18 (45) | 16 (48.5) | 34 (46.6) | 21 (65.7) | 16 (53.4) | 37 (59.7) | p = 0.122
Level 3 (clinical senior): 8 (20) | 5 (15.2) | 13 (17.8) | 4 (12.5) | 4 (13.3) | 8 (12.9) | p = 0.436
Other: 5 (12.5) | 1 (3) | 6 (8.2) | 1 (3.1) | 1 (3.3) | 2 (3.2) | p = 0.222
Missing: 0 (0) | 1 (3) | 1 (1.4) | 1 (3.1) | 0 (0) | 1 (1.6)

Years at Cerebral Palsy Alliance
<2 years: 12 (30) | 16 (48.5) | 28 (38.4) | 14 (43.8) | 18 (60) | 32 (51.5) | p = 0.122
2–4 years 11 months: 5 (12.5) | 10 (30.3) | 15 (20.5) | 5 (15.6) | 5 (16.7) | 10 (16.2) | p = 0.510
5–9 years 11 months: 15 (37.5) | 6 (18.2) | 21 (28.8) | 8 (25) | 4 (13.3) | 12 (19.4) | p = 0.205
>10 years: 8 (20) | 1 (3) | 9 (12.3) | 5 (15.6) | 3 (10) | 8 (12.9) | p = 0.902
Missing: 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)

Years' experience in the disability field
<2 years: 4 (10) | 7 (21.2) | 11 (15) | 5 (15.6) | 11 (36.7) | 16 (25.8) | p = 0.120
2–4 years 11 months: 7 (17.5) | 3 (9.1) | 10 (13.7) | 2 (6.3) | 10 (33.3) | 12 (19.4) | p = 0.375
5–9 years 11 months: 10 (25) | 15 (45.5) | 25 (34.3) | 10 (31.3) | 4 (13.3) | 14 (22.6) | p = 0.136
>10 years: 19 (47.5) | 8 (24.2) | 27 (37) | 15 (46.9) | 5 (16.7) | 20 (32.2) | p = 0.566
Missing: 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)

Previous EBP continuing education?
Yes: 35 (87.5) | 29 (87.9) | 64 (87.7) | 19 (59.4) | 22 (73.3) | 41 (66.1) | p = 0.003
No: 5 (12.5) | 4 (12.1) | 9 (12.3) | 13 (40.6) | 8 (26.7) | 21 (33.9)
Missing: 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)

English first language
Yes: 36 (90) | 30 (90.9) | 66 (90.4) | 31 (96.9) | 30 (100) | 61 (98.4) | p = 0.013
No: 4 (10) | 3 (9.1) | 7 (9.6) | 1 (3.1) | 0 (0) | 1 (1.6)
Missing: 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)

* Pearson's chi square test was used to determine whether distributions of variables differed from one another, resulting in a p value (p < 0.05 indicated a statistically significant difference in proportions between groups).
** Percentages are documented to one decimal place in this table for accuracy, but have been rounded to whole numbers in the text for clear reporting.


Professional background

Included professionals were occupational therapists (n = 49; 36%), physiotherapists (n = 32; 24%), speech pathologists (n = 36; 26%), psychologists (n = 8; 6%) and social workers (n = 10; 8%). Figure 7 displays the proportion of each profession according to group allocation (KT intervention group or control group). The professional background of participants was comparable between the KT intervention group and the control group (see Table 8), with no statistically significant difference in the distribution of professions between groups (p > 0.05).

Figure 7: Percentage of participants in various professional backgrounds in intervention and control groups

Grade level

Twenty-four per cent of the sample were employed at the Cerebral Palsy

Alliance as level 1 AHPs (entry level AHP), 53% were level 2 (clinical

specialist), 15% were level 3 (clinical senior with supervision responsibilities

for level 1 and 2s) and the remaining 8% were either level 4 (knowledge

brokers with clinical caseloads) or clinical managers (with clinical caseloads

and AHP qualifications). The distributions between the KT intervention and

control groups were comparable (p > 0.05) (see Table 8 and Figure 8).



Figure 8: Percentage of participants for AHP grade levels in intervention and control groups

Years at Cerebral Palsy Alliance and experience in the

disability field

Although 45% of participants had worked at the Cerebral Palsy Alliance for

less than 2 years, 34% had over 10 years’ experience in the disability field.

Only 13% of participants had worked at Cerebral Palsy Alliance for more

than 10 years. There were no significant differences between years of

employment at the study site or overall years of experience between groups

(see Table 8 and Figures 9 and 10).

Figure 9: Percentage of participants according to number of years employed at Cerebral Palsy Alliance in intervention and control groups



Figure 10: Percentage of participants according to number of years working in disability in intervention and control groups

English as first language

Ninety-four per cent of the sample had English as their first language

meaning that 8 participants from the whole sample had a language

background other than English (LBOTE) (see Table 8 and Figure 11). The KT

intervention group contained 7 of the 8 participants with a LBOTE; however, the difference in distribution between groups was not statistically significant (p = 0.13).



Figure 11: Percentage of participants whose first language was English in intervention and control groups

Previous continuing education in EBP

Eighty-eight per cent of the KT intervention group had attended an EBP

seminar or workshop, compared to 66% of the controls (see Table 8 and Figure 12). The difference in distribution between groups was statistically significant (p = 0.03), and previous EBP education was therefore included in the regression model as a covariate.
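As an aside, this baseline comparison amounts to a standard chi-square test on the 2 x 2 table of counts in Table 8. A minimal sketch follows (the thesis analyses were run in SPSS/SAS, and the exact p value depends on whether a continuity correction is applied):

    from scipy.stats import chi2_contingency

    # Previous EBP continuing education (yes, no) by group; counts from Table 8.
    table = [[64, 9],    # KT intervention group
             [41, 21]]   # control group

    chi2, p, dof, expected = chi2_contingency(table)
    print(p)   # well below 0.05, consistent with the reported group difference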

Figure 12: Percentage of participants who had previous continuing education in EBP in intervention and control groups



Participant flow

A total of 154 attendees at the EBP workshop were eligible and invited to participate in the study; 135 (88%) provided consent and were therefore enrolled. Nineteen eligible participants elected not to take part in

the study. Baseline demographic data were collected from all participants as

requested by Cerebral Palsy Alliance management, although the remainder

of the evaluation form was optional for those who did not participate in the

study. One participant in the KT intervention group withdrew from the

study via email during the 8-week intervention period (see Figure 13).

Figure 13: Participant flow diagram for RCT – from randomisation to primary analysis

Missing Data

Data were classified as missing if a participant did not submit an evaluation

form or submitted a completely blank evaluation form. Missing data were

analysed using last observation carried forward (LOCF) analysis.179 The return rates for the GAS and EBPAS ratings were between 60% and 82% (see Figure 13), with the primary endpoint having more missing data. The KT


intervention group had 19/73 (31%) 8-week GAS forms missing, compared

to the control group who had 17/62 (30%). This difference between groups

was not statistically significant (chi square p = 0.95).
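A minimal sketch of the LOCF step in long format is shown below, for illustration only (file and column names are hypothetical); it simply carries each participant's most recent earlier observation forward into a missing later time point.

    import pandas as pd

    # Hypothetical long-format data: one row per participant per time point,
    # with missing 8-week GAS T-scores recorded as NaN.
    long = pd.read_csv("gas_long.csv")          # columns: id, time, gas_t

    long = long.sort_values(["id", "time"])
    long["gas_t_locf"] = long.groupby("id")["gas_t"].ffill()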

Clustering effect

The ICCs for the primary endpoints were 0.33 (95% CI 0.16, 0.69) for self-rated GAS T-scores, meaning 33% of the total variation observed in self-rated GAS T-scores can be attributed to differences between the sites (rather than differences between individuals within each site), and 0.64 (95% CI 0.36, 0.80) for peer-rated GAS T-scores (see Table 9), meaning 64% of the total variation observed in peer-rated GAS T-scores can be attributed to differences between sites. These results demonstrate that the correlation of GAS T-scores within sites was very large, and that there was large variation in scores between sites. This cluster effect substantially depleted the study power (because participant scores within each site cannot be regarded as independent). ICCs were smaller for the secondary outcomes (see Table 9).

Effectiveness of KT strategy

Primary outcome – EBP practice behaviours

Self-rated GAS T-scores increased more in the intervention group compared to controls; however, this difference was not statistically significant after adjusting for the cluster effect (effect size 4.43; 95% CI -10.63, 19.49; p = 0.56) (see Table 9). Baseline self-rated GAS T-score was a predictor in the model (effect size 0.71; 95% CI 0.52, 0.90; p < 0.0001), indicating lower performers improved but remained lower performers, and higher performers improved and remained leading performers. No other covariates were significantly

predictive of outcome.

Peer-rated GAS T-scores of the intervention group also increased compared

to controls, but this difference was also not statistically significant after

adjusting for the cluster effect (effect size 6.75; 95% CI -16.95, 30.44; p = 0.57)


(see Table 9). Similar to the self-rated GAS T-scores, the final peer-rated GAS

T-score was predicted by the baseline peer-rated GAS T-score (effect size

0.30; 95% CI 0.15, 0.45; p < 0.0001). No other covariates were significantly

predictive of peer-rated GAS T-scores. The peer-rated GAS T-scores for each

cluster mirrored the self-rated GAS cluster T-scores, suggesting the observed

study effects were behaviourally meaningful, despite low study power to

demonstrate a statistically significant difference.

Secondary outcomes – knowledge, attitudes and EAS

EBP knowledge scores increased in the intervention group compared to controls, with a statistically significant effect size of 2.97 (95% CI 1.97, 3.97; p < 0.0001). The ICC for this outcome was zero, and the effect remained statistically significant after adjusting for the cluster effect (effect size 2.97; 95% CI 1.97, 3.97; p < 0.0001). Baseline

score (p < 0.0001) and professional category (p = 0.03) were also predictors in

the model. There was minimal to no correlation between participants within

sites for self- or peer-rated EBP attitudes; however, we did not demonstrate a

statistically significant intervention effect (see Table 9). The intervention

group accessed the EAS more than the control group (KT intervention group

6123 total hits; control group 1677 hits).

Additional analyses

Secondary analyses examining mean outcome scores for each cluster

revealed that both clusters in the KT intervention group improved their self-

and peer-rated GAS T-scores as expected (see Table 10). One of the control

group clusters (cluster 3) also responded as expected, with very minimal

increases in self- and peer-rated GAS T-scores from baseline to 8-weeks (self-

rated T-score change = 0.22; peer-rated T-score change = 2.27). The other

control group cluster (cluster 4) had high baseline scores (self-rated GAS T-

score = 66.41; peer-rated GAS T-score = 73.32) and further improved by 10.15

points over the 8-week study period, despite not receiving the KT strategy

(see Table 10). We performed post-hoc Spearman’s correlation tests to assess

for correlation between knowledge and attitude scores (at baseline, 8-weeks


and change scores) overall, by treatment group, and within individual

clusters. No statistically significant positive correlations were found.
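Each of these post-hoc tests is a Spearman rank correlation of paired scores. A minimal sketch follows; the arrays are made-up values for illustration, not study data.

    from scipy.stats import spearmanr

    # Hypothetical paired change scores (baseline to 8 weeks) for six participants.
    knowledge_change = [3, 1, 4, 0, 2, 5]
    attitude_change = [0.2, -0.1, 0.3, 0.0, 0.1, 0.4]

    rho, p = spearmanr(knowledge_change, attitude_change)
    print(rho, p)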

Synopsis

This chapter presented the cluster RCT results including baseline

characteristics, missing data, clustering effect and the effectiveness of the KT

strategy. The next 2 chapters (Chapters 5 and 6) present the methods and

results of the 2-year follow-up study. The discussion and conclusion chapter

(Chapter 7) explores the results from this chapter in more depth, as well as

offering an interpretation of the RCT and follow-up study.


Table 9: Primary and secondary outcomes - RCT

Each row shows: Outcome | Treatment (n = 73): n*, mean (sd) | Control (n = 62): n*, mean (sd) | Base model: difference (95% CI), p | ICC (95% CI) | Mixed effects model: difference (95% CI), p

EBP Behaviour

Self baseline 59 54.05 (13.80) 45 55.42 (10.92)

8-weeks 51 65.96 (13.49) 43 62.45 (19.50) 5.08 (0.40,10.55) 0.07 0.33 (0.16,0.69) 4.43 (-10.63,19.49) 0.56

Peer baseline 52 61.83 (13.69) 43 61.52 (16.95)

8-weeks 44 74.26 (8.51) 42 68.41 (16.63) 7.86 (1.97,13.75) 0.01 0.64 (0.36,0.80) 6.75 (-16.95,30.44) 0.57

EAS page hits** 6123 1677

EBP Knowledge baseline 57 7.91 (3.05) 50 8.09 (3.52)

8-weeks 52 10.69 (2.23) 45 8.02 (3.13) 3.29 (2.25,4.33) 0.00 0.01 (0.0,0.26) 3.29 (2.18,4.40) 0.00

EBPAS Self

subset 3 baseline 55 2.67 (0.75) 47 2.57 (0.70)

8-weeks 50 2.63 (0.74) 44 2.77 (0.61) -0.27 (-0.57,0.03) 0.08 0.0 (0.0,0.32) -0.27 (-0.57,0.03) 0.08

subset 4 baseline 55 3.00 (0.51) 47 2.98 (0.58)

8-weeks 50 3.03 (0.61) 44 2.98 (0.59) 0.03 (-0.22,0.28) 0.82 0.0 (0.0,0.25) 0.03 (-0.22,0.28) 0.82

Peer

subset 3 baseline 42 2.93 (0.63) 38 2.90 (0.72)

8-weeks 32 3.17 (0.56) 39 1.17 (0.80) 0.03 (-0.37,0.42) 0.88 0.0 (0.0,0.51) 0.03 (-0.37,0.43) 0.88

subset 4 baseline 42 0.89 (0.78) 32 3.19 (0.61)

8-weeks 32 0.87 (0.75) 32 1.13 (0.93) -0.23 (-0.75,0.23) 0.37 0.12 (0.0,0.65) -0.29 (-1.06,0.48) 0.45

* Number of participants who completed outcome measure.

** EAS page hit raw data could only be collected and analysed at the cluster level, not the individual level because the electronic data were collected in batches.


Table 10: Mean outcome scores for each cluster

Outcome (variable) | Time | Cluster 1 (Exp): n, mean (sd) | Cluster 2 (Exp): n, mean (sd) | Cluster 3 (Control): n, mean (sd) | Cluster 4 (Control): n, mean (sd)

EBP behaviour, self GAS | baseline | 35, 50.73 (13.75) | 24, 58.88 (12.64) | 28, 48.75 (10.85) | 17, 66.41 (15.46)
EBP behaviour, self GAS | 8-weeks | 24, 66.39 (16.02) | 27, 65.58 (11.08) | 22, 48.97 (15.34) | 21, 76.56 (11.92)
EBP behaviour, peer GAS | baseline | 33, 60.19 (14.26) | 19, 64.68 (12.51) | 28, 55.20 (15.69) | 15, 73.32 (12.57)
EBP behaviour, peer GAS | 8-weeks | 21, 72.69 (9.93) | 23, 75.69 (6.90) | 23, 57.47 (13.11) | 19, 81.66 (9.05)
EBP knowledge, exam score | baseline | 35, 7.69 (2.76) | 22, 8.27 (3.51) | 28, 6.50 (3.08) | 22, 10.11 (3.04)
EBP knowledge, exam score | 8-weeks | 25, 10.80 (2.37) | 27, 10.59 (2.14) | 23, 6.98 (3.26) | 22, 9.11 (2.65)
EBP attitude, self EBPAS subset 3 | baseline | 35, 2.73 (0.73) | 20, 2.57 (0.79) | 27, 2.53 (0.61) | 20, 2.64 (0.83)
EBP attitude, self EBPAS subset 3 | 8-weeks | 24, 2.55 (0.78) | 26, 2.70 (0.70) | 22, 2.52 (0.57) | 22, 3.01 (0.55)
EBP attitude, self EBPAS subset 4 | baseline | 20, 2.86 (0.48) | 35, 3.08 (0.54) | 27, 2.84 (0.56) | 20, 3.16 (0.58)
EBP attitude, self EBPAS subset 4 | 8-weeks | 24, 3.10 (0.59) | 26, 2.96 (0.64) | 22, 2.85 (0.60) | 22, 3.11 (0.58)
EBP attitude, peer EBPAS subset 3 | baseline | 30, 2.80 (0.60) | 12, 3.24 (0.63) | 23, 2.87 (0.74) | 15, 2.95 (0.73)
EBP attitude, peer EBPAS subset 3 | 8-weeks | 16, 3.20 (0.47) | 16, 3.14 (0.65) | 17, 3.07 (0.63) | 15, 3.32 (0.57)
EBP attitude, peer EBPAS subset 4 | baseline | 30, 0.83 (0.64) | 12, 1.03 (1.08) | 23, 1.45 (0.86) | 16, 0.77 (0.48)
EBP attitude, peer EBPAS subset 4 | 8-weeks | 16, 1.05 (0.86) | 16, 0.69 (0.60) | 17, 1.41 (0.99) | 15, 0.82 (0.76)
Web hits, page hits | 8-weeks | 2987 | 3136 | 928 | 749

Chapter 5 – 2-year Follow-up Study Methods

Overview

This chapter details the methods of follow-up study 2-years after a

multifaceted knowledge KT strategy was introduced to improve AHPs’ EBP

behaviours, and includes:

1) Background information

2) Aims and hypotheses specific to the 2-year follow-up study

3) Trial design

4) Setting and eligibility criteria

5) Ethical approval

6) Procedures

7) Statistical analysis.

Background

Although AHPs' EBP behaviours are known to take time to develop,23 few

studies seek to measure longer term effectiveness of KT strategies.4,173,196

Measuring the impact of KT strategies at different points in time is important

as behaviour change may not be immediate and may not change in a linear

fashion. Measuring EBP behaviour over time may be particularly important

if the strategies involved policy changes and organisational initiatives196 as

these KT strategies may change behaviour indirectly by gradually changing

culture and attitudes.197 Even if a KT strategy did result in immediate behaviour change, it is recommended that longitudinal data be collected to

ensure that the behaviour has been maintained.173


A RCT was conducted with AHPs working at the Cerebral Palsy Alliance

between June and August 2009 (see Chapters 3 and 4). Participants were

cluster randomised to either the KT intervention group (KT strategy) or the

control group (communication skills). EBP behaviours were measured using

Goal Attainment Scaling at baseline and 8-weeks (primary endpoint).

Immediately after the RCT primary endpoint, each group received the

alternative intervention (see Figure 14), with the KT intervention group

receiving the communication skills intervention and the control group

receiving the KT intervention. The 2-year follow-up study is therefore of one group, not two, with some participants having 8 weeks' less experience of using the KT strategies. We therefore did not look for between-group differences; instead, all participants were regarded as having had roughly equal long-term exposure to the KT intervention.

Aims and hypothesis

The primary aim of the follow-up study was to measure the effectiveness of a

KT strategy on AHPs’ EBP behaviours 2-years after the KT strategy was

implemented. Secondary aims were to determine the level of utilisation,

patterns of use and opinions regarding usefulness of the EAS. The

hypothesis for the primary aim of the follow-up study was:

1) Allied health professionals’ 2-year post KT strategy GAS T-scores will

be equal to, or statistically significantly greater than the 8-week GAS

scores.

In addition to this hypothesis, the study sought to answer research questions

regarding the EBP behaviours of the cohort of AHPs working at the study

organisation in November 2011.

2) What are the GAS T-scores of AHPs working at the study site (regardless

of whether they participated in the RCT or not)?

3) How do these GAS T-scores compare to the baseline and 8-week GAS

T-scores?


Trial design

A longitudinal study was conducted 2-years after the completion of the KT

strategy using an online survey (Survey Monkey™ Premium). The survey

provided a snapshot in time of the EBP behaviours of AHPs at Cerebral

Palsy Alliance. The survey included the same questions based on GAS as

used in the RCT, and some additional questions relating to the utilisation

and usefulness of the EAS (see Appendix 8). An online survey was ideal as

GAS questions easily translated from the paper format used in the original

RCT to electronic format offered on Survey Monkey™ Premium. Survey

Monkey™ was frequently used within the Cerebral Palsy Alliance for other

surveys, and the survey participants were therefore familiar with the layout

and style of the survey.

Survey Design

The survey questions were designed ensuring clear wording, grammar and

layout.199-201 A covering letter was provided including information about the

present study along with contact details if any questions arose201 (see

Appendix 8). The survey was confidential and de-identified so that response

collectors were unable to re-identify survey participants except by codename.

Possible security breaches regarding confidentiality have been reported as problematic with online surveys,202 especially via email; however, Survey Monkey™ provided a high level of security.

The survey comprised 3 sections:

1) Demographic information that mirrored the information collected in

the original RCT

2) GAS questions that were included in the original RCT. Two additional

GAS goals were formulated by the expert panel and added at the end

of the survey. These goals were developed in response to feedback

from clinical seniors and managers regarding AHPs’ use of outcome

measures. Our original goals questioned whether valid, reliable


outcome measures were being used. The additional questions

explored whether outcome measures were being scored completely

and documented thoroughly.

3) Questions relating to the EAS. These questions were based on

categories designed to evaluate the McMaster Plus web-based EBP

library.203 The categories aimed to collect information on:

• utility of the EAS, whether survey participants found what they were looking for
• use of the EAS, what the purpose of obtaining information from the EAS was
• usefulness of the EAS, whether the survey participants found the information clinically useful.

Pilot testing

The online survey was pilot tested with five research staff (qualified AHPs

employed as research assistants) and five untrained volunteers.201 Feedback was sought regarding time taken, ease of use, difficulties understanding wording, grammatical suggestions, flow and order of the survey, technical difficulties, and the appearance of the survey.

Eligibility

All AHPs at Cerebral Palsy Alliance were invited to participate in the present

study (the RCT cohort, see Chapter 3). This included both the control and

experimental groups from the original RCT because after the RCT each

group then received the alternative intervention to ensure equal educational

exposure for all staff (see Figure 14).

Inclusion criteria:

1) qualified AHPs

2) employed at the study site

3) providers of direct clinical services to people with cerebral palsy and

their families.


Exclusion criteria:

4) managers (staff without any clinical caseload)

5) staff members without a formal allied health university qualification,

such as project officers or welfare workers.

Ethics

The original RCT ethics application included the 2-year follow-up study and, as previously described, was approved.

Procedures

Eligible participants were invited to participate in the study via an email sent

by a senior staff member of the Cerebral Palsy Alliance. The email included a

web link to the online survey. The participants had 4 weeks to complete the survey. Two email reminders were sent, one after 2 weeks and one 3 days before the primary endpoint date, as reminders are known to increase survey response rates.204-207 The participants were asked for their original codename and, if they had forgotten it, were provided with a list of the codenames to assist recall.

Statistical analysis

Data analysed are summarised in Table 11. Data analysis included: (1) descriptive statistics to summarise baseline characteristics of survey participants who were also part of the original RCT, and of all eligible survey participants, (2) calculation of differences between 8-week and 2-year characteristics for participants who were involved in the RCT, using chi-squared tests, and (3) calculation of mean GAS T-scores, standard deviations and ranges for all eligible AHP staff. Chi-squared testing was performed to explore significant differences, and regression analysis was performed to measure whether a particular covariate predicted outcome.
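As one illustration of step (2), a chi-squared comparison of the 8-week and 2-year profession distributions could be run along the following lines. This is a sketch only, not the study's analysis script; for concreteness it uses the profession counts later reported in Table 15.

    # Minimal sketch: chi-squared comparison of the profession distribution at
    # 8-weeks and 2-years, using the counts reported in Table 15.
    from scipy.stats import chi2_contingency

    # Rows: time point (8-weeks, 2-years)
    # Columns: PT, SP, OT, Psych, SW
    counts = [[16, 20, 23, 7, 7],
              [6, 5, 9, 2, 3]]
    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")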


Table 11: Data Analysed – follow-up study

Variable | Description | Outcomes

Demographic information – nominal variables
Attendance at 2009 training | Whether or not the participant attended training held June-Nov 2009 | 2 (yes, no)
Cluster | The cluster at CP Alliance where the participant works | 4
Profession | Professional qualification gained at university (if any) | 6 (SW, PT, OT, SP, Psych, other)
Role | Job title/role at CP Alliance | 9 (SW, PT, OT, SP, Psych, FT, manager, pathways, other)
Grade level | Grade/level that the participant is employed as at CP Alliance (may be a different role, e.g. manager) | 8 (level 1, 2, 3, 4, 5, manager, team leader, other)
Previous continuing education in evidence-based medicine | Whether the participant has attended EBP workshops (including 2009 training) | 2 (yes, no)
Previous continuing education in communication skills | Whether the participant has attended workshops in communication skills (incl 2009) | 2 (yes, no)
Engl. first language? | Whether English is the participant's first language | 2 (yes, no)
Access to the EAS* | How often the participant accesses the EAS | 5 (daily, 1-4 times/wk, 1-4 times/mth, 1-4 times/yr, never)
EAS content* | Whether the participant normally finds what they are looking for on the EAS | 4 (yes, no, sometimes, don't look for specific info)
EAS content usefulness* | The participant's opinion of usefulness of information on the EAS | 5 (almost always useful, often useful, occasionally useful, rarely useful, never useful)
Purpose for using EAS* | Purpose for using the EAS | 4 (information for client, general interest, conference etc, service planning)

Demographic information – continuous variables
Employment years | How many years the participant has been employed by the organisation | Any number – expressed to 2 decimal places
Disability experience | How many years' experience the participant has had in the disability field | Any number – expressed to 2 decimal places

Outcome measures – continuous variables
EBP GAS T-scores | The GAS score (or T-score) is calculated using a formula devised by the original authors (Kiresuk and Sherman 1968); it has a mean of 50 and a SD of 10 | A numerical value to 5 decimal places
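For reference, the Kiresuk and Sherman T-score, in its commonly cited form, is

    T = 50 + \frac{10 \sum_i w_i x_i}{\sqrt{(1 - \rho) \sum_i w_i^2 + \rho \left( \sum_i w_i \right)^2}}

where x_i is the attainment level recorded for goal i (typically scored from -2 to +2), w_i is the weight assigned to goal i, and \rho is the assumed common correlation between goal scores (conventionally set at 0.3). This scaling is what gives the T-score the mean of 50 and SD of 10 noted above.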


Calculating change in GAS T-scores

The 8-week and 2-year EBP self GAS T-score means were compared using

paired t-tests (significance set at 0.05) and 95% CIs calculated. Only staff

members who were participants in the RCT were included in this analysis.
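A minimal sketch of this paired comparison is shown below; `t8` and `t2` stand for the matched 8-week and 2-year GAS T-scores of the same participants and are illustrative names only.

    # Minimal sketch: paired t-test and 95% CI for the mean change in GAS T-scores.
    import numpy as np
    from scipy import stats

    def paired_change(t8, t2, alpha=0.05):
        t8, t2 = np.asarray(t8, float), np.asarray(t2, float)
        diff = t2 - t8
        t_stat, p = stats.ttest_rel(t2, t8)
        # 95% confidence interval for the mean change
        ci = stats.t.interval(1 - alpha, len(diff) - 1,
                              loc=diff.mean(), scale=stats.sem(diff))
        return diff.mean(), ci, p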

Missing data

It was anticipated that there would be missing data at the 2-year mark due to

staff changes and response rate of the follow-up survey. Missing data were

excluded from the analysis.

Synopsis

This chapter provided information about the methods used in the 2-year

follow-up study and included details of the design of the study, pilot testing,

setting and participants, eligibility criteria, ethics, procedures and data

analysis. The following chapter will present the results from the follow-up

study. Discussion and interpretation of the follow-up study are included in

the final chapter of the thesis (Chapter 7).

Chapter 6 – 2-year Follow-up Study Results

Overview

This chapter presents the results from the 2-year follow-up study and has 2

components.

1) Results from the follow-up study relating to all survey participants.

This related to the research questions:

• What are the GAS T-scores of AHPs working at the study site?
• How do these GAS T-scores compare to the RCT baseline and 8-week GAS T-scores?

2) Results from follow-up study relating to survey participants who

were a part of the RCT as well as the follow-up study. This related to

the hypothesis: AHPs’ 2-year post KT strategy GAS T-scores will be

equal to, or statistically significantly greater than the 8-week GAS

scores.

Survey results – all survey participants

Participant flow & baseline characteristics

There were 147 AHPs working at Cerebral Palsy Alliance at the time of the

survey (November 2011). Sixty-five AHPs responded, representing 44% of

the sampling frame. Table 12 details the survey participants’ baseline

characteristics.


Table 12: Survey participants' baseline characteristics (n = 65)

Characteristic | n (%)

Profession
  Physiotherapist | 13 (20)
  Speech Pathologist | 18 (27.7)
  Occupational Therapist | 17 (26.2)
  Psychologist | 5 (7.7)
  Social Worker | 5 (7.7)
  Other | 7 (10.8)
Grade level
  Level 1 | 14 (21.5)
  Level 2 | 34 (52.3)
  Level 3 | 12 (18.5)
  Other | 5 (7.7)
  Missing | 0
Years at Cerebral Palsy Alliance
  <1 year 11 months | 16 (24.6)
  2-4 years 11 months | 16 (24.6)
  5-9 years 11 months | 8 (12.3)
  >10 years | 25 (38.5)
Years' experience in disability field
  <1 year 11 months | 1 (1.5)
  2-4 years 11 months | 12 (18.5)
  5-9 years 11 months | 14 (21.5)
  >10 years | 38 (58.5)
Previous continuing education in evidence-based practice?
  Yes | 59 (90.8)
  No | 6 (9.2)
Is English your first language?
  Yes | 62 (95.4)
  No | 3 (4.6)
Cluster
  Cluster 1 | 16 (24.6)
  Cluster 2 | 21 (32.3)
  Cluster 3 | 17 (26.2)
  Cluster 4 | 11 (16.9)
Total n (%) | 65 (100)

Comparison to all staff at Cerebral Palsy Alliance

The only information available for all staff at Cerebral Palsy Alliance was

professional group and cluster. The test for one proportion208 was performed


to test for differences in proportion between the survey participants and all

AHPs working at Cerebral Palsy Alliance.

There were no significant differences in the proportions of physiotherapists or speech pathologists. There were, however, statistically significant differences (p < 0.05; see Table 13) in the proportions of occupational therapists, psychologists and social workers. There were no significant differences in proportions between clusters (p > 0.05; see Table 13).

Table 13: Survey respondents' professional backgrounds

Characteristic | Survey participants (n = 65) | All allied health staff at Cerebral Palsy Alliance Nov 2011 (n = 147) | p value

Profession
  Physiotherapist | 13 (20) | 35 (23.8) | 0.51
  Speech Pathologist | 18 (27.7) | 41 (27.9) | 0.85
  Occupational Therapist | 17 (26.2) | 65 (44.2) | 0.01
  Psychologist | 5 (7.7) | 4 (2.6) | 0.01
  Social Worker | 5 (7.7) | 2 (1.3) | 0.001
  Other | 3 (4.6) | |
Cluster
  Cluster 1 | 16 (24.6) | 36 (24.5) | 0.79
  Cluster 2 | 21 (32.3) | 41 (27.9) | 0.61
  Cluster 3 | 17 (25.7) | 35 (23.8) | 0.90
  Cluster 4 | 11 (16.9) | 35 (23.8) | 0.07
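The one-proportion comparisons in Table 13 could be reproduced roughly as follows. This sketch uses a standard z-test for one proportion, which may differ from the exact procedure cited in the thesis,208 and takes the occupational therapist counts from Table 13 as its example.

    # Minimal sketch: test for one proportion, comparing the proportion of OTs
    # among survey respondents with the proportion among all AHPs (Table 13).
    from statsmodels.stats.proportion import proportions_ztest

    count, nobs = 17, 65       # OTs among the 65 survey respondents
    p_org = 65 / 147           # OTs among all 147 AHPs at the organisation
    z, p_value = proportions_ztest(count, nobs, value=p_org)
    print(f"z = {z:.2f}, p = {p_value:.3f}")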

Results relating to Evidence Alert System

Results from survey questions relating to the frequency and type of use of

the EAS are detailed in Table 14. Due to pragmatic constraints we were

unable to compare web page hits from the RCT period to 2-year data as the

EAS was made available to all 1050 non-AHP Cerebral Palsy Alliance staff

immediately after the RCT was completed. This meant that non-AHPs also

used the EAS and we were therefore unable to extract accurate data for

AHPs only. The follow-up survey therefore included 4 questions about use

and usefulness of the EAS (see Appendix 8).


Table 14: Evidence Alert System survey question results (n = 65)

Question and response | Percent

How often do you access the knowledge hub (intervention section with evidence levels, assessment, prognosis/prevalence or clinical algorithms)?
  Every day | 0
  1-4 times/week | 25
  1-4 times/month | 36.5
  1-4 times/year | 32
  Never | 6.5
Do you normally find what you are looking for?
  Yes | 30.8
  No | 6.4
  Sometimes | 48.7
  I browse rather than looking for specific information | 14.1
Is the information you find on the knowledge hub useful?
  Almost always useful | 15.2
  Often useful | 46.8
  Occasionally useful | 27.8
  Rarely useful | 8.9
  Never useful | 1.3
For what purpose do you access the knowledge hub?
  Information seeking with a specific client(s) in mind | 76
  General interest (not related to a specific client) | 61.3
  Presentation at conference, seminar, team meeting | 24
  Service planning | 42.7

RCT follow-up study

Participant flow

There were 65 survey participants, 25 of whom were also participants in the

2009 RCT. De-identified data obtained from Human Resources indicated that

63/135 RCT participants had resigned from their positions at Cerebral Palsy

Alliance between November 2009 and November 2011. This meant that 35%

of the original participants in the RCT who still worked at Cerebral Palsy

Alliance responded to the survey. Figure 14 illustrates the flow of

participants from June 2009 to November 2011.


Figure 14: Participant flow throughout entire study

Baseline characteristics

Descriptive statistics were used to summarise participant characteristics (see

Table 15).

Profession

Included professionals were physiotherapists (24%), speech pathologists

(20%), occupational therapists (36%), psychologists (8%) and social workers

(12%). Table 15 displays the proportion of each profession at 8-weeks

(n = 135 AHPs) and 2-years (n = 25 AHPs). The professional background of


participants was comparable between the 8-week group and the 2-year group (see Table 15 for p values, indicating no statistically significant difference in the distribution of professional backgrounds between groups).

Grade level

At the 2-year mark, 20.8% of the sample were employed at the Cerebral Palsy

Alliance as grade 1 AHPs, 50% were grade 2 (clinical specialist), 20.8% were

grade 3 (clinical senior) and the remaining 8.4% were either consultants or

clinical managers. The distributions between the 8-week and 2-year groups

were comparable (see Table 15).

Years at Cerebral Palsy Alliance and years in disability

Forty-four per cent of respondents at the 2-year mark had worked for

Cerebral Palsy Alliance for less than 2 years, and 20% had worked at the

organisation for more than 10-years. Interestingly, 44% of respondents had

over 10-years’ experience in the disability sector. These percentages mirrored

the proportions in the 8-week group, with no statistically significant

differences found (see Table 15).

Previous EBP training

Seventy-two per cent of respondents in the 2-year group indicated that they

had participated in some form of evidence-based practice training, compared

to 88% in the 8-week group. Seven respondents did not complete this

question (missing data) in the survey. It can be assumed that all respondents

in this group (n = 25) have had previous EBP training as they all attended the

EBP workshops as a part of the RCT. P values were therefore not calculated

for this variable.


English as first language

Ninety-six per cent of the 2-year group had English as their first language compared to 91% in the 8-week group. These proportions were not significantly different (see Table 15).

Table 15: Participant characteristics (RCT participants) – follow-up study

Characteristic | 8-weeks (n = 73), n (%) | 2-years (n = 25), n (%) | p value*

Profession
  Physiotherapist | 16 (22.0) | 6 (24.0) | 0.81
  Speech Pathologist | 20 (27.4) | 5 (20.0) | 0.41
  Occupational Therapist | 23 (31.4) | 9 (36.0) | 0.62
  Psychologist | 7 (9.6) | 2 (8.0) | 0.79
  Social Worker | 7 (9.6) | 3 (12.0) | 0.68
  Missing | 0 | 0 |
Grade level
  Level 1 | 19 (26) | 5 (20.0) | 0.55
  Level 2 | 34 (46.6) | 12 (48.0) | 0.73
  Level 3 | 13 (17.8) | 5 (20.0) | 0.70
  Other | 6 (8.2) | 2 (8.0) | 0.13
  Missing | 1 (1.4) | 1 (4.0) | 1.0
Years at Cerebral Palsy Alliance
  <2 years | 28 (38.4) | 11 (44.0) | 0.57
  2-4 years 11 months | 15 (20.5) | 4 (16.0) | 0.57
  5-9 years 11 months | 21 (28.8) | 5 (20.0) | 0.33
  >10 years | 9 (12.3) | 5 (20.0) | 0.13
  Missing | 0 | 0 |
Years' experience in disability field
  0-2 yrs | 11 (15.0) | 5 (20.0) | 0.48
  2-5 yrs | 10 (13.7) | 4 (16.0) | 0.74
  5-10 yrs | 25 (34.3) | 5 (20.0) | 0.13
  10+ yrs | 27 (37.0) | 11 (44.0) | 0.47
  Missing | 0 | 0 |
Previous continuing education in evidence-based practice?
  Yes | 64 (87.7) | 18 (72.0) |
  No | 9 (12.3) | 7 (28.0) |
  Missing | 0 | |
Is English your first language?
  Yes | 66 (90.4) | 23 (95.8) | 0.36
  No | 7 (9.6) | 1 (4.2) | 0.36
  Missing | 0 | 1 (4.2) |

* p values were calculated by using the Test for One Proportion.208


Long-term effectiveness of KT strategy

Comparison of means – RCT participants

Eight-week and 2-year mean Goal Attainment Scaling (GAS) T-scores were

compared using paired t-tests (see Table 16). The samples compared were the 8-week and 2-year scores of participants who had been part of the RCT KT intervention group (n = 19). Of the 25 survey participants who were RCT participants, 19 were part of the original KT intervention group and 6 were part of the control group. The mean 8-week GAS T-score was 60.71, compared to a 2-year GAS T-score of 90.29.

Table 16: GAS T-score 8-week to 2-year comparison (n = 19)

Time point | GAS mean T-score | sd | Mean change | 95% CI | p value
8-weeks after KT strategy | 60.71 | 19.10 | — | — | —
2-years after KT strategy | 90.29 | 21.89 | 29.58 | 12.66–46.50 | 0.02

Comparison of means based on attendance at 2009 EBP

training

Survey participants who attended the 2009 EBP training, regardless of whether they agreed to participate in the RCT (n = 31), had a mean GAS T-score of 93.57, compared with a mean GAS T-score of 82.45 for new staff who had not attended (see Table 17). A one-sample t-test indicated that the mean difference between GAS T-scores was significant (p = 0.001). A regression analysis was performed to see if attending the 2009 EBP training was predictive of GAS T-score outcome. The finding was confirmed with an effect size of 11.12 (95% CI 1.86, 20.38; p = 0.019).
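A minimal sketch of this regression is given below, assuming a DataFrame `df` with a 2-year GAS T-score column and a binary indicator for attendance at the 2009 training; the column names are illustrative only.

    # Minimal sketch: OLS regression of 2-year GAS T-score on attendance at the
    # 2009 EBP training (1 = attended, 0 = did not attend).
    import statsmodels.formula.api as smf

    def training_effect(df):
        model = smf.ols("gas_2yr ~ attended_2009", data=df).fit()
        estimate = model.params["attended_2009"]
        ci_low, ci_high = model.conf_int().loc["attended_2009"]
        return estimate, (ci_low, ci_high), model.pvalues["attended_2009"]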


Table 17: GAS T-score comparison based on attendance at original EBP training

Group | GAS mean T-score | sd | 95% CI of the difference | p value
Respondents who had not attended 2009 EBP training (n = 34) | 82.45 | 15.65 | 75.68–89.21 | —
Respondents who had attended 2009 EBP training (n = 31) | 93.57 | 18.65 | 87.52–99.61 | 0.001

Evidence-based practice behaviours of survey

participants according to cluster

The mean GAS T-score for all survey participants was 89.44 (sd 18.29). This contrasts with the baseline GAS T-score (prior to the RCT) of 54.05 (sd 13.81) and the 8-week KT intervention group GAS T-score (at the end of the RCT, the primary endpoint) of 65.96 (sd 13.49). Respondents from cluster 4 were the highest performers, and cluster 3 was the poorest performing cluster at the 2-year mark, with a mean GAS T-score of 78.68 (see Table 18).

Table 18: GAS T-score according to original cluster

Cluster | GAS mean T-score
Cluster 1 (n = 16) | 91.15
Cluster 2 (n = 21) | 95.94
Cluster 3 (n = 17) | 78.68
Cluster 4 (n = 11) | 96.42

Synopsis

This chapter presented the results from the 2-year follow-up study. The

participant flow and results relating to all survey participants were

presented first. Secondly, the flow of participants and results relating to

participants who were in the original RCT as well as the 2-year survey were

presented. Discussion and interpretation of these results are included in the

following chapter, along with strengths, limitations, recommendations and

conclusions for the entire project.

Chapter 7 – Discussion

Overview

The aim of this thesis was to measure the effectiveness of a multifaceted KT

strategy to change AHPs’ EBP behaviour. We measured effectiveness by

conducting a cluster RCT in 2009 and a follow-up study 2-years later. This

chapter will provide interpretation of the findings along with implications

and recommendations for research and practice. It covers:

1) Key findings providing a brief summary of the findings from both

studies included in the doctoral programme

2) Interpretation and discussion of results regarding EBP behaviour

3) Interpretation and discussion of results regarding EBP knowledge

4) Interpretation and discussion of results regarding EBP attitudes

5) Interpretation and discussion regarding use of the EAS

6) Strengths and limitations of the studies

7) Recommendations for organisations and future research

8) Conclusions.

Key findings

Table 19: Key findings at a glance

Study | EBP behaviour, self-rated (GAS) | EBP behaviour, peer-rated (GAS) | EBP knowledge (exam scores) | EBP attitudes (EBPAS subsets)
RCT | Uncertain* | Uncertain* | Improved | No change**
Follow-up | Improved | Not measured | Not measured | Not measured

* Uncertain = unable to confirm whether or not behaviour improved.
** No change = statistically significant improvement not detected.


Evidence-based practice behaviour

The multifaceted knowledge translation strategy did not

result in statistically significant behaviour change over the 8-

week RCT period

The KT intervention group in the RCT improved within the study period,

but not statistically significantly more than the control group once clustering

was accounted for. We consider this null finding to be a possible type II error: the study was underpowered because the number of participants required to account for clustering of EBP behaviours within sites exceeded the number of employees available. Owing to the probable type II error we remain unsure of the true effect of our KT strategy, but we discovered a

number of potentially important findings that may contribute to future KT

endeavours and the body of research.

Important findings

Outlying cluster

The high ICCs (ranging from 0.33 to 0.64) for the EBP behaviour measures indicated substantial correlation of behaviours within clusters and differences in behaviours between clusters. When we examined the mean change scores for each cluster, cluster 3 (part of the control group) showed no statistically significant GAS T-score change from baseline to 8-weeks. Clusters 1 and 2, which received the KT strategy, improved their GAS T-scores from baseline to 8-weeks. The remaining cluster (cluster 4,

which was part of the control group) was an obvious outlier with the highest

baseline GAS T-scores (higher than the post intervention scores of the other

clusters receiving the KT intervention), high baseline knowledge scores and

increased self- and peer-rated GAS T-scores over the study period.

Variability between natural groupings (such as clinical, departmental or

regional) has been noted in the KT literature previously.15,164 Perhaps the

high baseline EBP scores for cluster 4 reflected positive EBP culture and


practices due to cluster 4’s manager.15,83,209 The notion that a manager can

strongly influence research culture is by no means new,89,164 as some opinion

leaders are known to strongly influence EBP behaviour.209,210 Cluster 4’s

manager was active in promoting EBP behaviour amongst staff. A large range of KT strategies were in place in cluster 4 prior to this study, including policies making certain EBP behaviours compulsory, audit and feedback, financial incentives, workshops and mentoring. It is conceivable that cluster 4 therefore had better readiness and receptivity to EBP supports, as its staff had essentially been engaging in active KT for a longer period than the other clusters.15 That said, positive EBP culture is considered to be related to positive EBP attitudes,89 yet the EBPAS scores measuring attitude change in cluster 4 were no different from those of the other clusters at baseline or 8-weeks. This may have reflected measurement error, or may indicate that positive attitudes in cluster 4 were not necessary, as mandatory policies within that cluster were the driving force behind the higher GAS scores.

Behaviourally meaningful gains

In the RCT, the improvement in EBP behaviour was not statistically significant after adjusting for the cluster effect; however, similar improvements in peer-rated scores suggest the gains were behaviourally meaningful.

The 2-year follow-up study adds weight to the notion that the improvement

in the RCT was genuine, detecting improvement in EBP behaviour amongst

survey participants. The large variability in behaviour observed between

clusters in both the RCT and follow-up study suggests barrier assessments

and subsequent KT strategies may need to target subgroups within an

organisation.

Allied health professional evidence-based practice

behaviours improved over a 2-year period

Knowledge translation intervention group at 2-years

Our hypothesis that AHPs’ 2-year post KT strategy GAS T-scores would be

equal to or statistically significantly greater than the 8-week GAS T-scores


was confirmed (GAS T-score change = 29.58; 95%CI 12.66, 46.52; p = 0.02).

This finding needs to be interpreted in light of the small sample in the 2-year

group (25/135 original RCT participants responded to the survey, that is 35%

of staff who were still employed). It is possible that higher performers comprised a sizable part of the survey participants, and that lower performers chose not to complete the survey.207,211 That said, an increase of 29.58 GAS T-score

points is considered a clinically significant improvement in EBP behaviour,

even if only a portion of AHP staff achieved that level of behaviour change.

The fact that EBP behaviour improved over 2-years may mean that there was

behaviour change during the RCT that was unable to be detected due to the

type II error. Alternatively, it may suggest that EBP behaviours did not

improve in the 8-week period but rather took time to improve.23 This

position is supported by the fact that AHPs who received the KT strategy

had statistically significantly higher 2-year GAS T-scores than AHPs who

were not employed at the time of the KT strategy (93.57 compared to 82.45; p = 0.001).

It is also possible that the high GAS T-scores at 2-years are not representative of the RCT participants, and that the lower performers who did not respond would have lowered the mean score; however, we are unable to confirm or refute this.

All survey participants after 2-years

‘All survey participants’ refers to AHPs who were a part of the RCT (n = 25),

as well as AHPs who had joined the organisation since November 2009

(n = 41). The overall GAS T-score (89.44) was substantially higher than the KT intervention group's 8-week GAS T-score, again suggesting considerable change in EBP behaviour. This, however, must be considered in light of the low response rate (44% of all AHPs employed). It is plausible that the

improvement in GAS T-scores was partially due to EBP behaviours being

embedded in documentation and client processes. These included

mandatory use of outcome measures and documentation of level of evidence


used when selecting client treatments. Interestingly, when 2-year GAS T-

scores were examined according to the originally allocated clusters, one

cluster (cluster 3) had a much lower mean GAS T-score than the other 3

clusters. Clusters 1, 2 and 4 all had GAS T-scores over 91, but cluster 3’s GAS

T-score was 78.68. This may have been due to any of the following: (1) the

documentation changes not being consistently applied in this cluster, (2)

lower performers in this cluster electing to respond to the survey and the

sample was therefore not representative of the entire cluster’s performance,

or (3) the manager of that cluster not leading the change effectively.209,210

Whatever the reason, this finding suggests that KT strategies may need to be

designed for different subgroups within an organisation, as EBP barriers

may vary according to natural groupings such as worksite or profession.

Evidence-based practice knowledge

The multifaceted KT strategy improved evidence-based

practice knowledge over the 8-week RCT period

Our hypothesis that the KT strategy would improve knowledge was supported, with the KT intervention group's knowledge exam scores showing a statistically significant improvement compared to the control group.

Interestingly, knowledge scores were not affected by the cluster effect. This

suggests that although participants within a cluster tend to have similar EBP

behaviours, knowledge is not as susceptible to the influences of workplace

context and peers. The finding also highlights how much more complex

measuring and changing EBP behaviour is compared to EBP knowledge.16,212

This supports previous KT research findings that changes in knowledge do

not always equate to changes in behaviour.14-16


Evidence-based practice attitudes

The multifaceted knowledge translation strategy did not

change evidence-based practice attitudes over the 8-week

RCT period

Our hypothesis that EBP attitudes would improve was not supported and was therefore rejected. Research measuring attitude change is

conflicting, with some interventions reporting no change in attitudes49,149 and

other studies reporting improvement in attitudes.213,214 We postulate the lack

of change in EBP attitudes in our study may be explained by:

1) High baseline EBP attitudes, with a conceivable ceiling effect on the EBPAS. This was plausible as EBP had been a focus in the

organisation for some time prior to the RCT. In this case, positive

attitudes at baseline, increased knowledge scores and policy changes

may together have resulted in the behaviourally meaningful changes

observed. There is however no normative data for AHPs on the

EBPAS, so it is difficult to say whether or not baseline attitudes were

high compared to AHPs in other organisations.

2) The EBPAS subsets potentially not being sensitive enough to detect attitude change; the psychometrics regarding sensitivity in this population are unknown.

3) The EBPAS being an accurate, sensitive measure, and attitudes genuinely not improving as a result of the KT strategy. This third possibility supports the

notion that improved knowledge was not adequate to lead to

statistically significant behaviour change, and that a shift in attitudes

was also needed.215 Conversely, the behaviourally meaningful change

that was observed potentially bypassed the need for attitude change

by employing strategies such as mandatory use of documentation and

outcome measures.


4) EBP attitudes taking a longer period of time than knowledge to change, such that the 8-week trial was too short to detect change. We were

unable to confirm or refute this, as EBP attitudes were not measured

at 2-years. Interestingly, KT literature suggests that changing EBP

attitudes does not necessarily lead to behaviour change16 even though

there is some evidence suggesting that it is a precursor to behaviour

change.164,215,216

Use of the evidence alert system

Allied health professionals accessed the Evidence Alert

System and found it useful at 8-weeks and 2-years

The RCT demonstrated increased use of our evidence-based resource (the EAS); however, we were unable to confirm that this translated to a statistically significant change in EBP behaviour. This supports previous research that detected increased use and perceived usefulness of an evidence-based resource along with no changes in behaviour.172,203 The 2-year follow-up study suggested that the EAS has continued to be well accessed (25% of AHPs use the EAS > 1/week; 36.5% > 1/month). AHPs in study 2 reported that the EAS was almost always useful or often useful 62% of the time, and 27.8% found it occasionally useful. These results were also in line with previous research reporting 70-80% usefulness ratings.203

Strengths and limitations

Strengths

RCT

The cluster RCT had a number of strengths including the rigorous design

and broad robust behaviour measurement. Our chosen measurement

instrument (GAS) was sensitive to change90,217 and appeared accurate as self-


and peer-rated scores mirrored each other. A distinguishing feature of our study was that we measured a wide set of behaviours amongst AHPs working with people with cerebral palsy. The mix of AHPs in our sample is fairly representative of other community-based disability organisations, increasing external validity. This is the first RCT in the KT literature involving social workers, psychologists or occupational therapists.16 The KT strategy itself was a study strength: it was based on a solid theoretical model,51,53,55 designed in response to a comprehensive barriers assessment, had clearly defined desired outcomes, and included a range of interventions, not only educational interventions.16

2-year follow-up study

There were a number of strengths of this study. First, by using GAS as our primary outcome measure, we were able to nest this rigorous tool within a survey, making 2-year follow-up feasible. Second, we measured the EBP behaviour of a wide range of AHPs over a period of time, and these AHPs were again a representative mix of those found in disability organisations. Third, the survey design enabled the development of additional questions relating to EAS use. Fourth, the data gathered provided important information for the organisation in planning future KT strategies. Fifth, an inherent strength of the survey design was that it obtained a snapshot of the EBP behaviours of the AHPs working at Cerebral Palsy Alliance at that point in time.

Limitations

RCT

There are a number of study limitations. First and foremost, the pragmatic constraints that limited the number of available clusters and participants led to low statistical power, causing a probable type II error. Second, the large differences observed between clusters suggest that we should have tailored the KT strategy to each cluster rather than to the whole organisation, as the organisation was evidently not homogeneous.


Third, the evidence base regarding whether proxy behaviour measures represent actual behaviour is not firmly established, although the preferred rival direct measures also lack validity and reliability.189,218 Moreover, direct

measurement was not affordable in our study given the geography involved,

and indirect measurement tools were therefore used.163,219 To minimise

measurement bias, systematic review recommendations regarding indirect

measures were followed, and included using: (1) acceptable indirect

measures189,219 (such as self- and peer-rated behaviour triangulated with

unbiased web hit data),152 (2) measurement tools with strong psychometric

properties,166 (3) more than one tool to measure behaviour change,167 and (4)

a sound theoretical model as a basis of the intervention.55

Fourth, the time frame of the trial was short considering that many EBP

behaviours and system/organisational changes (such as documenting client

goals and mentoring) take time to develop.173 Fifth, the return rate of the GAS, exam form and EBPAS was not perfect (60–82%), with the 8-week time point having more missing data.

2-year follow-up study

First, causal links between the original KT strategy and the 2-year data could not be definitively drawn for a number of reasons: (1) the nature of a longitudinal design utilising survey methodology precluded certainty of findings, (2) at the 2-year mark there was no control group as both groups had received the interventions, and (3) there was a large amount of missing data due to staff turnover (47%) and low response rates. Low response rates are a consistent problem in research involving health professionals.203,207 Low response rates lead to an unknown level of bias207,211 as we cannot be certain whether this sample was indeed representative of all AHPs in the organisation.


Recommendations

Future research

First, documenting the detail of each component of KT strategies along with

barriers and facilitators is integral so that replication of successful strategies

amongst AHPs is possible.16 Second, the RCT highlighted the methodological

challenges of conducting empirical research in a community-based

organisation with fixed cluster and participant numbers. Whether or not

RCTs are a feasible option in community organisations is debatable. For this

reason, conducting future KT research in the context of a solid theoretical framework or model, such as the KTA process, is highly recommended. It

may be that other research designs such as case studies, interrupted time

series, qualitative studies and mixed methods are more appropriate164,220 to

further explore which KT strategies are most effective. Third, the follow-up

study encountered the well-reported problem amongst health professionals

of low response rate, and it may be that incentives need to be offered to

improve this.203 Fourth, research is needed measuring the effectiveness of KT

strategies to improve not only AHPs’ EBP behaviour, but also the impact of

KT strategies on client outcomes. Fifth, research is needed regarding the

relative cost-effectiveness of KT strategies especially given that many

components of KT strategies (workshops, paid EBP time, maintenance of

evidence-based resources) are likely to be costly and ongoing.

Recommendations for organisations

Barriers assessment targeting subgroups

The KT literature recommends tailoring KT strategies to overcome known barriers within organisations;65,221 however, our findings suggest that this may need to go even further, with KT strategies being designed for subgroups within an organisation. The impact of different workplace micro-

cultures may mean that there are dramatically different barriers needing

different KT strategies to be effective.15


Ongoing process of knowledge translation

All organisations experience turnover of staff including managers, AHPs and

decision makers. When existing staff leave an organisation or new staff join

there is an inevitable shift in organisational and interpersonal dynamics. The

resultant dynamic may facilitate or impede the flow of research into practice.

This means that monitoring EBP behaviour and assessing new barriers and

facilitators is not a one-off task, but rather continuous, as depicted in the

KTA process. The KTA process provides a flexible, pragmatic model to

design, implement and measure a KT strategy in any setting. Decision

makers need to be aware that embarking on KT to improve EBP behaviour is

an ongoing long-term endeavour that may require extra resources.

Targeting managers and decision makers

Considering the importance of management-led change, targeting policy

makers and managers may be beneficial. No studies directing KT to policy makers/management were found in the allied health literature. In the public

health domain, Dobbins et al.164 found that sending individualised evidence

to decision makers at the right time, led to an increase in evidence based

policies. As managers are key people involved in implementing systemic

changes that can lead to EBP behaviour changes, targeting KT strategies to

managers and decision makers may be a wise use of resources.

Development and maintenance of evidence-based resources

Provision of high quality evidence is the cornerstone to KT, and evidence-

based resources such as the EAS are therefore critical. Evidence-based

resources need to be regularly updated to reflect the most recent research findings and accommodate the needs of AHPs.13 This role can be time

consuming and decision makers need to ensure that adequate resources are

allocated. The cost of employing staff to build and maintain an evidence-

based resource may however be less than the cost of each AHP’s time to

search and appraise research individually. Although resources such as the


EAS are an integral part of KT, published studies suggest that provision of evidence-based resources such as the EAS is not enough to change EBP behaviour.13,164,203 It is therefore recommended that an evidence-based resource be one part of an ongoing KT strategy, and that the EAS be developed further. In order for the EAS to be a level 5 evidence-based information resource on the 5S pyramid, content would need to be integrated into client documentation systems to ensure that evidence is always a part of AHPs' clinical decision making. Evidence that is individualised to the person and embedded so that the right information is delivered at the right time ('push' messages) is considered the gold standard.13,164

Co-operation between organisations

Considering that the development and maintenance of evidence-based

resources are costly and complex, opportunities for organisations to

collaborate may be mutually beneficial.222 The opportunity for organisations

to co-operate may however extend further than this. KT strategies could be

designed jointly with barrier assessments conducted for each setting.

Commonly beneficial KT strategies such as workshops and research

syntheses could be developed and delivered collaboratively, saving

significant resources and potentially improving overall outcomes.

Conclusion

This thesis presents original research investigating the effectiveness of KT

strategies with AHPs. Two studies measuring change in EBP behaviour were conducted, and although EBP behaviour appeared to improve in the hypothesised direction, methodological issues due to pragmatic constraints preclude certainty about our findings. This raises the question as to whether

other research designs may be better suited to KT research in community-

based organisations.164 Despite this, both studies make an important

contribution to the scant AHP evidence base in KT.16,66 Our findings suggest

that KT is a long-term process and KT strategies need to be customised to

Page 129: Knowledge translation intervention to improve evidence

Chapter 7 – Discussion

115

subgroups within an organisation. Researchers, policy makers and clients

need to effectively collaborate to ensure that reliable, relevant research

becomes embedded into everyday care in an ongoing way.

References

1. Cochrane LJ, Olson CA, Murray S, Dupuis M, Tooman T, Hayes S. Gaps between knowing and doing: understanding and assessing the barriers to optimal health care. Journal of Continuing Education in the Health

Professions. 2007;27(2):94-102. 2. Grol R, Grimshaw J. From best evidence to best practice: effective

implementation of change in patients' care. The Lancet.

2003;362(9391):1225-1230. 3. Straus S, Tetroe J, Graham I. Defining knowledge translation. Canadian

Medical Association Journal. 2009;181(3-4):165. 4. Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education

meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009(2):CD003030.

5. O'Brien MA, Rogers S, Jamtvedt G, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane

Database Syst Rev. 2007(4):CD000409. 6. Menon A, Korner-Bitensky N, Kastner M, McKibbon K, Straus S. Strategies

for rehabilitation professionals to move evidence-based knowledge into practice: A systematic review. Journal of Rehabilitation Medicine.

2009;41(13):1024-1032. 7. O'Connor S, Pettigrew C. The barriers perceived to prevent the successful

implementation of evidence based practice by speech and language therapists. International Journal of Language & Communication Disorders.

2009;44(6):1018-1035. 8. McCluskey A. Occupational therapists report a low level of knowledge,

skill and involvement in evidence-based practice. Australian

Occupational Therapy Journal. 2003;50(1):3-12. 9. Salbach N, Jaglal S, Korner-Bitensky N, Rappolt S, Davis D. Practitioner

and organizational barriers to evidence-based practice of physical therapists for people with stroke. Physical Therapy. 2007;87(10):1284.

10. Glasziou P, Ogrinc G, Goodman S. Can evidence-based medicine and clinical quality improvement learn from each other? BMJ quality & safety.

2011;20(Suppl 1):i13. 11. Saleh M, Korner-Bitensky N, Snider L, et al. Actual vs. best practices for

young children with cerebral palsy: A survey of paediatric occupational therapists and physical therapists in Quebec, Canada. Developmental

Neurorehabilitation. 2008;11(1):60-80. 12. Hanna SE, Russell DJ, Bartlett DJ, Kertoy M, Rosenbaum PL, Wynn K.

Measurement practices in pediatric rehabilitation: a survey of physical therapists, occupational therapists, and speech-language pathologists in Ontario. Phys Occup Ther Pediatr. 2007;27(2):25-42.

13. Straus S, Haynes R. Managing evidence-based knowledge: the need for reliable, relevant and readable resources. Canadian Medical Association

Journal. 2009;180(9):942. 14. McCluskey A, Lovarini M. Providing education on evidence-based

practice improved knowledge but did not change behaviour: a before and after study. BMC Medical Education. 2005;5(1):40.

Page 131: Knowledge translation intervention to improve evidence

References

117

15. Pennington L, Roddam H, Burton C, Russell I, Russell D. Promoting research use in speech and language therapy: a cluster randomized controlled trial to compare the clinical effectiveness and costs of two training strategies. Clinical rehabilitation. 2005;19(4):387.

16. Scott SD, Albrecht L, O'Leary K, et al. Systematic review of knowledge translation strategies in the allied health professions. Implementation

Science. 2012;7(1):70. 17. Campbell MK, Piaggio G, Elbourne DR, Altman DG. Consort 2010

statement: extension to cluster randomised trials. BMJ: British Medical

Journal. 2012;345. 18. Justice L. Evidence-based terminology. American Journal of Speech-

Language Pathology. 2008;17(4):324. 19. Jeffries L, Prior M, Kumar S. Evidence Based Practice: the three little

words in Allied Health2007. 20. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS.

Evidence based medicine: what it is and what it isn't. BMJ. Jan 13 1996;312(7023):71-72.

21. Guyatt G. Evidence-based medicine. A new approach to teaching the practice of medicine (Guyatt, G et al). JAMA. Nov 4 1992;268(17):2420-2425.

22. Sackett D, Straus S. Finding and applying evidence during clinical rounds: the" evidence cart". JAMA. 1998;280(15):1336.

23. Bates DW, Kuperman GJ, Wang S, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. Journal of the American Medical Informatics

Association. 2003;10(6):523-530. 24. Jette D, Bacon K, Batty C, et al. Evidence-based practice: beliefs, attitudes,

knowledge, and behaviors of physical therapists. Physical Therapy.

2003;83(9):786. 25. Metcalfe C, Lewin R, Wisher S, Perry S, Bannigan K, Moffett JK. Barriers to

Implementing the Evidence Base in Four NHS TherapiesFunding:: Dietitians, occupational therapists, physiotherapists, speech and language therapists. Physiotherapy. 2001;87(8):433-441.

26. Jenicek M. Evidence-based medicine: Fifteen years later. Golem the good, the bad, and the ugly in need of a review? Medical science monitor.

2006;12(11). 27. Holmes D, Murray S, Perron A, Rail G. Deconstructing the evidence-based

discourse in health sciences: truth, power and fascism. International

Journal of Evidence-Based Healthcare. 2006;4(3):180-186. 28. Biller-Andorno N, Lenk C, Leititis J. Ethics, EBM, and hospital

management. Journal of medical ethics. 2004;30(2):136. 29. Chen LM, Jha AK, Guterman S, Ridgway AB, Orav EJ, Epstein AM. Hospital

cost of care, quality of care, and readmission rates: penny wise and pound foolish? Archives of Internal Medicine. 2010;170(4):340-346.

30. Colyer H, Kamath P. Evidence based practice. A philosophical and political analysis: some matters for consideration by professional practitioners. Journal of Advanced Nursing. 1999;29(1):188-193.

31. Leeder SR, Rychetnik L. Ethics and evidence-based medicine. The Medical

Journal of Australia. 2001;175(3):161.

Page 132: Knowledge translation intervention to improve evidence

References

118

32. Tetroe J, Graham I, Foy R, et al. Health research funding agencies' support and promotion of knowledge translation: an international study. Milbank

Quarterly. 2008;86(1):125-155. 33. Reilly S, Douglas J, Oates J. Evidence-based practice and speech pathology:

future directions: London: Whurr Publishers; 2004. 34. Dollaghan C. Evidence-based practice in communication disorders: what

do we know, and when do we know it? Journal of Communication

Disorders. 2004;37(5):391-400. 35. Gira E, Kessler M, Poertner J. Influencing social workers to use research

evidence in practice: Lessons from medicine and the allied health professions. Research on Social Work Practice. 2004;14(2):68.

36. Grimmer K, Bialocerkowski A, Kumar S, Milanese S. Implementing evidence in clinical practice: the 'therapies' dilemma. Physiotherapy.

2004;90(4):189-194. 37. Wilkinson SA, Hinchliffe F, Hough J, Chang A. Baseline Evidence-based

Practice Use, Knowledge, and Attitudes of Allied Health Professionals: A Survey to Inform Staff Training and Organisational Change. Journal of

Allied Health. 2012;41(4):177-184. 38. Forsyth K, Mann S, Kielhofner G. Scholarship of practice: Making

occupation-focused, theory-driven, evidence-based practice a reality. The

British Journal of Occupational Therapy. 2005;68(6):260-268. 39. Reilly S. Making speech pathology practice evidence based: A response to

Beecham, Elliot, Enderby, Logemann and Vallino-Napoli. International

Journal of Speech-Language Pathology. 2004;6(2):138-140. 40. Reilly S. The challenges in making speech pathology practice evidence

based. International Journal of Speech-Language Pathology.

2004;6(2):113-124. 41. Davis D. Continuing education, guideline implementation, and the

emerging transdisciplinary field of knowledge translation. Journal of

Continuing Education in the Health Professions. 2006;26(1):5-12. 42. Ketelaar M, Russell D, Gorter J. The challenge of moving evidence-based

measures into clinical practice: lessons in knowledge translation. Physical & Occupational Therapy In Pediatrics. 2008;28(2):191-206.

43. Graham ID, Tetroe J. Some theoretical underpinnings of knowledge translation. Academic Emergency Medicine. 2007;14(11):936-941.

44. Mularski R, Asch S, Shrank W, et al. The Quality of Obstructive Lung Disease Care for Adults in the United States as Measured by Adherence to Recommended Processes*. Chest. 2006;130(6):1844.

45. Abraham NS, El-Serag HB, Johnson ML, et al. National adherence to evidence-based guidelines for the prescription of nonsteroidal anti-inflammatory drugs. Gastroenterology. 2005;129(4):1171-1178.

46. Davis D, Evans M, Jadad A, et al. The case for knowledge translation: shortening the journey from evidence to effect. BMJ. Jul 5 2003;327(7405):33-35.

47. Eisenberg M, Califf R, Cohen E, Adelman A, Mark D, Topol E. Use of evidence-based medical therapy in patients undergoing percutaneous coronary revascularization in the United States, Europe, and Canada. The

American journal of cardiology. 1997;79(7):867-872.


48. Vallino-Napoli L. A move by Speech Pathologists towards evidence-based practice: A commentary on Reilly. International Journal of Speech-

Language Pathology. 2004;6(2):136-137. 49. Stevenson K, Lewis M, Hay E. Do physiotherapists' attitudes towards

evidence-based practice change as a result of an evidence-based educational programme? Journal of Evaluation in Clinical practice.

2004;10(2):207-217. 50. Dysart A, Tomlin G. Factors related to evidence-based practice among US

occupational therapy clinicians. The American Journal of Occupational

Therapy. 2002;56(3):275-284. 51. Graham ID, Logan J, Harrison M, et al. Lost in knowledge translation: Time

for a map? Journal of Continuing Education in the Health Professions.

2006;26(1):13-24. 52. Estabrooks C, Thompson D, Lovely J, Hofmeyer A. A guide to knowledge

translation theory. Journal of Continuing Education in the Health

Professions. 2006;26(1):25-36. 53. Grol RP, Bosch MC, Hulscher ME, Eccles MP, Wensing M. Planning and

studying improvement in patient care: the use of theoretical perspectives. Milbank Quarterly. 2007;85(1):93-138.

54. Grol R. Personal paper. Beliefs and evidence in changing clinical practice. BMJ: British Medical Journal. 1997;315(7105):418.

55. Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci. 2010;5:14.

56. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. American

Journal of Public Health. 1999;89(9):1322. 57. Glisson C, Schoenwald SK. The ARC organizational and community

intervention strategy for implementing evidence-based children's mental health treatments. Mental Health Services Research. 2005;7(4):243-259.

58. Frambach RT, Schillewaert N. Organizational innovation adoption: a multi-level framework of determinants and opportunities for future research. Journal of Business Research. 2002;55(2):163-176.

59. Mendel P, Meredith LS, Schoenbaum M, Sherbourne CD, Wells KB. Interventions in organizational and community context: a framework for building evidence on dissemination and implementation in health services research. Administration and Policy in Mental Health and Mental

Health Services Research. 2008;35(1):21-37. 60. Greenhalgh T. Intuition and evidence--uneasy bedfellows? The British

Journal of General Practice. 2002;52(478):395. 61. Aarons G, Hurlburt M, Horwitz S. Advancing a conceptual model of

evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services

Research. 2010:1-20. 62. Kitson A, Rycroft-Malone J, Harvey G, McCormack B, Seers K, Titchen A.

Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implementation Science. 2008;3(1):1.


63. Logan J, Graham I. The Ottawa Model of Research Use. Models and

Frameworks for Implementing Evidence-based Practice: Linking Evidence

to Action. 2010:83. 64. Graham ID, Logan J. Innovations in knowledge transfer and continuity of

care. The Canadian journal of nursing research= Revue canadienne de

recherche en sciences infirmières. 2004;36(2):89. 65. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge

translation of research findings. Implementation Science. 2012;7(1):50. 66. Dizon JM, Grimmer-Somers KA, Kumar S. Current evidence on evidence-

based practice training in allied health: a systematic review of the literature. Int J Evid Based Healthc. Dec 2012;10(4):347-360.

67. McKinlay RJ, Cotoi C, Wilczynski NL, Haynes RB. Systematic reviews and original articles differ in relevance, novelty, and use in an evidence-based service for physicians: PLUS project. Journal of clinical epidemiology.

2008;61(5):449-454. 68. Badgett R. Why would physicians undervalue reviews by the Cochrane

Collaboration? Journal of clinical epidemiology. 2008;61(5):419-421. 69. Tricco AC, Straus SE, Moher D. How can we improve the interpretation of

systematic reviews? BMC medicine. 2011;9(1):31. 70. Lai NM, Teng CL, Lee ML. Interpreting systematic reviews: are we ready

to make our own conclusions? A cross-sectional study. BMC medicine.

2011;9(1):30. 71. Hider PN, Griffin G, Walker M, Coughlan E. The information-seeking

behavior of clinical staff in a large health care organization. Journal of the

Medical Library Association: JMLA. 2009;97(1):47. 72. Coumou HCH, Meijman FJ. How do primary care physicians seek answers

to clinical questions? A literature review. Journal of the Medical Library

Association. 2006;94(1):55. 73. Grol R, Wensing M. What drives change? Barriers to and incentives for

achieving evidence-based practice. The Medical Journal of Australia.

2004;180(6 Suppl):S57. 74. Albanese MA, Mitchell S. Problem-based learning: A review of literature

on its outcomes and implementation issues. Academic Medicine. 1993. 75. Botti M, Reeve R. Role of knowledge and ability in student nurses' clinical

decision making. Nursing & health sciences. 2003;5(1):39-49. 76. Norman G. Research in medical education: three decades of progress.

BMJ. 2002;324(7353):1560. 77. Nickerson RS. Confirmation bias: A ubiquitous phenomenon in many

guises. Review of General Psychology. 1998;2(2):175. 78. Kok G, De Vries H, Mudde AN, Strecher VJ. Planned health education and

the role of self-efficacy: Dutch research. Health education research.

1991;6(2):231. 79. Petty RE, Wegener DT. Attitude change: Multiple roles for persuasion

variables. Vol 1; 1998. 80. Rosenstock IM, Strecher VJ, Becker MH. Social learning theory and the

health belief model. Health Education & Behavior. 1988;15(2):175. 81. Doumit G, Gattellari M, Grimshaw J, O'Brien M. Local opinion leaders:

effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews. 2007(1).


82. Rogers E. A prospective and retrospective look at the diffusion model. Journal of Health Communication. 2004;9:13-19.

83. Rogers EM. Diffusion of innovations: Free Pr; 1995. 84. Gladwell M. The tipping point. The New Yorker. 1996;72(14):32-36. 85. Mintzberg H. Organisational structures. Upper Saddle River, NJ: Prentice-Hall; 1996. www.questia.com. Accessed December 15, 2011. 86. Donaldson L. Management for doctors: conflict, power, negotiation. BMJ.

1995;310(6972):104-107. 87. Ferlie EB, Shortell SM. Improving the quality of health care in the United

Kingdom and the United States: a framework for change. Milbank

Quarterly. 2001;79(2):281-315. 88. Örtenblad A. A typology of the idea of learning organization. Management

learning. 2002;33(2):213. 89. Aarons G, Sawitzky A. Organizational climate partially mediates the effect

of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health and Mental Health Services

Research. 2006;33(3):289-301. 90. Flodgren G, Eccles M, Shepperd S, Scott A, Parmelli E, Beyer F. An

overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane database of systematic reviews (Online). 2011;7:CD009255.

91. Grol R, Wensing M, Eccles M. Improving patient care: the implementation

of change in clinical practice: Elsevier Butterworth Heinemann Edinburgh; 2005.

92. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. Oct 20 1999;282(15):1458-1465.

93. Heiwe S, Kajermo KN, Tyni-Lenné R, et al. Evidence-based practice: attitudes, knowledge and behaviour among allied health care professionals. International Journal for Quality in Health Care. 2011.

94. Lizarondo L, Grimmer-Somers K, Kumar S. A systematic review of the individual determinants of research evidence use in allied health. Journal

of Multidisciplinary Healthcare. 2011;4:261-272. 95. Lyons C, Brown T, Tseng MH, Casey J, McDonald R. Evidence based

practice and research utilisation: Perceived research knowledge, attitudes, practices and barriers among Australian paediatric occupational therapists. Australian Occupational Therapy Journal.

96. Vallino-Napoli L, Reilly S. Evidence-based health care: A survey of speech pathology practice. International Journal of Speech-Language Pathology.

2004;6(2):107-112. 97. Adeodu A, Agius R, Madan I. Attitudes and barriers to evidence-based

guidelines among UK occupational physicians. Occupational medicine.

2009. 98. Hakkennes S, Dodd K. Guideline implementation in allied health

professions: a systematic review of the literature. Quality and Safety in

Health Care. 2008;17(4):296. 99. Bennett S, Tooth L, McKenna K, et al. Perceptions of evidence-based

practice: A survey of Australian occupational therapists. Australian

Occupational Therapy Journal. 2003;50(1):13-22.


100. McColl A, Smith H, White P, Field J. General practitioners' perceptions of the route to evidence based medicine: a questionnaire survey. British

Medical Journal. 1998;316(7128):361. 101. Young J, Ward J. Evidence based medicine in general practice: beliefs and

barriers among Australian GPs. Journal of Evaluation in Clinical practice.

2001;7(2):201-210. 102. Novak I, McIntyre S. Education with workplace supports improves

practitioners' evidence-based practice knowledge and implementation behaviours. Australian Occupational Therapy Journal. 2010.

103. Peach H. Should Australia’s hospitals be reviewing the use of research in patient care by nurses, managers and allied health professionals?: a systematic review of recent evidence. Australian Health Review.

2003;26(2):49–62. 104. van Dijk N, Hooft L, Wieringa-de Waard M. What Are the Barriers to

Residents' Practicing Evidence-Based Medicine? A Systematic Review. Academic Medicine. 2010;85(7):1163.

105. Detering KM, Hancock AD, Reade MC, Silvester W. The impact of advance care planning on end of life care in elderly patients: randomised controlled trial. BMJ: British Medical Journal. 2010;340.

106. Upton D, Upton P. Knowledge and use of evidence-based practice by allied health and health science professionals in the United Kingdom. J Allied Health. Fall 2006;35(3):127-133.

107. Estabrooks CA, Floyd JA, Scott Findlay S, O'Leary KA, Gushta M. Individual determinants of research utilization: a systematic review. Journal of Advanced Nursing. 2003;43(5):506-520.

108. McEvoy M, Williams M, Olds T. Evidence based practice profiles: Differences among allied health professions. BMC Medical Education.

2010;10(1):69. 109. Ortega Egea JM, González MVR, Menéndez MR. eHealth usage patterns of

European general practitioners: A five-year (2002-2007) comparative study. International Journal of Medical Informatics. 2010;79(8):539-553.

110. Masters K. For what purpose and reasons do doctors use the Internet: A systematic review. International Journal of Medical Informatics.

2008;77(1):4-16. 111. Hyland P, Harvie C, Macgregor R. Do organisational characteristics

explain the differences between drivers of ICT adoption in rural and urban general practices in Australia. Australasian Journal of Information

Systems. 2009;16(1). 112. Taylor R, Lee H. Occupational therapists' perception of usage of

information and communication technology (ICT) in Western Australia and the association of availability of ICT on recruitment and retention of therapists working in rural areas. Australian Occupational Therapy

Journal. 2005;52(1):51-56. 113. Gosling A, Westbrook J. Allied health professionals' use of online

evidence: a survey of 790 staff working in the Australian public hospital system. International Journal of Medical Informatics. 2004;73(4):391-401.


114. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: How will we ever keep up? PLoS medicine.

2010;7(9):e1000326. 115. Candy P. Preventing “information overdose”: Developing information-

literate practitioners. Journal of Continuing Education in the Health

Professions. 2000;20(4):228-237. 116. Druss BG, Marcus SC. Growth and decentralization of the medical

literature: implications for evidence-based medicine. Journal of the

Medical Library Association. 2005;93(4):499. 117. Sherrington C, Moseley A, Herbert R, Elkins M, Maher C. Ten years of

evidence to guide physiotherapy interventions: Physiotherapy Evidence Database (PEDro). British Journal of Sports Medicine. 2010;44(12):836.

118. Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the Impact of Systematic Reviews in Health Care Decision Making: A Systematic Scoping Review of Knowledge Translation Resources. Milbank Quarterly. 2011;89(1):131-156.

119. Gauld R, Williams S. Use of the internet for health information: a study of Australians and New Zealanders. Informatics for Health & Social Care.

2009;34(3):149-158. 120. Hanif F, Read JC, Goodacre JA, Chaudhry A, Gibbs P. The role of quality

tools in assessing reliability of the internet for health information. Informatics for Health and Social Care. 2009;34(4):231-243.

121. Romana HW. Is evidence-based medicine patient-centered and is patient-centered care evidence-based? Health services research.

2006;41(1):1. 122. Aarons G. Measuring Provider Attitudes Toward Evidence-Based

Practice: Consideration of Organizational Context and Individual Differences* 1. Child and adolescent psychiatric clinics of North America.

2005;14(2):255-271. 123. Baker R, Camosso-Stefinovic J, Gillies C, et al. Tailored interventions to

overcome identified barriers to change: effects on professional practice and health care outcomes. Cochrane Database Syst Rev.

2010(3):CD005470. 124. Aarons G. Mental health provider attitudes toward adoption of evidence-

based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research. 2004;6(2):61-74.

125. Grimshaw J, Eccles M, Lavis J, Hill S, Squires J. Knowledge translation of research findings. Implement Sci. 2012;7(1):50.

126. McGowan J, Grad R, Pluye P, et al. Electronic retrieval of health information by healthcare providers to improve practice and patient care. Cochrane Database of Systematic Reviews. 2009(3). 127. Jousimaa J, Makela M, Kunnamo I, MacLennan G, Grimshaw J. Primary

care guidelines on consultation practices: the effectiveness of computerized versus paper-based versions. International Journal of

Technology Assessment in Health Care. 2002;18(03):586-596. 128. Bero L, Grilli R, Grimshaw J, Harvey E, Oxman A, Thomson M. Closing the

gap between research and practice: an overview of systematic reviews of


interventions to promote implementation of research findings by health care professionals. British Medical Journal. 1998;317.

129. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Medical care.

2001;39(8). 130. Farmer AP, Legare F, Turcot L, et al. Printed educational materials:

effects on professional practice and health care outcomes. Cochrane

Database Syst Rev. 2008(3):CD004398. 131. Flodgren G, Parmelli E, Doumit G, et al. Local opinion leaders: effects on

professional practice and health care outcomes. The Cochrane Library.

2011. 132. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD. Audit

and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2006(2):CD000259.

133. Linzer M, DeLong ER, Hupart KH. A comparison of two formats for teaching critical reading skills in a medical journal club. Journal of

medical education. 1987. 134. Harris J, Kearley K, Heneghan C, et al. Are journal clubs effective in

supporting evidence-based decision making? A systematic review. BEME Guide No. 16. Medical Teacher. 2011;33(1):9-23.

135. Parmelli E, Flodgren G, Beyer F, Baillie N, Schaafsma ME, Eccles MP. The effectiveness of strategies to change organisational culture to improve healthcare performance: a systematic review. Implement Sci. 2011;6:33.

136. Foxcroft D, Cole N. Organisational infrastructures to promote evidence based nursing practice. Cochrane Database of Systematic Reviews. 2000(1). 137. Sudsawad P. Knowledge translation: introduction to models, strategies

and measures. Austin, TX: Southwest Educational Development

Laboratory, National Center for the Dissemination of Disability Research.

2007. Retrieved April 14, 2008. 138. Grimshaw JM, Thomas R, MacLennan G, et al. Effectiveness and efficiency

of guideline dissemination and implementation strategies. Health

Technol Assess. 2004;8(6):1-72. 139. Grimshaw J, Eccles M, Tetroe J. Implementing clinical guidelines: current

evidence and future implications. Journal of Continuing Education in the

Health Professions. 2004;24(S1):S31-S37. 140. Flores-Mateo G, Argimon JM. Evidence based practice in postgraduate

healthcare education: a systematic review. BMC Health Services Research.

2007;7(1):119. 141. Francke A, Smit M, de Veer A, Mistiaen P. Factors influencing the

implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Medical Informatics and Decision Making.

2008;8(1):38. 142. Giguère A, Légaré F, Grimshaw J, et al. Printed educational materials:

effects on professional practice and healthcare outcomes. The Cochrane

Library. 2012. 143. Cheater F, Baker R, Gillies C, et al. Tailored interventions to overcome

identified barriers to change: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2005;3.


144. Arditi C, Rège-Walther M, Wyatt J, Durieux P, Burnand B. Computer-generated reminders delivered on paper to healthcare professionals; effects on professional practice and health care outcomes. Cochrane

Database Syst Rev. 2012 Dec. 145. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J.

The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev. 2009(3):CD001096.

146. Thompson D, Estabrooks C, Scott-Findlay S, Moore K, Wallin L. Interventions aimed at increasing research use in nursing: a systematic review. Implementation Science. 2007;2(1):15.

147. Bekkering G, Hendriks H, Van Tulder M, et al. Effect on the process of care of an active strategy to implement clinical guidelines on physiotherapy for low back pain: a cluster randomised controlled trial. British Medical Journal. 2005;14(2):107.

148. Hoeijenbos M, Bekkering T, Lamers L, Hendriks E, Van Tulder M, Koopmanschap M. Cost-effectiveness of an active implementation strategy for the Dutch physiotherapy guideline for low back pain. Health

Policy. 2005;75(1):85-98. 149. Rebbeck T, Maher C, Refshauge K. Evaluating two implementation

strategies for whiplash guidelines in physiotherapy: A cluster-randomised trial. Australian Journal of Physiotherapy. 2006;52(3):165.

150. Stevenson K, Lewis M, Hay E. Does physiotherapy management of low back pain change as a result of an evidence-based educational programme? Journal of Evaluation in Clinical practice. 2006;12(3):365-375.

151. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA: The Journal of the American Medical Association.

2002;287(2):226. 152. Shaneyfelt T, Baum K, Bell D, et al. Instruments for evaluating education

in evidence-based practice: a systematic review. JAMA.

2006;296(9):1116. 153. Kirkpatrick D. Evaluating training programs: The four levels: Berrett-

Koehler; 1998. 154. Hakkennes S, Green S. Measures for assessing practice change in medical

practitioners. Implement Sci. 2006;1:29. 155. Straus SE, Ball C, Balcombe N, Sheldon J, McAlister FA. Teaching Evidence

based Medicine Skills Can Change Practice in a Community Hospital. Journal of General Internal Medicine. 2005;20(4):340-343.

156. Lucas BP, Evans AT, Reilly BM, et al. The impact of evidence on physicians’ inpatient treatment decisions. Journal of General Internal

Medicine. 2004;19(5p1):402-409. 157. Ilic D. Assessing competency in Evidence Based Practice: strengths and

limitations of current tools in practice. BMC Medical Education.

2009;9(1):53. 158. Becker H, Stuifbergen A, Rogers S, Timmerman G. Goal attainment scaling

to measure individual change in intervention studies. Nursing Research.

2000;49(3):176.


159. Bravo G, Dubois MF, Roy PM. Improving the quality of residential care using goal attainment scaling. Journal of the American Medical Directors

Association. 2006;7(3):S30-S37. 160. Ottenbacher K, Cusick A. Goal attainment scaling as a method of clinical

service evaluation. The American journal of occupational therapy: official

publication of the American Occupational Therapy Association.

1990;44(6):519. 161. Tennant S, Field R. Continuing professional development: does it make a

difference? Nursing in Critical Care. 2004;9(4):167-172. 162. Fleck E, Fyffe T. Changing nursing practice through continuing education:

a tool for evaluation. Journal of Nursing Management. 1997;5(1):37-41. 163. Cusick A, Ottenbacher K. Goal attainment scaling: Continuing education

evaluation tool. Journal of Continuing Education in the Health Professions.

1994;14(3):141-154. 164. Dobbins M, Hanna S, Ciliska D, et al. A randomized controlled trial

evaluating the impact of knowledge translation and exchange strategies. Implementation Science. 2009;4(1):61.

165. Curran JA, Grimshaw JM, Hayden JA, Campbell B. Knowledge translation research: The science of moving research into policy and practice. Journal of Continuing Education in the Health Professions.

2011;31(3):174-180. 166. Hrisos S, Eccles M, Francis J, et al. Are there valid proxy measures of

clinical behaviour? a systematic review. Implementation Science.

2009;4(1):37. 167. Hrisos S, Eccles MP, Francis JJ, et al. Are there valid proxy measures of

clinical behaviour? a systematic review. Implementation Science. 2009;4:37. 168. Ramos KD, Schafer S, Tracz SM. Validation of the Fresno test of

competence in evidence based medicine. BMJ. 2003;326(7384):319. 169. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer H, Kunz R. Do short

courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ. 2002;325(7376):1338.

170. Aarons G, Glisson C, Hoagwood K, Kelleher K, Landsverk J, Cafri G. Psychometric Properties and US National Norms of the Evidence-Based Practice Attitude Scale (EBPAS). Psychological Assessment.

2010;22(2):356-365. 171. Stahmer AC, Aarons GA. Attitudes toward adoption of evidence-based

practices: A comparison of autism early intervention providers and children’s mental health providers. Psychological Services;

Psychological Services. 2009;6(3):223. 172. Gülmezoglu A, Langer A, Piaggio G, Lumbiganon P, Villar J, Grimshaw J.

Cluster randomised trial of an active, multifaceted educational intervention based on the WHO Reproductive Health Library to improve obstetric practices. BJOG: An International Journal of Obstetrics &

Gynaecology. 2007;114(1):16-23. 173. Thomson O, Freemantle N, Oxman A, Wolf F, Davis D, Herrin J. Continuing

education meetings and workshops: effects on professional practice and health care outcomes. Cochrane database of systematic reviews (Online).

2001(2):CD003030.


174. Altman D, Bland J. Statistics notes: Treatment allocation in controlled trials: why randomise? British Medical Journal. 1999;318(7192):1209.

175. Davis D, Bordage G, Moores LK, et al. The science of continuing medical education: terms, tools, and gaps: effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest. Mar 2009;135(3 Suppl):8S-16S.

176. Jadad A, Britton A, McKee M, et al. Randomised controlled trials: a user’s guide. Health Technology Assessment. 1998;2(13):214.

177. Roland M, Torgerson DJ. Understanding controlled trials: What are pragmatic trials? BMJ. 1998;316(7127):285.

178. Torgerson CJ, Torgerson DJ. The need for randomised controlled trials in educational research. British Journal of Educational Studies.

2001;49(3):316-328. 179. Pocock SJ. Clinical Trials: A Practical Approach: John Wiley & Sons Ltd.;

1983. 180. Novak I, Mcintyre S, Morgan C, et al. A systematic review of interventions

for children with cerebral palsy: state of the evidence. Developmental

Medicine & Child Neurology. 2013. 181. Mazmanian P, Davis D, Galbraith R. Continuing medical education effect

on clinical outcomes. Chest. 2009;135(3 suppl):49S. 182. Cordingley P. Research and evidence-informed practice: Focusing on

practice and practitioners. Cambridge Journal of Education.

2008;38(1):37-52. 183. Coomarasamy A, Khan K. What is the evidence that postgraduate

teaching in evidence based medicine changes anything? A systematic review. BMJ. 2004;329(7473):1017.

184. Cordingley P, Bell M, Rundell B, Evans D. The impact of collaborative continuing professional development (CPD) on classroom teaching and learning. Review: How do collaborative and sustained CPD and sustained

but not collaborative CPD affect teaching and learning. 2005. 185. Russell D, Rivard L, Walter S, et al. Using knowledge brokers to facilitate

the uptake of pediatric measurement tools into clinical practice: A before-after intervention study. Implementation Science. 2010;5(1):92.

186. Russell DJ, Rivard LM, Walter SD, et al. Using knowledge brokers to facilitate the uptake of pediatric measurement tools into clinical practice: a before-after intervention study. Implementation Science. 2010;5(1):92.

187. Fellowes D, Wilkinson S, Moore P. Communication skills training for health care professionals working with cancer patients, their families and/or carers. Cochrane Database Syst Rev. 2004(2):CD003751.

188. Gysels M, Richardson A, Higginson IJ. Communication training for health professionals who care for patients with cancer: a systematic review of training methods. Supportive care in cancer. 2005;13(6):356-366.

189. Kiresuk TJ, Sherman RE. Goal attainment scaling: A general method for evaluating comprehensive community mental health programs. Community mental health journal. 1968;4(6):443-453.

190. Cardillo JE, Choate RO. Illustrations of goal setting. In: Kiresuk SC, ed. Goal attainment scaling: Applications, theory, and measurement London: Erlbaum1994:15-60.


191. Squires JE, Estabrooks CA, O'Rourke HM, Gustavsson P, Newburn-Cook CV, Wallin L. A systematic review of the psychometric properties of self-report research utilization measures used in healthcare. Implementation

Science. 2011;6(1):83. 192. Walczak J, Kaleta A, Gabrys E, et al. How are "teaching the teachers"

courses in evidence based medicine evaluated? A systematic review. BMC

Medical Education. 2010;10(1):64. 193. Calsyn RJ, Davidson WS. Do we really want a program evaluation

strategy based solely on individualized goals? Community mental health

journal. 1978;14(4):300-308. 194. Lewis AB, Spencer JH, Haas GL, DiVITTIS A. Goal Attainment Scaling:

Relevance and replicability in follow-up of inpatients. Journal of Nervous

and Mental Disease. 1987. 195. Donner A, Klar N, Klar NS. Design and analysis of cluster randomization

trials in health research: Arnold London; 2000. 196. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a

systematic review of 102 trials of interventions to improve professional practice. CMAJ. Nov 15 1995;153(10):1423-1431.

197. Glanz K, Rimer BK, Viswanath K. Health behavior and health education:

Theory, research, and practice: Jossey-Bass; 2008. 198. Survey Monkey. http://www.surveymonkey.com. Accessed September

23, 2011. 199. Robson C. Real world research: a resource for social scientists and

practitioner-researchers. Malden: Blackwell Publishing. 2002. 200. Domholdt E. Physical therapy research: principles and applications:

Saunders Philadelphia; 2000. 201. Kelley K, Clark B, Brown V, Sitzia J. Good practice in the conduct and

reporting of survey research. International Journal for Quality in Health

Care. 2003;15(3):261-266. 202. Marra RM, Bogue B. A critical assessment of online survey tools. 2006. 203. Haynes R, Holland J, Cotoi C, et al. McMaster PLUS: A cluster randomized

clinical trial of an intervention to accelerate clinical use of evidence-based information from digital libraries. Journal of the American Medical

Informatics Association. 2006;13(6):593-600. 204. Burgess C, Nicholas J, Gulliford M. Impact of an electronic, computer-

delivered questionnaire, with or without postal reminders, on survey response rate in primary care. Journal of Epidemiology and Community

Health. 2012. 205. VanGeest JB, Johnson TP, Welch VL. Methodologies for improving

response rates in surveys of physicians. Evaluation & the Health

Professions. 2007;30(4):303-321. 206. Anthony S, Sung-Hee J, Catherine J, John H, Guyonne K, Julia W. A

randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Medical Research Methodology.11.

207. Cook JV, Dickinson HO, Eccles MP. Response rates in postal surveys of healthcare professionals between 1996 and 2005: an observational study. BMC Health Serv Res. 2009;9:160.


208. Sheskin D. Handbook of parametric and nonparametric statistical

procedures: CRC PressI Llc; 2004. 209. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of

innovations in service organizations: systematic review and recommendations. Milbank Quarterly. 2004;82(4):581-629.

210. French B, Thomas L, Baker P, Burton C, Pennington L, Roddam H. What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organisational context. Implementation Science. 2009;4(1):28.

211. Bowling A. Data collection methods in quantitative research: questionnaires, interviews and their response rates. Research methods in

health: Investigating health and health services. 1997:257-272. 212. Dizon JM, Grimmer-Somers K. Complex interventions required to

comprehensively educate allied health practitioners on evidence-based practice. Advances in Medical Education and Practice 2011;2:105–108.

213. Baum KD. The Impact of an Evidence-Based Medicine Workshop on Resident's Attitudes towards and Self-Reported Ability in Evidence-Based Practice. Medical Education Online. 2003;8(4).

214. Shuval K, Berkovits E, Netzer D, et al. Evaluating the impact of an evidence-based medicine educational intervention on primary care doctors' attitudes, knowledge and clinical behaviour: a controlled trial and before and after study. Journal of Evaluation in Clinical practice.

2007;13(4):581-598. 215. Graham ID, Bick D, Tetroe J, Straus SE, Harrison MB. Measuring outcomes

of evidence-based practice: Distinguishing between knowledge use and its

impact. Vol 1; 2010. 216. Aarons G. Transformational and transactional leadership: Association

with attitudes toward evidence-based practice. Psychiatric Services.

2006;57(8):1162. 217. Steenbeek D, Gorter JW, Ketelaar M, Galama K, Lindeman E.

Responsiveness of Goal Attainment Scaling in comparison to two standardized measures in outcome evaluation of children with cerebral palsy. Clinical rehabilitation. 2011;25(12):1128-1139.

218. Dickinson HO, Hrisos S, Eccles MP, Francis J, Johnston M. Statistical considerations in a systematic review of proxy measures of clinical behaviour. Implement Sci. 2010;5:20.

219. Eccles M, Hrisos S, Francis J, et al. Do self- reported intentions predict clinicians' behaviour: a systematic review. Implementation Science.

2006;1(1):28. 220. Walshe K. Understanding what works--and why--in quality

improvement: the need for theory-driven evaluation. Int J Qual Health

Care. Apr 2007;19(2):57-59. 221. Davies H, Powell A, Rushmer R. Healthcare professionals’ views on

clinician engagement in quality improvement. A literature review. 2007. 222. Kothari A, Armstrong R. Community-based knowledge translation:

unexplored opportunities. Implement Sci. 2011;6(1):59. 223. Reddihough DS, Collins KJ. The epidemiology and causes of cerebral

palsy. Australian Journal of Physiotherapy. 2003;49(1):7-14.


224. Novak I, Hines M, Goldsmith S, Barclay R. Clinical prognostic messages from a systematic review on cerebral palsy. Pediatrics. Nov 2012;130(5):e1285-1312.

225. Kiresuk T, Sherman R. Goal attainment scaling: A general method for evaluating comprehensive community mental health programs. Community mental health journal. 1968;4(6):443-453.

Appendix 1 – Oxford Centre for Evidence-based Medicine

Brief description prepared by Bob Phillips.

Background

The Oxford Centre for Evidence-based Medicine (OCEBM) Levels of

Evidence and Grades of Recommendation 1999 [1] were developed in

response to a need for assessment of evidence beyond therapeutic

interventions. They are an evolution of the Canadian Task Force on the

Periodic Health Examination grading system of 1979. The development of

the Oxford Levels of Evidence was in response to the writing of a series of

guidelines for junior medical staff, the "Evidence-based On Call" project.

They cover many aspects of the medical management of patients, including

causation and diagnosis as well as therapeutic interventions.

Quality of evidence

The levels of evidence are derived from a matrix which has four axes,

corresponding to the broad type of clinical question under consideration.

These are "interventions/aetiology", "prognosis", "diagnosis" and "economic

analysis". Each of these axes is divided into 5 broad levels of evidence,

ranked from 1 (least potential bias) to 5 (most potential bias). The level

allocation is primarily dependent on study design factors (e.g. randomisation

in interventions, or independent reference standards for diagnosis). Other

factors include outcome assessment (e.g. 'minus' when a result is too

imprecise) and clinical sensibility (e.g. 'appropriate spectrum' of patients in

diagnostic tests). See http://cebm.jr2.ox.ac.uk/docs/levels.htm
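As a concrete illustration of this matrix, the sketch below shows one way the axes and their levels could be represented as a simple lookup structure. It is an illustrative Python fragment only, not part of the OCEBM materials or this thesis; the variable name and layout are assumptions, and the entries are abridged from the table reproduced later in this appendix.

    # Illustrative sketch of the OCEBM matrix: axis -> level -> study design.
    # Entries are abridged examples taken from the 1999 table in this appendix.
    OCEBM_MATRIX = {
        "therapy/prevention, aetiology/harm": {
            "1a": "SR (with homogeneity) of RCTs",
            "1b": "Individual RCT (with narrow confidence interval)",
            "4": "Case-series (and poor quality cohort and case-control studies)",
            "5": "Expert opinion without explicit critical appraisal",
        },
        "diagnosis": {
            "1a": "SR (with homogeneity) of Level 1 diagnostic studies",
            "1b": "Independent blind comparison of an appropriate spectrum of consecutive patients",
        },
        # The "prognosis" and "economic analysis" axes follow the same pattern.
    }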

Strength of recommendations

The grade of recommendation is a compression of the 10 'levels' into 4 'grades',

without any added deliberation or assessment. Level 1a to 1c studies give


grade A recommendations; 2a to 3b map to grade B; level 4 studies are grade

C and level 5 or imprecise ('minus' level) studies give a grade D

recommendation.
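A minimal sketch of this compression rule, assuming the level labels used in the table below and the "minus" convention described in the notes to that table; the function name is illustrative only, not an OCEBM artefact.

    def grade_of_recommendation(level: str) -> str:
        """Map an OCEBM 1999 level of evidence (e.g. '1a', '2b', '4', '1b-') to a grade A-D."""
        level = level.strip().lower()
        if level.endswith("-"):
            # A 'minus' (inconclusive) level can only generate a grade D recommendation.
            return "D"
        if level in {"1a", "1b", "1c"}:
            return "A"
        if level in {"2a", "2b", "2c", "3a", "3b"}:
            return "B"
        if level == "4":
            return "C"
        if level == "5":
            return "D"
        raise ValueError(f"Unrecognised OCEBM level: {level}")

    # For example, a systematic review of RCTs (level 1a) yields a grade A recommendation,
    # while an individual case-control study (level 3b) yields grade B.
    assert grade_of_recommendation("1a") == "A"
    assert grade_of_recommendation("3b") == "B"
    assert grade_of_recommendation("1b-") == "D"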

Strengths and weaknesses

The strengths of the OCEBM approach are in the detailed development of the

levels of evidence. The different axes allow for questions related to diagnosis,

aetiology and prognosis to be considered as 'evidence-based' as well as

traditionally intervention-orientated recommendations. Another strength is

in the partial incorporation of aspects of heterogeneity into the grade of

recommendation. The detailed description of the study levels, and their

objectivity, make reproducibility likely to be high. However, this detail may

introduce problems for inexperienced users. A study estimating inter-tester

reliability has been performed in the Oxford CEBM, and is under analysis

(Personal Communication: RSP).

The weakness of the OCEBM approach can be summarised as the simplistic

translation of level of evidence into grade of recommendation. No assessment

is made of the clinical importance of the outcomes under consideration.

There is no way of balancing benefits and harms, nor any assessment of the applicability of the studies. There is no clear way of compiling a body of evidence (often at different levels) into a single grade of recommendation, or of differentiating direct from indirect evidence.

Target audiences

The OCEBM levels of evidence and grades of recommendation are intended

to be used by clinicians in practice. This approach is not intended for use by

consumers or policy makers.


Guidelines made with the use of this approach

The OCEBM approach has been used most extensively by "Evidence-based

On Call" to produce 37 guidelines in general (internal) acute medicine [2,3].

This project develops guidelines which are currently focussed on the needs of the postgraduate trainee clinician. The process consists of a systematic search of

the literature, critical abstraction, explicit allocation of a level of evidence and

summary into a guideline, with each statement given a summary grade of

recommendation. All aspects of management, from initial presentation,

diagnosis, investigation, treatment and prognostication are included in the

guides.

The "Evidence-based On Call" internet system has recently been adopted by

the UK National Health Service National electronic Library of Health (NeLH)

[4]. An evaluation of user feedback and utilisation is planned.

Within the field of the project (guidelines in general acute medicine), the

homogeneity of the clinical environment and the secondary or tertiary nature

of most evidence used ironed out some of the possible problems. Using the OCEBM approach at a different level in the health care system (e.g. primary care, where different populations are cared for) or across disciplines (e.g. with physiotherapists, where different training and structures are present)

may be difficult. We are not aware of any group that has used the OCEBM

grading system outside hospital medical practice.

Studies evaluating the application of guidelines

made with this approach

Formal evaluations completed:

None to date.

Formal evaluations underway or planned:

The NeLH evaluation may include aspects of audit against selected

"Evidence-based On Call" guidelines.


Informal evaluations:

Focus groups used during the development of the 'Evidence-based On Call'

project demonstrated a desire for such information. A number of clinicians

working with the developers of the "Evidence-based On Call" guidelines

believed their practice had been altered by the information presented.

References

1. http://cebm.jr2.ox.ac.uk/docs/levels.html

2. http://www.eboncall.co.uk

3. Ball, CM & Phillips, RS [Eds.] Evidence-based On Call; Acute Medicine. Harcourt

Brace 2001

4. http://www.nelh.nhs.uk


Levels of Evidence and Grades of Recommendations – 23 November 1999

Grade A

Level 1a
• Therapy/Prevention, Aetiology/Harm: SR (with homogeneity [i]) of RCTs
• Prognosis: SR (with homogeneity [i]) of inception cohort studies; or a CPG [ii] validated on a test set
• Diagnosis: SR (with homogeneity [i]) of Level 1 diagnostic studies; or a CPG validated on a test set
• Economic analysis: SR (with homogeneity [i]) of Level 1 economic studies

Level 1b
• Therapy/Prevention, Aetiology/Harm: Individual RCT (with narrow confidence interval [iii])
• Prognosis: Individual inception cohort study with >80% follow-up
• Diagnosis: Independent blind comparison of an appropriate spectrum of consecutive patients, all of whom have undergone both the diagnostic test and the reference standard
• Economic analysis: Analysis comparing all (critically-validated) alternative outcomes against appropriate cost measurement, and including a sensitivity analysis incorporating clinically sensible variations in important variables

Level 1c
• Therapy/Prevention, Aetiology/Harm: All or none [iv]
• Prognosis: All or none case-series [v]
• Diagnosis: Absolute SpPins and SnNouts [vi]
• Economic analysis: Clearly as good or better [vii], but cheaper. Clearly as bad or worse but more expensive. Clearly better or worse at the same cost.

Grade B

Level 2a
• Therapy/Prevention, Aetiology/Harm: SR (with homogeneity [i]) of cohort studies
• Prognosis: SR (with homogeneity [i]) of either retrospective cohort studies or untreated control groups in RCTs
• Diagnosis: SR (with homogeneity [i]) of Level >2 diagnostic studies
• Economic analysis: SR (with homogeneity [i]) of Level >2 economic studies

Level 2b
• Therapy/Prevention, Aetiology/Harm: Individual cohort study (including low quality RCT; e.g., <80% follow-up)
• Prognosis: Retrospective cohort study or follow-up of untreated control patients in an RCT; or CPG not validated in a test set
• Diagnosis: Any of: independent blind or objective comparison; study performed in a set of non-consecutive patients, or confined to a narrow spectrum of study individuals (or both), all of whom have undergone both the diagnostic test and the reference standard; a diagnostic CPG not validated in a test set
• Economic analysis: Analysis comparing a limited number of alternative outcomes against appropriate cost measurement, and including a sensitivity analysis incorporating clinically sensible variations in important variables

Level 2c
• Therapy/Prevention, Aetiology/Harm: "Outcomes" research
• Prognosis: "Outcomes" research

Level 3a
• Therapy/Prevention, Aetiology/Harm: SR (with homogeneity [i]) of case-control studies

Level 3b
• Therapy/Prevention, Aetiology/Harm: Individual case-control study
• Diagnosis: Independent blind comparison of an appropriate spectrum, but the reference standard was not applied to all study patients
• Economic analysis: Analysis without accurate cost measurement, but including a sensitivity analysis incorporating clinically sensible variations in important variables

Grade C

Level 4
• Therapy/Prevention, Aetiology/Harm: Case-series (and poor quality cohort and case-control studies [viii])
• Prognosis: Case-series (and poor quality prognostic cohort studies [ix])
• Diagnosis: Any of: reference standard was unobjective, unblinded or not independent; positive and negative tests were verified using separate reference standards; study was performed in an inappropriate spectrum of patients
• Economic analysis: Analysis with no sensitivity analysis

Grade D

Level 5
• Therapy/Prevention, Aetiology/Harm: Expert opinion without explicit critical appraisal, or based on physiology, bench research or "first principles"
• Prognosis: Expert opinion without explicit critical appraisal, or based on physiology, bench research or "first principles"
• Diagnosis: Expert opinion without explicit critical appraisal, or based on physiology, bench research or "first principles"
• Economic analysis: Expert opinion without explicit critical appraisal, or based on economic theory

i. By homogeneity we mean a systematic review that is free of worrisome variations (heterogeneity) in the directions and degrees of results between individual studies. Not all systematic reviews with statistically significant heterogeneity need be worrisome, and not all worrisome heterogeneity need be statistically significant. As noted above, studies displaying worrisome heterogeneity should be tagged with a “-“ at the end of their designated level.

ii. Clinical Prediction Guide.

iii. See note #2 for advice on how to understand, rate and use trials or other studies with wide confidence intervals.

iv. Met when all patients died before the Rx became available, but some now survive on it; or when some patients died before the Rx became available, but none now die on it.

v. Met when there are no reports of anyone with this condition ever avoiding (all) or suffering from (none) a particular outcome (such as death).

vi. An “Absolute SpPin” is a diagnostic finding whose Specificity is so high that a Positive result rules-in the diagnosis. An “Absolute SnNout” is a diagnostic finding whose Sensitivity is so high that a Negative result rules-out the diagnosis.

vii. Good, better, bad, and worse refer to the comparisons between treatments in terms of their clinical risks and benefits.

viii. By poor quality cohort study we mean one that failed to clearly define comparison groups and/or failed to measure exposures and outcomes in the same (preferably blinded), objective way in both exposed and non-exposed individuals and/or failed to identify or appropriately control known confounders and/or failed to carry out a sufficiently long and complete follow-up of patients. By poor quality case-control study we mean one that failed to clearly define comparison groups and/or failed to measure exposures and outcomes in the same blinded, objective way in both cases and controls and/or failed to identify or appropriately control known confounders.

ix. By poor quality prognostic cohort study we mean one in which sampling was biased in favour of patients who already had the target outcome, or the measurement of outcomes was accomplished in <80% of study patients, or outcomes were determined in an unblinded, non-objective way, or there was no correction for confounding factors.

Notes:

1. These levels were generated in a series of iterations among members of the NHS R&D Centre for Evidence-Based Medicine (Chris Ball, Dave Sackett, Bob Phillips, Brian Haynes, and Sharon Straus).

2. Recommendations based on this approach apply to “average” patients and may need to be modified in light of an individual patient’s unique biology (risk, responsiveness, etc.) and preferences about the care they receive.

3. Users can add a minus-sign "-" to denote a level of evidence that fails to provide a conclusive answer because of:


a. EITHER a single result with a wide Confidence Interval (such that, for example, an ARR in an RCT is not statistically significant but whose confidence intervals fail to exclude clinically important benefit or harm)

b. OR an SR with troublesome (and statistically significant) heterogeneity.

c. Such evidence is inconclusive, and therefore can only generate Grade D recommendations.

Abbreviations:

SR – Systematic review

RCT – Randomised Controlled Clinical Trial

CPG – Clinical Prediction Guide

ARR – Absolute Risk Reduction

Rx – Prescription


Appendix 2 – National Ethics Application


Appendix 3 – Evidence Alert System

Home page with four main sections – assessment, intervention, prognosis/prevalence and clinical algorithms. The following screenshots will

show information within each of these sections.


Assessment index page


Assessment - Examples of types of assessments included


Assessment - Example of an assessment that an occupational therapist might use


Intervention index page (p 1/2) (listed alphabetically: A through to M) - all interventions that have been rated are listed here


Intervention index page (p 2/2) (alphabetically: M through to W) – all interventions that have been rated are listed here


Intervention - Example of an intervention (Botulinum Toxin A) (p 1/2)


Intervention - Example of an intervention (Botulinum Toxin A) (p 2/2)


Prognosis/prevalence index page


Prognosis/prevalence – example of a prognosis/prevalence page


Clinical Algorithms index page


Clinical Algorithms – example

Appendix 4 – Information Sheet for Staff Participants

Evidence Based Decision-Making & Communication Skill Study • Information Sheet for Staff Participants

What is evidence based practice?

Evidence based practice (EBP) is the use of current best research evidence in making decisions about health care. Health professionals agree that EBP is the optimal approach to providing services. EBP compels health professionals to ask important clinical questions, to obtain and interpret the findings and, most importantly, to integrate the answers into healthcare services to optimise clinical outcomes.

The benefits of adopting a systematic EBP approach to health care are multiple: (a) increasing both the effectiveness and efficiency of the services provided; (b) assisting allied health professionals to be more reflective and analytical, whilst remaining creative; (c) providing justification of the need for allied health interventions; and (d) enhancing the credibility of the professions.

Good communication between health professionals and clients/patients is essential for the delivery of high quality care (Fellowes et al, 2008) and for communicating research findings to health consumers. Research has shown that communication skills training programmes in oncology are effective for improving communication skills (Fellowes et al, 2008; Gysels et al, 2005); however, there is little to no research on this topic in the disability field.

What is the purpose of the study?

You are invited to participate in a research project about the impact of providing an evidence-based practice (EBP) library along with a one/two day workshop on clinical decision-making and outcomes of care. The training and all tasks associated with it are compulsory for Spastic Centre allied health and community links staff to attend. The research project component is voluntary and involves no extra work on top of the training; you just submit your assessment tasks to the research team to be included in the study. All information that is included in the research study is de-identified. You will assign yourself a code name and the researchers will not be able to re-identify you.

There are 3 broad aims of this study:

1. To find out whether the EBP library along with training for 3 days (2 days initially and 1 day 8 weeks later) changes the clinical decisions that the participants (allied health staff) make before/after the training.

2. To find out whether the EBP library along with training for 3 days changes client outcomes.


3. To find out whether the communication training for 3 days (2 days initially and 1 day 8 weeks later) changes the types of goals set for intervention and/or changes the messages given to families before/after the training.

This project is being conducted by The Cerebral Palsy Institute. The research team includes: Lanie Campbell, Research Assistant; Dr Iona Novak, Head of Research; Sarah McIntyre, Research Fellow; Shona Goldsmith, Research Assistant; and Elise Stumbles, Manager of Professional Development.

What will you need to do?

1. Information: First we would make sure you are fully aware of what is involved in the study and ensure that you meet the criteria to be involved.

2. Consent: We would then ask you to sign a consent form, which confirms that you have read and understood the material provided about the study and that you are willing to participate. We would also ensure that you have a signed consent form from the client/s that you plan to work with during the project.

3. Baseline Assessment: At the commencement of the training sessions, time will be set aside to complete the baseline assessments. These include: a clinical case scenario exam, a survey questionnaire, and a case study form. You will be able to use whatever resources you normally use at work to complete these types of tasks (e.g. client files, computer, books).

4. Randomisation: Your regional office will be randomised to one of 2 groups: evidence-based decision-making training or advanced communication training. You will not get a choice which group you are randomised to, but you will get to participate in both types of training. After you have finished one type of training you will proceed to the other.

5 Training Part 1

Evidence-based decision-making: You will be provided with 2 days of workshop training on how to use an EBP library to assist you with decision-making.

Advanced communication training: You will be provided with 2 days of workshop training on how to hone the communication skills necessary for delivering prognostic messages to clients and their families.

Part 2

Evidence-based decision-making: Eight weeks later, you will present a case study to your peers in the group, using PowerPoint, about how you have integrated the EBP library into your work with a client on your caseload and what happened.

Advanced communication training: Eight weeks later, you will present a case study to your peers in the group, using PowerPoint and an audio tape, about how you have integrated the communication techniques into your work with a client on your caseload and what happened.

6 Midway Assessment: After the first 2 parts of the training are complete, you will complete the mid-way assessments. These include: a clinical case scenario exam, a survey questionnaire, and a case study form.

Part 3

Advanced communication training: You will then be provided with the 2 days of workshop training on how to hone the communication skills necessary for delivering prognostic messages to clients and their families.

Evidence-based decision-making: You will then be provided with the 2 days of workshop training on how to use an EBP library to assist you with decision-making.

Part 4

Evidence-based decision-making: Another eight weeks later, you will present a case study to your peers in the group, using PowerPoint, about how you have integrated the EBP library into your work with a client on your caseload and what happened.

Advanced communication training: Another eight weeks later, you will present a case study to your peers in the group, using PowerPoint and an audio tape, about how you have integrated the communication techniques into your work with a client on your caseload and what happened.

7 Final Assessment: After the training is complete, you will complete the final assessments. These include: a clinical case scenario exam, a survey questionnaire, and a case study form.


The research team will collect all work that consenting participants have completed for analysis.


Are there benefits in participating? Both workshops are considered to be beneficial for the professional development of allied health staff at The Spastic Centre. The EBP workshop aims to equip participants with the confidence, knowledge and practical skills to find, interpret and apply the latest evidence in their daily work. The Communication Skills workshop uses case studies and problem-based learning to explore approaches to delivering prognostic messages to clients and their carers.

Are there any discomforts, side effects or risks involved with the study? There are no anticipated risks from being involved in this study. That said, in both workshops participants will be encouraged to reflect on their current therapy practice, and this may be a challenging process for some participants. Some participants may find that the information being presented is quite different from their current practice, and this may also be confronting. If you experience any distress from participating in this study, please contact another investigator, your manager or the staff helpline.

Privacy and Disclosure of Data The research team will respect all aspects of your privacy and you can be assured that your personal details will remain confidential at all times. Only the researchers will have access to information about you and the other participants and it will always be viewed in de-identified format. When the project is finished, a report about the study will be written. This report will be available for other people to read. The report will only present statistical and research findings. It will not reveal identifying information about any individual and no one will be named. All study information will be stored in locked cupboards or password protected electronic files.

Consent and Withdrawal Participation in the research component of the training examining the effectiveness of EBP intervention is entirely voluntary. We will only include your information if you sign a consent form. If, in the future, you change your mind about being involved, you can withdraw your consent to participate. You do not need to provide any reason. You may access the information collected about you at any stage, by contacting The Spastic Centre. You will be informed about your progress throughout the study and will also be provided with a copy of the study results.

This Information Sheet is for you to keep. If you have any questions or would like to know more about this project, please contact:

Lanie Campbell, Research Assistant, Cerebral Palsy Institute
Ph: 9802 4497, Email: [email protected]

Iona Novak, Head of Research, Cerebral Palsy Institute
Ph: 9802 4492, Email: [email protected]

Should you wish to talk to someone not involved in the study or make a complaint about the conduct of the research project, please contact: Human Research Ethics Committee The Spastic Centre Telephone: 9479 7200 Email: [email protected]

Appendix 5 – Self-Evaluation Form

SERVICE DELIVERY AND DECISION MAKING and ADVANCED COMMUNICATION AND COACHING EVALUATION FORM – SELF RATING

PART 1: Participant Information (8 questions)
PART 2: Self-Ratings of Communication, Coaching, Goal-Setting, Evidence Based Practice & Outcome Measurement Competencies (25 questions)
PART 3: Evidence Based Practice & Outcome Measurement Competencies (6 open-ended questions)
PART 4: Evidence-Based Practice Attitude Scale Items (8 questions)


PART 1: Participant Information

1. Profession

❍ Conductor ❍ Early Educator ❍ OT ❍ PT ❍ Psych ❍ SP ❍ SW ❍ Welfare ❍ Other (please specify) _________________________________

2. I am employed at The Spastic Centre as… (e.g. Speech Pathologist, Family Support Worker)

3. Employment I have been working at The Spastic Centre for… ______ year/s

4. Grade Level I am employed as a …

❍ Level 1 ❍ Level 2 ❍ Level 3 ❍ Manager (PM, RM) ❍ Other or N/A

5. Clinical experience in the disability field Including my time at The Spastic Centre I have been working with people with disabilities for…

______ year/s

6. Previous continuing education I have attended evidence based practice training before.

❍ Yes ❍ No

7. Previous continuing education I have attended communication skills training before.

❍ Yes ❍ No

* The same codename that you chose the first time you completed this form.


8. Language English is my first language.

❍ Yes ❍ No


PART 2: Self-Ratings of Communication, Coaching, Goal-Setting, Evidence Based Practice & Outcome Measurement Competencies

INSTRUCTIONS: Select the answer that most accurately reflects your practice today. If you do not know what an abbreviation or term means, tick ‘never’. Each item below is rated on the same scale: ❍ Never ❍ 1-5% of the time ❍ 5-24% of the time ❍ 25-49% of the time ❍ 50-74% of the time ❍ 74-99% of the time ❍ Always

1. I develop and document measurable goals with families/clients
2. I explore the feelings of families/clients during conversations
3. I conduct and document COPM interviews with families/clients to assist with service planning
4. I explore and express understanding to families/clients when strong emotions are present
5. I construct and document GAS scales to describe the expected outcome from intervention for families/clients
6. I undertake “difficult conversations” with families/clients rather than avoid the topic
7. I score and document my client’s COPM and GAS measures and use this information for planning
8. I name emotions that families/clients are experiencing during conversations


9. I determine and document my client’s GMFCS or MACS level to help inform decision-making
10. I ask families/clients if they have access to personal support when I detect anxiety, or depression, or distress
11. I ask parents/clients to consent to joining the CP register and notify them to the register
12. I confirm that families/clients understood what I meant, even when the topic is difficult
13. I communicate news or facts to families/clients, to help them develop realistic expectations from intervention
14. I use empathetic and supportive statements in response to emotion
15. I identify if a goal (in my speciality) is realistic based on assessment information and prognostic evidence


16. I ask open-ended questions to elicit more information
17. I reword goals with families/clients to be realistic, if they set goals that are unrealistic
18. I draw solutions out of families/clients rather than directing them to answers
19. I check what interventions (in my speciality) have higher levels of supporting evidence, using e.g. databases, CATs
20. I listen, reflect and give feedback for the greater part of conversations
21. I select interventions with the highest levels of evidence that match the goals identified by my families/clients using a systematic EBP approach, e.g. CATs, PICO searches
22. I prepare for conversations that I anticipate will be difficult prior to the meeting
23. I communicate the outcomes of intervention to families/clients using outcome measures, even when goals aren’t achieved


24. I name the issue when my viewpoint and the family’s/client’s viewpoint conflict
25. I summarise and check that the client understands the information I have shared


PART 3: Evidence Based Practice & Outcome Measurement Competencies

1. Name up to two valid, reliable, sensitive-to-change outcome measures that could be used with a client with cerebral palsy.

2. Choose 3 interventions from the list (attached) and state the level of research evidence according to the STOP, MEASURE, GO system (attached).

Intervention | Stop/Measure or Go?

3. A client is referred who wants to improve his walking, especially at school. He walks independently but falls quite a lot. He also is being bullied at school but is too frightened to tell anyone. He wonders if his poor articulation might have something to do with why he is bullied. He wants the bullying to stop but is not sure how to make it happen. Write one hypothetical goal that you could set for this client.

4. A client is referred who has a GMFCS level of 5. He is 5 years old. What key messages would you be telling his parents regarding expectations for his future? OR: an existing adult client stops being able to walk due to pain and wants to use a wheelchair. What key messages would you be telling them regarding this decision?

5. What types of studies/articles are considered to be high evidence?

6. Name 2 interventions for people with cerebral palsy that have high level evidence supporting their effectiveness.


Part 4: Evidence-Based Practice Attitude Scale Items

INSTRUCTIONS: Select the answer that most accurately reflects your attitude today. Each item below is rated on the same scale: ❍ Not at All ❍ To a Slight Extent ❍ To a Moderate Extent ❍ To a Great Extent ❍ To a Very Great Extent

NOTE: Manualized therapy, treatment, or intervention refers to any intervention that has specific guidelines and/or components that are outlined in a manual and/or that are to be followed in a structured or predetermined way.

I like to use new types of therapy/interventions to help my clients
I am willing to try new types of therapy/interventions even if I have to follow a treatment manual
I know better than academic researchers how to care for my clients
I am willing to use new and different types of therapy/interventions developed by researchers
Research based treatments/interventions are not clinically useful
Clinical experience is more important than using manualized therapy/interventions
I would not use manualized therapy/interventions
I would try a new therapy/intervention even if it were very different from what I am used to doing

Appendix 6 – Peer Evaluation Form

PART 1: Peer-Ratings of Communication, Coaching, Goal-Setting, Evidence Based Practice & Outcome Measurement Competencies (25 questions)
PART 2: Evidence-Based Practice Attitude Scale Items (8 questions)


Part 1: Peer-Ratings of Communication, Coaching, Goal-Setting, Evidence Based Practice & Outcome Measurement Competencies

INSTRUCTIONS: Select the answer that you think most accurately reflects your colleague. If you do not know what an abbreviation or term means, tick ‘never’. Each item below is rated on the same scale: ❍ Never ❍ 1-5% of the time ❍ 5-24% of the time ❍ 25-49% of the time ❍ 50-74% of the time ❍ 74-99% of the time ❍ Always

They develop and document measurable goals with families/clients
They explore the feelings of families/clients during conversations
They conduct and document COPM interviews with families/clients to assist with service planning
They explore and express understanding to families/clients when strong emotions are present
They construct and document GAS scales to describe the expected outcome from intervention for families/clients
They undertake “difficult conversations” with families/clients rather than avoid the topic
They score and document their client’s COPM and GAS measures and use this information for planning


They name emotions that families/clients are experiencing during conversations
They determine and document their client’s GMFCS or MACS level to help inform decision-making
They ask families/clients if they have access to personal support when they detect anxiety, or depression, or distress
They ask parents/clients to consent to joining the CP register and notify them to the register
They confirm that families/clients understood what they meant, even when the topic is difficult
They communicate news or facts to families/clients, to help them develop realistic expectations from intervention
They use empathetic and supportive statements in response to emotion
They identify if a goal (in their speciality) is realistic, based on assessment information and prognostic evidence


They ask open-ended questions to elicit more information
They reword goals with families/clients to be realistic, if they set goals that are unrealistic
They draw solutions out of families/clients rather than directing them to answers
They check what interventions (in their speciality) have higher levels of supporting evidence, using e.g. databases, CATs
They listen, reflect and give feedback for the greater part of conversations
They select interventions with the highest levels of evidence that match the goals identified by their families/clients using a systematic EBP approach, e.g. CATs, PICO searches
They prepare for conversations that they anticipate will be difficult prior to the meeting
They communicate the outcomes of intervention to families/clients using outcome measures, even when goals aren’t achieved


They name the issue when their viewpoint and the family’s/client’s viewpoint conflict
They summarise and check that the client understands the information they have shared


Part 2: Evidence-based practice attitude scale

INSTRUCTIONS: Select the answer that most accurately reflects your colleague today. Each item below is rated on the same scale: ❍ Not at All ❍ To a Slight Extent ❍ To a Moderate Extent ❍ To a Great Extent ❍ To a Very Great Extent

NOTE: Manualized therapy, treatment, or intervention refers to any intervention that has specific guidelines and/or components that are outlined in a manual and/or that are to be followed in a structured or predetermined way.

They like to use new types of therapy/interventions to help their clients
They are willing to try new types of therapy/interventions even if they have to follow a treatment manual
They think they know better than academic researchers how to care for their clients
They are willing to use new and different types of therapy/interventions developed by researchers
They think that research based treatments/interventions are not clinically useful
They think that clinical experience is more important than using manualized therapy/interventions
They would not use manualized therapy/interventions
They would try a new therapy/intervention even if it were very different from what they are used to doing

Appendix 7 – Marking Criteria for Exam

PART 3: Evidence Based Practice & Outcome Measurement Competencies: Scoring criteria

Question: Name up to two valid, reliable, sensitive-to-change outcome measures that could be used with a client with cerebral palsy.
Accepted answers: COPM, GAS, GMFM (Russell, D. et al. (2000). Improved scaling of the Gross Motor Function Measure for children with cerebral palsy: Evidence of reliability and validity. Physical Therapy, 80(9), 873-885).
Notes: GMFCS is a classification system, not an outcome measure. SP/Psych assessments are not outcome measures.

Question: Choose 3 interventions from the list (attached*) and state the level of research evidence according to the STOP, MEASURE, GO system (attached).
Scoring: 6 points in total; 2 points for each correctly chosen intervention with a matching evidence level. If an intervention is written with no level of evidence = 0 points. If an intervention is written with a partially correct level of evidence = 1 point.

Intervention examples | Stop/Measure or Go? examples
Botox | Green: 1 point, as it is a partially correct answer. Green/Orange: 2 points, as Botox evidence varies according to intervention area.
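As an illustration only, the 0/1/2-point rule above could be applied mechanically along the following lines; the answer key shown is a placeholder, not the study's actual marking key, and this sketch is not part of the original marking materials.

    # Hypothetical scorer for a STOP/MEASURE/GO item (illustrative only).
    def score_item(stated_levels, accepted_levels):
        if not stated_levels:
            return 0                                    # no level of evidence given
        if set(stated_levels) == set(accepted_levels):
            return 2                                    # intervention and level fully correct
        if set(stated_levels) & set(accepted_levels):
            return 1                                    # partially correct level of evidence
        return 0

    answer_key = {"Botox": {"Green", "Orange"}}         # placeholder, not the real key
    print(score_item({"Green"}, answer_key["Botox"]))             # 1 (partially correct)
    print(score_item({"Green", "Orange"}, answer_key["Botox"]))   # 2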

Question: A client is referred who wants to improve his walking, especially at school. He walks independently but falls quite a lot. He also is being bullied at school but is too frightened to tell anyone. He wonders if his poor articulation might have something to do with why he is bullied. He wants the bullying to stop but is not sure how to make it happen. Write one hypothetical goal that you could set for this client.


Scoring: 1 point for each SMART component; no half points allowed.
Specific: Is it clear what is going to be achieved?
Measurable: Is there a clear way stated to measure the progress and achievement of the goal?
Achievable/realistic: Is this goal realistic for the client? Is the time frame realistic?
Relevant: Is this a goal that will directly affect the client’s stated problem?
Time framed: Is a specific time frame mentioned?
(Maidment, A. & Merry, L. (2010). Setting SMART seating goals.)

Question: What types of studies/articles are considered to be high evidence? (max 2 points)
Scoring: 1 point each for randomised controlled trials (RCTs) and systematic reviews; ½ point each for meta-analyses and Cochrane Collaboration reviews.

Question: Name 2 interventions for people with cerebral palsy that have high level evidence supporting their effectiveness. (max 2 points)
Scoring: See the Evidence Alert System for evidence levels. 1 point for any GREEN intervention; ½ point for an intervention that is GREEN + ORANGE/RED.

Appendix 8 – 2-year Follow-up Survey

Appendix 9 – Journal paper accepted for publication by Implementation Science

A KT intervention including the Evidence Alert System to improve clinician’s evidence-based practice behavior – a cluster randomized controlled trial

Lanie B Campbell 1,§, Iona Novak 2, Sarah McIntyre 3, Sarah J Lord 4

1 Doctoral candidate, School of Medicine, University of Notre Dame Australia, corner Oxford Street & Victoria Street, Darlinghurst NSW, 2010 Australia
2 Head of Research, Cerebral Palsy Alliance, PO Box 560, Darlinghurst NSW, 1300 Australia and School of Medicine, University of Notre Dame Australia, corner Oxford Street & Victoria Street, Darlinghurst NSW, 2010 Australia
3 Senior Research Fellow, Cerebral Palsy Alliance, PO Box 560, Darlinghurst NSW, 1300 Australia and School of Medicine, University of Notre Dame Australia, corner Oxford Street & Victoria Street, Darlinghurst NSW, 2010 Australia
4 Head of Epidemiology and Medical Statistics, University of Notre Dame Australia, corner Oxford Street & Victoria Street, Darlinghurst NSW, 2010 Australia and Senior Research Fellow, NHMRC Clinical Trials Centre, The University of Sydney, Camperdown NSW, 2050 Australia

§ Corresponding author

Email addresses:
LC: [email protected]
IN: [email protected]
SM: [email protected]
SL: [email protected]

Word count: abstract – 323, manuscript excluding references – 5417


Background: It is difficult to foster research utilization amongst allied health professionals (AHPs). Tailored, multifaceted knowledge translation (KT) strategies are now recommended but are resource intensive to implement. Employers need effective KT solutions, but little is known about: (a) the impact and viability of multifaceted KT strategies using an online KT tool; (b) their effectiveness with AHPs; and (c) their effect on evidence-based practice (EBP) decision-making behavior. The study aim was to measure the effectiveness of a multifaceted KT intervention, including a customized KT tool, to change the EBP behavior, knowledge and attitudes of AHPs.

Methods: Evaluator-blinded, cluster randomized controlled trial conducted in an Australian community-based cerebral palsy service. 135 AHPs (physiotherapists, occupational therapists, speech pathologists, psychologists and social workers) from 4 regions were cluster randomized (n=4) to either the KT intervention group (n=73 AHPs) or the control group (n=62 AHPs), using computer-generated random numbers, concealed in opaque envelopes, by an independent officer. The KT intervention included a 3-day skills training workshop and multifaceted workplace supports to redress barriers (paid EBP time, mentoring, system changes and access to an online research synthesis tool). The primary outcome (self- and peer-rated EBP behavior) was measured using the Goal Attainment Scale (individual level). Secondary outcomes (knowledge and attitudes) were measured using exams and the Evidence Based Practice Attitude Scale.

Results: The intervention group’s primary outcome scores improved relative to the control group; however, when clustering was taken into account, the findings were non-significant: self-rated EBP behavior [effect size 4.97 (95% CI -10.47, 20.41), p=0.52]; peer-rated EBP behavior [effect size 5.86 (95% CI -17.77, 29.50), p=0.62]. Statistically significant improvements in EBP knowledge were detected [effect size 2.97 (95% CI 1.97, 3.97), p<0.0001]. Change in EBP attitudes was not statistically significant.

Conclusions: Improvement in EBP behavior was not statistically significant after adjusting for the cluster effect; however, similar improvements from peer-ratings suggest behaviorally meaningful gains. The large variability in behavior observed between clusters suggests barrier assessments and subsequent KT interventions may need to target subgroups within an organization.

Trial Registration: Registered on the Australian New Zealand Clinical Trials Registry (ACTRN12611000529943).

Key words: KT, allied health, evidence-based practice, online KT tool.

Introduction

Cerebral palsy is the most common physical disability in childhood [223]. Of people with cerebral palsy, 3 in 4 are in pain; 1 in 2 have an intellectual disability; 1 in 3 cannot walk; 1 in 3 have a hip displacement; 1 in 4 cannot talk; 1 in 4 have epilepsy; 1 in 4 have a behavior disorder; 1 in 4 have bladder control problems; 1 in 5 have a sleep disorder; 1 in 5 dribble; 1 in 10 are blind; 1 in 15 are tube fed; and 1 in 25 are deaf [224]. Allied health professionals (AHPs) who treat people with cerebral palsy are therefore faced with complex clinical decision-making. Also, like many other fields, new evidence-based cerebral palsy treatments are rapidly emerging [13]. AHPs provide the majority of health services to these people and therefore need to have up-to-date knowledge and skills in providing evidence-based interventions. AHPs endorse providing evidence-based care [49,93], but good-will alone does not guarantee the latest research is translated and applied within practice [41,102]. Survey research suggests that there is a significant gap between the best available evidence and the treatments actually offered to people with cerebral palsy [11,12]. Lack of time [7], lack of skill in searching and appraising research [8,9], and lack of access to databases, compounded by large volumes of published research, are barriers to new knowledge being translated in a timely and efficient way [10].

Knowledge translation (KT) strategies including workshops [4], mentoring [135], outreach visits [5], audit and feedback [132], and reminders and memos [145] aim to embed research into practice and lead to small to moderate changes in health professionals’ behavior. Even though KT is an emergent science, it is known that KT strategies should be tailored to be context specific and planned in response to a thorough assessment of barriers and facilitators [65,221]. Although there is no firm evidence that multifaceted strategies are more effective than single interventions, it is plausible that they would be more effective if each component and the overall strategy were designed in response to a barriers analysis [65]. In the field of cerebral palsy, a tailored KT intervention was pilot tested with good results, but the lack of a controlled comparison group precludes certainty of the findings [102].

In addition to tailoring KT interventions, it is recommended that theory is used to guide the KT journey [55]. A number of KT frameworks have been proposed that incorporate key theories suited to various target settings and professional groups. One example is the knowledge-to-action process (KTA) [51] (Figure 1), which provides a comprehensive and flexible framework to guide and monitor a multifaceted KT intervention. Although the use of theory is recommended, there are few rigorous studies detailing the application of theory to a KT intervention [53].

<Insert Figure 1 approximately here – Knowledge-to-action process (Graham et al., 2006 - used with permission)>

Central to the KTA process, and indeed the basic unit of a KT intervention, is up-to-date research being available and accessible to the target group [51,65]. The basis of a KT intervention is synthesis of research in the form of systematic reviews, evidence summaries or online KT tools. Although health professionals generally prefer systematic reviews to original research articles [67], they still report that systematic reviews do not always answer their clinical questions [13,68]. There is an increasing call for customized, easy-to-read summaries. Straus and Haynes (2009) describe the ‘5S’ model [13,118] for organizing evidence-based information resources (Figure 2). The model is displayed as a pyramid with 5 levels (studies, syntheses, synopses, summaries, systems) that aim to be increasingly readable, reliable and relevant as one moves up the pyramid. The top two levels (summaries and systems) may also be referred to as KT tools [65]. Straus and Haynes recommend a top-down approach for answering clinical questions.

<Insert Figure 2 approximately here - 5S pyramid with examples from the allied health professions (adapted from Straus & Haynes, 2009)>

Previous studies measuring the effectiveness of evidence-based information resources (5S pyramid level 3) detected a change in use; however, they did not detect a change in EBP behavior [172,203]. Dobbins and colleagues [164] found that targeted messages (5S pyramid levels 3-4) were more effective than knowledge brokering and access to research evidence for incorporating evidence into public health policies and programs. Although evidence-based information resources are available for AHPs (PEDro, OTseeker, SpeechBite), they are at 5S pyramid level 3 (synopses), and no studies have rigorously evaluated the usefulness of these tools. There are no KT tools (5S pyramid levels 4 or 5) found in the literature specifically targeting AHPs working with people with cerebral palsy.

KT tools presenting up-to-date research in a user-friendly way are, however, only one piece of a KT strategy. Changing EBP behavior is complex, as there is a range of behaviors required to be an ‘evidence-based AHP’. Previous studies have either used self-developed measures [15,147,149,150] or have only measured a narrow domain of EBM behavior [168,169]. KT research in the allied health professions measuring EBP behavior across a range of AHPs is also absent from our evidence base [16,66].

The primary aim of this cluster RCT was to evaluate the effectiveness of a multifaceted KT intervention for improving the EBP behavior of AHPs. The central element of the KT intervention was an online evidence-based information resource called the Evidence Alert System (EAS). The EAS contained actionable messages (5S pyramid levels 4 and 5) and clinical decision-making tools, and used the ‘top-down’ approach [13]. The other elements of the multifaceted intervention (workshop, mentoring and documentation changes) reinforced, educated and supported the approach set out in the EAS, ensuring that the decision-making tools were embedded into the participants’ workflow. The secondary aims were to measure the effect of the KT intervention on EBP knowledge and attitudes. Our study sought to address key gaps in the current KT evidence by: (a) using an RCT to measure the effect of a multi-component KT intervention centred around the EAS; (b) measuring a wide range of EBP behaviors; and (c) sampling a wide range of AHPs. Aims were measured at the individual participant level. Findings are reported according to the updated CONSORT statement for cluster randomized trials [17].

Methods

Trial design and study setting

A multi-site, evaluator-blinded, cluster RCT was conducted in a community-based cerebral palsy service in New South Wales (NSW), Australia. NSW is the largest state, with a population of approximately 7.25 million people (32% of Australia’s total population). The cerebral palsy service had 16 sites across NSW, organized into 4 geographically distinct regions, where AHP services were provided. Each region had centralized management for the sites within its boundaries, including clinical seniors, professional development activities and mentoring, and thus the regions were considered natural cluster groupings. An independent officer not associated with the trial used computer-generated random numbers to create four opaque envelopes based upon simple randomization. Four geographically distinct clusters were randomized to the intervention or control group. Cluster randomization was chosen to reduce the risk of contamination that may have occurred if individuals working at the same site were randomized to different interventions. Individual participants were consented after randomization for pragmatic reasons. The first author (LC) obtained participants’ written consent, and data collection took place before and after the workshops, at worksites or nearby locations, between June 2009 and August 2009.
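For illustration only (this is not the trial's allocation code), an allocation of the four regional clusters, two per arm as occurred in the trial, could be generated with computer-generated random numbers along these lines; the region labels are placeholders.

    import random

    regions = ["Region A", "Region B", "Region C", "Region D"]    # placeholder labels
    arms = ["KT intervention", "Control (communication)"] * 2     # two clusters per arm

    random.shuffle(arms)                     # computer-generated random allocation
    allocation = dict(zip(regions, arms))

    # Each pairing would then be concealed in an opaque envelope by the independent officer.
    for region, arm in allocation.items():
        print(region, "->", arm)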

Ethics

The project was approved by the National Health and Medical Research Council Human Research Ethics Committee at Cerebral Palsy Alliance (Approval number: 2009-05-01) and the University of Notre Dame Ethics Committee. The study was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12611000529943).

Participants

Eligible participants were AHPs employed at the study site providing direct clinical services to people with cerebral palsy and their families. Figure 3 shows the flow of participants through the study. Exclusion criteria were: (1) managers (non-clinical staff); (2) staff without university qualifications; and (3) staff who were not scheduled to work on the day of the workshops.

<Insert Figure 3 approximately here - Flowchart of randomization, enrolment and participation>

Intervention

Theoretical model

The theoretical model underpinning the project was the KTA process (Figure 1), developed by KT field leaders [51]. The KTA process involves, first, knowledge creation (i.e. production of research syntheses) and, second, knowledge application (i.e. identification of the research-practice gap; adaptation of the research syntheses to the local context; identification of utilization barriers; selection of tailored KT strategies to redress barriers; and monitoring, evaluating and sustaining EBP implementation and use). Emerging evidence suggests that KT interventions underpinned by theory may be superior to those that are not theory-informed, although more research is needed to confirm this [16]. The advantage of theory-informed KT interventions is that they offer a generalizable framework for other researchers and organizations and provide guidance for designing KT interventions to overcome known barriers [16].

Assessment of barriers and facilitators

A comprehensive assessment of barriers and facilitators was done over a one-year period. This took the form of (a) meetings between managers, policy makers, researchers, senior clinicians and knowledge brokers, and (b) observation of clinical staff. As there is no firm evidence regarding the superiority of one KT intervention over another [65], researchers and knowledge brokers jointly designed the KT intervention based on whether or not each barrier was modifiable by a pragmatically feasible intervention. Modifiable barriers included lack of (a) skill, (b) time and (c) knowledge. Partially modifiable or non-modifiable barriers were: (a) that evidence was considered not clinically relevant; (b) that staff did not have access to full electronic databases; and (c) that some staff had negative attitudes towards EBP. Modifiable barriers, theoretical underpinnings and strategies for the KT intervention are detailed in Table 1. Details of how the components of our multifaceted intervention correspond to the KTA process are shown in Table 5.

Development of multifaceted intervention

Strategic planning meetings were held every 6 weeks in the year leading up to baseline and included researchers, knowledge brokers, policy makers and managers. Knowledge brokers were senior staff with allied health backgrounds (one per discipline, employed in the most senior role for each discipline). Policy makers were the senior executive staff and managers involved in the direct management of AHPs in the organization. Goals around EBP behaviors were set, and strategies to achieve these goals were jointly selected based on the barriers literature and assessment of the study site. The EAS formed the basis of our KT intervention and was developed by research staff and knowledge brokers using the freely available software MediaWiki (Figure 4). The EAS included succinct summaries of all the CP research evidence about intervention, prognosis and outcome measurement. Intervention evidence was labeled using the traffic light system [102], where each intervention was given a traffic light color with an actionable message attached: Green = ‘go’, where high quality evidence supports the effectiveness of the intervention; Yellow = ‘measure’, where low quality or conflicting evidence supports the effectiveness of the intervention, therefore measure the outcomes of the intervention to ensure the goal is met; and Red = ‘stop’, where high quality evidence demonstrates the intervention is ineffective, therefore do not use this approach. Decision-making algorithms with embedded evidence summaries were also available on the EAS. Each section of the EAS included abstracts of research articles, descriptions of the intervention or assessment, and a hyperlink to access the full article.

<Insert Figure 4 approximately here - Infogram showing the Evidence Alert System (EAS)>
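As a minimal sketch of the traffic light logic described above (the function and its input categories are assumptions for illustration, not the EAS's actual implementation):

    # Illustrative mapping from an evidence appraisal to an EAS-style actionable message.
    def traffic_light(quality, effective):
        if quality == "high" and effective:
            return ("Green", "Go: high quality evidence supports this intervention.")
        if quality == "high" and not effective:
            return ("Red", "Stop: high quality evidence shows this intervention is ineffective.")
        return ("Yellow", "Measure: low quality or conflicting evidence, so measure outcomes against the goal.")

    print(traffic_light("high", True))   # Green / Go
    print(traffic_light("low", True))    # Yellow / Measure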

Experimental group intervention

The intervention group (total n=73; region A=39; region B=34) received a multifaceted KT intervention: (1) a 3-day skills training workshop. Part 1 (2 days) of the interactive workshop provided training to apply the EAS to decision-making within daily clinical work. A series of clinical examples was explored using the interface of the EAS, with training about evidence levels, clinical decision-making algorithms, and the use of two psychometrically sound, cross-disciplinary outcome measures. Part 2 (1 day) of the workshop, 8 weeks later, involved participants presenting a case study detailing how they used the EAS to inform their clinical decision-making with a real patient. This was followed by discussion with a small group of colleagues, designed to help participants demonstrate the integration of their learning into their own clinical work. Investigators and each senior clinician [131] led the workshops using knowledge brokering strategies [185]. There was a mix of instructional techniques including didactic, interactive, role-playing and reflection. There was collaboration within and between professional groups. (2) Access to the EAS. (3) Policy changes that participants were informed of included: paid, quarantined EBP time; changes to client documentation, including reminders to use the EAS; embedding outcome measurement within the workflow; and mentoring by knowledge brokers.

The KT intervention was directed at the cluster level (3-day workshop part 1, access to the EAS and policy changes) and the individual level (mentoring, and 3-day workshop part 2). Details of the KT intervention are shown in Table 5.

<Insert Tables 1 & 2 approximately here>

Control group

The control group (total n=62; region C=29; region D=33) received an equal-intensity intervention about communication skills, with no EBP content and no use of the EAS: (1) a 3-day workshop about AHP-client communication skills; and (2) workplace supports (paid communication time, strategic planning, mentoring) about communication skills. To minimize the risk of contamination, the control group was not informed about the EAS, paid EBP time, knowledge brokers or mentoring until the end of the trial. The changes to documentation were not implemented in the control group clusters until the end of the RCT.

Outcome measures

Primary outcome

The primary endpoint was change in self- and peer-rated EBP behavior from baseline to 8 weeks (individual and cluster level), measured using Goal Attainment Scaling (GAS) [225]. Participants rated themselves against the self-GAS scales, and then, to limit measurement bias, a well-acquainted peer rated their performance on the peer-GAS scales in a separate environment. Selection of the GAS instrument increased study rigor because it overcame known instrumentation limitations in the KT literature surrounding EBP behavior measurement, including: (1) Responsivity: GAS has established validity, reliability, and exquisite responsivity to change, whereas systematic review evidence indicates that for nearly all valid and reliable EBP instruments, test responsivity is unknown [152]; (2) Tailoring: GAS is an individualized measure of change, and so progress towards any target behavior (including health professional behaviors [163]) could be validly, reliably and sensitively measured, including tailored EBP behaviors unique to the study site, e.g. notifications to the Cerebral Palsy Register; (3) Comprehensive measurement: GAS is an individualized measure of change, and so we could comprehensively measure all desired EBP behaviors, whereas systematic review evidence indicates that other psychometrically sound EBP instruments measure knowledge instead of behavior, or are limited because they only measure one discrete aspect of EBP behavior [152,155,156,164,165]; (4) Lack of a gold standard tool: accurate, gold-standard, flawless measurement of EBP behavior is not yet established in the literature [166]. Even though direct observation of EBP behavior (such as simulated patients, or video/audio recordings of practice) is perceived as methodologically preferable to indirect (proxy) reports of EBP behavior (such as chart audit, patient report, self-report, or peer-report), systematic review evidence indicates that direct measures often fail validity testing [166]. This could have introduced other flaws to our clinical trial. Moreover, collecting direct measures throughout NSW, being a state-wide service, would have introduced prohibitive trial costs (NSW’s landmass is 3.25 times larger than the United Kingdom, and is larger than California and New Mexico combined), when the cost-benefit of a potentially invalid measure is weighed up. Even though self-report proxy measures are an imperfect measure of actual behavior [167], leading KT agencies, such as the Canadian Institutes of Health Research, advocate for self-report because the process of self-reflection plays a critical role in initiating behavioral changes within organizations. In light of current EBP behavior measurement limitations, GAS offered the best way forward since it was psychometrically sound, it comprehensively measured EBP behavior, it was practical across an entire state, and it could be tailored to the study site.

The GAS scales were devised by a multidisciplinary panel of experts familiar with EBP behaviors of the eligible AHPs, as per literature recommendations for scale establishment. Twenty-five goal scales were developed, half relating to EBP behaviors and the other half relating to communication behavior for the control group. The scales measured EBP behaviors such as: use of gold standard goal-setting tools to plan services; use of cerebral palsy classification systems to accurately prognosticate; use of evidence (e.g. via the EAS) to quickly choose evidence-based classification systems, interventions and outcome measures; and use of gold standard outcome measures to routinely evaluate services. The GAS scales are available from the corresponding author by request. As per the test manual, raw scores were converted to GAS T-scores, enabling inferential statistical analysis of continuous data.
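For reference, the conventional Kiresuk and Sherman conversion from raw goal attainment ratings to a GAS T-score (mean 50, standard deviation 10) takes the form below; the weights and the assumed inter-goal correlation of 0.3 are the published defaults and may differ from the weightings used for the scales in this study.

T = 50 + \dfrac{10 \sum_i w_i x_i}{\sqrt{(1-\rho) \sum_i w_i^2 + \rho \left( \sum_i w_i \right)^2}}

where x_i is the attainment rating on goal i (typically -2 to +2, with 0 the expected outcome), w_i is the weight given to goal i, and \rho is the assumed correlation between goal scores (conventionally 0.3), so attaining the expected level on every goal yields T = 50.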

Secondary outcomes

Self- and peer-rated attitude changes were measured using subsets 3 and 4 of the Evidence Based Practice Attitude Scale (EBPAS) [124], which is psychometrically permissible. EBP knowledge was measured via open-ended exam questions with right/wrong answers, pre-defined by the panel of experts and derived from published evidence.

EAS utilization was measured by the number of web page hits, collected via a software program that tracked cluster-specific IP addresses in batches. Web hit data collection was concealed from participants, minimizing the likelihood of observer bias affecting EAS use.
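As an illustration of the kind of batch tally implied here, a sketch that counts page hits per cluster from a plain access log; both the log format and the IP-prefix-to-region mapping are assumptions for the sketch, not the actual tracking software.

    from collections import Counter

    # Placeholder mapping from IP prefix to cluster; the real address ranges are not published.
    cluster_by_prefix = {"10.1.": "Region A", "10.2.": "Region B"}

    def hits_per_cluster(log_lines):
        counts = Counter()
        for line in log_lines:
            ip = line.split()[0]                        # assume the IP is the first field
            for prefix, cluster in cluster_by_prefix.items():
                if ip.startswith(prefix):
                    counts[cluster] += 1
        return counts

    sample_log = ["10.1.0.7 GET /eas/interventions", "10.2.0.3 GET /eas/gmfcs", "10.1.0.9 GET /eas/copm"]
    print(hits_per_cluster(sample_log))                 # Counter({'Region A': 2, 'Region B': 1})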

Adverse events: An adverse event log was not required because the intervention was educational in nature and therefore posed no risk.

Blinding

Blinding was judiciously applied wherever pragmatically possible, resulting in a single-blinded trial. This included: (1) independent evaluator blinding to group allocation and phase of the trial when scoring outcome data; and (2) partial participant and facilitator blinding to the specific EBP behavior of interest to the investigators. Participants and workshop facilitators were clearly aware of the content of the workshops, but were not aware of which intervention (KT intervention or communication skills) was of specific interest to the researchers. Fidelity of the evaluator blinding was not formally investigated.

Sample size

We sought to test the efficacy of an organizational KT intervention and therefore

conducted the study within one agency, which is the largest of its kind in Australia.

This methodological decision imposed pragmatic limitations on the obtainable

sample frame. We successfully recruited 88% of the available sampling frame; however, the total number of employees at the agency was less than the number of participants required to reach statistical power if correlation of outcome variables within sites (intra-cluster correlation) was observed. A sample size calculation was based on detecting an effect size of 1 at an alpha level of 0.05 (one-tailed) with 90% power. For Goal Attainment Scaling [mean T-score=50, standard deviation (sd)=10], an improvement of 10 points or more in the KT intervention group relative to the control group was sought (an improvement of 1 sd). The expert panel agreed that a 10-point increase in GAS T-scores equated to significant clinical improvement in EBP behaviors. The calculation assumed a 20% non-consent rate and a 20% attrition rate, indicating a sample size requirement of 72 (38 per group) for a non-cluster trial. We enrolled 135 professionals (n=73 intervention and n=62 control) at 4 sites. Based on an estimated intra-cluster correlation coefficient (ICC) of 0.1, we calculated that the study was underpowered to demonstrate an improvement of 10 points between groups if a cluster effect of this size was observed (variance inflation factor = 4.3).
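The variance inflation reported above follows from the standard design-effect formula for cluster randomised trials; the minimal sketch below reproduces the arithmetic under the stated assumptions (ICC of 0.1, four clusters of roughly 34 participants, and the 72-participant requirement for an individually randomised trial).

```python
# Design effect (variance inflation factor) for a cluster randomised trial:
# DE = 1 + (average cluster size - 1) * ICC
n_individual_trial = 72        # sample required without clustering
mean_cluster_size = 135 / 4    # four sites, 135 enrolled participants (~34 each)
assumed_icc = 0.1

design_effect = 1 + (mean_cluster_size - 1) * assumed_icc
n_cluster_trial = n_individual_trial * design_effect

print(round(design_effect, 1))   # ~4.3, matching the reported variance inflation factor
print(round(n_cluster_trial))    # ~308 participants would have been required
```

Since only 135 AHPs were available, the shortfall against this inflated requirement explains the underpowering described above.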

Statistical analysis

All statistical analyses were carried out with individual participants as the unit of analysis, on an intention-to-treat basis, using SPSS for Windows 19.0.0 (SPSS Inc, Chicago, IL) and SAS 9.3 (SAS Institute, Cary, NC).

We conducted generalized linear regression analysis for primary and secondary

endpoints, using the post-intervention GAS T-score as the outcome variable and

adjusting for potential confounding variables (baseline GAS T-score, profession,

group allocation, grade level and years in the disability field). Effect sizes with 95%

confidence intervals (CIs) were calculated and significance was set at 0.05. These estimates would underestimate the standard errors and confidence intervals for the effect size if participant outcomes were correlated within cluster sites; therefore, mixed effects models with cluster included as a random effect were used to adjust for a cluster effect when calculating the effect size for each outcome 195. The ICC was calculated from the mixed effects model, and bootstrapping (1000 samples generated) was performed to calculate 95% confidence intervals for the ICC.
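As an illustration only (the analysis itself was run in SPSS and SAS), the following sketch shows how a cluster random-intercept model, the derived ICC and a bootstrap confidence interval could be obtained in Python with statsmodels; the column names (gas_8wk, gas_baseline, group, cluster) are hypothetical placeholders for the study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_icc(df: pd.DataFrame) -> float:
    """Fit a mixed model with cluster as a random intercept and return the ICC."""
    model = smf.mixedlm("gas_8wk ~ gas_baseline + group", df, groups=df["cluster"])
    result = model.fit(reml=True)
    var_between = float(result.cov_re.iloc[0, 0])  # between-cluster variance
    var_within = result.scale                      # residual (within-cluster) variance
    return var_between / (var_between + var_within)

def bootstrap_icc_ci(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    """Percentile 95% CI for the ICC from n_boot bootstrap resamples."""
    rng = np.random.default_rng(seed)
    iccs = []
    for _ in range(n_boot):
        resample = df.sample(len(df), replace=True,
                             random_state=int(rng.integers(0, 2**31 - 1)))
        try:
            iccs.append(fit_icc(resample))
        except Exception:
            continue  # skip resamples where the model fails to converge
    return np.percentile(iccs, [2.5, 97.5])
```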

Results

A total of 135 AHPs (n=73 interventions and n=62 controls) were recruited (see

Figure 3), which was 88% of the available sampling frame. At baseline, participant

attributes were mostly comparable between groups, the exception being prior EBP education attendance (88% in the intervention group compared to 66% in controls) (see Table 3). To account for this baseline difference, prior EBP education was treated as a covariate in the regression model. Included professionals were physiotherapists (24%), speech pathologists (26%), occupational therapists (37%), psychologists (6%) and social workers (7%). Sixty-four percent of participants had over 5 years' experience working with people with disabilities, although 63% of the cohort had worked at the study site for less than 5 years. Ninety-four percent of the sample had English as their first language. The return rates for the GAS and EBPAS ratings were between 60% and 82% (see Figure 3), with the primary endpoint having more missing data. The KT intervention group had 19/73 (31%) 8-week GAS forms missing, compared with 17/62 (30%) in the control group. This difference between groups was not statistically significant (chi-square p=0.95).
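The missing-data comparison reported above is a simple 2x2 test; a minimal sketch using the counts from the text is shown below (the exact test variant used by the authors is not stated, so the reproduced p-value may differ slightly from 0.95).

```python
from scipy.stats import chi2_contingency

# Rows: KT intervention, control; columns: 8-week GAS form returned, missing
table = [[73 - 19, 19],
         [62 - 17, 17]]
chi2, p, dof, expected = chi2_contingency(table)
print(round(p, 2))  # well above 0.05, consistent with the non-significant difference reported
```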

<Insert Table 3 approximately here>

Clustering effect

The ICCs for the primary endpoints were 0.33 (95% CI 0.16 to 0.69) for self-rated GAS T-scores and 0.64 (95% CI 0.36 to 0.80) for peer-rated GAS T-scores (Table 4); that is, 33% of the total variation in self-rated GAS T-scores and 64% of the total variation in peer-rated GAS T-scores can be attributed to differences between sites rather than to differences between individuals within each site. These results demonstrate that the correlation of GAS T-scores within sites was very large, whereas scores varied widely between sites. This cluster effect substantially depleted the study power (because participant scores within each site cannot be regarded as independent). ICCs were smaller for the secondary outcomes (Table 4).

Effectiveness of KT intervention

Primary outcome – EBP behaviors

Self-rated GAS T-scores increased more in the intervention group compared to controls; however, this difference was not statistically significant after adjusting for the cluster effect: effect size 4.43 [95% CI -10.63 to 19.49 (p=0.56)] (Table 4).

Baseline self-rated GAS T-scores were a predictor in the model [effect size 0.71 (95% CI 0.52 to 0.90) (p<0.0001)], indicating that lower performers improved but remained lower performers, while higher performers improved and remained the leading performers.

No other covariates were significantly predictive of outcome.

Peer-rated GAS T-scores of the intervention group also increased compared to

controls, but this difference was also not statistically significant after adjusting for

the cluster effect: effect size 6.75 [95% CI -16.95 to 30.44 (p=0.57)] (Table 4).

Similar to the self-rated GAS T-scores, the final peer-rated GAS T-score was

predicted by the baseline peer-rated GAS T-score [effect size 0.30 (95% CI 0.15 to 0.45) (p<0.0001)]. No other covariates were significantly predictive of peer-rated

GAS T-scores. The peer-rated GAS T-scores for each cluster mirrored the self-rated

GAS cluster T-scores, suggesting the observed study effects were behaviorally


meaningful, despite low study power to demonstrate a statistically significant

difference.

<Insert Table 4 approximately here>

Secondary outcomes – EBP knowledge and attitudes

EBP knowledge scores increased in the intervention group compared to controls, with a statistically significant effect size of 2.97 (95% CI 1.97 to 3.97, p<0.0001). The ICC for this outcome was zero, and this effect remained statistically significant after adjusting for the cluster effect: 2.97 (95% CI 1.97 to 3.97, p<0.0001). Baseline score (p<0.0001) and

professional category (p=0.03) were also predictors in the model. There was minimal

to no correlation between participants within sites for self- or peer-rated EBP

attitudes; however, we did not demonstrate a statistically significant intervention

effect (Table 4). The intervention group accessed the EAS more than the control

group (KT intervention group 6123 total hits; control group 1677 hits).

Secondary analyses examining mean outcome scores for each cluster revealed that

both clusters in the KT intervention group improved their self- and peer-rated GAS

T-scores as expected (Table 5). One of the control group clusters (cluster 3) also

responded as expected, with very minimal increases in self- and peer-rated GAS T-

scores from baseline to 8-weeks (self-rated T-score change = 0.22; peer-rated T-

score change=2.27). The other control group cluster (cluster 4) had high baseline

scores (self-rated GAS T-score=66.41; peer-rated GAS T-score=73.32) and further

improved by 10.15 points over the 8-week study period, despite not receiving the KT

intervention (Table 5). We performed post-hoc Spearman’s correlation tests to assess

for correlation between knowledge and attitude scores (at baseline, 8-weeks and


change scores) overall, by treatment group, and within individual clusters. No

statistically significant positive correlations were found.
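A minimal sketch of such a post-hoc Spearman test is shown below; the data frame and its columns (knowledge_change, attitude_change, cluster) are hypothetical stand-ins for the study data, and the same call would simply be repeated for the baseline, 8-week and change scores and for each subgroup.

```python
import pandas as pd
from scipy.stats import spearmanr

def knowledge_attitude_correlation(df: pd.DataFrame, cluster=None):
    """Spearman correlation between knowledge and attitude change scores,
    optionally restricted to a single cluster."""
    if cluster is not None:
        df = df[df["cluster"] == cluster]
    return spearmanr(df["knowledge_change"], df["attitude_change"],
                     nan_policy="omit")
```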

<Insert Table 5 approximately here>

Discussion

We conducted a cluster RCT to evaluate whether a multifaceted KT strategy changed

AHPs' EBP behaviors. Both clusters in the KT intervention group improved within

the study period, but not statistically significantly more than the control group. We

consider this null finding to be a probable type II error because our study was underpowered: the number of participants required to account for clustering of EBP behaviors within sites exceeded the number of employees

available. Our study demonstrated increased use of our evidence-based resource (the

EAS), however we were unable to confirm that this translated to a statistically

significant change in EBP behavior. This finding is in line with previous research

involving evidence-based resources 172,203. Owing to the probable type II error, we remain

unsure of the true effect of our KT intervention, but we discovered a number of

potentially important findings that may contribute to future KT endeavours and the

body of research.

The high ICCs (ranging from 0.33 to 0.64) for the EBP behavior measures indicated substantial correlation of behaviors within clusters and substantial differences in behaviors between clusters. When we examined the mean change scores for each

cluster, one of the four clusters (cluster 4), which was randomly allocated to the

control group, was an obvious outlier with high baseline GAS T-scores, high


baseline knowledge scores and increased self- and peer-rated GAS T-scores over the

study period.

Variability between natural groupings (such as clinical, departmental or regional) has

been noted in the KT literature previously 15,164. Perhaps the high baseline EBP

scores for cluster 4 reflected a positive EBP culture and practices due to cluster 4's

manager 15,83,209. The notion that a manager can strongly influence research culture is

by no means new 89,164, as some opinion leaders are known to strongly influence EBP

behavior 209,210. Cluster 4's manager was active in promoting EBP behavior

amongst staff. A large range of KT interventions were in place in cluster 4 prior to

this study, including audit and feedback, financial incentives, workshops and

mentoring. It is conceivable that cluster 4 therefore had both better readiness and

receptivity to EBP supports as they had essentially been engaging in active KT for a

longer period than the other clusters 15. That said, positive EBP culture is considered

to be related to positive EBP attitudes 89, yet the EBPAS scores measuring attitude change in cluster 4 were no different from those of the other clusters at baseline or 8-weeks.

This may have reflected measurement error, or may indicate that positive attitudes in

cluster 4 were not necessary as mandatory policies within that cluster were the

driving force behind the higher GAS scores.

Secondary outcomes

Our hypothesis that the KT intervention would improve knowledge was supported, with the KT intervention group's knowledge exam scores showing a statistically

significant improvement compared to the control group. This finding, taken together with the absence of a statistically significant change in behavior, supports previous research suggesting that knowledge change alone does not consistently


translate into behavior change 14-16,102. Interestingly, change in knowledge scores was

not affected by the cluster effect, suggesting that knowledge is not as susceptible to

peer influences as behavior.

We found no correlation between behavior, knowledge and attitude change scores

within and between clusters. Attitudes remained unchanged. We hypothesize that the lack of change in EBP attitudes in our study may be explained by: (1) high baseline EBP attitudes, with a conceivable ceiling effect on the EBPAS. This was

plausible as EBP had been a focus in the organization for some time prior to the

RCT. In this case, positive attitudes at baseline, increased knowledge scores and

policy changes may together have resulted in the behaviorally meaningful changes

observed. There are, however, no normative EBPAS data for AHPs, so it is difficult to say whether baseline attitudes were high compared to AHPs in other organizations; (2) the EBPAS subsets potentially not being sensitive enough to detect attitude change, their sensitivity in this population being unknown; (3) the EBPAS being an accurate, sensitive measure and attitudes genuinely did

not improve from the KT intervention. This third possibility supports the notion that

improved knowledge was not adequate to lead to statistically significant behavior

change, and that a shift in attitudes was also needed 215. Conversely, the behaviorally

significant change that was observed potentially bypassed the need for attitude

change by employing strategies such as mandatory use of documentation and

outcome measures; and (4) EBP attitudes taking longer than knowledge to change, such that the 8-week trial was too short to detect change.

Strengths and Limitations


The study had a number of strengths, including its rigorous design and broad, robust measurement of behavior. Our chosen measurement instrument (GAS) was sensitive to

change 90,217 and appeared accurate as self- and peer-rated scores mirrored each other.

Distinguishing features of our study were that we measured a wide set of behaviors

amongst AHPs working with people with cerebral palsy. The mix of AHPs in our

sample is fairly representative of other community-based disability organizations,

increasing external validity. This is the first RCT in the KT literature involving

social workers, psychologists or occupational therapists 16. The KT intervention itself

was a study strength: it was based on a solid theoretical model 51,53,55, designed in response to a comprehensive barriers assessment, had clearly defined desired outcomes, and included a range of interventions rather than educational interventions alone 16.

There are a number of study limitations. First and foremost, the pragmatic constraints that limited the number of available clusters and participants led to low statistical power, causing a probable type II error. Second, the large differences observed between clusters suggest that we potentially should have tailored the KT

intervention to each cluster rather than the whole organization. Third, the evidence

base regarding whether proxy behavior measures represent actual behavior is not firmly established, although the preferred alternative, direct measurement, also lacks established validity and reliability 189,218. Moreover, direct measurement was not affordable in our study

given the geography involved, and indirect measurement tools were therefore used

163,219. To minimize measurement bias, systematic review recommendations

regarding indirect measures were followed, and included using: (1) acceptable

indirect measures 189,219 (such as self- and peer-rated behavior triangulated with

unbiased web hit data) 152, (2) measurement tools with strong psychometric


properties 166, (3) more than one tool to measure behavior change 167, and (4) a sound

theoretical model as a basis of the intervention 55. Fourth, the time frame of the trial

was short, considering that many EBP behaviors and system/organizational changes

(such as documenting client goals and mentoring) take time to develop 173. A follow-

up study is needed to measure whether the EBP behaviors were sustained 4. Fifth, the

return rate of the GAS forms, exam and EBPAS was not perfect (60-82%), with more data missing at 8 weeks.

Conclusions

The KT literature recommends tailoring KT interventions to overcome known barriers within organizations 65,221; however, our findings suggest that this may need to go even further, with KT interventions designed for subgroups within an organization. Differences in workplace culture may mean that barriers differ dramatically between subgroups, requiring different KT interventions to be effective 15.

Considering the importance of management-led change, targeting policy makers and

managers may be beneficial. This has been done in the public health sector 164; however, no studies customizing KT to policy makers/management were found in the allied health literature. Our study provides rich pilot data for planning and conducting an adequately powered cluster RCT in the future.

Our study highlighted the methodological challenges of conducting empirical

research in a community-based organization with fixed cluster and participant

numbers. Whether or not RCTs are a feasible option in community organizations is

debatable, and it may be that other research designs are more appropriate 164,220.

Researchers, policy makers and clients need to effectively collaborate to ensure that


reliable, relevant research becomes embedded into everyday care in a timely way.

Considering that the cornerstone of KT is access to reliable research, the authors plan

to make the EAS publicly available.

Acknowledgements: The authors would like to thank Cerebral Palsy Alliance for their support of this study, for understanding the importance of EBP and adopting systemic changes. The authors also wish to acknowledge the clinical consultants (Cathy Morgan, Salli-Ann Craik, Natalie Morton, Leigha Dark & Elise Stumbles) and research staff at Cerebral Palsy Alliance for their leadership, contributions and assistance, and most importantly we would like to thank staff for their participation in the study.

Contributors: The study was carried out as part of a Doctor of Philosophy candidature by the first author. Staff at the Research Institute of Cerebral Palsy Alliance (the study site) assisted in study design and developing the Evidence Alert System (searching databases for articles, synthesising results, converting the information to electronic format). Staff from the Research Institute and senior staff at Cerebral Palsy Alliance facilitated the workshops that formed part of the KT interventions (experimental and control groups). The participants of the study were all staff at the Cerebral Palsy Alliance. All authors had full access to all of the data, including statistical reports and tables and take responsibility for the integrity of the data and accuracy of the data analysis.


Table 1 – Theoretical basis and strategies to address modifiable barriers

BARRIER: LACK OF CONFIDENCE/SKILL SEARCHING, APPRAISING AND SYNTHESIZING RESEARCH EVIDENCE

KT intervention | Underpinning theory or group of theories | Strategy/rationale
Workshop | Problem based learning, learning styles | Workshops used a problem based learning approach and a variety of approaches to ensure that different learning styles were catered to, maximizing the likelihood of increased confidence and skill levels
EAS | Cognitive | Accurate, relevant research evidence on cerebral palsy assessment and treatment was provided via the EAS, building skill by modeling synthesis and summary of treatment areas. The EAS bypassed the need for high-level appraisal skills.
Mentoring | Educational | AHPs were included in the problem solving process during mentoring sessions, which aimed to increase confidence and build the skill base.


BARRIER: LACK OF TIME

KT intervention | Group of theories that the intervention relates to | Strategy/rationale
EAS | Cognitive | The provision of accurate, relevant research evidence bypassed the need for extensive time spent searching and appraising research via databases and journals.
Paid EBP time in policy | Reimbursement; Leadership | Paid, protected time for AHPs to engage in EBP activities was provided. Changing policy suggested management ‘buy in’ and endorsement to support changes throughout the organization (leadership theory)
Documentation changes including a reminder system | Total quality management (TQM) | Patient documentation and work processes were reorganized to support clinical decision making and save time (reminder systems, checklists and directing participants to the EAS)


BARRIER: EVIDENCE CONSIDERED AS NOT CLINICALLY RELEVANT

KT intervention | Group of theories that the intervention relates to | Strategy/rationale
Workshop teaching EAS | Educational; Motivational | AHPs were involved in the problem solving process, so that they ‘owned’ and were a part of the process and could see the applicability of the EAS. Having the 8 week period in between workshops allowed independent learning and time to apply the EAS information to a real client. Facilitators aimed to convince AHPs of the relevance of research in their area by exploring the EAS through clinical examples and role playing
EAS | Marketing | An appealing product (the EAS) was developed and this was disseminated in a variety of ways (workshop, mentoring, documentation changes)


BARRIER: NO ACCESS TO FULL ARTICLES AND RESEARCH DATABASES

KT intervention | Group of theories that the intervention relates to | Strategy/rationale
EAS | Organizational learning | All staff members at every level of the organization had access to current cerebral palsy evidence, and exchange of information via mentoring sessions and team meetings was promoted

BARRIER: SOME STAFF WITH NEGATIVE ATTITUDES TOWARDS EBP

KT intervention | Group of theories that the intervention relates to | Strategy/rationale
Workshop | Social | Credible staff facilitated workshops, modeled positive attitudes and emphasized ‘buy in’ from decision-makers in the organization
Mentoring | Social | Mentors were selected with positive attitudes towards EBP so that target behavior was modeled


Table 2 – KT intervention with corresponding KTA phases

KT intervention | What part of the KTA cycle did the intervention impact? (Creating Knowledge, Localising Knowledge, Identifying Barriers, Redressing Barriers, Maintaining Use) | Who implemented it?

Before RCT
Strategic planning meetings | ✓ ✓ ✓ ✓ | Managers, Human Resources, Knowledge brokers, Policy Makers
Policy changes (policies developed however not implemented until RCT): provision of paid, dedicated EBP time; provision of a policy endorsed EBP mentoring program; mandated and compulsory use of psychometrically sound outcome measures with all clients embedded in workflow, e.g. included within mandatory Individual Family Service Plans | ✓ ✓ | Managers, Human Resources, Knowledge brokers, Policy Makers
Evidence Alert System development | ✓ | Research Investigators

During RCT (8-weeks; June – Aug 2009)
Skills Training Workshops (3-days) | ✓ ✓ ✓ ✓ | Peers, Knowledge Brokers, Research Investigators
Paid EBP time, mentoring, compulsory use of outcome measures (see policy changes above), documentation changes including reminder systems | ✓ ✓ ✓ |


Table 3 - Baseline characteristics of participants

Characteristic | KT Intervention n=73 (%) | Control n=62 (%)
Professional background
  Occupational Therapist | 23 (31) | 26 (42)
  Physiotherapist | 16 (22) | 16 (26)
  Speech Pathologist | 20 (27) | 16 (25)
  Psychologist | 7 (10) | 1 (2)
  Social Worker | 7 (10) | 3 (5)
Grade level
  Level 1 | 19 (26) | 14 (23)
  Level 2 (clinical specialist) | 34 (47) | 37 (60)
  Level 3 (clinical senior) | 13 (18) | 8 (13)
  Manager or other | 7 (9) | 2 (3)
Years' experience in disability field
  <2 years | 11 (15) | 16 (26)
  2 years to 4 years 11 months | 10 (14) | 12 (19)
  5 years to 9 years 11 months | 25 (34) | 14 (23)
  >10 years | 27 (37) | 20 (32)
Previous EBP continuing education?
  Yes | 64 (88)* | 41 (66)*
  No | 9 (12)* | 21 (34)*

* Significant difference between groups at baseline, therefore treated as a covariate in the analysis.


Table 4 – Primary and secondary outcomes

Outcome / time | Treatment n=73: n*, mean (sd) | Control n=62: n*, mean (sd) | Base model: difference (95% CI), p | Mixed effects model: ICC (95% CI); difference (95% CI), p

EBP behavior, self-rated GAS
  Baseline | 59, 54.05 (13.80) | 45, 55.42 (10.92) | |
  8-weeks | 51, 65.96 (13.49) | 43, 62.45 (19.50) | 5.08 (0.40, 10.55), p=0.07 | ICC 0.33 (0.16, 0.69); 4.43 (-10.63, 19.49), p=0.56

EBP behavior, peer-rated GAS
  Baseline | 52, 61.83 (13.69) | 43, 61.52 (16.95) | |
  8-weeks | 44, 74.26 (8.51) | 42, 68.41 (16.63) | 7.86 (1.97, 13.75), p=0.01 | ICC 0.64 (0.36, 0.80); 6.75 (-16.95, 30.44), p=0.57

EAS page hits**
  8-weeks | 6123 | 1677 | |

EBP knowledge
  Baseline | 57, 7.91 (3.05) | 50, 8.09 (3.52) | |
  8-weeks | 52, 10.69 (2.23) | 45, 8.02 (3.13) | 3.29 (2.25, 4.33), p<0.0001 | ICC 0.01 (0.0, 0.26); 3.29 (2.18, 4.40), p<0.0001

EBP attitude (EBPAS), self-rated subset 3
  Baseline | 55, 2.67 (0.75) | 47, 2.57 (0.70) | |
  8-weeks | 50, 2.63 (0.74) | 44, 2.77 (0.61) | -0.27 (-0.57, 0.03), p=0.08 | ICC 0.0 (0.0, 0.32); -0.27 (-0.57, 0.03), p=0.08

EBP attitude (EBPAS), self-rated subset 4
  Baseline | 55, 3.00 (0.51) | 47, 2.98 (0.58) | |
  8-weeks | 50, 3.03 (0.61) | 44, 2.98 (0.59) | 0.03 (-0.22, 0.28), p=0.82 | ICC 0.0 (0.0, 0.25); 0.03 (-0.22, 0.28), p=0.82

EBP attitude (EBPAS), peer-rated subset 3
  Baseline | 42, 2.93 (0.63) | 38, 2.90 (0.72) | |
  8-weeks | 32, 3.17 (0.56) | 39, 1.17 (0.80) | 0.03 (-0.37, 0.42), p=0.88 | ICC 0.0 (0.0, 0.51); 0.03 (-0.37, 0.43), p=0.88

EBP attitude (EBPAS), peer-rated subset 4
  Baseline | 42, 0.89 (0.78) | 32, 3.19 (0.61) | |
  8-weeks | 32, 0.87 (0.75) | 32, 1.13 (0.93) | -0.23 (-0.75, 0.23), p=0.37 | ICC 0.12 (0.0, 0.65); -0.29 (-1.06, 0.48), p=0.45

* Number of participants who completed the outcome measure
** EAS page hit raw data could only be collected and analyzed at the cluster level, not the individual level, because the electronic data was collected in batches.


Table 5 - Mean outcome scores for each cluster [N, mean (sd)]

Variable / outcome time | Cluster 1 (Exp) | Cluster 2 (Exp) | Cluster 3 (Control) | Cluster 4 (Control)

EBP behavior
  Self-rated GAS, baseline | 35, 50.73 (13.75) | 24, 58.88 (12.64) | 28, 48.75 (10.85) | 17, 66.41 (15.46)
  Self-rated GAS, 8-weeks | 24, 66.39 (16.02) | 27, 65.58 (11.08) | 22, 48.97 (15.34) | 21, 76.56 (11.92)
  Peer-rated GAS, baseline | 33, 60.19 (14.26) | 19, 64.68 (12.51) | 28, 55.20 (15.69) | 15, 73.32 (12.57)
  Peer-rated GAS, 8-weeks | 21, 72.69 (9.93) | 23, 75.69 (6.90) | 23, 57.47 (13.11) | 19, 81.66 (9.05)

EBP knowledge
  Exam score, baseline | 35, 7.69 (2.76) | 22, 8.27 (3.51) | 28, 6.50 (3.08) | 22, 10.11 (3.04)
  Exam score, 8-weeks | 25, 10.80 (2.37) | 27, 10.59 (2.14) | 23, 6.98 (3.26) | 22, 9.11 (2.65)

EBP attitude
  Self EBPAS subset 3, baseline | 35, 2.73 (0.73) | 20, 2.57 (0.79) | 27, 2.53 (0.61) | 20, 2.64 (0.83)
  Self EBPAS subset 3, 8-weeks | 24, 2.55 (0.78) | 26, 2.70 (0.70) | 22, 2.52 (0.57) | 22, 3.01 (0.55)
  Self EBPAS subset 4, baseline | 20, 2.86 (0.48) | 35, 3.08 (0.54) | 27, 2.84 (0.56) | 20, 3.16 (0.58)
  Self EBPAS subset 4, 8-weeks | 24, 3.10 (0.59) | 26, 2.96 (0.64) | 22, 2.85 (0.60) | 22, 3.11 (0.58)
  Peer EBPAS subset 3, baseline | 30, 2.80 (0.60) | 12, 3.24 (0.63) | 23, 2.87 (0.74) | 15, 2.95 (0.73)
  Peer EBPAS subset 3, 8-weeks | 16, 3.20 (0.47) | 16, 3.14 (0.65) | 17, 3.07 (0.63) | 15, 3.32 (0.57)
  Peer EBPAS subset 4, baseline | 30, 0.83 (0.64) | 12, 1.03 (1.08) | 23, 1.45 (0.86) | 16, 0.77 (0.48)
  Peer EBPAS subset 4, 8-weeks | 16, 1.05 (0.86) | 16, 0.69 (0.60) | 17, 1.41 (0.99) | 15, 0.82 (0.76)

Web hits
  EAS page hits, 8-weeks | 2987 | 3136 | 928 | 749


References

1. Reddihough DS, Collins KJ: The epidemiology and causes of cerebral palsy. Australian Journal of Physiotherapy 2003, 49:7-14.

2. Novak I, Hines M, Goldsmith S, Barclay R: Clinical prognostic messages from a systematic review on cerebral palsy. Pediatrics 2012, 130:e1285-1312.

3. Straus S, Haynes R: Managing evidence-based knowledge: the need for reliable, relevant and readable resources. Canadian Medical Association

Journal 2009, 180:942. 4. Heiwe S, Kajermo KN, Tyni-Lenné R, Guidetti S, Samuelsson M, Andersson IL, Wengström Y: Evidence-based practice: attitudes, knowledge and behaviour among allied health care professionals. International Journal

for Quality in Health Care 2011. 5. Stevenson K, Lewis M, Hay E: Do physiotherapists' attitudes towards

evidence-based practice change as a result of an evidence-based educational programme? Journal of Evaluation in Clinical practice 2004, 10:207-217.

6. Davis D: Continuing education, guideline implementation, and the emerging transdisciplinary field of knowledge translation. Journal of

Continuing Education in the Health Professions 2006, 26:5-12. 7. Novak I, McIntyre S: Education with workplace supports improves

practitioners' evidence-based practice knowledge and implementation behaviours. Australian Occupational Therapy Journal 2010.

8. Saleh M, Korner-Bitensky N, Snider L, Malouin F, Mazer B, Kennedy E, Roy MA: Actual vs. best practices for young children with cerebral palsy: A survey of paediatric occupational therapists and physical therapists in Quebec, Canada. Developmental Neurorehabilitation 2008, 11:60-80.

9. Hanna SE, Russell DJ, Bartlett DJ, Kertoy M, Rosenbaum PL, Wynn K: Measurement practices in pediatric rehabilitation: a survey of physical therapists, occupational therapists, and speech-language pathologists in Ontario. Phys Occup Ther Pediatr 2007, 27:25-42.

10. O'Connor S, Pettigrew C: The barriers perceived to prevent the successful implementation of evidence based practice by speech and language therapists. International Journal of Language & Communication Disorders

2009, 44:1018-1035. 11. McCluskey A: Occupational therapists report a low level of knowledge,

skill and involvement in evidence-based practice. Australian Occupational

Therapy Journal 2003, 50:3-12. 12. Salbach N, Jaglal S, Korner-Bitensky N, Rappolt S, Davis D: Practitioner

and organizational barriers to evidence-based practice of physical therapists for people with stroke. Physical Therapy 2007, 87:1284.

13. Glasziou P, Ogrinc G, Goodman S: Can evidence-based medicine and clinical quality improvement learn from each other? BMJ quality & safety

2011, 20:i13. 14. Forsetlund L, Bjorndal A, Rashidian A, Jamtvedt G, O'Brien MA, Wolf F,

Davis D, Odgaard-Jensen J, Oxman AD: Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2009:CD003030.


15. Parmelli E, Flodgren G, Beyer F, Baillie N, Schaafsma ME, Eccles MP: The effectiveness of strategies to change organisational culture to improve healthcare performance: a systematic review. Implement Sci 2011, 6:33.

16. O'Brien MA, Rogers S, Jamtvedt G, Oxman AD, Odgaard-Jensen J, Kristoffersen DT, Forsetlund L, Bainbridge D, Freemantle N, Davis DA, et al: Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2007:CD000409.

17. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD: Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2006:CD000259.

18. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J: The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev 2009:CD001096.

19. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE: Knowledge translation of research findings. Implementation Science 2012, 7:50.

20. Davies H, Powell A, Rushmer R: Healthcare professionals’ views on clinician engagement in quality improvement. A literature review 2007.

21. Davies P, Walker AE, Grimshaw JM: A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci 2010, 5:14.

22. Graham, Logan J, Harrison M, Straus S, Tetroe J, Caswell W, Robinson N: Lost in knowledge translation: Time for a map? Journal of Continuing

Education in the Health Professions 2006, 26:13-24. 23. Grol RP, Bosch MC, Hulscher ME, Eccles MP, Wensing M: Planning and

studying improvement in patient care: the use of theoretical perspectives. Milbank Quarterly 2007, 85:93-138.

24. McKinlay RJ, Cotoi C, Wilczynski NL, Haynes RB: Systematic reviews and original articles differ in relevance, novelty, and use in an evidence-based service for physicians: PLUS project. Journal of clinical epidemiology

2008, 61:449-454. 25. Badgett R: Why would physicians undervalue reviews by the Cochrane

Collaboration? Journal of clinical epidemiology 2008, 61:419-421. 26. Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K:

Maximizing the Impact of Systematic Reviews in Health Care Decision Making: A Systematic Scoping Review of Knowledge Translation Resources. Milbank Quarterly 2011, 89:131-156.

27. Gülmezoglu A, Langer A, Piaggio G, Lumbiganon P, Villar J, Grimshaw J: Cluster randomised trial of an active, multifaceted educational intervention based on the WHO Reproductive Health Library to improve obstetric practices. BJOG: An International Journal of Obstetrics

& Gynaecology 2007, 114:16-23. 28. Haynes R, Holland J, Cotoi C, McKinlay R, Wilczynski N, Walters L, Jedras

D, Parrish R, McKibbon K: McMaster PLUS: A cluster randomized clinical trial of an intervention to accelerate clinical use of evidence-based information from digital libraries. Journal of the American Medical

Informatics Association 2006, 13:593-600.


29. Dobbins M, Hanna S, Ciliska D, Manske S, Cameron R, Mercer S, O'Mara L, DeCorby K, Robeson P: A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implementation

Science 2009, 4:61. 30. Bekkering G, Hendriks H, Van Tulder M, Knol D, Hoeijenbos M,

Oostendorp R, Bouter L: Effect on the process of care of an active strategy to implement clinical guidelines on physiotherapy for low back pain: a cluster randomised controlled trial. British Medical Journal 2005, 14:107.

31. Rebbeck T, Maher C, Refshauge K: Evaluating two implementation strategies for whiplash guidelines in physiotherapy: A cluster-randomised trial. Australian Journal of Physiotherapy 2006, 52:165.

32. Pennington L, Roddam H, Burton C, Russell I, Russell D: Promoting research use in speech and language therapy: a cluster randomized controlled trial to compare the clinical effectiveness and costs of two training strategies. Clinical rehabilitation 2005, 19:387.

33. Stevenson K, Lewis M, Hay E: Does physiotherapy management of low back pain change as a result of an evidence-based educational programme? Journal of Evaluation in Clinical practice

2006, 12:365-375. 34. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer H, Kunz R: Do short

courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ 2002, 325:1338.

35. Ramos KD, Schafer S, Tracz SM: Validation of the Fresno test of competence in evidence based medicine. BMJ 2003, 326:319.

36. Dizon JM, Grimmer-Somers KA, Kumar S: Current evidence on evidence-based practice training in allied health: a systematic review of the literature. Int J Evid Based Healthc 2012, 10:347-360.

37. Scott SD, Albrecht L, O'Leary K, Ball GDC, Hartling L, Hofmeyer A, Jones CA, Klassen TP, Burns KK, Newton AS: Systematic review of knowledge translation strategies in the allied health professions. Implementation

Science 2012, 7:70. 38. Campbell MK, Piaggio G, Elbourne DR, Altman DG: Consort 2010

statement: extension to cluster randomised trials. BMJ: British Medical

Journal 2012, 345. 39. Flodgren G, Parmelli E, Doumit G, Gattellari M, O'Brien MA, Grimshaw J,

Eccles MP: Local opinion leaders: effects on professional practice and health care outcomes. The Cochrane Library 2011.

40. Russell D, Rivard L, Walter S, Rosenbaum P, Roxborough L, Cameron D, Darrah J, Bartlett D, Hanna S, Avery L: Using knowledge brokers to facilitate the uptake of pediatric measurement tools into clinical practice: A before-after intervention study. Implementation Science 2010, 5:92.

41. Kiresuk T, Sherman R: Goal attainment scaling: A general method for evaluating comprehensive community mental health programs. Community mental health journal 1968, 4:443-453.

42. Shaneyfelt T, Baum K, Bell D, Feldstein D, Houston T, Kaatz S, Whelan C, Green M: Instruments for evaluating education in evidence-based practice: a systematic review. JAMA 2006, 296:1116.


43. Cusick A, Ottenbacher K: Goal attainment scaling: Continuing education evaluation tool. Journal of Continuing Education in the Health Professions

1994, 14:141-154. 44. Curran JA, Grimshaw JM, Hayden JA, Campbell B: Knowledge translation

research: The science of moving research into policy and practice. Journal of Continuing Education in the Health Professions 2011, 31:174-180.

45. Straus SE, Ball C, Balcombe N, Sheldon J, McAlister FA: Teaching Evidence based Medicine Skills Can Change Practice in a Community Hospital. Journal of General Internal Medicine 2005, 20:340-343.

46. Lucas BP, Evans AT, Reilly BM, Khodakov YV, Perumal K, Rohr LG, Akamah JA, Alausa TM, Smith CA, Smith JP: The impact of evidence on physicians’ inpatient treatment decisions. Journal of General Internal

Medicine 2004, 19:402-409. 47. Hrisos S, Eccles M, Francis J, Dickinson H, Kaner E, Beyer F, Johnston M:

Are there valid proxy measures of clinical behaviour? a systematic review. Implementation Science 2009, 4:37.

48. Hrisos S, Eccles MP, Francis JJ, Dickinson HO, Kaner EFS, Beyer F, Johnston M: Are there valid proxy measures of clinical behaviour? a systematic. Implementation Science 2009, 4:37.

49. Aarons G: Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research 2004, 6:61-74.

50. Donner A, Klar N, Klar NS: Design and analysis of cluster randomization

trials in health research. Arnold London; 2000. 51. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of

innovations in service organizations: systematic review and recommendations. Milbank Quarterly 2004, 82:581-629.

52. Rogers EM: Diffusion of innovations. Free Pr; 1995. 53. Aarons G, Sawitzky A: Organizational climate partially mediates the

effect of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health and Mental Health

Services Research 2006, 33:289-301. 54. French B, Thomas L, Baker P, Burton C, Pennington L, Roddam H: What

can management theories offer evidence-based practice? A comparative analysis of measurement tools for organisational context. Implementation

Science 2009, 4:28. 55. McCluskey A, Lovarini M: Providing education on evidence-based

practice improved knowledge but did not change behaviour: a before and after study. BMC Medical Education 2005, 5:40.

56. Graham ID, Bick D, Tetroe J, Straus SE, Harrison MB: Measuring outcomes of evidence-based practice: Distinguishing between knowledge use and its impact. Evaluating the Impact of Implementing Evidence-Based

Practice 2010, 1:18. 57. Steenbeek D, Gorter JW, Ketelaar M, Galama K, Lindeman E:

Responsiveness of Goal Attainment Scaling in comparison to two standardized measures in outcome evaluation of children with cerebral palsy. Clinical rehabilitation 2011, 25:1128-1139.


58. Flodgren G, Eccles M, Shepperd S, Scott A, Parmelli E, Beyer F: An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane database of systematic reviews (Online) 2011, 7:CD009255.

59. Dickinson HO, Hrisos S, Eccles MP, Francis J, Johnston M: Statistical considerations in a systematic review of proxy measures of clinical behaviour. Implement Sci 2010, 5:20.

60. Kiresuk TJ, Sherman RE: Goal attainment scaling: A general method for evaluating comprehensive community mental health programs. Community mental health journal 1968, 4:443-453.

61. Eccles M, Hrisos S, Francis J, Kaner E, Dickinson H, Beyer F, Johnston M: Do self-reported intentions predict clinicians' behaviour: a systematic review. Implementation Science 2006, 1:28.

62. Thomson O, Freemantle N, Oxman A, Wolf F, Davis D, Herrin J: Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane database of systematic

reviews (Online) 2001:CD003030. 63. Walshe K: Understanding what works--and why--in quality

improvement: the need for theory-driven evaluation. Int J Qual Health

Care 2007, 19:57-59.


Email from the Implementation Science Editorial Team confirming that this paper

has been accepted for publication


Publications in peer reviewed journals

1. Campbell L, Novak I, McIntyre S, Lord S. A KT intervention including the Evidence Alert System to improve clinicians' evidence-based practice behaviour – a cluster randomized controlled trial. Implementation Science. In press.

2. Novak I, McIntyre S, Morgan C, Campbell L, Dark L, Morton N, Stumbles E, Wilson S, Goldsmith S. State of the evidence: Systematic review of interventions for children with cerebral palsy. Developmental Medicine &

Child Neurology. 2013;55(10):885

Conference presentations

1. Campbell L, Novak I, McIntyre S. Knowledge translation in cerebral palsy. Oral

poster session. Evidence Live 13. Oxford, United Kingdom. March 24-26, 2013. URL: http://www.evidencelive.org/2013/top-abstracts

2. Campbell L, Novak I, McIntyre S. Effectiveness of a multi-component knowledge translation strategy for changing health professional’s evidence-based practice behaviour: An evaluator blinded cluster-randomised controlled trial. Joanna Briggs Institute International

Convention, Adelaide. November 2011. 3. Campbell L, Novak I, McIntyre S. Patterns and rates of use of an evidence-based practice intranet resource for allied health professionals: a randomised controlled trial. Developmental Medicine and Child Neurology, 52(S2): 31.

Award

Poster won top ten abstract prize at the Evidence Live 2013 conference. URL link: http://www.evidencelive.org/2013/top-abstracts
Campbell, L., Novak, I. & McIntyre, S. (2013). Knowledge translation in cerebral palsy. Oral poster session. Evidence Live 13. Oxford, United Kingdom. March 24-26, 2013.