RESEARCH METHODS, EVIDENCE-BASED PRACTICE,
AND CT/RT
OBJECTIVES
TOPICS OF DISCUSSION:
•RESEARCH METHODS AND EVIDENCE-BASED
PRACTICE
• THE NATIONAL REGISTRY OF EVIDENCE-BASED PROGRAMS AND PRACTICES (NREPP)
•CURRENT WGI RESEARCH ENDEAVORS
(Q&A)
ACTIVITY
BASED ON PREVIOUS RESEARCH, WE PREDICT THE FOLLOWING DISTRIBUTION OF M&M COLORS IN A GIVEN SAMPLE SIZE: M&M'S Milk Chocolate: 30% brown, 20% yellow, 20% red, 10% orange, 10% green, and 10% blue. M&M'S Peanut: 20% brown, 20% yellow, 20% blue, 20% red, 10% orange, and 10% green
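A chi-square goodness-of-fit test is the standard way to check an observed bag of candies against these predicted proportions. A minimal sketch in Python; the observed counts below are invented for illustration:

```python
# Predicted proportions for M&M'S Milk Chocolate, from the activity above
expected_props = {"brown": 0.30, "yellow": 0.20, "red": 0.20,
                  "orange": 0.10, "green": 0.10, "blue": 0.10}

# Hypothetical counts from a sample bag of 100 candies (invented data)
observed = {"brown": 27, "yellow": 22, "red": 21,
            "orange": 12, "green": 9, "blue": 9}

n = sum(observed.values())

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi_sq = sum((observed[c] - expected_props[c] * n) ** 2 / (expected_props[c] * n)
             for c in observed)

# Critical value for df = 5 (six colors minus one) at alpha = .05
CRITICAL_05 = 11.07
fits_prediction = chi_sq <= CRITICAL_05
print(f"chi-square = {chi_sq:.2f}; prediction holds: {fits_prediction}")
```

With these invented counts the statistic is well under the critical value, so the sample is consistent with the predicted distribution.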
RESEARCH METHODS
• DESCRIPTIVE
• EXPERIMENTAL
DESCRIPTIVE METHODS
DESCRIPTIVE RESEARCH METHODS (4 TYPES):
• OBSERVATIONAL
• SURVEYS
• CASE STUDIES
• CORRELATION
DESCRIPTIVE RESEARCH METHODS
• DESCRIBE A SET OF OBSERVATIONS OR DATA COLLECTED
• PHENOMENON BEING OBSERVED MUST BE OPERATIONALIZED FOR VALIDITY & RELIABILITY OF MEASUREMENT
• CANNOT DRAW CONCLUSIONS FROM THE DATA
• CANNOT MAKE ACCURATE PREDICTIONS
• CANNOT DETERMINE CAUSE AND EFFECT (DOES A CAUSE B?)
OBSERVATIONAL RESEARCH WHAT IS IT:
• DIRECT OBSERVATION OF
EXTRINSIC PHENOMENON OR
BEHAVIOR
• DATA IS COLLECTED ABOUT
OBSERVATIONS
• NO VARIABLES ARE
MANIPULATED (NOT AN
EXPERIMENT)
• RESEARCHERS ARE LOOKING
FOR PATTERNS OR THEMES
(FREQUENCY, INTENSITY,
DURATION OF BEHAVIORS)
EXAMPLES:
• OBSERVING/DESCRIBING/
QUANTIFYING:
−BEHAVIORS OF A TROUBLED
PRESCHOOLER
−THE NUMBER OF CIGARETTES
CONSUMED BY SOMEONE WHO
SMOKES
−TIME SPENT PLAYING VIDEO GAMES
−RECIDIVISM RATES IN PRISON SYSTEMS
−NUMBER OF TIMES A GAMBLER
PULLS A SLOT MACHINE LEVER
OBSERVATIONAL RESEARCH (CONT.) ADVANTAGES:
• SIMPLE DESIGN AND ANALYSIS
• DATA YIELDED CAN BE UTILIZED TO FORM HYPOTHESES ABOUT OBSERVATIONS
• DATA PROVIDES INFORMATION FOR PREDICTING FUTURE BEHAVIOR
• COMPELS FURTHER STUDY
DISADVANTAGES:
• CANNOT EXPLAIN THE CAUSE OF BEHAVIOR
• CANNOT ASSUME OBSERVED BEHAVIORS ARE TYPICAL FOR THE SUBJECT
• CANNOT ACCURATELY PREDICT FUTURE BEHAVIOR
• WEAK GENERALIZABILITY
• CANNOT ASSESS INTRINSIC QUALITIES (COGNITION, AFFECT, INTENTIONS, & PERCEPTIONS)
SURVEYS
WHAT IS IT:
• SELF-REPORTED DATA
• THE INDIVIDUAL PROVIDES THE INFORMATION
• CAN INCLUDE REPORTS OF THOUGHTS, FEELINGS, AND BEHAVIORS (INTRINSIC MATTERS)
EXAMPLES:
• SURVEY MONKEY
• LIKERT-SCALE QUESTIONNAIRES
• PAIN SCALES
SURVEYS
ADVANTAGES:
• IT’S RELATIVELY CHEAP
• EASILY ADMINISTERED
AND REPLICABLE
• A LARGE AMOUNT OF DATA CAN BE OBTAINED FROM MANY PEOPLE IN A FAIRLY SHORT TIME
DISADVANTAGES:
• POINT IN TIME REPORTING –
NOT NECESSARILY
GENERALIZABLE
• CANNOT GENERALIZE BETWEEN INDIVIDUALS/GROUPS SURVEYED
• NO CAUSE AND EFFECT ESTABLISHED
CASE STUDIES WHAT IS IT:
• AN UP-CLOSE, IN-DEPTH, AND DETAILED EXAMINATION OF A SUBJECT OF STUDY (THE CASE)
• THE "CASE" BEING STUDIED MAY BE AN INDIVIDUAL, ORGANIZATION, EVENT, OR ACTION, EXISTING IN A SPECIFIC TIME AND PLACE
• NEARLY EVERY ASPECT OF THE SUBJECT'S LIFE AND HISTORY IS ANALYZED TO SEEK PATTERNS AND CAUSES FOR BEHAVIOR
• MAY USE OBSERVATION, SURVEYS, AND INFORMATION COLLECTED FROM COLLATERALS TO FORMULATE THE CASE
• EXPLORES BEHAVIORAL, EMOTIONAL, COGNITIVE PATTERNS OF THE INDIVIDUAL OR GROUP
EXAMPLES:
• INVESTIGATION OF THE SYMPTOMS OF AN INDIVIDUAL WITH DEPRESSION IN ORDER TO IDENTIFY COMMON SYMPTOMS OF DEPRESSION
• FUNCTIONAL BEHAVIOR ASSESSMENT OF A TROUBLED STUDENT
• PSYCHOLOGICAL ASSESSMENTS
• INVESTIGATION OF THE SOCIAL SKILLS OF AN INDIVIDUAL WITH AUTISM IN ORDER TO BETTER UNDERSTAND THE SOCIAL UNDERSTANDING OF INDIVIDUALS WITH AUTISM
CASE STUDIES ADVANTAGES:
• THE POTENTIAL FOR THE DEVELOPMENT OF NOVEL HYPOTHESES FOR LATER TESTING
• THE CASE STUDY CAN PROVIDE DETAILED DESCRIPTIONS OF SPECIFIC AND RARE CASES
• CASE STUDIES CAN BE BENEFICIAL BECAUSE THEY CAN PROVIDE DETAILED INFORMATION AND INSIGHT INTO THE FEELINGS, THOUGHTS, AND BEHAVIORS OF A PERSON WHO MAY BE UNIQUE IN SOME WAYS
DISADVANTAGES:
• CANNOT MAKE CAUSAL CONCLUSIONS FROM CASE STUDIES
• CANNOT RULE OUT ALTERNATIVE EXPLANATIONS FOR BEHAVIORS
• MAY NOT GENERALIZE TO OTHER PEOPLE
CORRELATION WHAT IS IT:
• THIS RESEARCH METHOD
INVOLVES DETERMINING
THE STRENGTH OF THE
RELATIONSHIP BETWEEN
TWO OR MORE VARIABLES
• EXAMINES THE
COVARIATION OF TWO OR
MORE VARIABLES
EXAMPLES:
• IS THERE A RELATIONSHIP BETWEEN PLAYING VIOLENT VIDEO GAMES AND AGGRESSION IN CHILDREN?
• IS THERE A RELATIONSHIP BETWEEN CIGARETTE SMOKING AND LUNG DISEASE?
CORRELATION
ADVANTAGES:
• USEFUL FOR PREDICTING
THE LEVEL OF ONE
VARIABLE BASED ON
KNOWLEDGE OF THE
OTHER VARIABLE.
• PROVIDES A BASIS FOR FURTHER STUDY
DISADVANTAGES:
• CANNOT DETERMINE IF
ONE VARIABLE CAUSES
ANOTHER
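The strength of covariation described above is typically summarized with Pearson's r, which ranges from -1 to +1. A sketch for the video-game example; the paired observations are invented for illustration:

```python
import math

# Hypothetical paired observations: weekly hours of violent video games
# and recorded aggressive incidents per child (invented data)
hours = [1, 2, 4, 5, 7, 9, 10]
incidents = [0, 1, 1, 3, 4, 5, 7]

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(hours, incidents)
print(f"r = {r:.2f}")
```

Even a large r only quantifies covariation and supports prediction; as the slide notes, it cannot establish that one variable causes the other.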
EXPERIMENTAL METHOD
CT/RT SCENARIO
OUR HUNCH:
CHOICE THEORY/REALITY THERAPY IS AN EFFECTIVE
INTERVENTION FOR RELIEVING SYMPTOMS OF MENTAL
DISORDERS
QUESTIONS OUR CT/RT RESEARCH TEAM NEEDS TO ANSWER
• WHAT IS CHOICE THEORY/REALITY THERAPY?
• WHAT LEVEL OF CT/RT TRAINING IS REQUIRED OF A RESEARCHER PARTICIPATING IN THE STUDY?
• HOW DO WE ENSURE THAT ALL RESEARCHERS WILL IMPLEMENT THE SAME PROTOCOLS AND PROCEDURES DURING THE STUDY?
• WHAT MENTAL DISORDER DO WE WANT TO STUDY?
• HOW WILL WE DETERMINE IF A PARTICIPANT MEETS CRITERIA FOR THE MENTAL DISORDER WE CHOOSE TO STUDY?
• HOW WILL WE MEASURE SYMPTOMS?
• HOW CAN WE KNOW THAT OUR INTERVENTION, RATHER THAN SOME OTHER FACTOR, IS WHAT LEADS TO CHANGE IN RESEARCH PARTICIPANTS?
CT/RT SCENARIO HYPOTHESES
HYPOTHESES BASED ON OUR HUNCH AND AN UNDERSTANDING OF RESEARCH METHODS:
• “TEACHING THE CONCEPTS OF THE 5 BASIC NEEDS, QUALITY WORLD, CARING/DEADLY HABITS, AND TOTAL BEHAVIOR TO 5TH GRADE STUDENTS WILL DECREASE INCIDENCES OF BULLYING IN THE STUDENT COHORT.”
• “UTILIZING REALITY THERAPY FOCUSED ON INVOLVEMENT WITH THE CLIENT, SELF-EVALUATION, MAKING A PLAN FOR CHANGE, AND COMMITTING TO THE PLAN WITH DEPRESSED UNEMPLOYED POST HIGH SCHOOL GRADUATES WILL INCREASE EMPLOYMENT RATES IN THAT COHORT.”
QUALITIES OF EXPERIMENTAL RESEARCH
• EXPERIMENTAL OR TREATMENT GROUP: THE GROUP THAT RECEIVES THE EXPERIMENTAL TREATMENT
• CONTROL GROUP: A BASELINE
COMPARISON GROUP FROM WHOM
TREATMENT IS WITHHELD, OFTEN IN THE
FORM OF A WAITLIST
• INDEPENDENT VARIABLE: THIS IS THE
VARIABLE UNDER INVESTIGATION THAT THE
EXPERIMENTER MANIPULATES IN A STUDY.
ITS INFLUENCE ON THE DEPENDENT
VARIABLE IS UNDER INVESTIGATION.
• DEPENDENT VARIABLE: THE VARIABLE THAT IS MEASURED IN A STUDY. THE EXPERIMENTER DOES NOT CONTROL THIS VARIABLE.
• RANDOM ASSIGNMENT: IN A STUDY,
EACH SUBJECT HAS AN EQUAL
PROBABILITY OF BEING SELECTED FOR
EITHER THE TREATMENT OR CONTROL
GROUP.
• DOUBLE BLIND: NEITHER THE SUBJECT NOR THE EXPERIMENTER KNOWS WHETHER THE SUBJECT IS IN THE TREATMENT OR THE CONTROL CONDITION.
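Random assignment as defined above can be sketched in a few lines: shuffle the participant pool, then split it, so every subject has an equal chance of landing in either group. The participant IDs here are arbitrary placeholders:

```python
import random

def random_assign(participants, seed=None):
    """Shuffle the participant pool and split it in half, so each
    subject is equally likely to end up in treatment or control."""
    rng = random.Random(seed)  # seed only to make the sketch repeatable
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return {"treatment": pool[:half], "control": pool[half:]}

groups = random_assign(range(1, 21), seed=42)
print(groups["treatment"])
print(groups["control"])
```

In a double-blind design the assignment table would be held by a third party and subjects referred to only by coded IDs, so that neither the subject nor the experimenter can tell which condition any subject is in.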
EXPERIMENTAL WHAT IT IS:
• CAN INCLUDE VARIOUS DESCRIPTIVE METHODS IN COLLECTING DATA
ADVANTAGES:
• THE RESEARCH METHOD THAT CAN DETERMINE CAUSALITY
• CONTROLS FOR THIRD (CONFOUNDING) VARIABLES
• PROVIDES EVIDENCE THAT AN INTERVENTION IS OR IS NOT EFFECTIVE AT PRODUCING THE ANTICIPATED OUTCOME
DISADVANTAGES:
• MORE COMPLEX
RESEARCH
PROCEDURES AND
ANALYSIS
• REQUIRES MORE RESOURCES (MONEY, TIME, PERSONNEL)
EVIDENCE-BASED PRACTICE
WHAT IS EVIDENCE BASED PRACTICE?
•ESTABLISHED EFFECTIVE CLINICAL
INTERVENTIONS THAT DEMONSTRATE
POSITIVE PSYCHOLOGICAL OUTCOMES
THAT ARE VALIDATED BY RESEARCH DATA
WHY EBPS?
• PREVENT OPINION-BASED TREATMENT
•MOVE AWAY FROM SUBJECTIVE CLINICAL
DECISION MAKING
•MOVE TOWARD INFORMED, EFFECTIVE
PSYCHOLOGICAL PRACTICES DEMONSTRATING
POSITIVE RESEARCH OUTCOMES.
BENEFITS FOR CONSUMERS
• INCREASED POTENTIAL FOR POSITIVE BEHAVIORAL HEALTH OUTCOMES BECAUSE INTERVENTIONS HAVE BEEN PROVEN EFFECTIVE
• INCREASED QUALITY OF CARE BECAUSE PROVIDERS ARE TRAINED IN PROTOCOLS THAT ARE DESIGNED TO ADDRESS CONSUMERS’ SPECIFIC NEEDS
•TIME EFFECTIVE AND COST EFFECTIVE
BENEFITS FOR PROVIDERS
• INCREASED PROVIDERS’ SKILLS, KNOWLEDGE, AND COMPETENCE
• INCREASED CLINICAL CONFIDENCE AND EFFECTIVE CLINICAL DECISION MAKING
•CONSISTENCY OF CARE ACROSS PROFESSIONALS, RESULTING IN ENHANCED PROFESSIONAL CONSULTATION AND INCREASED CONSUMER CONFIDENCE
•SUCCESSFUL TREATMENT OUTCOMES RESULT IN CONSUMER SATISFACTION AND INCREASED REFERRALS
NREPP
• SUBSTANCE ABUSE AND MENTAL HEALTH SERVICES
ADMINISTRATION’S (SAMHSA’S) NATIONAL
REGISTRY OF EVIDENCE-BASED PROGRAMS AND
PRACTICES (NREPP)
• NREPP HAS AN ONLINE REGISTRY OF OVER 260
INTERVENTIONS SUPPORTING MENTAL HEALTH
PROMOTION, MENTAL HEALTH TREATMENT,
SUBSTANCE ABUSE PREVENTION, SUBSTANCE ABUSE
TREATMENT, AND CO-OCCURRING DISORDERS
Reviews & Submissions Once an intervention has been accepted for review, the developer and NREPP staff work together to identify the outcomes and materials that will be used in the review. A review generally takes several months to complete, from the initial scheduling of the kick-off call to the completion of an NREPP intervention summary. The three stages of the review process are summarized below.
Pre-Review When NREPP is ready to commence the review of an accepted intervention, NREPP staff contact the developer to schedule a kick-off call. This call should include the developer and any other individuals who are closely familiar with the intervention and can provide information related to research and dissemination. These individuals will serve as NREPP's contacts for the remainder of the review process. During the kick-off call, developers:
• Meet the NREPP staff who will be involved in the review
• Learn about the NREPP review process and rating criteria
• Work with the NREPP staff to identify additional materials needed for review
• Have an opportunity to ask questions/gain clarification on any part of the NREPP review process
NREPP: NATIONAL REGISTRY OF EVIDENCE-BASED PROGRAMS AND PRACTICES
Review NREPP staff identify the reviewers who will participate in the review. NREPP staff send review packets to two pairs of reviewers. One pair of reviewers focuses on Quality of Research, while the other pair looks at Readiness for Dissemination. Each of the reviewers independently reviews the materials provided and calculates ratings using the predefined Quality of Research and Readiness for Dissemination review criteria. The reviewers submit their ratings to NREPP. If their ratings differ by a significant margin, NREPP staff may hold a consensus conference to discuss and resolve the differences.
Reporting NREPP staff compile the ratings and descriptive information into an intervention summary that is shared with the developer. Once the developer approves the intervention summary, SAMHSA reviews the summary and publishes it on the NREPP Web site.
Quality of Research NREPP's Quality of Research ratings are indicators of the strength of the evidence supporting the outcomes of the intervention. Higher scores indicate stronger, more compelling evidence. Each outcome is rated separately because interventions may target multiple outcomes (e.g., alcohol use, marijuana use, behavior problems in school), and the evidence supporting the different outcomes may vary. NREPP uses very specific standardized criteria to rate interventions and the evidence supporting their outcomes. All reviewers who conduct NREPP reviews are trained on these criteria and are required to use them to calculate their ratings.
Criteria for Rating Quality of Research Each reviewer independently evaluates the Quality of Research for an intervention's reported results using the following six criteria:
1. Reliability of measures
2. Validity of measures
3. Intervention fidelity
4. Missing data and attrition
5. Potential confounding variables
6. Appropriateness of analysis
Reviewers use a scale of 0.0 to 4.0, with 4.0 being the highest rating given.
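One simple way to turn two reviewers' 0.0-4.0 criterion scores into a single outcome rating is an unweighted average. This is an illustrative assumption, not NREPP's published formula, and the ratings below are invented:

```python
CRITERIA = ["reliability", "validity", "fidelity",
            "missing_data", "confounds", "analysis"]

# Hypothetical ratings from two independent reviewers (0.0-4.0 scale)
reviewer_a = {"reliability": 3.0, "validity": 2.5, "fidelity": 4.0,
              "missing_data": 2.0, "confounds": 3.0, "analysis": 3.5}
reviewer_b = {"reliability": 3.5, "validity": 2.0, "fidelity": 4.0,
              "missing_data": 2.5, "confounds": 2.5, "analysis": 3.5}

def outcome_rating(*reviewers):
    """Average each criterion across reviewers, then average the six
    criterion means into one overall 0.0-4.0 score (simple unweighted
    average -- an assumption for illustration)."""
    per_criterion = [sum(r[c] for r in reviewers) / len(reviewers)
                     for c in CRITERIA]
    return sum(per_criterion) / len(per_criterion)

rating = outcome_rating(reviewer_a, reviewer_b)
print(f"overall Quality of Research rating: {rating:.2f}")
```

In practice, as noted later in the review process, widely divergent reviewer ratings would first be resolved in a consensus conference rather than simply averaged.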
1. Reliability of Measures Outcome measures should have acceptable reliability to be interpretable. "Acceptable" here means reliability at a level that is conventionally accepted by experts in the field.
0 = Absence of evidence of reliability or evidence that some relevant types of reliability (e.g., test-retest, interrater, interitem) did not reach acceptable levels.
2 = All relevant types of reliability have been documented to be at acceptable levels in studies by the applicant.
4 = All relevant types of reliability have been documented to be at acceptable levels in studies by independent investigators.
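Interitem reliability, one of the types named above, is commonly estimated with Cronbach's alpha. A sketch with invented questionnaire data (three items, five respondents, 1-5 responses):

```python
# Hypothetical responses: items[i][j] is item i's score from respondent j
items = [
    [3, 4, 5, 2, 4],
    [2, 4, 5, 3, 4],
    [3, 5, 4, 2, 5],
]

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances
    / variance of respondents' total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

By convention, alpha of roughly .70 or higher is read as acceptable interitem reliability; the threshold, like the data here, is a field convention rather than an NREPP rule.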
2. Validity of Measures Outcome measures should have acceptable validity to be interpretable. "Acceptable" here means validity at a level that is conventionally accepted by experts in the field.
0 = Absence of evidence of measure validity, or some evidence that the measure is not valid.
2 = Measure has face validity; absence of evidence that measure is not valid.
4 = Measure has one or more acceptable forms of criterion-related validity (correlation with appropriate, validated measures or objective criteria); OR, for objective measures of response, there are procedural checks to confirm data validity; absence of evidence that measure is not valid.
3. Intervention Fidelity The "experimental" intervention implemented in a study should have fidelity to the intervention proposed by the applicant. Instruments that have tested acceptable psychometric properties (e.g., inter-rater reliability, validity as shown by positive association with outcomes) provide the highest level of evidence.
0 = Absence of evidence or only narrative evidence that the applicant or provider believes the intervention was implemented with acceptable fidelity.
2 = There is evidence of acceptable fidelity in the form of judgment(s) by experts, systematic collection of data (e.g., dosage, time spent in training, adherence to guidelines or a manual), or a fidelity measure with unspecified or unknown psychometric properties.
4 = There is evidence of acceptable fidelity from a tested fidelity instrument shown to have reliability and validity.
4. Missing Data and Attrition Study results can be biased by participant attrition and other forms of missing data. Statistical methods as supported by theory and research can be employed to control for missing data and attrition that would bias results, but studies with no attrition or missing data needing adjustment provide the strongest evidence that results are not biased.
0 = Missing data and attrition were taken into account inadequately, OR there was too much to control for bias.
2 = Missing data and attrition were taken into account by simple estimates of data and observations, or by demonstrations of similarity between remaining participants and those lost to attrition.
4 = Missing data and attrition were taken into account by more sophisticated methods that model missing data, observations, or participants, OR there were no attrition or missing data needing adjustment.
5. Potential Confounding Variables Often variables other than the intervention may account for the reported outcomes. The degree to which confounds are accounted for affects the strength of causal inference.
0 = Confounding variables or factors were as likely to account for the outcome(s) reported as were the hypothesized causes.
2 = One or more potential confounding variables or factors were not completely addressed, but the intervention appears more likely than these confounding factors to account for the outcome(s) reported.
4 = All known potential confounding variables appear to have been completely addressed in order to allow causal inference between the intervention and outcome(s) reported.
6. Appropriateness of Analysis Appropriate analysis is necessary to make an inference that an intervention caused reported outcomes.
0 = Analyses were not appropriate for inferring relationships between intervention and outcome, OR sample size was inadequate.
2 = Some analyses may not have been appropriate for inferring relationships between intervention and outcome, OR sample size may have been inadequate.
4 = Analyses were appropriate for inferring relationships between intervention and outcome. Sample size and power were adequate.
Minimum Requirements To be eligible for NREPP review, an intervention must meet the following minimum requirements:
• The intervention has produced one or more positive behavioral outcomes (p ≤ .05) in mental health or substance abuse among individuals, communities, or populations. Significant differences between groups over time must be demonstrated for each outcome.
• Evidence of the positive behavioral outcome(s) has been demonstrated in at least one study using an experimental or quasi-experimental design. Experimental designs include random assignment of participants, a control or comparison group in addition to the intervention group, and pre- and posttest assessments. Quasi-experimental designs include a control or comparison group and pre- and posttest assessments but do not use random assignment. Studies with single-group, pretest/posttest designs do not meet this requirement.
Minimum Requirements (cont.)
• The results of these studies have been published in a peer-reviewed journal or other professional publication (e.g., a book volume) or documented in a comprehensive evaluation report. Comprehensive evaluation reports must include the following sections or their equivalent: a review of the literature, theoretical framework, purpose, methodology, findings/results (with statistical analysis and p values for significant outcomes), discussion, and conclusions.
• Information must be included to enable rating of the six Quality of Research criteria.
• Implementation materials, training and support resources, and quality assurance procedures have been developed and are ready for use by the public.
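Taken together, the minimum requirements amount to a screening checklist. A sketch of that gate as code; the field names are hypothetical for illustration, not an NREPP data format:

```python
def meets_minimum_requirements(study):
    """True only if a study record clears every minimum requirement:
    a significant outcome, an eligible design, documented results,
    and publicly available implementation materials."""
    significant_outcome = any(p <= 0.05 for p in study["outcome_p_values"])
    acceptable_design = study["design"] in {"experimental", "quasi-experimental"}
    documented = study["peer_reviewed"] or study["evaluation_report"]
    materials_ready = study["implementation_materials_ready"]
    return all([significant_outcome, acceptable_design,
                documented, materials_ready])

eligible = meets_minimum_requirements({
    "outcome_p_values": [0.03, 0.20],   # one significant outcome suffices
    "design": "quasi-experimental",     # single-group designs would fail here
    "peer_reviewed": True,
    "evaluation_report": False,
    "implementation_materials_ready": True,
})
print(eligible)
```

Meeting this screen only makes an intervention eligible for review; the Quality of Research ratings above then determine how strong the evidence actually is.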
• BEVERLY LAFOND
• MIKE FULKERSON
• JERRI ELLIS
• JANET MORGAN
• BOB WUBBOLDING
• HOURS TURNED INTO DAYS, TURNED INTO WEEKS,
TURNED INTO MONTHS….