
SOCW 671 # 8

Single Subject/System Designs: Intro to Sampling

Single-Subject Designs

Evaluation designs that involve arrangements in which repeated observations are taken before, during, and/or after an intervention.

These observations are compared to monitor progress and assess the outcome of the intervention.

Logic of Single Subject/System Designs

Unlike experimental designs that involve experimental and control groups, single system designs have one identified client/system

This identified client/system may be an individual or group

These designs are based on a time-series

Use on the Micro Level of Social Work Practice

If you are practicing at the micro level, this likely will be the most common method to use.

Directly related to client progress

Measurement Issues

Need to specify targets of intervention by having an operational definition of target behavior

Triangulation - the use of two or more indicators or measurement strategies when confronted with a multiplicity of measurement options

Self-report scales are often used; these have pluses and minuses

Unobtrusive Measurement Preferred

Will want to reduce bias and reactivity through the use of unobtrusive measurement (observing and recording behavioral data in ways that, by and large, are not noticeable to the person being observed).

First Need Baseline (control phase) Measures

Pattern should not reflect a trend of dramatic improvement to the degree that it suggests the problem is nearing resolution

Should have many measurement points

Chronologically graphed data should be stable

Alternative Designs

AB

ABAB

Multiple Baseline & Successive Interventions

Multiple Component

AB: Basic Single-Subject Design

Collect data during baseline period

Collect data during intervention

A problem is that it does not control well for history

ABAB: Withdrawal/Reversal Design

Two problems:

Improvement in target behavior may not be reversible even when intervention is withdrawn

Practitioner may be unwilling to withdraw something that appears to be working

Multiple Baseline-Design (Successive Interventions)

Consists of several different interventions

The interventions are staggered. Each intervention is applied one after another in separate phases.

The application of the intervention is provided to different target problems, settings, or individuals

Multiple-Component Design

Combines elements of the experimental replication and successive intervention designs

Can be used with or without baselines

Purpose is to compare the relative effectiveness of two different interventions

Problem with being able to infer that only one component resulted in the change in target behavior

Data Analysis

Two-standard-deviation-band approach (Shewhart chart)

Chi-square, t-test, & ANOVA
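As an illustration, here is a minimal sketch (hypothetical data, assuming SciPy is available) of how an independent-samples t-test could compare the baseline and intervention phases; note that autocorrelation in time-series data can violate the test's independence assumption.

```python
# Minimal sketch: compare phase means with an independent-samples t-test.
# The weekly counts below are hypothetical, not taken from the lecture.
from scipy import stats

baseline = [7, 8, 6, 7, 9, 8]          # weekly counts of the target behavior
intervention = [6, 5, 3, 2, 2, 1]

t_stat, p_value = stats.ttest_ind(baseline, intervention)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```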

Shewhart Chart

Mean level of baseline data is identified

Two standard deviation levels (bands) are constructed above and below the mean line

These bands are extended into the intervention phase

If two successive observations during intervention fall outside the bands, there is a significant change
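A minimal sketch of that two-standard-deviation-band check, using hypothetical weekly counts of a target behavior; the function name is illustrative and not part of any standard package.

```python
import statistics

def two_sd_band_check(baseline, intervention):
    """Return True if two successive intervention observations fall
    outside the band of baseline mean +/- 2 standard deviations."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)               # sample standard deviation
    upper, lower = mean + 2 * sd, mean - 2 * sd
    outside = [obs > upper or obs < lower for obs in intervention]
    return any(a and b for a, b in zip(outside, outside[1:]))

# Hypothetical weekly counts of the target behavior
baseline = [7, 8, 6, 7, 9, 8]
intervention = [6, 5, 3, 2, 2, 1]
print(two_sd_band_check(baseline, intervention))  # True -> significant change
```

Plotting the two bands across both phases on the chronologically graphed data gives the visual version of the same check.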

Complicating Factors

Carryover – occurs when the effects obtained in one phase appear to carry over into the next phase

Contrast – when the subject reacts to the difference between the two interventions or phases

Order of presentation – when the order of the phases by themselves may be part of a causal impact

Incomplete data – when a subject or client does not “fit” nicely into the phase time frame

Training Phase – client may not have the prerequisite skills for full participation in the intervention when it begins

Causality Criteria in Single Subject (System) Designs

Temporal arrangement

Co-presence of the intervention & desired change in target behavior

Repeated co-presence of the intervention and the manifestations of the desired change

Consistency over time

Conceptually and practically grounded in scientific/professional knowledge

Design Validity & Reliability

Replication is very useful

Statistical Conclusion Validity: Did Change Occur?

Internal Validity: Was Change Caused by Intervention?

Construct Validity: Were Intervention and Measurement of Outcomes Accurately Conducted?

Intro to Sampling

Non-probability

Probability

Non-probability

Reliance on available subjects

Quota sampling

Snowball sampling

Selecting informants

Probability

Simple random

Systematic

Stratified

Cluster
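For illustration, a minimal sketch of these four probability sampling strategies using Python's random module on a hypothetical sampling frame of 100 clients; the strata and cluster sizes are assumptions made for the example.

```python
import random

population = list(range(1, 101))   # hypothetical sampling frame of 100 clients
n = 10                             # desired sample size

# Simple random sampling: every element has an equal chance of selection
simple = random.sample(population, n)

# Systematic sampling: every k-th element after a random start
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: draw proportionally from each stratum (assumed strata)
strata = {"adults": population[:70], "youth": population[70:]}
stratified = []
for group in strata.values():
    share = round(n * len(group) / len(population))
    stratified.extend(random.sample(group, share))

# Cluster sampling: randomly select whole clusters, then include all members
clusters = [population[i:i + 20] for i in range(0, len(population), 20)]
cluster_sample = [member for c in random.sample(clusters, 2) for member in c]

print(simple, systematic, stratified, cluster_sample, sep="\n")
```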

Issues in Program Evaluation

Evaluation as Representation

Program evaluation is not the program, only a snapshot of it

Organizations are complex, therefore evaluations often focus on select services

Evaluations can go beyond a consumer focus and may review staff, community relations, continuing education, etc.

Common Characteristics

Program models

Resource constraints

Evaluation tools

Politics and ethics

Cultural considerations

Presentation of evaluation findings

Common Characteristics (continued): Program models

Need blueprint as expressed by logic model

Program survival requires that evaluation be performed to maintain contracts

Outputs and outcomes monitored

Outputs are non-client-related objectives

Outcomes are client-related objectives

Infrastructure-related objectives serve a program maintenance function

Common Characteristics (continued): Resource constraints

Insufficient time, staff, money, or evaluation know-how

Typical implementation time:

Needs assessment: 3 to 6 months
Evaluability assessment: 3 to 6 months
Process evaluation: 12 to 18 months
Outcome evaluation: 6 to 12 months
Cost-benefit analysis: 1 to 2 months