
Evaluating Ongoing Programs:

A Chronological Perspective to Include Performance Measurement

Summarized from Berk & Rossi’s Thinking About Program Evaluation, Sage, 1990; Martin & Kettner’s Measuring the Performance of Human Service Programs, Sage, 1996

Stages of Assessment

• Stage 1: Determining Whether the Program is Reaching the Appropriate Beneficiaries

• Stage 2: Making Sure the Program is Being Properly Delivered

• Stage 3: Ensuring Funds Are Being Used Appropriately

• Stage 4: Ensuring Effectiveness Can Be Estimated

• Stage 5: Determining Whether the Program Works

• Stage 6: Determining Program Worth

Stage One: Program Impact

Program Impact Research is designed to identify who is actually served by a program: how many of those being served meet the program's service criteria, and how many do not.

Stage Two: Program Integrity

Program Integrity Research analyzes the essentials of program delivery such as:

• personnel qualifications & skill assessment

• consistency of program services with program mission

• targeting & marketing of services

• service coordination

Stage Three: Fiscal Accountability

Accountant Perspective:

• Out of Pocket Expenses

• Historical Costs

• Depreciation

• Current & Anticipated Revenues

• Product Inventory

• Income & Outgo of Funds

Stage Three: Fiscal Accountability

Economist Perspective & Opportunity Costs

• Opportunity Costs May be Considered as What was Given Up to Direct Resources in a Particular Direction

• Opportunity Costs May Also Be Construed as the “Next Best Use” of Resources

Stage Four: Evaluability

Criteria for Evaluability

• Clarifying Goals

• Specifying Program Goals

• Determining Possible Outcomes in the Absence of the Program

Stage Five: Program Effectiveness

• Comparisons Across Subjects

• Comparisons Across Settings

• Comparisons Across Time

• Comparisons Across Criteria

• Pooled Comparisons

Research Designs for Estimating Effectiveness

• Random Assignment: Comparing Mean Outcomes of Control & Experimental Subjects

• Interrupted Time Series: (Pre- and Post-Assessment Model)

• Cross Sectional Designs: Comparisons of Different Types of Units (e.g. comparing smaller & larger cities) with Comparisons Occurring at Only One Point In Time
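The random-assignment comparison above can be sketched in a few lines. The group scores below are hypothetical, illustrative data, not figures from the source.

```python
# Illustrative only: hypothetical outcome scores for two randomly assigned groups.
from statistics import mean

control = [12, 15, 11, 14, 13, 10, 16, 12]       # did not receive program services
experimental = [18, 16, 19, 15, 17, 20, 16, 18]  # received program services

# With random assignment, the difference in mean outcomes estimates
# the program effect, since the groups differ only by chance.
effect = mean(experimental) - mean(control)
print(f"estimated program effect: {effect}")
```

In practice a significance test would accompany the difference in means; the sketch shows only the core comparison.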

Research Designs for Estimating Effectiveness

• Regression Time Series: Assignment of Subjects by Variables (Criterion Based). Objectives:

– To provide estimates of values of the dependent variable (outcome variable) from values of the independent variable (assignment variable)

– To obtain measures of the error involved in using the regression line as a basis of estimation (i.e., the standard error of estimate)

– To obtain a measure of the degree of association or correlation between the two variables
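The three regression objectives above can be sketched numerically. The assignment and outcome values below are made up for illustration.

```python
# Minimal regression sketch; x and y are hypothetical illustrative values.
from math import sqrt

x = [1, 2, 3, 4, 5]             # independent / assignment variable
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # dependent / outcome variable

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
syy = sum((yi - my) ** 2 for yi in y)

b = sxy / sxx      # slope of the regression line
a = my - b * mx    # intercept

# Objective 1: estimate outcome values from the assignment variable
y_hat = [a + b * xi for xi in x]

# Objective 2: standard error of estimate (error in using the line)
see = sqrt(sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat)) / (n - 2))

# Objective 3: degree of association (correlation) between the two variables
r = sxy / sqrt(sxx * syy)
print(f"slope={b:.2f}, intercept={a:.2f}, SEE={see:.3f}, r={r:.4f}")
```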

Research Designs for Estimating Effectiveness

• Pooled Cross Sectional & Time Series: Randomized Experiments & Regression Designs May Be Compared

– Across Units (Cross Sectional) &

– Across Time (Time Series)

Stage Six: Cost Effectiveness

Ongoing Versus New Programs:

• Ongoing Programs Have Historical Data to Work With

• New Programs Lack Such Historical Data from Which to Determine Cost Effectiveness

Performance Measurement

Defining Performance Measurement

The regular collection and reporting of information about the efficiency, quality, and effectiveness of human service programs. (Urban Institute, 1980)

Perspectives of Performance Measurement

• Efficiency Perspective

• Quality Perspective

• Effectiveness Perspective

Systems Model Essentials

• Inputs: Includes anything used by a system to achieve its purpose

• Process: Involves the treatment or delivery process in which inputs are consumed to produce outputs

• Outputs: That which is produced

• Feedback: System information reintroduced into the process to improve quality, efficiency & effectiveness
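The four systems-model components above can be sketched as a minimal loop. The function names, the toy production rule, and all figures below are hypothetical.

```python
# A minimal sketch of the systems model; names and figures are hypothetical.
def process(inputs):
    """Process: consume inputs to produce outputs (toy rule: 2 staff hours
    per client served)."""
    return {"clients_served": inputs["staff_hours"] // 2}

def feedback(outputs, target):
    """Feedback: information about outputs reintroduced to guide the system."""
    return outputs["clients_served"] >= target

inputs = {"staff_hours": 100}       # Inputs: resources the system uses
outputs = process(inputs)           # Outputs: that which is produced
met_target = feedback(outputs, 40)  # Feedback informs the next cycle
print(outputs, met_target)          # {'clients_served': 50} True
```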

Efficiency Perspective

• Productivity = ratio of outputs to inputs

• Efficiency = maximizing outputs relative to inputs

– Efficiency cannot reflect whether program goals are being met

– Inefficiency is how many programs are regarded by the public, often in the absence of a full understanding of the goals, mission, clientele, resources, and services of the agency

Quality Perspective

• Typically involves benchmarking against standards and criteria of excellence (as in TQM, or Total Quality Management)

• TQM now defines productivity as the ratio of outputs that meet a specified quality standard to inputs
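The two productivity definitions, the classic outputs-to-inputs ratio from the Efficiency Perspective and TQM's quality-adjusted version, can be sketched with hypothetical figures:

```python
# Illustrative figures only (hypothetical program data).
outputs = 1200          # e.g., counseling sessions delivered
quality_outputs = 1080  # sessions meeting a specified quality standard
inputs = 300            # e.g., staff hours consumed

productivity = outputs / inputs              # classic ratio of outputs to inputs
tqm_productivity = quality_outputs / inputs  # TQM: only quality outputs count
print(productivity, tqm_productivity)        # 4.0 3.6
```

Under the TQM definition, sessions that fail the quality standard no longer raise measured productivity.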

Effectiveness Perspective

• Focuses on outcomes such as the results, impacts and accomplishments of programs

• Effectiveness is the highest form of performance accountability

• Focuses upon which intervention works in which settings

• Effectiveness accountability is primarily concerned with ratios of outcomes to inputs.

Reasons for Adopting Performance Measurement

• Performance measurement has the potential to improve the management of human service programs

• Performance measurement has the potential to affect the allocation of resources to human service programs

• Performance measurement may be a forced choice for many, if not most, human service programs

Key Questions in Performance Measurement

• Who are the clients?

• What are their demographic characteristics?

• What are their social or presenting problems?

• What services are they receiving?

• In what amounts?

• What is the level of service quality?

• What results are being achieved?

• At what costs?

Performance Measurement as a Management Tool

• Performance Measurement promotes client centered approaches to service delivery

• Provides a shared language for comparing human service programs for quality, efficiency, & effectiveness

• Allows administrators to continuously monitor programs to identify areas for improvement

• Provides direct feedback to personnel, allowing them to improve their service provision

Performance Measurement Programs

Government Performance & Results Act (1993)

• Effective 1998, all federal agencies must begin reporting effectiveness data for their services & products

• This requirement will be passed on to agency contractors & subcontractors

• Increasingly, federal block-grant programs also carry this requirement

National Performance Review

• Refers to governmental efforts at instituting program effectiveness, efficiency, and quality in order to implement the 1992 work on government practices entitled Reinventing Government (Osborne & Gaebler, 1992)

Total Quality Management Movement

• National Movement to Improve Quality

• Focuses upon:

– consumer satisfaction

– outputs as measured against a quality standard

Managed Care

• Emanates from health care

• Promotes efficiency to assist health care industry shift from cost-based to capitated reimbursement

Service Efforts and Accomplishments (SEA) Reporting

• Standard introduced by the Governmental Accounting Standards Board (GASB)

• SEA is GASB’s term for performance measurement

SEA Reporting Model

• Built upon an expanded system model including:

• inputs

• outputs

• quality outputs, &

• outcomes

• BUT excludes Process

SEA’s Lack Of Emphasis Upon Process

Absence of the Process component reflects SEA’s primary emphasis upon performance & performance cost considerations

SEA Reporting Elements

• Service Efforts

• Service Accomplishments

• Measures or Ratios Relating Service Efforts to Service Accomplishments

Service Efforts

Service Efforts are inputs utilized in a human service program, which are measured by the GASB in terms of

• Total Program Costs

• Total Full Time Equivalent Staff (FTE)

• Total Number of Employee Hours

Service Accomplishments

Outputs:

• Total Volume of Service Provided

• Proportion of Total Service Volume Meeting a Quality Standard

Outcomes:

Measures of results, accomplishments, impacts

Service Accomplishment Ratios

• Efficiency (output measures):

– cost per unit of service

– cost per FTE

– cost per service completion

– service completions per FTE

• Effectiveness (outcome measures):

– cost per outcome

– outcomes per FTE
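The efficiency and effectiveness ratios above can be computed directly from the GASB service-effort measures. All figures below are hypothetical, for illustration only.

```python
# Hypothetical SEA figures for one program year (all numbers illustrative).
total_cost = 500_000.0    # service effort: total program costs ($)
fte = 10.0                # service effort: full-time equivalent staff
units_of_service = 8_000  # output: units of service delivered
completions = 400         # output: service completions
outcomes = 250            # outcome: e.g., clients with measured improvement

# Efficiency ratios relate service efforts to outputs
cost_per_unit = total_cost / units_of_service        # 62.5
cost_per_fte = total_cost / fte                      # 50000.0
cost_per_completion = total_cost / completions       # 1250.0
completions_per_fte = completions / fte              # 40.0

# Effectiveness ratios relate service efforts to outcomes
cost_per_outcome = total_cost / outcomes             # 2000.0
outcomes_per_fte = outcomes / fte                    # 25.0
```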

Output Performance Measures

• Intermediate:

– episode or contact unit of service

– material unit of service

• Final: Service completions

Outcome Performance Measures

• Intermediate:

– numeric counts

– standardized measures

– level of functioning scales

– client satisfaction

• Ultimate:

– numeric counts

– standardized measures

– level of functioning scales
