DEMONSTRATING PROGRAM IMPACT Analyzing and Improving a Nonprofit
CDFI’s Data Collection and Reporting
Practices
Victoria Baker | PUAD 701 | Fall 2017
Table of Contents
Introduction .................................................................................................................................................. 1
Background ............................................................................................................................................... 1
Understanding Evaluation ......................................................................................................................... 2
Project Overview ....................................................................................................................................... 5
Evaluating Program Impact .......................................................................................................................... 5
Best Practices: Program Evaluation .......................................................................................................... 7
Defining Success ...................................................................................................................................... 11
Examining Program Goals .............................................................................................................. 12
Developing a Logic Model .............................................................................................................. 13
Analyzing the Data Collection Process ...................................................................................................... 15
Best Practices: Data Collection .............................................................................................................. 16
Data Collection at SCCLF ......................................................................................................................... 17
Identifying Immediate Data Needs ......................................................................................................... 19
Findings and Recommendations ................................................................................................................ 20
Key Findings and Deliverables ................................................................................................................. 20
Practical Implications and Limitations ................................................................................................... 22
Recommendations and Next Steps ......................................................................................................... 24
Conclusion .................................................................................................................................................. 27
References .................................................................................................................................................. 29
Appendix A: Logic Model ........................................................................................................................ 30
Appendix B: Borrower Survey ................................................................................................................. 31
Appendix C: Data Reporting Requirements ............................................................................................ 36
Appendix D: Annual Borrower Check‐In Form ........................................................................................ 37
Demonstrating Program Impact 1
Introduction
South Carolina Community Loan Fund (SCCLF) is a statewide nonprofit Community Development
Financial Institution (CDFI) based out of Charleston. The organization is seeking to evaluate the
economic and social impact of its lending program on the state of South Carolina. This paper will
prepare SCCLF for their five‐year impact tracking project by reviewing their current impact tracking
practices, exploring best practices on data collection, developing materials to help improve their data
collection process, and outlining the steps that should be taken to carry out their analysis. More broadly,
the research provides insight into how to utilize a variety of metrics to evaluate a program’s
effectiveness over time as well as how to put those findings to use in creating more efficient and
impactful programs.
Background
South Carolina Community Loan Fund (SCCLF) was founded by the City of Charleston in 2004 as
Charleston Housing Trust in an effort to address the city’s affordable housing shortage. The organization
was formed with the mission of providing loans and technical assistance for the development of
affordable housing. Recognizing that vibrant, sustainable communities include more than affordable
housing, the organization decided to expand their mission in 2011 to include healthy food retail,
community facilities, and community businesses. In 2014, the organization expanded its service area to
the entire state of South Carolina and changed its name to South Carolina Community Loan Fund to
align with its new mission. Today, the organization provides loans and technical assistance, as well as
advocacy, to support the development of projects within its four focus areas in underserved
communities across the state. The research that follows will focus on SCCLF’s lending program, since
loans are the primary means through which SCCLF helps advance community development projects in
these areas.
SCCLF is part of a network of Community Development Financial Institutions (CDFIs) across the
country that provide mission‐driven loans and technical assistance for community development
projects. While CDFIs have been in existence for over 50 years, the industry gained credibility and
expanded dramatically in the 1990s and continues to grow today. There are now over 950 certified CDFIs
serving urban and rural communities across the United States, ranging in type from community loan
funds to credit unions. Although the specific structure and goals of these organizations differ, they all
share the common mission of catalyzing community development, primarily in
underserved communities (CDFI Coalition, n.d.). With this common mission comes the shared challenge
of evaluating programs and measuring their impact in a way that balances what the CDFI Coalition refers
to as their “’double bottom line’: economic gains and the contributions they make to the local
community.”
As part of their expansion in 2014, SCCLF underwent a strategic planning process and sought to
align with the CDFI industry’s focus on impact measurement and evaluation. Over the last few years the
organization has identified a variety of metrics for tracking their effectiveness, dividing these metrics
into outputs, outcomes, and impacts. SCCLF has developed a somewhat consistent system for tracking
and reporting on their outputs over time, using data from loan application materials and a collection of
spreadsheets to communicate their results. However, although they have attempted to define some of
the outcome and impact metrics that are relevant to their mission, they have not yet developed a
system for evaluating the results of their loan program beyond its immediate outputs or even
determined what data must be collected in order to examine their medium and long term results.
Understanding Evaluation
Impact measurement and program evaluation are discussed widely across the CDFI industry and
nonprofit field as a whole, as well as in both scholarly and professional literature. Generally speaking,
evaluation research helps to “determine the value of some initiative” by identifying the initiative’s
“consequences as well as opportunities for modification and improvement” (O’Leary, 2014, p. 157).
While specific methods for conducting evaluation research vary, it typically involves collecting evidence
from a representative sample, comparing it with some established criteria, then “draw[ing] conclusions
about the effectiveness, the merit, the success, of the phenomenon under study” (O’Leary, 2014, p. 5).
According to Posavac (2016), “there is only one overall purpose for program evaluation activities:
contributing to the provision of quality services to people in need” (p. 13). Posavac goes on to explain that
program evaluation “contributes to quality services by providing feedback from program activities and
outcomes to those who can make changes in programs or who decide which services are to be offered”
and that “without feedback, human services programs (indeed, any activity) cannot be carried out
effectively” (Posavac, 2016, p. 13). Evaluation research is essential to helping organizations answer
fundamental questions about the effectiveness of their efforts and “decide if [they] are really
conducting the right program activities to bring about the result [they] believe (or better yet, have
verified) to be needed by [their] clients” (McNamara, 2002, p. 6).
The extent to which program evaluation is beneficial and useful to nonprofit organizations is
illustrated throughout program evaluation literature. According to McNamara (2002), program
evaluation “has become increasingly important for nonprofits, and funders are demanding it more and
more” (p. 6). This seems to be especially true for nonprofit community development loan funds who rely
heavily on funding from financial institutions, foundations, and government rather than donations from
individuals. Many CDFI’s, including SCCLF, also participate in a rating process through Aeris, an agency
dedicated to increasing community investment by providing information to potential investors on the
impact, financial strength, and performance of loan funds. Aeris has a rigorous rating process which
underscores data collection and program evaluation as being essential to improving an organization’s
impact and performance, and subsequently their Aeris ratings.
McNamara (2002) describes seven additional “legitimate expectations” for what program
evaluation can help organizations accomplish, including to “understand, verify or increase the impact of
[its] products or services on customers or clients” (p. 2) and to “produce data or verify results that can
be used for public relations and promoting services in the community” (p. 3). Reasons cited by the
National Resource Center (2010) for carrying out program evaluation include to measure the
effectiveness of an intervention, to identify effective practices and practices that need improvement, to
prove value to existing and potential funders, and to get clarity and consensus around the purpose of a
program (p. 5). OFN (2005) similarly cites satisfying reporting requirements from funders and regulatory
agencies, increasing competitiveness for resources, and strengthening a marketing and communications
strategy as external benefits of impact tracking, but also describes internal benefits like improving
organizational culture and better understanding customers (p. 4).
A variety of terms are used to describe evaluation activities and a distinction is sometimes made
in literature between these terms. For example, The National Resource Center (2010) discusses the
difference between program evaluation and outcome measurement, describing outcome measurement
as “a systematic way to assess the extent to which a program has achieved its intended results” and
program evaluation research as focusing on ‘causation’ (p. 6). They explain that unlike program
evaluation research, outcome measurement will “explore what your program provides, what its
intended impacts are, and whether or not it achieves them [but] it will not prove that the changes that
take place are a result of your program” (p. 7). Despite such suggested distinctions and subtle
differences, for simplicity, the terms ‘program evaluation’ and ‘outcome evaluation’ as well as ‘outcome
measurement’ and ‘impact tracking’ will be used interchangeably to describe program evaluation
activities for the purposes of this research.
Project Overview
Responding to the demands of funders, and understanding the value of such evaluation efforts,
SCCLF seeks to carry out an impact tracking project that examines the outcomes and impacts of their
lending program over a five‐year period. The desired final deliverable is a report including visuals and
narratives that communicate the following: 1) data on short‐to‐middle level outcomes collected from
borrowers and organized based on the organization’s desired outcomes, 2) an analysis detailing the
organization’s economic impact in the counties in which they have made loans as well as the state as a
whole, and 3) data collected through secondary sources that helps explain the broader changes that
have taken place to South Carolina’s housing, food, health, and job outlooks resulting (in part) from their
work. Understanding that SCCLF currently lacks the data and tools needed to effectively carry out this
evaluation, this research project intended to equip SCCLF with the information, tools, and
recommendations needed to effectively design and implement a five‐year evaluation of their lending
program’s impact.
The organization has collected data on direct outputs over the years but they will not be able to
truly evaluate the effectiveness of their lending program until they have further defined their outcome
and impact metrics, as well as their data collection and evaluation processes. As a result, this research
project sought to answer the question: how can SCCLF refine their data collection process to help better
evaluate the impact of their lending program on community development in South Carolina? The
research process involved diving into SCCLF’s evaluation and data collection process as a whole to better
understand the organization’s existing processes and needs, as well as utilizing a variety of professional
sources and academic articles to gather information on program evaluation best practices and tools.
This research was then used to inform the development of key resources for impact tracking, such as a
logic model and borrower surveys, and to make recommendations for improving SCCLF’s data collection
process and reporting requirements moving forward.
The above research question provided direction for the literature review and helped guide the
methodology of the project. In attempting to examine the main research question, it quickly became
apparent that a number of sub‐questions exist which could help further shape the nature and direction
of the research. As a result, the research question was dissected into two distinct parts. First, the
research would determine how SCCLF can measure “the impact of their lending program on community
development in South Carolina.” Second, it would analyze how they can “refine their data collection
process to help better evaluate” this impact.
Determining how SCCLF can measure the impact of its lending program on South Carolina
involved answering two main sub‐questions: 1) How do similar organizations evaluate their program
impact and what information/resources are needed for this evaluation? and 2) What does it mean to be
impactful/successful in the context of SCCLF’s lending program? In order to begin answering these
questions, information was gathered from relevant literature on program evaluation best practices and
tools, and from SCCLF staff on the goals of their lending program. Answering the second part of the
research question, focused on refining SCCLF’s data collection process for more effective evaluation,
involved exploring three additional sub‐questions: 1) What does an effective data collection process look
like? 2) What is SCCLF’s current data collection and evaluation process? and 3) What are SCCLF’s data
collection needs? Answering these questions required diving into SCCLF’s existing processes and policies,
as well as some academic and professional literature on best practices and tools for data collection.
Evaluating Program Impact
The purpose and process of evaluating a program varies greatly across organizations and
program types. Although there is no one correct way to carry out an evaluation, there are a variety of
best practices and common themes that emerge across evaluation literature that can help guide
organizations in the design and implementation of a program evaluation. This section will discuss some
of the best practices in program evaluation, both those that are relevant to nonprofit organizations as a
whole and those that are specific to the CDFI field. These practices will then be applied to SCCLF and
used to help prepare the organization for an evaluation of their lending program.
Best Practices: Program Evaluation
The exact steps one should take in evaluating a program’s outcomes and impacts differ across
evaluation literature but there is an underlying emphasis across the board on preparation, and the time
and diligence that should be dedicated to those essential steps that come before the analysis itself.
McNamara, for example, described six distinct steps to the outcomes evaluation process that come
before the actual analysis. These steps include defining the outcomes and impacts to be studied,
determining what information is needed to examine those outcomes, and determining the best way to
collect that information (McNamara, 2002, p. 7). The National Resource Center (2010) describes four phases
to the evaluation process beginning with phase one to “identify outcomes and develop performance
indicators” and phase two, to “create and implement a data collection plan” (p. 8).
According to OFN, however, there is work to be done even before the outcomes and impacts can
be defined. OFN explains that “CDFIs with the most robust impact data collection systems engage in
planning processes to identify their community impact goals before identifying impact metrics to
collect” and also suggests reviewing and refining the organization’s strategic priorities and examining
their current data collection efforts (OFN, 2005, p. 5). Meanwhile, Baker (2000) argues that the most
critical issue in planning is actually determining “whether it is possible to begin the evaluation design
before the project is implemented and when the results will be needed” (p. 23). Planning for the
evaluation stage before a program is implemented involves a great deal of forward thinking and is not
always possible given the time and resources available, but it can be worthwhile because it
allows organizations to “identify upfront at which points during the project cycle information from the
evaluation effort will be needed so that data collection and analysis activities can be linked” (Baker,
2000, p. 23).
The need for an evaluator to understand the organization’s existing culture and climate is
another theme that appears throughout evaluation literature. Posavac (2016) explains that “regardless
of who initiates an evaluation, evaluators need to become familiar with the nature of the program, the
people served, and the goals and structure of the program, and above all, learn why an evaluation is
being considered” (p. 22). OFN (2005) speaks to an organization’s culture as a whole being key to
successful evaluation efforts claiming that the “biggest success factor in creating an effective impact
assessment system is development [of an] organizational culture that values impact data as a key tool for
achieving mission and organizational goals” (p. 10). This theme and those previously mentioned are
summarized in five evaluation best practices that were adapted from a survey by the Alliance for
Nonprofit Management which state that organizations should: 1) begin with the end in mind, 2) involve
stakeholders, 3) align closely with assessment, 4) understand the context, and 5) use the evaluation for
learning (National Resource Center, 2010, p. 8).
A major part of preparing for a program evaluation involves defining metrics for the program.
Among the metrics that must be defined are the program goals, outputs, outcomes, impacts, and
indicators, and it is important that evaluators understand the significance of each as well as the
differences between them. At the highest level, ‘goals’ (also sometimes referred to in literature as
objectives) are “broad statement[s] of the ultimate aims of a program” (National Resource Center, 2010,
p. 7). Goals “are essential to identifying information needs, setting output and impact indicators, and
constructing a solid evaluation strategy to provide answers to the questions posed” (Baker, 2000, p. 19).
Once the goals have been defined for a program, outputs, outcomes, impacts, and indicators are used to
measure an organization’s progress toward meeting those goals.
Evaluation literature notes important distinctions between outputs, outcomes, and impacts.
‘Outputs’ are the direct results of a program or activity, described by McNamara (2002) as the “units
service” (p. 6). Meanwhile, ‘outcomes’ are “changes in the lives of individuals, families, organizations, or
the community” that result from the program or activity (National Resource Center, 2010, p. 7). Finally,
program ‘impacts’ are the broader, long‐term outcomes of a program. In Figure 1 below, W.K. Kellogg
Foundation uses the basic example of planning a family vacation to illustrate the differences between
these outputs, outcomes, and impacts.
Figure 1. Outputs, Outcomes, and Impacts of Planning a Family Vacation
Note. Reprinted from "Logic Model Development Guide," p. 11. Copyright 1998 by the W.K. Kellogg Foundation.
Defining the outputs, outcomes, and impacts for a program evaluation, of course, tends to be a
more complex task than is revealed in the above example. In the case of a school literacy program, for
example, outputs could include the number of classes taught, number of class meetings held, and
number of students. As a result of these outputs, participating students could see a variety of short and
medium‐term outcomes such as learning a new technique for reading, passing a reading exam, or
beginning to read at grade level. However, “assessing the maintenance of improvement creates [a]
problem” because “changing long‐standing behaviors is difficult” (Posavac, 2016, p. 8). Posavac (2016)
explains that “although positive changes may be observed after a person’s participation in a program,
the changes may only be superficial and disappear, in a matter of months, weeks, or days.” Program
impacts are seen when positive outcomes do occur long‐term and “their effects [influence] other
behaviors and even improve the condition of other people.” An impact of the literacy program, for
example, could be that students who participated in the program (and saw positive outcomes)
continued reading at grade level long‐term and, as a result, went on to graduate from high school or get
a job.
Digital tools such as the Urban Institute’s PerformWell website, and industry resources such as
presentations released by the CDFI Fund and other leaders in the CDFI industry, can help
organizations identify relevant outcomes and their associated
indicators. Indicators are “specific, observable, and measurable accomplishment(s) or change(s) that
show the progress made toward achieving a specific output or outcome” (CDC, n.d.). Baker (2000)
describes a hierarchy of indicators that aligns with the output, outcome, and impact levels, ranging from
“[short‐term] indicators such as school attendance to longer‐term indicators such as student
achievement” (p. 30). Indicators for the literacy program could include the number or percent of
participating students who pass a reading exam, or the change in reading level of students before,
during, and after their participation in the program.
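Such indicators are ultimately simple computations over program records. As a purely illustrative sketch, with hypothetical data and field names that are not drawn from any actual program, the two literacy-program indicators described above might be calculated as follows:

```python
# Hypothetical student records for the literacy-program example: each record
# holds a pre- and post-program reading level (grade equivalent) and whether
# the student passed the reading exam.
students = [
    {"name": "A", "pre_level": 2.5, "post_level": 3.8, "passed_exam": True},
    {"name": "B", "pre_level": 3.0, "post_level": 3.2, "passed_exam": False},
    {"name": "C", "pre_level": 2.0, "post_level": 3.5, "passed_exam": True},
    {"name": "D", "pre_level": 3.1, "post_level": 4.0, "passed_exam": True},
]

# Indicator 1: percent of participating students who passed the reading exam.
pct_passed = 100 * sum(s["passed_exam"] for s in students) / len(students)

# Indicator 2: average change in reading level before vs. after the program.
avg_change = sum(s["post_level"] - s["pre_level"] for s in students) / len(students)

print(f"{pct_passed:.0f}% of students passed the reading exam")
print(f"Average reading-level gain: {avg_change:.2f} grade levels")
```

The value of defining indicators this concretely is that they force the organization to confirm, before evaluation begins, that the underlying fields (here, pre- and post-program reading levels) are actually being collected.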
A common way to organize goals, outcome and impact metrics, and their associated indicators
is by developing a logic model which provides a “picture of how [the] program works” as well as “the
theory and assumptions underlying the program” (W. K. Kellogg Foundation, 2004, p. 1). According to
the National Resource Center (2010), “the program logic model is a representation of the linkages
between program activities and the changes those activities will produce” and it is “presented in a clear
graphic format in precise language” (p. 16). W.K. Kellogg Foundation (2004), describes the logic model as
“a beneficial evaluation tool that facilitates effective program planning, implementation, and
evaluation,” and explains that using a logic model for evaluation “helps create shared understanding of
and focus on program goals and methodology” (p. 5). Baker (2000) explains that the “use of a logical
framework approach provides a good and commonly used tool for identifying the goals of the project
and the information needs around which the evaluation can be constructed” (Baker, 2000, p. 19). Further, logic
models are flexible and can be used throughout a program’s life allowing organizations to “adjust
approaches and change courses as program plans are developed [and evaluated]” (W.K. Kellogg
Foundation, 2004, p. 5).
Defining Success
It is apparent based upon the best practices described in evaluation literature that in order to
evaluate the impact of a program in an effective way, it is important to look at the organization’s
broader purpose and evaluation metrics. Successful evaluation of the impact of SCCLF’s lending
program, therefore, will first require examining the program goals and defining the associated
evaluation metrics. The section that follows will use best practices in program evaluation to examine
and refine SCCLF’s lending program goals then develop the metrics and resources needed to define
success for this program.
Examining Program Goals
SCCLF is a nonprofit CDFI with three main programs: lending, technical assistance, and advocacy.
The loans, technical assistance, and advocacy services SCCLF provides support the development of four
types of assets – affordable housing, healthy food enterprises, community facilities, and small
businesses – which they believe to be the building blocks of healthy communities. SCCLF’s mission,
vision, impact statement, and core goals, as established during their 2014 strategic planning process, are
detailed in Table 1.
Table 1. SCCLF's Mission, Vision, Impact Statement, and Core Goals

ORGANIZATIONAL STATEMENTS & GOALS

Mission Statement: To advance equitable access to capital by providing loans, technical assistance, and advocacy for affordable housing, healthy food, community facilities and community business enterprises.

Vision Statement: To ensure equitable access to capital to create thriving, prosperous, economically resilient communities for all South Carolinians.

Impact Statement: To create community economic development opportunities through strategic partnerships and the deployment of risk‐tolerant loans that increase access to capital for under‐resourced communities.

Core Goals
1 - Lending: To provide capital for the financing of affordable housing, healthy food, community facility, and community business projects benefiting low to moderate income people and places.
2 - Technical Assistance: To deliver expert consulting and technical services that strengthen the business and development capacity of non‐profit organizations, for‐profit developers, entrepreneurs, and governmental entities.
3 - Advocacy and Policy Change: To engage in strategic alliances that facilitate the deployment and/or attraction of capital to benefit under‐resourced communities.

As indicated in the table, the overall goal of SCCLF’s lending program is to “provide capital for
the financing of affordable housing, healthy food, community facility, and community business projects
benefiting low to moderate income people and places.” However, this goal does not adequately
encapsulate the objectives that, based on the organization’s mission, vision, impact statement, and
borrowers, are at the heart of the lending program. Understanding that having strong goals in place is
essential to successfully evaluating a program, it became apparent that new program‐specific goals
would need to be developed for SCCLF’s lending program. Using the previously developed organizational
statements and information gathered from recent discussions with SCCLF leadership and lending staff,
three new goals for the lending program were established. The primary goals of SCCLF’s lending
program are to:
1. provide access to capital for borrowers who have traditionally had difficulty securing financing;
2. increase the availability of affordable housing, healthy food enterprises, community facilities,
and small businesses benefiting low to moderate income people and places; and
3. create jobs and spur community and economic development in under‐resourced communities.
These three goals serve as guiding statements for SCCLF’s lending program, against which all of their
lending activities can be evaluated. The goals also provide the foundation needed to develop the output,
outcome, and impact metrics that will define success for SCCLF’s lending program and measure its
impact.
Developing a Logic Model
The logic model format, which is mentioned throughout evaluation literature, was employed to
help SCCLF define and illustrate success for its lending program. Creating a logic model involved using
the developed program goals to determine the program’s anticipated outputs, outcomes, and impacts,
as well as the indicators that would help track progress toward those anticipated short and long‐term
outcomes. There are a variety of outputs, or direct products, that result from SCCLF’s lending activities.
These outputs include the number of loans made, the number of jobs created/retained, the dollar
amount of financing, and the total number of loans made to various underserved populations. SCCLF’s
desired short and medium‐term results, or outcomes, include 1) increasing access to housing, healthy
food, community facilities, and businesses, 2) increasing the availability of jobs in SC, and 3) increasing
economic activity within underserved communities. The anticipated long‐term outcomes, or impacts,
include decreasing the housing cost burden among low‐to‐moderate income communities, reducing the
number of people living in food deserts, and decreasing the unemployment rate in SC. A full list of
outputs, outcomes, and impacts for SCCLF’s lending program is detailed in Table 2.
Table 2. Output, Outcome, and Impact Metrics for SCCLF's Lending Program

OUTPUTS: We expect that our lending activities will produce the following evidence of service delivery.
- # of loans made
- # of units financed
- # of units for sale and for rent
- # of jobs created/retained
- # of square feet constructed
- $ amount of SCCLF financing
- Total $ amount of financing
- Leverage
- # of loans made to minority borrowers
- # of loans made to female borrowers
- # of loans made to veterans
- # of loans made to individuals with disabilities
- # of loans made for projects in rural census tracts

OUTCOMES: We expect that our lending activities will lead to the following changes in 1‐4 years.
- Increased availability of affordable housing, healthy food, community facility, and small business enterprises within underserved communities
- Increased number of jobs available to underserved communities
- Increased economic activity in SC communities

IMPACTS: We expect that our lending activities will lead to the following changes in 5‐10 years.
- Decreased housing cost burden among low‐to‐moderate income communities
- Reduced number of people living in food deserts
- Increased access to essential services like health care facilities and education
- Increased access to jobs in SC communities
- Increase in economic activity in SC and within SC’s underserved communities (IMPLAN)

Understanding how to measure the degree to which SCCLF’s lending program results in these
anticipated outcomes is central to evaluating the program’s success using the defined metrics. To
determine how to measure the program’s effectiveness against the expected results, SCCLF needed to
establish indicators for all of the specified outcomes and impacts. These indicators were determined
through a combination of meetings held with SCCLF staff and by referencing industry resources,
including Urban Institute’s PerformWell site and OFN’s “Understanding CDFI Impact” presentation, for
examples of effective indicators. A list of the indicators that were developed for SCCLF’s lending
program is included in Table 3.
Table 3. Outcome and Impact Indicators for SCCLF's Lending Program
Once all of the outputs, outcomes, impacts, and indicators for SCCLF’s lending program were
established, the metrics were organized into the logic model format. Presenting this information as a
logic model will allow SCCLF staff to use the resource in the ongoing evaluation of their lending program.
The final logic model is attached as Appendix A.
Analyzing the Data Collection Process
With the logic model developed to guide SCCLF’s evaluation plan, the next step is to dive into
examining data collection best practices as well as the organization’s existing data collection process.
Taking an in‐depth look at these practices will help determine what tools and information are already in
place and what additional data need to be collected, and inform the development of any new methods
and tools needed to carry out the evaluation process. In addition to preparing SCCLF for this specific
evaluation of their lending program, this analysis will help put in place the tools needed to improve their
data collection and reporting practices as a whole, and support their ongoing program evaluation needs.
OUTCOME INDICATORS:
• # of units financed by AMI
• Total population within 1 and 10 miles of project
• # of individuals being served by projects (i.e. residents, customers)
• # of low-to-moderate income individuals being served by projects (i.e. residents, customers)
• # of individuals being employed by projects
• # of low-to-moderate income individuals being employed by projects
• Annual total salary of those employed by projects
• Annual sales numbers, operating budget, or annual rental income
• # of new businesses opened in community as a result of loans

IMPACT INDICATORS:
• Change in # of households in SC at various AMI levels
• Change in # of cost-burdened households in SC at various AMI levels
• Change in % of SC population living in poverty
• Change in total SC population living in USDA-designated food deserts at various years
• Change in # of FQHCs in SC communities
• Change in # of schools in SC communities
• Change in SC unemployment rate
• Change in time of commute to work
• Total economic activity generated by projects over five-year period in SC
• Total economic activity generated by projects over five-year period by county
Best Practices: Data Collection
Deciding on the best way to select and collect data is an important consideration for
organizations looking to carry out a program evaluation, and a topic that is discussed widely in
evaluation literature. According to O’Leary (2014) “the classic design for evaluations has been the
experimental model” which involves the “measurement of the relevant variables for at least two
equivalent groups – one that has been exposed to the program and one that has not” (p. 10). However,
“many other designs are used in evaluation research – case studies, post‐program surveys, time series,
correlational studies, and so on” (p. 10). Regardless of the research design selected, having data that is
“adequate and reliable” is key when evaluating project impact. According to Baker (2000) “high‐quality
data are essential to the validity of the evaluation results” and “assessing what data exist is a first
important step before launching any new data collection efforts” (Baker, 2000, p. 28).
A best practice in deciding how to collect data for an evaluation is to ensure you have a mix of
data types including quantitative and qualitative data from both primary and secondary sources. Baker
(2000) explains that “integrating quantitative and qualitative evaluations can often be the best vehicle
for meeting the project’s information needs” (p. 28). She describes how the two reinforce each other
throughout the data collection and evaluation process where “qualitative methods can be used to
inform the key impact evaluation questions” and “analyze the social, economic, and political context
within which a project takes place” while “quantitative methods can be used to inform qualitative data
collection strategies,” create representative samples, and prove causation (Baker, 2000, p. 9). OFN
(2005) suggests collecting both primary and secondary data when undertaking an analysis with primary
data being “collected from firsthand experience (in person or via loan documents)” and secondary data
being “collected and reported by a third party agency such as government agencies, proprietary, or
academic sources” (p. 3).
Baker (2000) makes a number of suggestions for organizations deciding what specific data to
collect for their program evaluation. Primarily, of course, they should collect information to assess their
outcomes during what Baker describes as “a period of time relevant to decision maker’s needs” (p. 30).
She also encourages evaluators to consider collecting information that takes into account “exogenous
factors that may have an effect on the outcome of interest” as well as “information on the
characteristics of the beneficiary population not strictly related to the impact evaluation but of interest
in the analysis,” e.g., level of poverty or their opinion of the program (Baker, 2000, p. 30). Additionally, she
suggests it may be beneficial to collect “cost measures in order to do some cost‐effectiveness analysis or
other complementary assessments not strictly related to the impact evaluation” (Baker, 2000, p. 30).
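Baker’s suggestion to collect cost measures for cost-effectiveness analysis can be illustrated with a minimal sketch; the function name and the figures below are hypothetical, not actual SCCLF data:

```python
# Hedged sketch of a basic cost-effectiveness calculation:
# cost per unit of outcome (e.g., cost per job created/retained).
# All figures are illustrative placeholders, not SCCLF data.

def cost_per_outcome(program_cost, outcome_count):
    """Return program cost per unit of outcome achieved."""
    if outcome_count <= 0:
        raise ValueError("outcome_count must be positive")
    return program_cost / outcome_count

# Illustrative: $1.2M in lending-program costs and 150 jobs created/retained
print(cost_per_outcome(1_200_000, 150))  # 8000.0 (dollars per job)
```

Comparing this ratio across programs, or across years of the same program, is what makes the cost measures Baker recommends useful beyond the impact evaluation itself.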
There are a variety of tools available to assist in the data collection process. Baker cites the main
data collection instruments as being case studies, focus groups, interviews, observation, questionnaires,
and written document analysis. For CDFIs specifically, OFN (2005) underscores the importance of
investing in appropriate data collection systems asserting that “CDFIs need to recognize up‐front that
data collection costs are real” (p. 17). Examples of technology that can help with the data collection
process are client relationship management systems like Salesforce, mapping software like PolicyMap
and ArcGIS, and custom database software like Oracle and Microsoft Access (OFN, 2005, p. 17). OFN
(2005) also shares some strategies to mitigate the costs of data collection, including “planning,
studying internal data, mining cheap data sources, sharing best practices with peers, and using
technology effectively” (p. 17). Resources like PolicyMap, the Bureau of Labor Statistics, USDA’s Food
Access Research Atlas, and the Census’ Small Area Income and Poverty Estimates are useful tools for
gathering secondary data on program impact.
Data Collection at SCCLF
Examining SCCLF’s current data collection process involved exploring past borrower files and
meeting with key staff to dissect the processes and tools used for data collection. SCCLF’s lending team,
consisting of three staff members, has historically been responsible for collecting and managing data on
borrowers to be used when reporting on program impact. Quantitative and qualitative data related to
each loan’s outputs and expected outcomes are collected from borrowers through loan application
materials and presented in the credit memo and on a series of spreadsheets. SCCLF’s current data
collection process is outlined in detail in Figure 2. Unfortunately, project data is not currently organized
in a way that is easy to use or reference, and storing pieces of project data across multiple spreadsheets
makes it difficult to access the necessary information. In addition, because the entire lending process is
carried out offline using various Word documents and Excel spreadsheets, the upkeep and accuracy of
the data rely on manual entry by SCCLF’s loan officers.
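To illustrate why pieces of project data scattered across multiple spreadsheets are hard to work with, here is a minimal sketch of rolling per-loan output rows from exported spreadsheet files into one summary; the file layout and column names are hypothetical assumptions, not SCCLF’s actual formats:

```python
# Illustrative sketch (not SCCLF's actual process): aggregating loan-level
# output data from multiple spreadsheet exports into a single summary.
# Column names ("loan_amount", "jobs") are hypothetical.
import csv
from collections import defaultdict

def summarize_outputs(csv_paths):
    """Aggregate loan-level outputs across multiple CSV exports."""
    totals = defaultdict(float)
    for path in csv_paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals["loans_made"] += 1
                totals["financing_total"] += float(row.get("loan_amount", 0))
                totals["jobs_created_retained"] += int(row.get("jobs", 0))
    return dict(totals)
```

A client relationship management system performs this same consolidation automatically, which is part of the rationale for the Outcome Tracker implementation discussed later in this paper.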
Figure 2. SCCLF’s Current Data Collection & Management Process
[Figure 2 is a process flowchart: 1) initial project information and output/outcome data are reported by the borrower via the loan inquiry form and loan application; 2) the loan officer works with the borrower to gather additional data before committee review; 3) a credit memo is created; 4) loan officers add high-level output data for the project to the monitoring spreadsheets (loan portfolio and loan year-to-date); 5) documents are saved in the drive and uploaded to Docufree (cloud-based storage); 6) site visits are conducted by the Portfolio Manager to check in on borrower and project status, and a site visit form is completed; 7) the Communications Manager compiles updated impact numbers in June and December.]

SCCLF currently uses the same eight output metrics to report their lending program impact on
their website, to stakeholders, and across all communication materials, updating these figures two times
annually in June and December. The organization currently maintains data and reports on the following
metrics: the number of loans made, total dollar amount of loans, total dollar amount of development,
number of jobs created or retained, number of housing units financed, number of community facilities
financed, number of community businesses financed, and number of healthy food retail outlets
financed. These metrics alone are not sufficient for effectively evaluating their lending program and
measuring progress toward their lending program goals. There are also a variety of additional metrics
the organization must report to various funders and rating agencies which they have scrambled to
produce upon request in the past.
Identifying Immediate Data Needs
The program evaluation, and resulting impact report, for which this project is preparing SCCLF will
examine program metrics over a five‐year period, from 2013 through 2017. The logic model prepared in
partnership with SCCLF staff, and specifically the indicators listed in the model, serve as an inventory of
SCCLF’s immediate data collection needs relating to the social impact of their lending program.
Trainings and discussions with the IMPLAN support team helped determine what data must be collected
from borrowers for each loan type in order to conduct an analysis of the lending program’s economic
impact in South Carolina using the IMPLAN software.
The data needed to evaluate SCCLF’s outcomes and impacts will be collected from both primary
and secondary sources. Primary data, collected through a survey of SCCLF’s active borrowers, will be
necessary in order to report on the medium‐level outcomes that have resulted from the lending
program. The borrower survey, attached as Appendix B, includes questions on results that occurred in
the years following the receipt of an SCCLF loan. The survey questions seek to provide an understanding
of the social impacts of SCCLF’s lending program (how many people are served by the projects, how
many individuals they employ, etc.) as well as how these projects have impacted the economies of
South Carolina communities (through employee payroll, total annual sales, etc.). The survey places an
emphasis on outcomes for low‐to‐moderate income communities and underserved populations
including minorities and veterans.
Secondary data from a variety of sources will also be important in order to provide a complete
picture of SCCLF’s program impacts. In looking at impacts, the organization seeks to gain an
understanding of the long‐term outcomes of their work, or how the effects of their short and medium
range outcomes have been able to “influence other behaviors” and “improve the condition of other
people” (Posavac, 2016, p. 8). While this will partially be achieved by using data collected through the
borrower survey to determine the program’s long‐term economic impact using the IMPLAN software, it
will also involve consulting a variety of sources for relevant research and data on the broader issues
SCCLF seeks to address. Some of these sources include PolicyMap, the Bureau of Labor Statistics,
USDA’s Food Access Research Atlas, and the Census’ Small Area Income and Poverty Estimates. SCCLF
recognizes that achieving most of these broader impacts cannot be done through their work alone but
instead involves the work of a variety of stakeholders and community groups. However, by tracking
these metrics they hope to show that they are playing a part in “moving the needle” on these issues. In
the case that the needle has not moved in a positive direction, the organization and their stakeholders
can use this information to reevaluate their efforts or make the case for the need for additional
programs/funding.
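A “moving the needle” check against secondary data can be as simple as computing the change in an indicator between a baseline year and the current year. The sketch below is illustrative only; the figures are placeholders, not actual Bureau of Labor Statistics values:

```python
# Hedged sketch of a change-over-time check on a secondary indicator
# (e.g., SC unemployment rate). Values below are illustrative placeholders.

def indicator_change(baseline, current):
    """Return (absolute change, percent change) for an indicator."""
    absolute = current - baseline
    percent = (absolute / baseline) * 100 if baseline else float("nan")
    return absolute, percent

# Illustrative: unemployment rate of 6.8% at baseline, 4.1% in the report year
abs_change, pct_change = indicator_change(6.8, 4.1)
print(round(abs_change, 1), round(pct_change, 1))  # -2.7 -39.7
```

A negative change here would be reported as movement in the desired direction; because secondary indicators reflect many actors and factors, such figures support but never prove SCCLF’s contribution.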
Findings and Recommendations
Key Findings and Deliverables
Through a review of literature on program evaluation best practices, this research revealed the
importance of preparation to the program evaluation process as well as methods for defining the goals
and metrics needed to evaluate impact. As a result, SCCLF has revised goals for its lending program and
the output, outcome, and impact metrics, as well as indicators, needed to measure progress to those
goals. These metrics have been organized into a logic model that defines success for the program and
will serve as an ongoing tool for SCCLF staff to track outcomes and inform future data collection.
Meetings with SCCLF staff and information on data collection best practices resulted in an examination of
the organization’s existing data collection and reporting processes. Analyzing the organization’s lending
program goals, as well as existing data collection practices and data, led to the development of a
borrower survey that will be distributed to SCCLF’s active borrowers in spring 2018 to collect the
outcome and impact data needed to carry out their five‐year program evaluation and impact report.
Based on lessons from data collection efforts for Create Jobs for USA, OFN (2005) suggests that
CDFIs improve their data collection process by employing consistent data collection methods, providing
staff training, creating written policies and procedures, and using electronic loan application systems (p.
18). These strategies are being used in combination with other best practices outlined in this paper to
refine SCCLF’s data collection and evaluation practices. For example, SCCLF has been working to
implement a client relationship management system called Outcome Tracker that will facilitate the
creation of an online inquiry form and loan application that automatically feed into a digital borrower
profile. All information about borrowers, loans, and impacts will live in this system, and the reporting
features will allow staff to export an up‐to‐date report of specified outputs, outcomes, and impacts at
any time. The Portfolio Manager will also be able to complete her annual site visit forms through
Outcome Tracker so updated borrower information is automatically filtered into the system. The
implementation of Outcome Tracker alone, expected to be completed in December, will result in
substantial improvements to SCCLF’s data collection and management practices.
Recognizing that program evaluation is an ongoing process, and understanding the importance
of preparation, SCCLF is working to incorporate program evaluation metrics into their ongoing data
collection process. In the early stages of the project, a master list of metrics was created based on the
organization’s requirements from various funders and reporting agencies (attached as Appendix C). This
master list was used along with the metrics established for SCCLF’s loan program evaluation to inform
the development of a new, more comprehensive online loan application. In addition, these metrics were
used to refine the site visit forms used by the Portfolio Manager in annual check‐ins to better align their
site visit questions with the goals and desired outcomes of the lending program.
To better equip the organization with the data they need to conduct evaluations of their lending
program impact in the future, a new annual borrower check‐in form has been created (attached as
Appendix D) which includes many of the same social and economic impact questions as the borrower
survey designed for the 2018 evaluation. Every SCCLF borrower with an active loan will be required to
complete the check‐in form annually in January for the previous calendar year. Once implemented, the
expanded borrower application, revised site visit form, and new annual borrower check‐in form will
together ensure that SCCLF is collecting and maintaining the data needed to effectively evaluate and
report on their lending program impact in the future. Additional columns indicating the data source for
SCCLF’s lending program outputs and indicators have been added to the program logic model (Appendix
A) in order to provide a more accurate picture of how the various data collection tools developed
through this project will be used in future evaluation efforts.
Practical Implications and Limitations
This research project provided an opportunity to help SCCLF strengthen its outcome tracking
and data collection processes, and to lead the development of the tools needed to better evaluate their
lending program impact. The findings of this research are currently being used to make improvements
to the organization’s future data collection and evaluation practices, and to inform plans for SCCLF’s
2018 program evaluation and impact report. The data and report resulting from this evaluation process
will have diverse, long‐term applications for the organization. In addition to serving as a key internal
resource for SCCLF, the report will prove relevant to a variety of audiences that are connected to
SCCLF’s work. The collected data and report will serve as a key marketing tool for SCCLF’s lending
program, communicating the outcomes and impacts of SCCLF’s lending program on South Carolina to
potential funders and borrowers. SCCLF receives funding from a number of financial institutions,
foundations and individuals that would be interested in reviewing (and sometimes require) a report of
the organization’s short/medium term outcomes as well as the long‐term impacts.
Further, other CDFIs could find this research, including the best practices as well as methods of
collecting and reporting on data, useful to their work. The metrics in the report align with the
reporting metrics required by the CDFI rating agency Aeris, so this report will be a go-to resource for the
agency when carrying out SCCLF’s rating renewal process next year, and could also be a valuable
resource for other loan funds taking part in the Aeris rating process. Finally, the organization’s
community partners and other nonprofits working in South Carolina on similar issues may be interested
in the findings and implications of this research, especially those organizations anticipating similar long‐
term impacts.
In addition to understanding its practical implications, it is important to also understand the
limitations of this research. The National Resource Center (2010) described a number of limitations to
consider when implementing outcome measurement plans, all of which are relevant to this research.
Those referencing this research or using the methodology/results to inform their own evaluation
efforts should (p. 7):
1) Recognize that “’soft outcomes’ may be more important than the movement towards
metrics allows them to be. Building relationships between people or organizations or within
communities is an important result of activities undertaken by many nonprofits… but is hard
to measure.”
2) Be mindful of the fact that “measurement cannot take the place of judgement and
managerial decision making.”
3) Acknowledge that in some cases “the outcomes take years, if not decades, to materialize.”
4) Understand that “long‐range planning is difficult, and because performance data does not
speak to causality, managers are unable to definitively say how the agency’s activities
contributed to the improvements.”
Posavac (2016) adds to these considerations, explaining that “a narrow focus on objectives has made it
difficult for evaluators to become aware of the processes whereby participants come to change or to
notice good and bad unintended side effects” and that “the question of whether the objectives of a
program are the appropriate ones ought not to be overlooked” (p. 26).
The data collection process reveals some limitations as well. The 2018 borrower survey was
developed to fill gaps in 10 years of missing outcome and impact data because SCCLF simply was
not previously collecting this information. The retroactive data collection process will reveal inherent
challenges because as O’Leary explains, “constructing and administering a survey that has the potential
to generate credible and generalizable data is truly difficult” (p. 204). O’Leary describes some of the
challenges that can arise including “capturing the quantifiable data you require,” “going back to your
respondents if more data is required,” and “getting anyone at all to respond” (p. 204). This has important
implications for the limitations of this research and the questions that should be considered when
thinking through the data collection process: What happens if we do not hear back from all of our
borrowers? Are there methods to approximating this data in a reliable way? How could the organization
ensure they have the data they need from borrowers to better prepare for these types of analyses in the
future? Substantial issues are likely to arise when looking at program impacts (primarily using data from
secondary sources) as SCCLF seeks to develop ways to demonstrate the organization’s direct
involvement on these long term impacts despite the undeniable role of other organizations and factors
in those changes.
Recommendations and Next Steps
While the findings and deliverables resulting from this research have provided a starting point
for improving SCCLF’s data collection and evaluation processes, the organization still has a great deal of
work to do in order to implement the program evaluation best practices described in evaluation literature
and ensure long‐term improvements. In the coming months, SCCLF must continue to implement the
major changes to their data collection practices outlined in this paper, including completing the
implementation of the Outcome Tracker software, which will provide the organization with its first
online loan application and automate its data collection process. Implementing the annual borrower
check‐in form created through this project will require working with the Outcome Tracker team to build
an electronic version of the form through the software and add any missing fields to the borrower
profile template. Re‐creating the borrower check‐in form in Outcome Tracker will allow for the
information submitted through the form to be filtered directly into the appropriate borrower record,
automate the process of making the form available to borrowers each year, and enable SCCLF to set
reminders for borrowers when it is time to complete the form.
Once implementation of the Outcome Tracker software is complete, SCCLF can use the system
to access the borrower data needed to carry out the 2018 borrower survey. Ideally, SCCLF will build the
survey into Outcome Tracker and distribute it to borrowers through the system so that the data can be
stored in the system automatically for evaluation and future use. However, if it is not possible to build
the survey in Outcome Tracker (due to budget restraints, software functionality, etc.), the organization
can use its email marketing system, Constant Contact, to distribute the survey. If the survey is
distributed using Constant Contact, SCCLF will need to import the survey results into Outcome Tracker
once the survey has closed and all responses have been collected. Understanding the challenges associated with surveying,
SCCLF should work to provide borrowers with an incentive for completing the survey, develop a strategy
around notifying borrowers about the survey and importance of the resulting data, and set a timeline
for borrower communications including sending multiple reminders in advance of the submission
deadline. In addition, knowing that a 100% response rate is unlikely, the organization will need to
develop a method for coding borrower data to approximate outcomes and impacts for the purposes of
the 2018 report. To ensure they have the data needed to effectively evaluate and report on their
lending program impact and avoid the need to conduct such surveys in the future, SCCLF should
establish new borrower reporting requirements that support their data collection needs.
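One simple method for approximating outcomes from an incomplete survey, offered here as an illustrative assumption rather than SCCLF’s chosen approach, is ratio estimation: scaling the respondents’ total by the inverse of the response rate, which implicitly assumes non-respondents resemble respondents.

```python
# Hedged sketch of ratio estimation for approximating a population total
# (e.g., total jobs created) when not all borrowers respond to the survey.
# This assumes respondents are representative of non-respondents.

def estimate_total(respondent_values, population_size):
    """Estimate a population total from survey respondents' values."""
    n = len(respondent_values)
    if n == 0 or population_size < n:
        raise ValueError("need >=1 respondent and population_size >= n")
    response_rate = n / population_size
    return sum(respondent_values) / response_rate

# Illustrative: 30 of 40 borrowers respond, reporting 7 jobs created each
print(estimate_total([7] * 30, 40))  # 280.0
```

The representativeness assumption is the weak point of this method; if respondents systematically differ from non-respondents (for example, if more successful projects are more likely to reply), the estimate will be biased, which is one more reason to pursue the mandatory annual reporting requirements recommended here.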
SCCLF should review and refine their existing borrower reporting requirements to ensure they
are collecting up to date, relevant data from borrowers on an ongoing basis moving forward. Currently,
the only reporting requirement explicitly detailed in the loan agreement requires borrowers to send
their updated financials at pre‐defined intervals established by SCCLF. All active borrowers are
expected to take part in annual site visits with the Portfolio Manager but this requirement is not
enforced through the loan agreement or commitment letter. To support the updated data collection
efforts detailed in this paper, SCCLF should add the following requirements in writing to their standard
loan closing documents:
1. Borrowers are required to complete the annual check‐in form through Outcome Tracker in
January of each year for the duration of their loan, or at least five years. Should a borrower
pay off their loan sooner than five years, they agree to continue to report on outcome and
impact data until the five‐year minimum reporting requirement has been met.
2. Borrowers are required to take part in annual site visits with the Portfolio Manager and
provide updated project information related to the status of the project, financials,
employees, and clients, as requested.
SCCLF’s lending staff will need to meet with the data collection team to approve these updates and
decide whether this information should be included in the existing loan closing documents, or whether a
new agreement specific to data collection and reporting should be created. Additionally, SCCLF should
create a document that provides an overview of the metrics requested in the annual check‐in form to
provide to borrowers in advance of the reporting period to ensure borrowers know what information to
collect throughout the year and are prepared to report on that information. SCCLF could provide this
document to borrowers by email along with the information they receive on their borrower Outcome
Tracker portal directly following their loan closing.
Finally, SCCLF should develop an ongoing evaluation plan for their lending program. The logic
model created for SCCLF’s lending program is a tool that can support evaluation throughout the life of
the program but performing “ongoing assessment, review, and corrections” to the model “can produce
better program design and a system to strategically monitor, manage, and report program outcomes
throughout development and implementation” (W.K. Kellogg Foundation, 2004, p. 5). Knowing this,
SCCLF should build a process for regularly revisiting and updating their logic model into their
evaluation plan, perhaps incorporating this process into their annual staff or board retreat. In addition,
SCCLF should plan to measure their short and medium term outcomes annually, and present this
information to supporters in their annual report or through an outcomes scorecard. Further, SCCLF
should conduct a thorough evaluation of their lending program, as planned for 2018, every five years.
The program evaluation process should include an analysis of their outputs, outcomes, and social and
economic impacts, and the results of the evaluation should be communicated to supporters through a
comprehensive five‐year impact report.
Conclusion
This research project began in response to South Carolina Community Loan Fund’s (SCCLF)
desire to evaluate and report on the impact of their lending program on the state of South Carolina. It
quickly became apparent, however, based on the best practices and methods described in both
academic and professional literature, that SCCLF did not have the tools and data in place to carry out
such an evaluation. As a result, this research sought to determine how SCCLF can refine its data
collection process to better evaluate the impact of their lending program on community development in
underserved communities throughout South Carolina. This project and paper looked to scholarly and
professional literature to understand best practices around program evaluation and data collection that
could be used to improve SCCLF’s current and future practices. In addition, it utilized information and
feedback obtained through meetings with key SCCLF staff and IMPLAN support staff to better
understand what success looks like for SCCLF’s lending program, and to determine the metrics needed
to measure this success. The final section of this paper reviewed the key findings and deliverables that
resulted from this research, highlighted its practical applications and limitations, and discussed next
steps for carrying out the organization’s 2018 impact report project as well as recommendations for
continuing to improve SCCLF’s data collection process moving forward. This research, and the resulting
deliverables and recommendations, have provided the framework and resources needed for SCCLF, as
well as other CDFIs, to make improvements to their data collection practices and more effectively
evaluate the impact of their lending programs on underserved communities.
References
Baker, J. L. (2000, May). Evaluating the Impact of Development Projects on Poverty: A Handbook for
Practitioners [PDF]. Washington, DC: The World Bank.
Developing Evaluation Indicators [PDF]. Centers for Disease Control and Prevention.
Evaluation Handbook [PDF]. (2004, January). Battle Creek, MI: W.K. Kellogg Foundation.
Identify Outcomes. (n.d.). Retrieved from www.performwell.org
Logic Model Development Guide [PDF]. (2004, January). Battle Creek, MI: W.K. Kellogg Foundation.
McNamara, C. (2002). A Basic Guide to Program Evaluation [PDF]. The Grantsmanship Center.
Measuring Outcomes [PDF]. (2010). National Resource Center.
O'Leary, Z. (2014) The Essential Guide to Doing Your Research Project. (2nd ed.). SAGE.
Understanding CDFI Impact [PDF]. (2005, August). Kansas City, MO: Opportunity Finance Network (OFN).
Patraporn, R. V. (2015). Complex transactions: Community development financial institutions lending to
ethnic entrepreneurs in Los Angeles. Community Development, 46(5), 479‐498.
Plantz, M. C., Greenway, M.T., & Hendricks, M. (1997). Outcome measurement: Showing results in the
nonprofit sector. New Directions for Evaluation, (75). Retrieved from onlinelibrary.wiley.com.
Posavac, E. (2016). Program Evaluation: Methods and Case Studies. New York, NY: Routledge.
What Are CDFIs? (n.d.). CDFI Coalition. Retrieved from www.cdfi.org.
Weiss, C. H. (1972). Methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice
Hall, Inc.
Appendix A: Logic Model 30
Appendix B: Borrower Survey 31
Appendix C: Data Reporting Requirements 36
Appendix D: Annual Borrower Check‐in Form 37