Household energy efficiency programme evaluation: does it tell us what we need to know?
Dr Joanne Wade
CXC19-05-15
The question
What is the evidence that energy efficiency programmes targeted at the household sector have delivered real energy savings?
• Conceptual and definitional issues
• Strengths and weaknesses of different methodological approaches
• Identify robust evaluations
• Range of energy savings likely
• Recommendations on the future design and focus of programme evaluation
1. Scoping note, agreed by expert group
2. Review literature on good practice evaluation
3. Search key databases and conferences for literature
4. Develop framework to characterise and analyse literature
5. Review literature and select key evidence
6. Use this to answer the question!
7. Draft report reviewed by expert group and peers
8. Publication and dissemination
Expert group:
• Ute Collier, Committee on Climate Change
• Hunter Danskin, DECC
• Malcolm Keay, Oxford Institute for Energy Studies
• Michelle Shipworth, UCL Energy Institute
• Steve Sorrell, CIED, University of Sussex
Peer reviewers:
• Wolfgang Eichhammer, Fraunhofer Institute
• Ed Vine, formerly Lawrence Berkeley Lab.
Methodology
Scope of programme evaluation
The evaluation problem
Defining the counterfactual
Constraints for evaluators
• Data issues
• Implementing Randomised Control Trials
• Transferability of findings
• Resourcing an evaluation
Evaluation in practice
Factors justifying investment in robust evaluation:
• Innovation and risk: high-risk or innovative policies need robust evidence to show whether or not they are working as expected
• Scale, value and profile: programmes that are large or high profile need robust evaluation to meet accountability requirements
• Pilots: evaluation needs to inform future activities
• Generalisability: if there is the potential for the results to be more widely relevant, then the evaluation needs to be robust enough to provide confidence in this generalisation
• Influence: greater resources may be justified if an evaluation may report at a strategic point in time or if it will fill an important evidence gap
• Variability of impact: uncertain outcomes, or behavioural effects that are more difficult to isolate, may require more extensive evaluation
• Evidence base: evaluation is likely to require more resources where the existing evidence base is poor
Results – theory

Issues in defining the counterfactual, assessed for each method in this order: exogenous influences, participant spillover, rebound, self-selection bias, free-ridership, non-participant spillover (? = only partially addressed; x = not addressed).

simple engineering (? ? ? x x x)
• Key benefits: very few data to collect; cheap
• Key drawbacks: inaccurate
• When to use: as a cross-check when no better data are available

enhanced engineering (? ? x x)
• Key benefits: relatively few data to collect; relatively cheap
• Key drawbacks: potentially less accurate than quasi-experimental approaches
• When to use: as a cross-check; when measures are well understood; when the interaction between measures is of interest

before-after (x x x)
• Key benefits: requires a participant group only
• Key drawbacks: does not account for exogenous influences
• When to use: when there is unlikely to be much variation in exogenous influences; when a comparator group cannot be found

quasi-experimental: cross-section (? x ? x)
• Key benefits: does not require 'before' data
• Key drawbacks: needs data from a comparison group; non-participant spillover can cause inaccuracies
• When to use: when 'before' data are not available, and when a large non-participant spillover effect is unlikely

quasi-experimental: difference-in-differences (? x ? x)
• Key benefits: does account for some of the effect of exogenous influences
• Key drawbacks: increased data requirements; non-participant spillover can cause inaccuracies
• When to use: where there is good availability of data for participants and non-participants; where non-participant spillover is not a major issue

quasi-experimental with exact matching (? x)
• Key benefits: has the potential to accurately account for self-selection bias
• Key drawbacks: data requirements may make it impractical; non-participant spillover can cause inaccuracies
• When to use: when large datasets are available; where non-participant spillover is not a major issue

experiments (randomised control trials) (x)
• Key benefits: has the potential to provide the most accurate estimate of programme impact on participant households
• Key drawbacks: can only be used where implementation conditions can be tightly controlled
• When to use: for pilots of new interventions where non-participant spillover effects are unlikely
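The difference-in-differences logic above can be sketched in a few lines. This is a minimal illustration with made-up consumption figures, not data from any evaluated programme.

```python
# Difference-in-differences: the comparison group's change over time stands
# in for the counterfactual trend, and the programme effect is what remains
# of the participants' change once that trend is removed.
# All figures below are hypothetical (kWh/year), for illustration only.

def did_saving(part_before, part_after, comp_before, comp_after):
    """Estimated saving attributable to the programme (positive = saved)."""
    participant_change = part_after - part_before  # includes exogenous trend
    comparison_change = comp_after - comp_before   # exogenous trend alone
    return comparison_change - participant_change

# Participants fall from 12,000 to 10,500 kWh; the comparison group falls
# from 12,200 to 12,000 kWh anyway (weather, prices, other exogenous influences).
print(did_saving(12_000, 10_500, 12_200, 12_000))  # 1300
```

A naive before-after estimate would credit the full 1,500 kWh to the programme; subtracting the comparison group's 200 kWh trend avoids attributing exogenous influences to the intervention.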
Results – practical use of methods
• RCT = most accurate for well-defined single interventions on a clearly defined population
• Engineering estimate = least accurate BUT may well be ‘good enough’, especially for large programmes
• In between are the range of quasi-experimental approaches, each with strengths and weaknesses
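A simple engineering estimate, as characterised above, is little more than deemed per-measure savings multiplied by installation counts. The measures and figures below are hypothetical placeholders, shown only to illustrate why the method is cheap but takes no account of actual consumption, free-ridership or rebound.

```python
# Simple engineering estimate: sum over measures of
# (deemed saving per installation x number of installations).
# Deemed figures below are hypothetical, not from any real scheme.
deemed_kwh = {
    "loft insulation": 900,
    "cavity wall insulation": 1_200,
    "boiler replacement": 1_500,
}
installed = {
    "loft insulation": 10_000,
    "cavity wall insulation": 4_000,
    "boiler replacement": 2_500,
}

predicted_kwh = sum(deemed_kwh[m] * n for m, n in installed.items())
print(predicted_kwh)  # 17550000
```

Nothing in this calculation observes measured consumption, which helps explain why measured savings often come in below such ex ante estimates.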
Assessing the evidence
• What evaluation methods are used?
• Does the evaluation demonstrate an understanding of how the programme is likely to affect energy use, and hence seek to collect and use appropriate data?
• Is the scale and nature of the evaluation appropriate for the programme size and stage, and level of existing knowledge about outcomes?
• Is the choice of evaluation method appropriate for the available data?
• Are the limitations of the evaluation acknowledged and, where possible, adjusted for?
Results – the evidence base
• Widely spread: energy efficiency and evaluation conferences and 20 different journals
• Dominated by evaluation of energy company schemes, for regulatory purposes
• Significant lack of detail about evaluation methods – difficult to judge quality
NB this is the peer-reviewed evidence only; there is significant information in grey literature…
Addressing the evaluation challenge
• Well tackled: exogenous influences; participant spillover; direct rebound
• Less well tackled: free-ridership; self-selection
• Hardly addressed: indirect rebound; non-participant spillover
What we seem to know
• Minimum efficiency standards for buildings, appliance market transformation activities and investment programmes all reduce energy use; but by less than ex ante estimates would suggest.
• Savings from these types of programme are in the order of 10% of total household energy use for participant households
• Average effects of feedback programmes are 1–5% of participant household energy use
• Large range around this average at the individual household level
What we seem not to know
• Likely magnitude of effects like spillover and free-ridership.
• Outcomes of information / advice other than through feedback; of community-led programmes; of innovative finance
• ‘Reach’ of different types of programme
• Wider economic impacts
Recommendations: evaluation research
• Greater understanding of the importance of effects like non-participant spillover
• Economy-wide impacts of packages of energy efficiency programmes
• Outcomes of community-led, behaviour change and innovative finance programmes
• Analysis of the grey literature and reports in languages other than English
Recommendations: evaluation practice
• Methods: greater use of Randomised Control Trials and quasi-experimental alternatives where appropriate, together with more use of multiple evaluation methods to cross-check results
• Variability: deeper exploration of the variation in effects between different households, making innovative use of the large datasets (e.g. from building energy certification and smart metering) that are now becoming available; understanding which households are reached by which approaches
• Shared learning: greater exposure of evaluation results to discussion in the peer-reviewed literature
• Usefulness: presenting evaluation results in such a way that cross-programme comparison is easier (e.g. offering percentage savings figures as well as kWh).
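The "Usefulness" point above amounts to reporting each result in two units. A hypothetical helper (names and figures are illustrative, not from the report):

```python
# Report a saving both in kWh and as a percentage of pre-programme
# (baseline) consumption, so that programmes of different sizes and
# in different housing stocks can be compared directly.
def report_saving(saving_kwh: float, baseline_kwh: float) -> str:
    pct = 100 * saving_kwh / baseline_kwh
    return f"{saving_kwh:,.0f} kWh ({pct:.1f}% of baseline)"

print(report_saving(1_300, 13_000))  # 1,300 kWh (10.0% of baseline)
```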
Recommendations: policy
• Continue support for energy efficiency policies and programmes – these are likely to remain cost-effective.
• Well-established approaches (standards and incentives) should form the core in the short term
• New approaches need to be piloted and evaluated before any commitment to replacing existing approaches
• Policymakers need to respond to the significant opportunity to learn from experience in other countries and jurisdictions
UK Energy Research Centre
+44 (0)20 7594 1574
www.ukerc.ac.uk
Full report to be published soon:
http://www.ukerc.ac.uk/programmes/technology-and-policy-assessment/energy-efficiency-evaluation.html