Using experiments in innovation policy
Albert Bravo-Biosca


Page 1: Using experiments in innovation policy (short)

Using experiments in innovation policy

Albert Bravo-Biosca

Page 2

Three principles for delivering good innovation policy

1. Experiment

2. Data

3. Judgment

Page 3

Innovation policy and experimentation – Two interpretations

1. Supporting experimentation in the economy and society

2. Using experimentation to learn what works better to support innovation

Page 4

Supporting experimentation in the economy and society

“The task of industrial policy is as much about eliciting information from the private sector about significant externalities and their remedies as it is about implementing appropriate policies”

Rodrik, 2004

Page 5

Innovation policy focused on information discovery

The public sector as partner/enabler in the innovation process, helping reduce uncertainty in the private sector:

– A project-based conception of innovation policy – sunset clauses framed explicitly in terms of learning: “the policy ends when the learning ends”

– Specific learning methods would include: experimental development funds, testbeds, challenge prizes, observatories…

Page 6

Using experimentation to learn what works better to support innovation

• Large amounts of money invested in schemes to support innovation, but very limited evidence on their effectiveness

An experimental approach is a smarter, cheaper and more effective way to develop better innovation policy instruments

Typical approach

• Introduce large new interventions without prior small-scale testing

Experimental approach

• Set up pilots to experiment with new instruments, evaluate them using rigorous methods, and scale up those that work (continuing to experiment to improve them)

Page 7

What is an experiment? A continuum of definitions…

1. Trying something new

• No rigorous learning or evaluation strategy
• No real “testing mindset”
• A “pilot”

2. Trying something new and putting in place the systems to learn

• Rigorous formal research design
• Test a hypothesis
• Codifying and sharing the resulting knowledge
• Sometimes, but not always, with some form of control group

3. RCTs

• Randomized control trials
• Control group created by the programme manager/researcher using a lottery
• Field vs. “lab” experiments
• Different from a natural experiment

Page 8

What is a randomized controlled trial?

Design → Randomize → Implement → Compare

Participants are randomly placed in a “treatment” group, which receives the intervention, and a “control” group, which does not, and the impact of the treatment is estimated by comparing the behaviour and outcomes of the two groups.

• Participants can be individuals, but also firms, public organizations, villages, regions, etc.

• Different alternatives to run the lottery (e.g., individual vs. group level randomization, etc.)

• 1/0 vs. A/B experiment: the control group gets nothing (0) vs. an alternative intervention (B)

• Collect data using surveys and/or administrative data sources and estimate the impact of the intervention
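The Design → Randomize → Implement → Compare flow can be sketched in a few lines of Python. All numbers here (500 firms, a normally distributed outcome, a true effect of 2.0) are illustrative assumptions, not figures from the deck.

```python
# A minimal sketch of the Design -> Randomize -> Implement -> Compare steps,
# using simulated data (all numbers are illustrative assumptions).
import random
import statistics

random.seed(42)

# Participants: 500 hypothetical firms (they could equally be individuals,
# public organizations, villages or regions).
firms = list(range(500))

# Randomize: the lottery assigns each firm to the treatment or control group.
random.shuffle(firms)
treatment = set(firms[:250])

# Implement: simulate an outcome measure, where the intervention adds
# a true effect of 2.0 on average for treated firms.
def outcome(firm_id: int) -> float:
    base = random.gauss(10.0, 3.0)  # what the firm would do anyway
    return base + (2.0 if firm_id in treatment else 0.0)

results = {f: outcome(f) for f in firms}

# Compare: the impact estimate is the difference in mean outcomes.
treat_mean = statistics.mean(results[f] for f in treatment)
ctrl_mean = statistics.mean(results[f] for f in firms if f not in treatment)
impact = treat_mean - ctrl_mean
print(f"Estimated impact: {impact:.2f} (true effect: 2.00)")
```

Because assignment is a lottery, the difference in means is an unbiased estimate of the effect, and with 250 firms per arm the estimate lands close to the true 2.0.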

Page 9

Why are RCTs useful?

RCTs:
• The lottery in an RCT addresses selection biases
• Differences between the treatment and control groups are the result of the intervention
• Provides an accurate/unbiased estimate of the impact of the intervention
• The “gold standard” for evaluation

“Typical” evaluations:
• Give a good answer to “how well did the programme participants perform” (before and after)
• Fail to provide a compelling answer to “what additional value did the programme generate”, which requires good knowledge of how participants would have performed in the absence of the programme
• No credible control group (e.g., biased matching, selection biases)
• Programme recipient satisfaction surveys, “what if” questions, case studies

…even if RCTs also have some weaknesses and do not always apply (so not the solution for everything, but still a very valuable tool, yet almost missing in the innovation policy area)

Page 10

RCTs can have two non-mutually exclusive aims

1. Testing the impact of an intervention
• Focus: additionality
• Hypothesis (e.g.): “The intervention has an effect”

2. Understanding the behaviour of individuals and what drives it
• Mechanism experiment
• Hypothesis (e.g.): “Managers’ actions are driven by inertia”

Page 11

Some misconceptions about RCTs

Criticism: RCTs are unethical. Potential responses:
• This assumes the intervention benefits rather than harms recipients
• Can provide an alternative treatment (compare two alternative interventions, or the same intervention with two different sets of conditions, rather than “all or nothing”)
• Replaces decisions based mostly on “opinions” with “data”
• There are often insufficient resources to support all potential recipients in any case
• A lottery can be fairer (and cheaper) than some panel-based scoring approaches
• Using resources on programmes that don’t work deprives other, more effective programmes of funding; experimental pilots reduce this risk

Criticism: RCTs are expensive. Potential responses:
• It is often the programme, not the evaluation, that is expensive
• Data collection is expensive regardless of the evaluation method used
• RCTs require smaller sample sizes, and hence cheaper data collection
• Analysis can be quite cheap (a simple comparison between groups), even if the initial design requires more work

Criticism: Findings are not applicable to other settings (internal vs. external validity). Potential response:
• Context matters, as in any other type of evaluation, but some lessons can be generalized (still, multiple evaluations are always desirable)

Criticism: RCTs cannot capture unexpected/unintended effects. Potential response:
• Innovation is uncertain, so it may be difficult ex ante to identify all potential effects. In contrast to before/after approaches, with an RCT you can collect data ex post on an unanticipated outcome of particular interest (even if not ideal)

Criticism: RCTs don’t tell you why there is an effect. Potential response:
• It is possible to design the RCT to be able to find this out

Criticism: RCTs don’t use qualitative methods alongside. Potential response:
• RCTs can be combined with qualitative methods; mixed methods can be the most informative approach

Page 12

Key questions to design an RCT

• What intervention do you want to test?

• Does the control group benefit from an alternative intervention?

• What is the outcome measure of interest?

• Is data available for the outcome?

• At what level should randomization be done?

• How large should the treatment and control groups be?

• Many other design choices available (e.g., randomizing the “treatment” vs “the promotion of the intervention” in a randomized encouragement design, etc)
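The sample-size question above is usually answered with a standard two-sample power calculation. The effect sizes, standard deviation and power targets below are illustrative assumptions, not figures from the deck.

```python
# Standard two-sample power calculation for a difference in means:
# how many participants does each arm need to detect a given effect?
import math

def sample_size_per_arm(effect: float, sd: float,
                        z_alpha: float = 1.96,  # 5% two-sided significance
                        z_beta: float = 0.84) -> int:  # 80% power
    """Participants needed in each of the treatment and control groups to
    detect `effect` when the outcome has standard deviation `sd`."""
    n = 2 * (z_alpha + z_beta) ** 2 * (sd / effect) ** 2
    return math.ceil(n)

# Detecting a 5 percentage point change in a roughly 50/50 binary outcome
# (sd ~ 0.5) needs a much larger trial than detecting a 20 point change.
print(sample_size_per_arm(effect=0.05, sd=0.5))
print(sample_size_per_arm(effect=0.20, sd=0.5))
```

The formula makes the trade-off explicit: halving the detectable effect quadruples the required sample, which is why the choice of outcome measure and expected effect size drives the cost of the trial.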

Page 13

The use of RCTs is increasing around the world

• Health

• Development: JPAL, IPA, World Bank, Oxfam…

• Social experimentation: French experimentation fund for youth, UK job centres

• Education: Harvard EdLabs, UK Education Endowment Foundation…

Page 14

Over the last 10 years the JPAL network has worked with NGOs, governments and international organizations to conduct 445 randomized evaluations on poverty alleviation in 54 countries

Page 15

But very limited use of RCTs on…

• Innovation
• Entrepreneurship
• Business growth

…even if it is feasible in advanced economies

Page 16

Creative credits: Nesta’s vouchers RCT

• Business-to-business innovation voucher experiment run by Nesta

• It awarded 150 vouchers of £4,000 each, with £1,000 co-funding from SMEs, to pay for collaborations with creative businesses

• An RCT with longitudinal evaluation

Business-led process: innovation voucher → innovation project → build connections, with a formal evaluation alongside

Page 17

Creative credits: The results

• High short-term input additionality: SMEs receiving a Credit were 78% more likely to undertake their project

• Short-term output additionality: strong evidence of increased innovations after six months

• No significant long-term additionality: no significant output, network or behavioural additionality after 12 months

Source: Bakhshi et al (2013)
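The arithmetic behind a claim of the form “78% more likely to undertake their project” is a comparison of the share of firms doing the project in each group. The rates below are assumed for illustration, not the paper’s actual figures.

```python
# Illustrative arithmetic (assumed rates, not the paper's figures) for a
# relative-likelihood claim: compare the project rate in each group.
treated_rate = 0.89  # hypothetical share of voucher winners doing the project
control_rate = 0.50  # hypothetical share of control firms doing it anyway

relative_increase = treated_rate / control_rate - 1
print(f"{relative_increase:.0%} more likely")
```

The control-group rate is what makes the claim meaningful: it is the RCT’s estimate of how many firms would have undertaken the project without a Credit.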

Page 18

Creative credits: Methods

• Mixed-methods evaluation: qualitative analysis is extremely useful to complement rigorous quantitative analysis, but cannot replace it

• Traditional evaluation methods used in parallel gave a misleading, much more positive assessment of the impact of the scheme, contradicting the RCT evaluation findings

• See Bakhshi et al (2013) for the full results

• Similar results to those obtained in the Dutch innovation vouchers RCT

Page 19

The UK is adopting RCTs in many different areas

• Behavioural experiments – “nudge unit” (BIT) e.g., HMRC letters

• Job centres – unemployment training

• Education – 50 RCTs in 1,000+ schools ongoing

• Business support (Growth vouchers, BIS)

• Innovation (Innovation vouchers, TSB)

Page 20

Growth vouchers

• £30 million budget for a new BIS programme of advice for businesses which will be run as a trial

• 25,000 micro and small businesses, on an equal cost-sharing basis

• Vouchers will be available to firms
  – with fewer than 50 staff
  – that are first-time users of business advice

• Aims
  – increase the use of business advice
  – collect robust evidence

• Research questions:
  – Does the subsidy encourage businesses to seek and use business advice?
  – What is the impact of advice on our outcome measures (sales, employment, turnover, profit)?
  – What type of advice is it most effective to subsidise?

Page 21

Innovation vouchers

• Technology Strategy Board programme to connect UK SMEs with knowledge providers (both university-based and other knowledge providers)

• £5,000 vouchers (rolling programme)

• Process:
  1. Very short application form (with evaluation questions embedded)
  2. Screening out of bad applicants
  3. Use a lottery to select recipients (good for evaluation, and has low administration costs)
  4. Track innovation behaviour, relationships with knowledge providers, and firm performance using survey instruments and administrative data
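The lottery step can be sketched in a few lines: after screening, a single random draw both selects the voucher recipients and creates the control group for the evaluation. The applicant IDs and counts below are hypothetical.

```python
# A sketch of the lottery step: after screening, one random draw selects
# recipients and leaves a control group at no extra administrative cost.
# Applicant IDs and counts are hypothetical.
import random

random.seed(7)

# Applicants that survived the screening step.
screened_applicants = [f"SME-{i:04d}" for i in range(1, 401)]
VOUCHERS_AVAILABLE = 150

# The lottery: winners receive a voucher; everyone else becomes the
# control group tracked through the same surveys and administrative data.
winners = random.sample(screened_applicants, VOUCHERS_AVAILABLE)
treatment_group = set(winners)
control_group = [a for a in screened_applicants if a not in treatment_group]

print(len(treatment_group), "voucher recipients")
print(len(control_group), "control firms tracked alongside them")
```

Because screening happens before the draw, both groups consist of eligible applicants, so the comparison is between comparable firms.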

Page 22

Why haven’t governments and researchers used more RCTs to understand innovation and its drivers, in contrast to other policy areas?

• Governments: a lack of examples showcasing their feasibility and value has made governments and intermediary organizations very reluctant to consider using RCTs in this area

• Researchers: very few academic researchers in related fields have developed the capabilities and required support infrastructure necessary to set up and run experiments

• Missing networks: the networks between researchers and practitioners are missing, so even when both would be interested in collaborating on an RCT, they typically don’t know how to find each other

• Insufficient knowledge: there is insufficient knowledge about when it is appropriate and feasible to use RCTs in this domain, and a widely-held misperception that RCTs need to be expensive

A new global innovation, entrepreneurship and growth lab aims to tackle these four factors simultaneously

Page 23

Nesta is seeding a new international initiative for experiments on innovation, entrepreneurship and growth

Use RCTs to build the evidence base on the most effective approaches to:

• Increase innovation
• Support entrepreneurship
• Accelerate business growth

Page 24

The approach

Programme delivery partners + researchers: identify and pursue opportunities for experimentation, bringing together social science researchers interested in these questions and organizations (whether public or private) with the ability to undertake experiments.

Experiments that:

• Generate actionable insights for decision makers, by piloting new programmes and creating better evidence on their impact

• Push the knowledge frontier forward, by giving researchers the opportunity to test with RCTs different hypotheses on the drivers of innovation

Page 25

What will this new lab do?

• Develop and run experiments

• Work with public programmes and other delivery organizations to support their adoption, matching them with interested researchers

• Build a community of researchers that undertake RCTs

• Showcase RCTs’ value with real examples to advocate wider use

• Improve the knowledge base on how to do RCTs in this space, learning when they work and when they don’t, and hence when to use them and when not to

• Act as an aggregator and translator of the evidence generated through RCTs across countries

A version of the JPAL model, but focused on innovation, entrepreneurship and growth, with the aim of expanding the research and evaluation toolkit in these areas by facilitating the use of RCTs

Page 26

Thank you

[email protected]

Get in touch if you would like to find out more