Impact Evaluation of Small and Medium Enterprise Programs in Latin America and the Caribbean



Editors: Gladys López-Acevedo and Hong W. Tan


© 2011 The International Bank for Reconstruction and Development / The World Bank
1818 H Street NW, Washington DC 20433
Telephone: 202-473-1000

    Internet: www.worldbank.org

    All rights reserved

1 2 3 4 13 12 11 10

This volume is a product of the staff of the International Bank for Reconstruction and Development / The World Bank. The findings, interpretations, and conclusions expressed in this volume do not necessarily reflect the views of the Executive Directors of The World Bank or the governments they represent. The World Bank does not guarantee the accuracy of the data included in this work. The boundaries, colors, denominations, and other information shown on any map in this work do not imply any judgment on the part of The World Bank concerning the legal status of any territory or the endorsement or acceptance of such boundaries.

Rights and Permissions

The material in this publication is copyrighted. Copying and/or transmitting portions or all of this work without permission may be a violation of applicable law. The International Bank for Reconstruction and Development / The World Bank encourages dissemination of its work and will normally grant permission to reproduce portions of the work promptly.

For permission to photocopy or reprint any part of this work, please send a request with complete information to the Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; telephone: 978-750-8400; fax: 978-750-4470; Internet: www.copyright.com.

All other queries on rights and licenses, including subsidiary rights, should be addressed to the Office of the Publisher, The World Bank, 1818 H Street NW, Washington, DC 20433, USA; fax: 202-522-2422; e-mail: [email protected].

ISBN: 978-0-8213-8775-7
eISBN: 978-0-8213-8776-4
DOI: 10.1596/978-0-8213-8775-7

Cover and Design: Alejandro Espinosa/sonideas.com
Photographs (back cover): Ray Witlin/World Bank Photo Library (left), Aravind Teki/Dreamstime.com (right)
Photographs (interior): World Bank Photo Library

    Library of Congress Cataloging-in-Publication data have been requested.


    Contents

    Acknowledgments ......................................................................................................................................... xi

    Abbreviations ................................................................................................................................................ xii

CHAPTER 1 Motivation, Methodology and Main Findings ............................................................................................. 1

    Motivation for the Study ................................................................................................................................................... 1

    The Impact Evaluation Challenge..................................................................................................................................... 2

    Review of Recent Literature .............................................................................................................................................. 5

    The Four Country Studies ................................................................................................................................................. 6

    The Non-Experimental Data ............................................................................................................................................. 6

    Analytical Approach........................................................................................................................................................... 7

    Overview of Cross-Country Results ................................................................................................................................. 8

    Concluding Remarks ........................................................................................................................................................10

CHAPTER 2 A Review of Recent SME Program Impact Evaluation Studies ................................................................ 13

    Introduction ...................................................................................................................................................................... 13

    Studies Selected for Review ............................................................................................................................................14

    Enterprise Support Programs Studied ............................................................................................................................14

    Non-Experimental Data Used ..........................................................................................................................................14

    Analytic Approaches and Main Findings ...................................................................................................................... 18

    Selected References ..................................................................................................................................... 19

ANNEX Summary of Individual Studies ................................................................................................................... 21

CHAPTER 3 Evaluating SME Support Programs in Chile .............................................................................................. 33

    1. Introduction .................................................................................................................................................................. 33

    2. Overview of SME Programs in Chile .......................................................................................................................... 34

    3. The Chile Data ............................................................................................................................................................. 37

    4. Empirical Approach and Initial Findings .................................................................................................................... 43

    5. Estimating Program Impacts Using the ICS-ENIA Panel .......................................................................................... 48

    6. Summary and Concluding Remarks .......................................................................................................................... 55

CHAPTER 4 Evaluating SME Support Programs in Colombia ....................................................................................... 57

    1. Introduction .................................................................................................................................................................. 57


    2. Support Policies for SMEs in Colombia ..................................................................................................................... 58

    3. Past Impact Evaluations of FOMIPYME ..................................................................................................................... 60

    4. Data Used in the Evaluation ....................................................................................................................................... 61

    5. Methodology ................................................................................................................................................................ 65

    6. Estimation and Results ................................................................................................................................................ 67

    7. Conclusions ...................................................................................................................................................................76

Annex 4.1 Telephone Survey Questionnaire ............................................................................................... 77
Annex 4.2 Telephone Survey Results .......................................................................................................... 78

CHAPTER 5 Evaluating SME Support Programs in Mexico ........................................................................................... 81

    1. Introduction .................................................................................................................................................................. 81

    2. SME Programs ............................................................................................................................................................. 82

    3. Past Evaluations ........................................................................................................................................................... 89

    4. Data ............................................................................................................................................................................... 92

    5. Model ............................................................................................................................................................................ 96

    6. Results .......................................................................................................................................................................... 99

    7. Conclusions ............................................................................................................................................................... 100

    ANNEX 5.1 Estimates of Program Impacts in Mexico ............................................................................ 102

CHAPTER 6 Evaluating SME Support Programs in Peru ............................................................................................. 109

    1. Introduction .................................................................................................................................................................109

    2. Size of SME Sector and Program Coverage .............................................................................................................110

    3. Description of SME Programs ................................................................................................................................... 111

    4. Data Description .........................................................................................................................................................114

    5. Methodology ...............................................................................................................................................................115

    6. Results .........................................................................................................................................................................116

    7. Sensitivity Analysis ..................................................................................................................................................... 119

    8. Conclusions ................................................................................................................................................................ 120

Annex 6.1 Innovation Centers (CITES) ...................................................................................................... 122
Annex 6.2 Designing a Supplementary Survey ....................................................................................... 123
References ................................................................................................................................................... 126


    Figures and Tables

    FIGURES

    Figure 1.1 Impact on Firm Performance With and Without SME Program .................................................................... 3

    Figure 1.2 Selectivity Bias from Program Participation .................................................................................................. 4

    Figure 3.1 Time Paths of Y for Treatment and Control Groups ..................................................................................... 43

    Figure 3.2 Distribution of Propensity Scores and Region of Common Support......................................................... 46

    Figure 3.3 Time-Paths of Program Impacts on Selected Final Outcomes ................................................................... 53

    Figure 4.1 Distribution of FOMIPYME Projects by Activity and Sector ....................................................................... 59

    Figure 4.2 Distribution of Propensity Score and Region of Common Support .......................................................... 69

    Figure 4.3 Estimated Outcomes for Treatment and Control Groups ........................................................................... 70

    Figure 5.1 Distribution of Propensity Scores ................................................................................................................. 98

    Figure 6.1 Evolution of CITE-Calzado Revenue by Service Type (2001-2006) ............................................................ 113

Figure 6.2 Distribution of Propensity Scores and Region of Common Support ........................................................117
Figure 6.3 Evolution of Mean Profits Per Worker for PROMPYME and BONOPYME, 2001-2006 (thousands of soles) .....................................................................................119

    Figure A6.2.1 Distribution of Propensity Scores and Region of Common Support ................................................ 124

    TABLES

    Table 1.1 Overview of Data and SME Programs in Four Latin American Countries ..................................................... 7

Table 1.2 Impacts of Program Participation: Fixed Effects Results ............................................................................... 9

    Table 2.1 Recent Impact Evaluation Studies of Enterprise Support Programs ........................................................... 15

Table 2.2 Recent Impact Evaluation Studies: Data Sources and Period Covered ..................................................... 16

Table 2.3 Recent Impact Evaluation Studies: Approach and Findings ....................................................................... 17

    Table 3.1 SME Program Participation and Participation Status ................................................................................... 38

    Table 3.2 Distribution of Treatment and Control Groups in the Panel ......................................................................... 39

    Table 3.3 Distribution of Treatment and Control Groups by Firm Size and Sector ..................................................... 40

    Table 3.4 Summary Statistics on Intermediate and Final Outcomes For the Treatment and Control Groups ......... 42

Table 3.5 Conditional Likelihood of Any Program Participation: Estimates from Cox Proportional Hazards Model ..... 46

Table 3.6 Intermediate and Final Outcomes in 2004: Nearest Neighbor Estimator .................................................... 47

Table 3.7 Program Impacts of Any Program and by Program Type: Levels and Fixed Effects Model with Propensity Score Matching .................................................................. 50
Table 3.8 Attributes of Treatment Cohorts by Year of Program Entry ........................................................................... 51
Table 3.9 Time Effects of Any Program Participation: Fixed Effects Model with Propensity Score Matching ........... 52

Table 3.10 Bounding Impacts of Program Participation: Trimming Bottom 5% and 10% of Treatment Group Outcomes .................................................................................. 54

    Table 4.1 Project and Resources Executed by FOMIPYME (2008 Prices) .................................................................... 59

    Table 4.2 Impacts of FOMIPYME .................................................................................................................................... 61

    Table 4.3 Distribution of Firms in the Final Sample ...................................................................................................... 61

    Table 4.4 Distribution of Firms in the Final Sample ...................................................................................................... 63


    Table 4.5 Topics Covered During the Support Activities ............................................................................................... 63

Table 4.6 How the Firms Got Involved in the Activities ................................................................................................ 63
Table 4.7 Annual Average Sales by Sector (thousands 2008 US$) .............................................................................. 64

    Table 4.8 Average Assets by Sector (thousands 2008 US$) ......................................................................................... 64

    Table 4.9 Average Number of Employees by Sector .................................................................................................... 64

    Table 4.10 Average Years Doing Business by Sector ..................................................................................................... 65

    Table 4.11 Main Independent Variables Used in the Analysis ...................................................................................... 65

    Table 4.12 Propensity Score Matching Results .............................................................................................................. 68

    Table 4.13 Common Support .......................................................................................................................................... 69

    Table 4.14 Estimated Impact Via PSM (2002) ................................................................................................................. 69

    Table 4.15 Estimated Impact Using PSM in Differences (2002) .................................................................................. 69

    Table 4.16 Panel Regression Coefficients ...................................................................................................................... 71

    Table 4.17 Upper and Lower Bound Impacts .............................................................................................................. 72

    Table 4.18 Impacts on Total Factor Productivity ............................................................................................................ 73

    Table 4.19 Firms Falling in the Common Support (Two Different Treatments) ............................................................74

    Table 4.20 Impacts by Type of Program ......................................................................................................................... 75

    Table 4.2.1 Telephone Survey Summary ........................................................................................................................ 78

    Table 5.1 SME Support Funds and Programs in Mexico: Summary of Results, 2001-2006 ...................................... 82

    Table 5.2 Nafinsa: Main Results 2001-2006.................................................................................................................... 83

    Table 5.3 SME Funds and Programs from the Ministry of Economy: Main Results 1998-2006 ................................ 83

    Table 5.4 Funds of the Ministry of Economy: Main Results 2001-2006 ....................................................................... 84

    Table 5.5 PROMODE: Main Results 2001-2006 .............................................................................................................. 84

    Table 5.6 COMPITE: Main Results 2001-2006 ................................................................................................................ 85

    Table 5.7 Bancomext: Main Results 2001-2006 ............................................................................................................. 85

    Table 5.8 Fiscal Incentives: Main Results 2001-2006 ..................................................................................................... 86

    Table 5.9 Science and Technology Sectoral Fund: Main Results 2002-2006 ............................................................... 86

    Table 5.10 AVANCE: Main Results 2004-2006 ................................................................................................................ 87

    Table 5.11 CIMO-PAC: Main Results 2001-2006 ............................................................................................................. 87

    Table 5.12 Programs and Support Mechanisms ............................................................................................................ 88

Table 5.13 Evaluation Studies in Mexico ....................................................................................................................... 89
Table 5.14 Number of Panel Firms by Size and ENESTYC Years .................................................................................. 92

    Table 5.15 SME Program Participation ........................................................................................................................... 93

    Table 5.16 Distribution of Treatment and Control Groups ............................................................................................ 94

    Table 5.17 Distribution of Treatment and Control Groups by Firm Size and Sector ................................................... 95

    Table 5.18 Differences in Means Between the Treatment and the Control Group, Any Program .............................. 95

    Table 5.19 Estimates from Cox Proportional Hazards Model. Results from Any Program Participation Model ...... 97


Table A5.1 Program Impacts of Any Program and by Program Agency. Levels and Fixed Effects Model with Propensity Score Matching ..............................................102

    Table A5.2 Program Impacts by Program in ENESTYC 2005. Levels and Fixed Effects Model with Propensity Score Matching ............................................................103

Table A5.3 Time Effects of Any Program Participation (time since started the program). Fixed Effects Model with Propensity Score Matching .......................104

Table A5.4 Bounding Impacts of Program Participation. Trimming Bottom 5% of Treatment Group Outcomes. Fixed Effects Model with PSM ............................................105
Table A5.5 Bounding Impacts of Program Participation. Trimming Bottom 5% of Treatment Group Outcomes. Fixed Effects Model with PSM ............................................105

    Table A5.6 Program Impacts of CIMO in ENESTYC 2001. Models with Propensity Score Matching .......................106

    Table 6.1 Estimates of the Number of Micro and Small Firms (2006) ........................................................................ 110

    Table 6.2 Formal Firms that Accessed SME Support Programs .................................................................................110

    Table 6.3 Participation, Vouchers Used and Expenditures (2003-2006) ...................................................................... 111

    Table 6.4 Beneficiary Firms in the EEA According to Support Program ....................................................................114

    Table 6.5 Distribution of Treated and Untreated Firms ................................................................................................116

    Table 6.6 Logit Estimates for Program Participation ....................................................................................................117

    Table 6.7 Distribution of Treated and Untreated Sample by Program Type................................................................118

    Table 6.8 Estimates of Fixed-Effects and Between-Effects Models .............................................................................118

    Table 6.9 Fixed-effects Estimates by Trimming the Bottom 5% of the Distribution ...................................................119

    Table 6.10 Fixed-effects Estimates by Trimming the Top 5% of the Distribution ....................................................... 120

    Table A6.2.1 Results from Supplementary Survey by Support Program .................................................................. 123

    Table A6.2.2 Number of CITE-Calzado Users According to Registration Year* ......................................................... 123

    Table A6.2.3 Logit Model Dependent Variable: Ever treated by BONOPYME .......................................................... 124

    Table A6.2.4 Fixed-effects Model .................................................................................................................................. 125


    Acknowledgments

This report was co-funded by research grant RF-P105213-RESE-BB from the World Bank's Research Committee for a regional study, "Evaluating Small and Medium Enterprise Support Programs in Latin America," and by support from the Poverty Reduction and Economic Management Division of the Latin America and Caribbean Region of the World Bank. The objective of the study was to rigorously evaluate small and medium enterprise (SME) programs in four Latin American countries (Mexico, Chile, Colombia and Peru) to gain insights into whether SME programs work, which programs perform better than others, and why.

The research team was led by Gladys López-Acevedo (Task Team Leader and Senior Economist, LCSPP) and Hong Tan (advisor and consultant, LCSPP). The introduction (Chapter 1) and Literature Review (Chapter 2) were written by Hong Tan and Gladys López-Acevedo. The country studies were written by different authors: Hong Tan on Chile (Chapter 3); Juan Felipe Duque and Mariana Muñoz (consultants from Econometria) on Colombia (Chapter 4); Gladys López-Acevedo and Monica Tinajero (consultant) on Mexico (Chapter 5); and Miguel Jaramillo and Juan Jose Diaz (consultants from GRADE) on Peru (Chapter 6). The team was assisted by consultant Yevgeniya Savchenko and ITESM consultants Jorge Mario Soto, Hugo Fuentes and Victor Aramburu, and by our World Bank colleagues Anne Pillay, Rosa Maria Hernandez-Fernandez and Lucy Bravo. Special thanks go to David McKenzie (Senior Economist, DECRG), who guided the team on methodological and econometric issues throughout the study, and to Christopher Humphrey (consultant), whose editing made the report more readable.

The study would not have been possible without the assistance of and inputs from local partner institutions and governments. We gratefully acknowledge INEGI, the national statistical office of Mexico, particularly Abigail Duran (Director of Industrial Surveys, General Direction of Economic Statistics) and Adriana Ramirez (Subdirector, Operations and Training, General Direction of Economic Statistics); DANE, the national statistical office of Colombia, in particular Eduardo Freire (Technical Director of Statistics Methodology and Production), and the National Planning Department, Government of Colombia; INE, the national statistical office of Chile, in particular Mario Rodriguez, as well as Carlos Alvarez (Undersecretary of Economy) and Alberto Ergas (Advisor); and, from Peru, Renan Quispe (Head of INEI) and Agnes Franco (Executive Director of the National Competitiveness Council). We are grateful to colleagues who provided comments and inputs to the various drafts of the report, in particular Jose Guilherme Reis (PRMTR), Michael Goldberg (LCSPF), and Cristian Quijada Torres (LCSPF). The research also benefited from presentations of draft country studies at two workshops: an October 2009 seminar at the Rand Corporation in Santa Monica, CA, and a December workshop at the World Bank as part of its DIME Impact Evaluation Workshop series. We gratefully acknowledge the insightful comments and suggestions of participants at these workshops.

This report should be of interest to country governments, policymakers with responsibilities for SMEs, local researchers and the private sector in the region, as well as World Bank staff and bilateral donors. However, the findings and conclusions expressed in this report are entirely those of the authors, and do not necessarily represent the opinions of the World Bank, its Board of Directors or the countries it represents.


    Abbreviations

BDS Business Development Services
CID Colectivo Integral de Desarrollo (Integral Development Collective)
CIMO Calidad Integral y Modernizacion (Integral Quality and Modernization Program)
CITE Centro de Innovacion Tecnologica (Technical Innovation Center)
CONICyT Comision Nacional de Investigacion Cientifica y Tecnologica (National Science and Technology Research Council)
CONSUCODE Consejo Superior de Contrataciones y Adquisiciones del Estado (Council of State Contracting and Procurement)
CORFO Corporacion de Fomento de la Produccion (Production Promotion Corporation)
DANE Departamento Administrativo Nacional de Estadistica (National Statistics Administration Department)
DID Difference-in-difference
ENESTYC Encuesta Nacional de Empleo, Salarios, Capacitacion y Tecnologia (National Employment, Salary, Training and Technology Survey)
ENIA Encuesta Nacional Industrial Anual (Annual Industrial Survey)
FAT Fondos de Asistencia Tecnica (Technical Assistance Funds)
FDI Fondo de Desarrollo e Innovacion (Development and Innovation Fund)
FOMIPYME Fondo Colombiano de Modernizacion y Desarrollo Tecnologico de las Micro, Pequeñas y Medianas Empresas (Fund for the Modernization and Technological Development of Micro, Small and Medium Sized Firms)
FONDEF Fondo de Fomento al Desarrollo Cientifico y Tecnologico (Science and Technology Development Fund)
FONDOEMPLEO Fondo Nacional de Capacitacion Laboral y de Promocion del Empleo (National Fund for Training and Employment Promotion)
FONTEC Fondo Nacional de Desarrollo Tecnologico y Productivo (Technical and Productive National Development Fund)
GDP Gross Domestic Product
ICS Investment Climate Survey
IFI International Financial Institution
IMF International Monetary Fund
INE Instituto Nacional de Estadistica (National Statistical Institute)
INEI Instituto Nacional de Estadistica e Informatica (National Statistics and Information Institute)
ITESM Instituto Tecnologico y de Estudios Superiores de Monterrey (Monterrey Institute of Technology and Higher Education)


MP Ministerio de la Produccion (Production Ministry)
MIMDES Ministerio de la Mujer y Desarrollo Social (Women and Human Development Ministry)
MITINCI Ministerio de Industria, Turismo, Integracion y Negociaciones Comerciales Internacionales (Ministry of Industry, Tourism, Integration and International Negotiations)
MTPE Ministerio de Trabajo y Promocion de Empleo (Labor Ministry)
NSO National Statistics Office
OECD Organisation for Economic Co-operation and Development
OLS Ordinary least squares
PDP Programa de Desarrollo de Proveedores (Supplier Development Program)
PROCHILE Programa de Promocion de Exportaciones (Export Promotion Program)
PROFO Proyectos Asociativos de Fomento (Association Development Projects)
PROMPYME Comision de Promocion de la Pequeña y Micro Empresa (Micro and Small Enterprise Promotion Commission)
PSM Propensity score matching
PTI Programas Territoriales Integrados (Integrated Territorial Programs)
SENCE Servicio Nacional de Capacitacion y Empleo (National Training and Employment Service)
SERCOTEC Servicio de Cooperacion Tecnica (Technical Cooperation Service)
SME Small and Medium Enterprise
STPS Secretaria del Trabajo y Prevision Social (Ministry of Labor)
SUNAT Superintendencia Nacional de Administracion Tributaria (National Tax Administration Authority)
TFP Total factor productivity
VAT Value-added tax


CHAPTER 1
Motivation, Methodology and Main Findings

This report is the product of a research project rigorously evaluating the net impacts of participation in small and medium enterprise (SME) programs in four Latin American countries: Chile, Colombia, Mexico and Peru. The objective of the research was to determine which SME programs improve firm performance, and to gain insights into why some programs may be more effective than others.

To this end, the research team worked closely with national statistics offices in each of the four countries to develop firm-level panel data on program beneficiaries and a comparison group of non-program participants with similar firm attributes. The research team adopted a common analytic approach to ensure comparability of findings across countries. This drew upon methodologies used in recent impact evaluation studies of SME programs in high income and developing countries (reviewed in Chapter 2) to address issues of selection bias from program participation. The analysis also extended evaluation methodologies in several new directions: to accommodate the presence of multiple treatment cohorts and participation in multiple SME programs, to estimate the effects over time of impacts from program participation, and to test the sensitivity of impact estimates to firm exit. The four country studies are presented in Chapters 3 through 6.¹

The application of these evaluation techniques revealed generally positive and significant impacts for several (but not all) SME programs in the countries reviewed. All four country studies found statistically significant impacts of participation in any SME program on sales, positive impacts on other measures of firm performance varying by country, and differences in impacts across programs. The analyses highlighted the importance of accounting for the biases that arise from non-random self-selection of firms into programs, and of using longer panel data to measure impacts on firm performance that may only be realized over time with a lag.

¹ The project was co-funded by the Research Committee and the Poverty Reduction and Economic Management division of the Latin America and Caribbean Region of the World Bank.

These findings imply that the pessimism of earlier SME program evaluations may have been largely due to the methodologies used. The generally positive results found in these country studies for a number of SME programs using more refined techniques suggest that the pessimistic view might be reconsidered, and that governments and international development organizations should utilize some of the evaluation techniques described in this report to gain a better understanding of which types of programs work better, and why. This information, in turn, can be applied to improving existing programs, winding down those shown to be ineffective, and scaling up successful experiences to more efficiently improve SME performance, economic activity and employment.

    Motivation for the Study

In most countries, SMEs make up the vast majority of enterprises, and account for a substantial share of gross domestic product (GDP) and the workforce.


However, SMEs often lag behind larger firms in many dimensions of performance. This is widely believed to result from constraints SMEs face, including access to finance, weak managerial and workforce skills, inability to exploit scale economies in production, and imperfect information about market opportunities, new technologies and methods of work organization. In many cases they also suffer from non-competitive real exchange rates, cumbersome bureaucratic procedures for setting up, operating and growing a business, and investment climate constraints that are more burdensome to them than to their larger counterparts. As a result, many SMEs remain small, fail to export, and experience higher transaction costs and rates of business failure (World Bank 2007).

In response, many high income as well as developing countries have put in place a variety of programs offering financial products and subsidized business development services (BDS) to SMEs. BDS programs include skills development for workers, management training, technology upgrading, quality control and productivity improvement, market development, network formation and export promotion. While the SME constraints noted above are usually used to justify these programs, many governments also introduce SME programs to address social and developmental challenges such as poverty alleviation, poor working conditions, job creation, and promotion of strategic industries and exports. Early BDS programs were often introduced haphazardly by different ministries; most remained small and involved direct delivery of BDS services to SMEs by public sector agencies. Over the past decade, however, there has been a trend towards reforming SME support programs, incorporating market principles and demanding greater accountability from responsible agencies through impact evaluation studies.

These reforms notwithstanding, SME programs are rarely evaluated rigorously, and then mostly in high income countries such as the U.S. and in Europe. In the U.S., evaluation studies have demonstrated that enterprise support programs such as the Manufacturing Extension Partnership can significantly improve firm performance as compared to a control group (for example, see Jarmin, 1999). By contrast, developing country governments rarely evaluate their SME programs, and when they do, most rely on beneficiary satisfaction surveys or simple case studies which cannot tell program administrators (or development partners) whether a program is working. In the absence of research on which SME programs work, why, and how programs can be better designed and implemented to maximize economic benefits to firms and workers, most developing countries continue to spend scarce resources on SME support programs, many of dubious value.

International financial institutions such as the World Bank have also been largely silent on enterprise support programs. A review based on available evidence up to the late 1990s concluded that most government-delivered SME programs supported by Bank projects were poorly executed, had little impact and were not cost effective.² In the absence of credible evidence, the World Bank has advised developing country governments to focus instead on improving the investment climate for all enterprises, large and small, and on developing their financial markets and improving SME access to finance.³ The Bank has been largely disengaged from developing country efforts over the past decade to support SMEs, including ongoing reforms in many countries to introduce market principles into service delivery. In a recent 2007 report, the Organization for Economic Cooperation and Development (OECD) highlighted the paucity of evidence on the effectiveness of SME support programs, and called for a global stock-taking of best practice impact evaluation studies of SME programs that are both empirically rigorous and capable of informing the design and implementation of SME programs.⁴

This report takes a first step in this direction by rigorously evaluating the impacts of SME programs in four Latin American countries.

² See Geeta Batra and Syed Mahmood (2003), "Direct Support to Private Firms: Evidence on Effectiveness."
³ While there is broad consensus in the World Bank that SMEs face greater growth obstacles, there is limited support for treating small and large firms differently and for subsidizing SMEs. However, improving SME access to finance and more generally financial sector development would help remove investment climate constraints and allow SMEs to reach their growth potential (see Demirguc-Kunt et al, 2006).
⁴ OECD (2007), OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programs, Paris.

    The Impact Evaluation Challenge

The vast majority of SME program impact evaluations involve qualitative surveys of beneficiaries that are not very informative about whether programs are working. While useful for some purposes (for example, measuring satisfaction with services provided or identifying areas of program design and implementation for improvement), they cannot accurately measure the net impacts of program participation. That requires knowledge of the counterfactual: what outcomes would have been in the absence of the program. Most beneficiaries can only make guesses about this counterfactual, or they may provide responses that they think survey enumerators want to hear.

The manner in which the counterfactual can be used to identify the net impact of program participation, and why this impact is not always easy to quantify, can be illustrated graphically (Figure 1.1). The left-hand panel shows a scenario in which outcomes (for example, sales) are improving over time with and without the program, as might happen in a period of robust growth. Assume that sales in a SME are $5 million prior to joining the program (the point where the two lines diverge); two years later, post-program sales are $10 million, compared to $8 million without program participation. It is tempting to attribute all of the $5 million improvement in sales to the intervention, but this would be incorrect since sales would have grown to $8 million even without participating in the program. In this example, the program can only take credit for the $2 million increase in sales, from comparing the post-program outcome with its counterfactual. Without knowing the counterfactual, program beneficiaries would tend to compare their own pre- and post-program outcomes in estimating impacts, and thus overstate the role of the intervention in improving their performance.
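To make the arithmetic of the example explicit, the two possible comparisons give very different answers:

    Naive before-after comparison:       $10 million − $5 million = $5 million
    Comparison with the counterfactual:  $10 million − $8 million = $2 million (net impact)

The $3 million difference is growth that would have occurred even without the program and should not be credited to it.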

The right-hand panel shows the corresponding scenario for an economic downturn, when all outcome measures, both with and without the program, are declining. A simple comparison of pre- and post-program outcomes would reveal the counterintuitive result that the intervention had a negative impact on performance. However, comparing the post-program outcome with the counterfactual would reveal a positive net impact of the intervention, in the sense that the program mitigated the negative effects of adverse economic conditions on firm performance.

If program beneficiaries cannot be counted on to provide the counterfactual, the program evaluator will have to develop one. Ideally, the evaluator would select a group of firms identical to the treatment group in every respect except for the fact that they did not participate in the program. One possibility might be to identify a group of non-participants and control for any treatment and control group differences in characteristics using regression analysis. Another might involve selecting a non-participant group to match the program beneficiaries on observable characteristics such as sector, firm size and location. However, neither strategy is satisfactory if firms self-select themselves into programs on the basis of productivity attributes not observable to the evaluator.

Figure 1.1 Impact on Firm Performance With and Without SME Program
(Two panels plot an outcome against time, with and without the intervention; the vertical gap between the "with intervention" path and the "without intervention (counterfactual)" path is the program impact.)
Source: Storey (2004).

Self-selection of firms on unobservable attributes can bias efforts to estimate program impacts from a comparison of post-treatment outcomes of the treatment and control groups. For example, if one supposes that relatively weaker firms are attracted to the subsidized services that SME programs provide, one might expect them on average to have lower performance levels, both before and after treatment, as compared to the control group, thus underestimating program impacts. If negative selection is sufficiently large (a firm with productivity gap v1 in Figure 1.2), a simple comparison might actually suggest that the program had a negative impact, even though it improved the treated firms' performance (narrowed the productivity gap v1) over time. An alternative scenario might be when program administrators target those firms most likely to benefit from support services. In this case (productivity gap v2), the comparison with the control group would overstate the program's impact. Thus, without explicitly accounting for self-selection of firms into programs, simple comparisons of post-program performance of treatment and control group firms could lead to inaccurate estimates of program impacts.

To clarify the nature of this evaluation challenge and how researchers have sought to address it, consider a general model for firm i in time t which relates outcomes Y to observable firm attributes X and an indicator variable D for participation in the program:

    Y_it = α + β D_it + γ X_it + ε_it,   where ε_it = v_i + u_it        (1)

where ε_it is made up of a time-invariant firm-specific component v_i and a randomly distributed error term u_it. If firms are randomly assigned to the treatment and control groups, then both groups have similar distributions of both the observed attributes X and the non-observed attributes v and u. In such a case, ordinary least squares (OLS) regression models can be used to estimate (1) from post-program data to get an unbiased measure of β, the net impact of the program on outcome Y.
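To make the estimation step concrete, the short sketch below fits equation (1) by OLS on simulated post-program data. It is only an illustration of the logic, not code from the country studies: the firm attributes, the true program effect and the library choices (numpy, pandas, statsmodels) are assumptions made for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a post-program cross-section of firms (hypothetical data).
rng = np.random.default_rng(0)
n = 1000
firms = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # D: program participation, randomly assigned here
    "size": rng.normal(50, 10, n),      # X: an observable firm attribute (e.g., employment)
    "exporter": rng.integers(0, 2, n),  # X: another observable attribute
})
# Outcome Y generated with a true program effect of 2.0.
firms["sales"] = (
    10 + 2.0 * firms["treated"] + 0.3 * firms["size"]
    + 1.5 * firms["exporter"] + rng.normal(0, 3, n)
)

# Equation (1): Y = alpha + beta*D + gamma*X + error.
# With random assignment, OLS on post-program data recovers beta without bias.
model = smf.ols("sales ~ treated + size + exporter", data=firms).fit()
print(model.params["treated"])  # estimate of beta, close to the true effect of 2.0
```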

Estimating net impacts free of bias becomes more challenging when firms self-select into programs based on their observable and unobservable productivity attributes. To see this, rewrite (1) separately for the treatment and control groups and difference the two equations to get an expression for β as in (2):

    Y¹_it − Y⁰_it = β + γ(X¹_it − X⁰_it) + [E(v¹_i) − E(v⁰_i)] + (u¹_it − u⁰_it)        (2)
                        (observed attributes)   (selectivity bias)

The differenced equation in (2) identifies two potential sources of bias from non-random assignment, one due to differences between groups in observed attributes (X¹_it − X⁰_it) and another due to differences in the non-observable attributes v, E(v¹_i) − E(v⁰_i). The selectivity bias in the estimation of the treatment effect β arises because of the correlation between v and the program indicator D.

Researchers in this study have used regression analysis to address these two sources of bias. The first source of bias can be minimized by including a set of control variables for all observable attributes that are correlated with the outcome of interest. While this reduces the residual variance, the second source of bias from self-selection on unobserved attributes v still remains. Some researchers address this second source of bias by jointly modeling the program selection process and its outcome using a two-stage probit and regression model.⁵ However, this approach relies on some strong assumptions about the bivariate normal distribution of the system of equations and, more critically, on the availability of a good instrumental variable that is correlated with the program indicator D but not with any other determinants of the outcome variables of interest.⁶ Instruments meeting these criteria are difficult to find.

Figure 1.2 Selectivity Bias from Program Participation
(The figure plots an outcome against time for a control group and for treatment groups with high and low initial productivity; the gaps v2 and v1 at the year the program starts show how simple post-program comparisons are biased up or down.)

⁵ See James Heckman (1978), "Dummy Endogenous Variables in a Simultaneous Equation System," Econometrica 46, pp. 695-712. Essentially, a probit model of program participation is used in the first stage to calculate lambda, a selectivity correction variable, which is then used in a second stage regression to estimate the treatment effect free of selection bias.
⁶ The challenge is to find exogenous variables, such as a discrete policy change or institutional rules governing the selection process, that influence the program participation decision but not the outcomes. These are difficult to find, with the result that identification of the first stage probit model is most often achieved by functional form.
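The two-stage probit-and-regression approach summarized in footnote 5 can be sketched as follows. This is a simplified illustration under the stated assumptions, not the estimator applied in the country chapters; the column name "treated", the outcome column and the covariate lists are hypothetical, and the probit and OLS calls use statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df: pd.DataFrame, outcome: str,
                     x_cols: list, z_cols: list) -> float:
    """Two-stage selection-corrected estimate of the treatment effect.
    Stage 1: probit of participation D on covariates/instruments Z.
    Stage 2: OLS of Y on D, X and the selectivity-correction term (lambda)."""
    # Stage 1: probit model of program participation.
    Z = sm.add_constant(df[z_cols])
    probit = sm.Probit(df["treated"], Z).fit(disp=False)
    index = Z @ probit.params  # estimated probit index

    # Selectivity-correction term: inverse Mills ratio for participants,
    # and its counterpart for non-participants.
    mills = np.where(df["treated"] == 1,
                     norm.pdf(index) / norm.cdf(index),
                     -norm.pdf(index) / (1 - norm.cdf(index)))

    # Stage 2: OLS of the outcome on D, X and the correction term.
    X = sm.add_constant(pd.concat(
        [df[["treated"] + x_cols],
         pd.Series(mills, index=df.index, name="lambda")], axis=1))
    ols = sm.OLS(df[outcome], X).fit()
    return float(ols.params["treated"])
```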

Matching strategies are another alternative to traditional regression methods to control for these biases. Building on Rosenbaum and Rubin's (1983) work, recent studies have matched the treatment and control groups on the basis of a propensity score estimated from a probit or logit model of the program participation decision on a set of pre-treatment attributes. In this formulation, the program indicator D is assumed to be independent of the potential outcomes conditional on the attributes used to select the treatment group. By matching on the propensity score, the treatment effect can be estimated as the weighted average of the net impacts of covariate-specific treatment-control group comparisons. Propensity score matching may not be enough by itself to eliminate the second source of bias from self-selection based on productivity attributes not observable to the evaluator.
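The propensity score matching idea described above can be sketched in a few lines of Python. This is an illustrative one-to-one nearest-neighbor matching estimator, not the exact procedure used in the country studies; the column names ("treated", "outcome" and the covariate list) are hypothetical, and df is assumed to be a pandas DataFrame with one row per firm.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(df: pd.DataFrame, covariates: list) -> float:
    """Average treatment effect on the treated (ATT) from 1-to-1
    nearest-neighbor matching on the propensity score, with replacement."""
    data = df.copy()

    # Step 1: estimate the propensity score with a logit of D on pre-treatment X.
    logit = LogisticRegression(max_iter=1000).fit(data[covariates], data["treated"])
    data["pscore"] = logit.predict_proba(data[covariates])[:, 1]

    treated = data[data["treated"] == 1]
    control = data[data["treated"] == 0]

    # Step 2: for each treated firm, find the control firm with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = control["outcome"].to_numpy()[idx.ravel()]

    # Step 3: average the treated-minus-matched-control outcome differences.
    return float((treated["outcome"].to_numpy() - matched).mean())
```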

In the absence of good instrumental variables, studies have exploited the availability of panel data (repeated observations on the same firms) to eliminate the confounding effects of unobserved attributes on β using a difference-in-difference (DID) approach. The key to this approach is the assumption that v_i is fixed over time (in equation 1, v appears without a time subscript). Let t=0 and t=1 represent the pre- and post-participation periods. First differencing equation (1) separately for the treatment and the control groups eliminates the time-invariant term v_i:

    ΔY¹_it = β + γ ΔX¹_it + Δu¹_it   (treatment group)
    ΔY⁰_it = γ ΔX⁰_it + Δu⁰_it       (control group)        (3)

where Δ is a lag operator such that ΔY_it = Y_it − Y_i,t−1. The second difference between the differenced values of Y for the treatment and control groups in (3) may be expressed as:

    ΔY¹_it − ΔY⁰_it = β + γ(ΔX¹_it − ΔX⁰_it) + (Δu¹_it − Δu⁰_it)        (4)

Equation (4) yields an unbiased estimate of β if the evolution over time of observable attributes of the two groups is similar, ΔX¹_it = ΔX⁰_it, and changes in unobserved characteristics have means which do not depend upon allocation to treatment, that is, if Δu¹_it = Δu⁰_it. Because the time-invariant term v_i is eliminated by first differencing, both regression and matching methods can now be used to get unbiased estimates of the treatment effects β, either by controlling for differences in observed attributes X within a regression model context, or from treatment-control group comparisons matched on propensity scores estimated from X.⁷
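Equation (4) translates directly into a difference-in-differences regression on a two-period firm panel. The sketch below is a simplified version of that logic rather than the code used in the country studies; it assumes a pandas DataFrame "panel" with hypothetical columns firm_id, period (0 = pre, 1 = post), treated, outcome and a time-varying covariate x.

```python
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(panel: pd.DataFrame) -> float:
    """Difference-in-differences on a two-period panel by first-differencing,
    which removes the time-invariant firm effect v_i as in equation (3)."""
    wide = panel.pivot(index="firm_id", columns="period",
                       values=["outcome", "x", "treated"])
    diff = pd.DataFrame({
        "d_outcome": wide[("outcome", 1)] - wide[("outcome", 0)],
        "d_x": wide[("x", 1)] - wide[("x", 0)],
        # Treatment status does not vary before the program, so take the post value.
        "treated": wide[("treated", 1)],
    })
    # Equation (4): regress the change in Y on treatment status and changes in X;
    # the coefficient on 'treated' is the DID estimate of beta.
    model = smf.ols("d_outcome ~ treated + d_x", data=diff).fit()
    return float(model.params["treated"])
```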

    Review of Recent Literature

As part of the study, the research team selectively reviewed the literature on about 20 non-experimental impact evaluations of SME programs in both high income and developing countries conducted over the past decade (see Chapter 2 for more details). Collectively, the studies showed an evolution over time in the methodological approaches used to estimate program impacts. Studies from the late 1990s and early 2000s relied on regression analysis to control for treatment-control group differences in attributes, occasionally using difference-in-differences (DID) methods to control for unobserved firm heterogeneity or alternatively two-stage selectivity corrections. More recent studies tended to favor propensity score matching techniques combined with DID, and DID regression models to exploit the availability of long panel data, combined sometimes with instrumental variable methods and dynamic models with lagged endogenous variables.

While earlier assessments of SME programs were generally pessimistic about their impacts (notably Batra and Mahmood 2003, reviewing evidence from the 1990s), these more recent studies generally find positive impacts of program participation on intermediate outcomes, but mixed results for impacts on firm performance. Many developing country studies find gains in intermediate outcomes such as R&D expenditures, worker training, new production processes and quality control programs, and networking with other firms and with different sources of information and funding. The majority of high income country studies found positive impacts on performance measures such as sales and employment, and some found impacts on increased investments in new plant and equipment, exports, probability of firm survival, and either labor productivity or total factor productivity (TFP). Half of the developing country studies found positive impacts on performance measured by sales, TFP, export markets or export intensity; none found evidence of employment gains. One possible explanation for the mixed findings on performance in developing countries is the relatively short panels over which firms are followed as compared to the panels used in high income country studies. Considering that performance outcomes may take several years to materialize after program participation, these panels may not have been sufficient to capture performance impacts.

⁷ For a discussion of the efficacy of combined estimation strategies, see Blundell and Costa-Dias (2002), "Alternative Approaches to Evaluation in Empirical Microeconomics," CeMMAP Working Papers, CWP10/02, University College London, Institute of Fiscal Studies.

    The Four Country Studies

Chapters 3 through 6 present impact evaluations of SME programs in Chile, Colombia, Mexico and Peru. These four country studies contribute to the growing literature on non-experimental impact evaluations of SME programs in several ways. First, working with national statistics offices, the four country studies developed relatively long panel data on the treatment and control groups ranging from six years (Peru and Colombia) to between 10 and 15 years (Mexico and Chile). The long panels were deemed essential if the longer-term impacts of programs on firm performance were to be measured. Second, while there were differences in the structure of the panel data across countries, the research team adopted a common methodological approach for analyzing the data to address issues of sample selection bias and model specification, so as to ensure comparability of findings across countries. Finally, while the studies built upon the impact evaluation methodologies reviewed above, they also extended them in several new directions: to accommodate the presence of multiple treatment cohorts and participation in multiple SME programs, to estimate any time effects of impacts from program participation, and to test the sensitivity of impact estimates to firm exit.

    The Non-Experimental Data

Panel data needed to implement the non-experimental impact evaluation methodology for each of the four countries were assembled from several sources. Information on participation in SME programs already existed in three countries in the form of specialized firm surveys in Chile, Mexico and Colombia, and comparable programmatic information was developed from administrative records for Peru as part of the research project. This information was then linked to annual establishment survey data maintained by national statistical offices (NSOs) to create the non-experimental panel data set, with information on establishment characteristics and a range of performance measures such as the value of production, sales, employment, wages and exports (Table 1.1).

The treatment and control groups in Chile and Mexico were identified from firm surveys that asked respondents about participation in an open-ended list of major SME programs. The 2005 Chile Investment Climate Survey elicited participation information on several programs managed by the national development agency CORFO. In the case of Mexico, program participation information was elicited in two firm surveys, one in 2001 and another in 2005, that covered SME programs administered by several different public agencies. In both countries, the treatment group included firms that reported participation in one or more SME programs between the mid-1990s and 2004. The control group was drawn from the sample that reported never having participated in any SME programs. The non-experimental panel data were then created by linking both groups to the NSOs' annual establishment surveys, the 1992-2006 ENIA in Chile and the 1995-2005 EIA in Mexico.

The treatment and control groups in Colombia and Peru were identified differently. In the case of Colombia, the treatment group was a sample of beneficiaries of FOMIPYME (the main SME support program in the country) included in a 2006 survey fielded by the Ministry of Commerce. Since FOMIPYME was established in 2001, a high proportion of beneficiaries reported participation dates in 2002 and 2003. A brief telephone survey was administered to a stratified random sample of firms covering the 1999 to 2006 period, drawn from the NSO's annual establishment surveys, to: (i) screen firms for participation in any SME programs and (ii) select a control group of non-participants and a second treatment group that had participated in other, non-FOMIPYME programs. In the case of Peru, beneficiary lists from three SME programs (BONOPYME, PROMPYME and CITE-Calzado) were matched by tax registration numbers with the Peru NSO's annual economic survey (EEA) for 2001 to 2006. The treatment group comprised beneficiaries linked to the EEA, while the control group was selected from a comparable non-linked EEA sample of firms which are assumed not to have participated in any of these three programs.

    Analytical Approach

Our approach followed the recent program impact evaluation literature in combining propensity score matching and difference-in-differences (DID) methods to match the treatment and control groups on observable pre-treatment attributes and control for selectivity bias from unobserved heterogeneity. However, we extended this methodology in several directions to accommodate the specific structure of our non-experimental data sets, as discussed below.

First, unlike most studies which focus on evaluating the impacts of participation in one program, the treatment group in each of our country studies encompassed multiple SME programs. The presence of multiple programs was handled by estimating two kinds of program impacts: an overall impact for participation in any SME program, and separate impacts by type of program used. In the first case, the treatment indicator takes on values of 1 in the year of program entry and in subsequent years, and 0 otherwise; in the second case, separate treatment indicators are defined for each type of program. The use of multiple programs by a firm is readily accommodated with this framework: the treatment indicator for any program is turned on by the first occurrence of a program, while the separate effects of multiple programs are estimated by the treatment indicators for each program used.
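As an illustration only, here is a minimal sketch (with hypothetical column names: firm_id, year, program, entry_year) of how such treatment indicators can be constructed in a firm-year panel: an any-program indicator switched on from the year of first program entry, plus one indicator per program type.

import pandas as pd

def add_treatment_indicators(panel: pd.DataFrame, entries: pd.DataFrame) -> pd.DataFrame:
    """panel: one row per firm_id and year; entries: firm_id, program, entry_year."""
    out = panel.copy()
    # Any-program indicator: 1 from the year of first program entry onward, 0 otherwise.
    first_entry = (entries.groupby("firm_id")["entry_year"].min()
                   .reset_index().rename(columns={"entry_year": "first_entry"}))
    out = out.merge(first_entry, on="firm_id", how="left")
    out["any_program"] = (out["year"] >= out["first_entry"]).astype(int)
    # Separate indicators for each type of program used by the firm.
    for program, grp in entries.groupby("program"):
        p_entry = (grp.groupby("firm_id")["entry_year"].min()
                   .reset_index().rename(columns={"entry_year": "p_entry"}))
        out = out.merge(p_entry, on="firm_id", how="left")
        out[f"treat_{program}"] = (out["year"] >= out["p_entry"]).astype(int)
        out = out.drop(columns="p_entry")
    return out.drop(columns="first_entry")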

Second, our non-experimental data included multiple cohorts of beneficiaries entering programs over many years, which complicated estimation of the propensity score to match the treatment and control groups. In most studies focusing only on one treatment cohort and a control group, this is readily accomplished by estimating a cross-section probit model of the likelihood of program participation on a set of pre-program attributes. A natural way to address multiple treatment cohorts is to estimate a Cox proportional hazards model of time to program entry to match the treatment and control groups on a propensity score measured by the relative hazard ratios.8

Table 1.1 Overview of Data and SME Programs in Four Latin American Countries

Mexico
  SME Programs: Labor (CIMO-PAC); Economy (COMPITE, CRECE, FAMPYME, FIDECAP); Science & Technology (PMT, PAIDEC); BancoMext; other agencies
  Program Type: Training; BDS, technology, networking, supplier development; R&D and technology upgrading; export promotion; other support
  Data Sources: 2001 and 2005 ENESTYC and 2005 Micro-ENESTYC with module on SME program participation; 2001 and 2005 ENESTYC linked to the 1995-2006 panel of annual industrial surveys (EIA)

Chile
  SME Programs: FAT, PROFO, PDP, FONTEC, SENSE
  Program Type: BDS, group BDS, supplier development, technology, in-service training
  Data Sources: 2005 Chile Investment Climate Survey (ICS) with module on SME program participation; 2005 Chile ICS linked to 1992-2006 panel of annual industrial surveys (ENIA)

Colombia
  SME Programs: FOMIPYME (different support lines by FOMIPYME providers); non-FOMIPYME programs
  Program Type: Training, BDS including supplier development, export promotion, technology, other support
  Data Sources: 2006 FOMIPYME survey of beneficiaries; linked to 1999-2006 annual surveys of manufacturing (EAM), services (EAS) and commerce (EAC); telephone survey to screen control sample for program participation

Peru
  SME Programs: BONOPYME, PROMPYME, CITE
  Program Type: BDS; public procurement; BDS and technology
  Data Sources: Beneficiary lists with tax registration numbers from administrative records; linked to 2001-2006 annual economic survey (EEA) by tax registration numbers


This approach was adopted in Mexico and Chile, but not in Peru or Colombia which, after experimentation with the Cox model, fell back on a cross-sectional probit or logit model to estimate propensity scores.
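The sketch below (hypothetical column names; not the studies' actual code) illustrates the idea using the lifelines implementation of the Cox model: time to program entry is related to pre-treatment attributes, control firms are treated as censored, and each firm's fitted relative hazard of entry serves as its propensity score for matching.

import pandas as pd
from lifelines import CoxPHFitter

def cox_propensity_scores(firms: pd.DataFrame) -> pd.Series:
    """Relative hazard ratios of program entry, usable as propensity scores."""
    covariates = ["size", "age", "manuf"]                 # pre-treatment attributes
    cph = CoxPHFitter()
    cph.fit(firms[["years_to_entry", "entered_program"] + covariates],
            duration_col="years_to_entry",                # time to program entry
            event_col="entered_program")                  # 1 = entered, 0 = censored control
    # Relative hazard of entry implied by each firm's attributes.
    return cph.predict_partial_hazard(firms)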

Third, the combined matching and DID methods were implemented within a panel regression framework rather than using a traditional matching approach. In the traditional approach, nearest-neighbor or other matching estimators are used to make treatment-control group comparisons of outcomes at one point in time, typically several years after the treatment. In our data, time since treatment can vary considerably in a given post-treatment year because of the presence of multiple treatment cohorts. This variation in time since treatment cannot be controlled for using the traditional matching approach but is readily accommodated within a panel regression framework. All four country studies relied on panel regression models to implement DID estimators of treatment effects, focusing on the subsample of treatment and control firms within the region of common support as measured by the propensity score.9
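A minimal sketch of this setup, under assumed column names (firm_id, year, log_sales, treated, pscore), is shown below: firms outside the region of common support are dropped, and the DID treatment effect is the coefficient on the post-entry treatment indicator in a regression with firm and year fixed effects. A dedicated panel estimator would normally replace the firm dummies used here for brevity.

import pandas as pd
import statsmodels.formula.api as smf

def fe_did_on_common_support(panel: pd.DataFrame, lo: float, hi: float):
    """DID estimate of the treatment effect with firm and year fixed effects."""
    # Keep treatment and control firms whose propensity score lies in [lo, hi].
    df = panel[panel["pscore"].between(lo, hi)].copy()
    df = df.dropna(subset=["log_sales", "treated", "year", "firm_id"])
    # Firm dummies absorb time-invariant heterogeneity; year dummies absorb common shocks.
    model = smf.ols("log_sales ~ treated + C(year) + C(firm_id)", data=df)
    # Cluster standard errors by firm to allow for serial correlation within firms.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})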

Fourth, the panel regression framework provided the flexibility to exploit the long panel data to test for potentially important time effects of program impacts. Studies typically estimate an overall average treatment effect but rarely investigate whether post-treatment impacts diminish or increase over time, or when impacts are first manifested.10 If program impacts are only realized with a time lag, this might offer one explanation for why some studies with short panel data find significant impacts on intermediate outcomes but no measurable improvements in firm performance. All four country studies estimated model specifications in which the treatment indicator was also interacted with a measure of time since treatment to see whether impacts were constant, decreased or increased with years since exposure to the treatment.

8 While the underlying hazard is not estimated in the Cox model, the conditional probability of program entry can be related to a vector of pre-treatment attributes (as in traditional probit matching models) and a set of year dummy variables to account for potential cohort-specific effects.

9 The distribution of propensity scores in the treatment and control groups can differ significantly. The region of common support is that range of propensity scores within which both treatment and control group firms are found, and it thus defines a closely matched treatment and control group.

10 Elizabeth King and Jere Behrman (2008), Timing and Duration of Exposure in Evaluations of Social Programs, World Bank Policy Research Working Paper 4686, make a similar point that insufficient attention has been paid to the time patterns of impacts in many social programs. Evaluations conducted too soon after the treatment could result in promising programs being terminated too soon after a rapid assessment showed negative or no impacts.

This latter measure of exposure ranged from one year to four years in the case of Colombia and Peru, to eight years in Mexico, and up to 12 years in Chile.
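A hedged sketch of such a specification, with assumed column names (first_entry for the year of program entry, treated for the post-entry indicator), is given below: the treatment indicator is interacted with years since entry, so a significant interaction coefficient signals impacts that grow or fade with exposure.

import pandas as pd
import statsmodels.formula.api as smf

def fit_time_effects(panel: pd.DataFrame):
    """Allow the treatment effect to vary with years since program entry."""
    df = panel.copy()
    # Years since entry: 0 for pre-treatment years and for never-treated firms.
    df["years_since"] = (df["year"] - df["first_entry"]).clip(lower=0).fillna(0)
    # The coefficient on treated is the initial impact; the interaction captures
    # how the impact changes with each additional year of exposure.
    model = smf.ols("log_sales ~ treated + treated:years_since + C(year) + C(firm_id)",
                    data=df)
    return model.fit()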

Finally, all four country studies investigated the robustness of program impact estimates to potential biases from firm exit. A unique feature of our non-experimental data is that firms are only observed in the panel data if they survived until the year of the specialized firm survey: 2005 in the case of Chile, 2001 and 2005 in the case of Mexico, and 2006 in the case of Colombia. This implies that the linked panel data from annual establishment surveys include only new entrants and surviving firms, but not firm exits. To the extent that program participation reduces the likelihood of exit for the least productive firms, excluding firm exits from the treatment group potentially biases estimates of program impacts. While the Peru data were developed differently, similar biases may still arise from the process of linking program beneficiary lists to firms with panel data, and survivors from both the treatment and control groups are more likely to be linked than firms with a high probability of exit. The country studies tested for this potential source of bias by re-estimating outcome models dropping the bottom 5 and 10 percent of the treatment group that might have failed in the absence of the program.
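The sketch below illustrates the trimming step only, under the assumption (ours, not necessarily the studies') that treated firms are ranked by a pre-program sales measure as a proxy for exit risk; the outcome models are then re-estimated on the trimmed samples.

import pandas as pd

def trim_treatment_group(df: pd.DataFrame, share: float = 0.05) -> pd.DataFrame:
    """Drop the bottom `share` of treated firms, ranked by pre-program sales."""
    treated = df[df["ever_treated"] == 1]
    cutoff = treated["pre_program_sales"].quantile(share)
    drop = (df["ever_treated"] == 1) & (df["pre_program_sales"] < cutoff)
    return df[~drop]

# Re-estimate the outcome models on trim_treatment_group(df, 0.05) and
# trim_treatment_group(df, 0.10) and compare the resulting impact estimates.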

Overview of Cross-Country Results

All four country studies estimated propensity scores to identify a matched sample of treatment and control groups. Some interesting patterns emerged from this exercise on the determinants of program use. In common across countries, SME programs appeared to attract somewhat larger firms relative to the omitted group of micro and small firms (with less than 20 employees), and firms that have been in operation over ten years. This finding may be the result of diminished incentives for new startups and small enterprises to participate, or a statistical artifact of the data, created by linking program beneficiary data to annual industrial surveys that sample disproportionately from larger (over 10 employees) and therefore more established firms. When data were available by sector, manufacturing firms were more likely to participate compared to firms in either services or trade sectors. In Mexico and Chile, program use was higher outside the national capitals of Mexico City and Santiago, which may simply reflect the geographic location of industry outside the capital, a greater demand for business support and credit services in remote areas, or more active outreach to outlying regions by program administrators.

In addition to these observed pre-program attributes, the matching models also included measures of lagged sales and sales growth to take into account transitory shocks that might influence program participation decisions. In Chile and Colombia, firms with lower lagged sales but good growth prospects were more likely to participate in programs (though only lagged sales are statistically significant), suggesting that temporarily depressed pre-program performance was a motivation for seeking technical assistance and support in these countries. In contrast, SME programs in Mexico and Peru appeared to attract better performing firms relative to the control group: in those countries, firms with higher pre-treatment sales were more likely to participate in SME programs.

The Chile evaluation used nearest-neighbor estimators to compare the 2004 intermediate outcomes of a sample of treatment and control group firms matched on their propensity scores. Relative to comparable control firms, the treatment group was significantly more likely to: (i) have introduced new products or new production methods in the past three years; (ii) invest in research and development (R&D); (iii) have quality control systems in place such as ISO-9000; and (iv) have in-house or external in-service training for its employees.

Table 1.2 Impacts of Program Participation: Fixed-Effects Results (ATT impacts on final outcome variables)

Chile
  Any program: log(sales), log(output), log(wages), log(output/L) (7 to 9%); export share of sales (2%); log(employment) (no impact)
  By program:
    Technical assistance: log(sales), log(output) (20%); log(wages), log(output/L) (8 to 15%)
    Cluster formation: log(sales), log(output), log(wages) (7 to 8%)
    Technology: log(wages), export share of sales (5%)
    Credit programs: all outcome variables (no impact)

Mexico
  Any program: log(sales), log(output), log(employment) (5 to 6%); log(wages), log(exports) (no impact)
  By agency responsible:
    Economy ministry: log(sales), log(output), log(employment) (3 to 7%)
    Science & technology: log(sales), log(output), log(employment) (8 to 10%); log(exports) (25%)
    Labor ministry: log(sales), log(output) (-3 to -5%); log(exports) (-25%)
    Foreign trade bank: log(wages) (-3%)
    Other agencies: log(sales), log(output), log(employment), log(wages) (3 to 6%)

Colombia
  Any program (simple models only): log(sales) (5%); log(employment), total factor productivity (13 to 17%); export share of sales (24%)
  By program:
    FOMIPYME: export share of sales (40%)
    Other programs: export share of sales (50%); all other outcome variables (no impact)

Peru
  Any program: log(profits), log(sales), log(profits/L), log(sales/L) (21 to 26%)
  By program:
    BONOPYME: log(profits), log(sales), log(profits/L), log(sales/L) (15 to 32%)
    PROMPYME: log(profits), log(sales), log(profits/L), log(sales/L) (19 to 20%)
    CITE-Calzado: all outcome variables (no impact)


Related research in Mexico and Colombia found similar impacts of program participation on many of these intermediate outcomes (Tan and López-Acevedo, 2007 and Econometria Consultores, 2007). Together with results of the global impact evaluation studies reviewed in Chapter 2, these findings suggest that SME programs are having tangible impacts on the short and medium term intermediate outcomes that they are targeting.

Do these gains in intermediate outcomes translate into longer-term improvements in firm performance? All four country studies found statistically significant and generally positive impacts of participation in any program on several firm performance measures (Table 1.2). In common across countries, participation in any program improved sales growth. The estimated impact on sales of any treatment ranged from 5 percent for Colombia (simple models), 5-6 percent for Mexico, 7-9 percent for Chile and over 20 percent for Peru. Estimated impacts on other performance measures varied across countries. The employment impacts of any program participation were positive in Mexico and Colombia, but insignificant in Chile; the effects on export intensity were positive but modest in Chile (2 percent) but were large and positive in Colombia (24 percent). Peru and Colombia also saw program impacts on outcome measures not used in the other two countries. The Peru study found large positive impacts on profits and profitability per worker from any treatment (over 20 percent), while the Colombia study estimated positive impacts on a measure of total factor productivity (over 12 percent).

The evaluations indicate that some programs were more effective than others. In Chile, for example, technical assistance programs appeared to have larger impacts on final outcomes, followed by cluster programs and programs to promote technology development and adoption. In contrast, no impacts were found for programs providing just subsidized finance. In Mexico, programs administered by the Economy Ministry and the Science and Technology Council had large positive impacts, while programs of the Labor Ministry and the export bank showed negative or insignificant impacts. In Colombia, both FOMIPYME and other programs only appeared to have an impact on exports. In Peru, both technical assistance and public procurement programs had large positive impacts on profitability and sales, but no impacts were found for technical centers (CITEs) catering to the shoe industry.

The country studies also addressed three other estimation issues. First, all studies found evidence that program estimates were biased by self-selection based on unobserved firm heterogeneity. Program impacts on key outcomes measured in levels were either negative or implausibly large, as compared to outcomes measured in first differences, which eliminate the unobserved (and time-invariant) heterogeneity. Second, studies experimented with model specifications in which impacts were allowed to vary with time since program participation. The Chile study found evidence for time effects in program impacts, with many impacts becoming evident only four years after program participation. Mexico only found time effects of program participation for fixed assets, while no evidence of time effects was found in the other two countries. Finally, to address the possibility that firm exits (precluded in our panel data) potentially bias estimates of program impacts, all country studies re-estimated outcome models dropping the bottom 5 or 10 percent of the treatment group that might otherwise have exited the sample in the absence of the program. This sensitivity analysis bounding the results revealed no evidence of systematic biases in our estimates of program impacts.

    Concluding Remarks

SMEs make up the majority of enterprises in all countries, and programs to support them are a common policy instrument in both high-income and developing countries to promote growth, increased competitiveness and job creation. Yet, remarkably little is known about whether they work, which programs are more or less effective, and why. The tools exist to rigorously evaluate SME programs and draw insights into how programs may be better designed and implemented to improve their impacts on firm performance, but they are rarely used.

To address this paucity of research on SME programs, this report set out to rigorously evaluate the impacts of SME programs in four Latin American countries using a non-experimental approach and panel data developed in conjunction with the NSOs of these countries. All four country studies found statistically significant impacts of participation in any SME program on sales, positive impacts on other measures of firm performance varying by country, and differences in impacts across programs. The analyses highlighted the importance of accounting for the biases that arise from non-random self-selection of firms into programs, and of using longer panel data to measure impacts on firm performance that may only be realized over time with a lag.

The country studies included in this report add to the accumulating body of recent evidence on the impacts of SME programs on firm performance. Not all SME programs are equally effective, as suggested by our evaluation and the findings of similar evaluation studies in other high-income and developing countries. Surely some programs are ineffective because of poor design and implementation. But failure to find positive impacts in other programs may also be the result of inadequate control for selectivity bias, the choice of a control group, or lags in the realization of performance impacts. While this body of research collectively advances our knowledge on how to measure program impacts, our understanding of why some programs work while others do not, and how programs can be made more effective, remains quite limited.

The World Bank and other international and bilateral development institutions can play a greater role in filling this knowledge gap on SME programs. In the past decade, the development community has been largely silent on enterprise support programs, advising governments to focus instead on improving the investment climate for all enterprises, large and small, and on facilitating access to finance. That position should be revisited in light of the growing body of evidence based on recent rigorous impact evaluations and ongoing reforms in many developing countries to implement SME support programs along market principles.

Some governments are beginning to mandate rigorous impact evaluations of SME programs, principally in Latin America and less frequently in other regions. The development community can facilitate this process through research funding, dissemination of best practices, and technical assistance to developing country governments on the design and implementation of rigorous impact evaluations of their SME programs.

Developing countries, for their part, can facilitate impact evaluations by improving their information base on SME program beneficiaries. Administrative data on program beneficiaries, when they exist, are often incomplete; they reside within individual ministries, implementing agencies or service providers and are rarely consolidated into a central database; and they do not strategically collect information that would allow easy linkage with ongoing surveys of firms by NSOs. Addressing these limitations would make it less time consuming and expensive to mount an impact evaluation. Including questions on program participation in periodic establishment surveys fielded by NSOs is one way of generating a non-experimental panel data set, an approach used in the Chile, Mexico and Colombia country studies. An alternative is to systematize the linking of administrative data on program beneficiaries with the NSOs' ongoing annual establishment surveys. This approach, used in New Zealand, creates a panel dataset with rich information on program participation and firm performance that facilitates ongoing impact evaluations of different programs and other policy interventions.


CHAPTER 2

A Review of Recent SME Program Impact Evaluation Studies

    Introduction

For many years, governments in both high-income and developing countries have extended a wide range of subsidized business development support and financing to small and medium-sized enterprises (SMEs). The theoretical justification for SME programs is to address market imperfections and the effects of regulations that are thought to affect SMEs more than their larger counterparts, and to strengthen their productive or technological capabilities.11 While there is empirical support for these theoretical arguments, the evidence is less compelling when it comes to the effectiveness of public interventions in supporting SMEs.

In a review of impact evaluation studies of SME support policies, Storey (1998) pointed out that, with some noteworthy exceptions, most studies in high-income countries are better characterized as exercises to monitor program implementation, participation rates, or beneficiary satisfaction with support services. While informative about program delivery, they cannot provide a basis for assessing the effectiveness of the programs in achieving improvements in enterprise performance.

This paucity of rigorous SME program impact evaluations is even more pronounced in developing countries. Based on available evidence from the 1990s, Batra and Mahmood (2003) concluded that most public support programs for enterprises in developing countries had little or no impact on performance, were not cost-effective and did not warrant continued public support. A recent OECD (2007) report reiterated this point, calling for a concerted effort to develop global best practices in the design, implementation and rigorous impact evaluation of enterprise support programs.12

11 See Hallberg (2000) for an overview of theoretical justifications for government intervention to support SMEs.

This chapter takes a first step in this direction by selectively reviewing evaluation studies of enterprise support programs that were completed in the past decade in both high-income and developing countries. The literature review focuses on non-experimental studies that compare the post-program performance of program beneficiaries (treatment group) to that of a group of enterprises with similar attributes as the treatment group that did not participate in any of these programs (control group). The motivation is to gain insights into how other researchers have approached the evaluation of SME program impacts (development of the non-experimental data, analytic approaches used to estimate treatment effects free of selection bias, and hypotheses tested) and what results they found. While useful for our own analyses of SME programs in Latin America, the review also calls into question the earlier, pessimistic assessment about enterprise support programs in developing countries based on available evidence from the 1990s.13 The greater rigor and accumulated evidence from more recent studies might warrant a reassessment of that earlier position.

12 OECD (2007), OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes, Paris.

13 Geeta Batra and Syed Mahmood (2003), Direct Support to Private Firms: Evidence on Effectiveness, World Bank Policy Research Working Paper 3170, November, Washington DC.


    Studies Selected for Review

A sample of recent impact evaluation studies was selected from the published economics literature on enterprise support programs in high-income and developing countries. Studies were selected using Storey's (1998) six-level classification of the analytic rigor of impact evaluation studies, which ranges from a level 1 simple qualitative satisfaction survey of program beneficiaries to the most rigorous level 6 studies that yield estimates of net impacts useful for assessing the policy effectiveness of programs. We restricted the literature review to studies having a non-experimental design with a treatment and control group (level 5) or addressing issues of selectivity bias from program participation in addition to having a non-experimental design (level 6), and that were completed or published in the past 10 years.

The literature search identified 19 rigorous impact evaluation studies, 10 from high-income countries and nine from developing countries. The high-income country studies included those from the United States, United Kingdom, Republic of Ireland and Northern Ireland, Australia, New Zealand, Belgium and Japan. Among developing countries, the studies included Chile, Mexico, Argentina, Brazil, Bangladesh and Turkey. Most of the high-income country studies were published in the late 1990s and early 2000s, with some in the late 2000s, while developing country studies were published principally in the second half of this decade.

Each study was reviewed following a common template and written up as a one-page abstract. Each abstract begins with the citation of the study, classification by level of development (high-income or developing country), and the period covered by the analysis. It then provides a brief overview of the study, a description of the SME or enterprise support program(s) being evaluated, the methodological approaches used, and the main findings. These abstracts are reproduced in Annex A.2, separately for high-income countries (A) and developing countries (B), with studies sorted by year of publication. The salient features of these studies are summarized in Tables 2.1 through 2.3.

    Enterprise Support Programs Studied

The programs evaluated in these studies covered a range of subsidized support services targeting principally SMEs but also larger enterprises (Table 2.1). These support services fall into three categories: (a) business development services (BDS), which include a range of consulting services, training for workers, management and quality control practices, technology upgrading, market development and export promotion; (b) research and development