
Crowdsourcing Accountability: ICT for Service Delivery

Guy Grossman† Melina Platas‡ Jonathan Rodden§

August 27, 2017

Abstract

We examine the effect on service delivery outcomes of a new information communication technology (ICT) platform that allows citizens to send free and anonymous messages to local government officials, thus reducing the cost and increasing the efficiency of communication about public services. In particular, we use a field experiment to assess the extent to which the introduction of this ICT platform improved monitoring by the district, effort by service providers, and inputs at service points in health, education and water in Arua District, Uganda. Despite relatively high levels of system uptake, enthusiasm of district officials, and anecdotal success stories, we find evidence of only marginal and uneven short-term improvements in education and water services, and no discernible long-term effects. Relatively few messages from citizens provided specific, actionable information about service provision within the purview and resource constraints of district officials, and users were often discouraged by officials' responses. Our findings suggest that for crowd-sourced ICT programs to move from isolated success stories to long-term accountability enhancement, the quality and specific content of reports and responses provided by users and officials is centrally important.

*Data collection was conducted by Innovations for Poverty Action Uganda and Hatchile Consult, Ltd. The project was approved by the Uganda National Council for Science and Technology (#SS 3266), the Office of the President in Uganda, the Mildmay Uganda Research Ethics Committee (#REC REF 0204-2015), and Institutional Review Boards at Stanford University and the University of Pennsylvania. A pre-analysis plan is registered at EGAP (ID 20160819AA). We would like to thank Jon Helfers, Maximillian Seunik, Zachary Tausanovich, Areum Han, Hardika Dayalani, Siyao Li, and Mitchell Goist for research assistance at various stages in the project. We also thank our research partners at Social Impact, and our implementing partners: GAPP, UNICEF Uganda, and the Uganda USAID mission. Most importantly, we thank the Arua district local government for their interest and collaboration in implementing the project, and the thousands of residents of Arua district who participated in the U-Bridge program and our research activities.

†Department of Political Science, University of Pennsylvania and EGAP, [email protected]
‡Department of Political Science, New York University Abu Dhabi, [email protected]
§Department of Political Science, Stanford University, [email protected]


1 Introduction

In many low-income countries, the long route of accountability—citizens holding service providers to account via their representatives in government—has frequently been sidelined in favor of community monitoring programs (Andrabi, Das, and Khwaja, 2017; Banerjee et al., 2007; Bjorkman and Svensson, 2009; Duflo, Hanna, and Ryan, 2012). Circumventing governments and politicians that have been seen as difficult to reach, non-responsive and ineffective, programs encouraging citizens to directly hold service providers to account have yielded mixed results while also raising sustainability concerns (de Laat, Kremer, and Vermeersch, 2008; Banerjee et al., 2010). Concurrently, in the past decade, access to mobile phones has increased exponentially (Figure 1), prompting an interest in harnessing innovations in information communication technology (ICT) to solve some of the most intractable development challenges (Grossman, Humphreys, and Sacramone-Lutz, 2014). In this paper, we return to the older "long route" model of accountability, armed with new technology that dramatically reduces the cost citizens incur in accessing policymakers.

Specifically, we explore the effects of a new ICT platform, U-Bridge, on the delivery of local public services in the health, education and water sectors. The platform was designed to bridge the gap between citizens, who possess information on local service delivery problems, and subnational government officials, who have the authority and ability to hold service providers to account. Implemented in Arua district in northwestern Uganda, U-Bridge allows citizens to report service delivery issues directly to district local government officials through a free and anonymous text-messaging system.

The U-Bridge program is based on the simple idea that participatory grassroots programs need not necessarily be designed to circumvent government actors. Instead, grassroots efforts can be harnessed to better inform government officials of service delivery deficiencies. This division of labor between those who are best positioned to collect information (community members) and those who are best positioned to act upon it (actors outside the community, such as public officials) lies at the heart of U-Bridge and other recent "crowdsourcing" initiatives (Meier and Munro, 2010).

We assume that public officials have the means to improve at least some areas of service provision, but are constrained in part by a dearth of targeted information on local service delivery problems. We further assume that citizens possess such information but are constrained in their ability to share it with government officials due to high monetary and social costs. To the extent that citizens believe that local government officials will be responsive, we expect citizens to use the ICT platform to make requests, complain and raise concerns (Grossman, Michelitch, and Santamaria, 2016).


Figure 1: Mobile cellular subscriptions (per 100 people) in sub-Saharan Africa and Uganda, 1985-2015. Source: The World Bank; International Telecommunication Union, World Telecommunication/ICT Development Report and database.

Knowing that local public officials have more and better information can increase citizens' expectations from government, which in turn may incentivize public officials to exert greater effort (Gottlieb, 2016). Moreover, relatively high usage of the platform can improve the efforts of service providers, independently of public officials' actions, to the extent that service providers internalize the possibility of top-down sanctioning. The overall effect should be an improvement in local governments' monitoring capacity and input targeting, as well as in service providers' efforts.

To test the above theoretical expectations, we collaborated with GAPP, a non-governmental organization that has been working in Arua to improve the capacity of the district local government. GAPP developed the U-Bridge platform and implemented it in the district local government, training the district's political and technocratic arms on utilizing the system's functions. While all district residents could potentially contact Arua local district government via U-Bridge, only villages randomly selected by the research team were encouraged to use the ICT platform during the study period. Specifically, we constructed clusters of villages around Arua's 48 mid-level government health centers, and randomly assigned half of those clusters to the treatment group and half to the control. Each cluster consists of about 4-5 villages that are served by the same public health center and by at least one local public primary school. Villagers in treatment areas were informed of the new political communication channel via community meetings and an extensive door-to-door registration exercise that yielded more than 3,000 registered (potential) users.

We collected outcome data on the performance of schools and health centers, culled from Arua's line ministries (e.g., the district health office), as well as from unannounced audits organized independently by the research team. We also collected administrative data on village-level requests for water parts, and on the equipment received by villages to build and fix water service points. In addition, we conducted an endline survey in sixteen treatment villages to assess knowledge about, use of, and satisfaction with the U-Bridge service. For health and education, we examine three core outcomes: monitoring (e.g., inspections and calls to the facilities from district officials), effort (e.g., rates of staff absenteeism), and inputs (e.g., supplies of essential medicines, books and desks). We further collected data on health services utilization and on school test scores. We created indices combining these data sources for each type of outcome in each sector, by period (baseline, midline and endline).

Between August 2014 and November 2015, more than 11,000 messages were sent via U-Bridge to government officials in Arua district, though we estimate that only a quarter of these were directly relevant to service delivery. We find that U-Bridge had a sizable effect on water parts and services (about two-fifths of a standard deviation) and on education services (between one quarter and one third of a standard deviation in terms of the control group outcome distribution), but only in the short term (the first year). By contrast, we do not find that U-Bridge had a discernible effect on education services beyond the first year, nor on health services at any point in time.

In the final part of this article we discuss some of the reasons that may account for our mixed findings regarding the effectiveness of U-Bridge on service delivery outcomes, as well as why the modest effects of the program on education services were not sustained into the second year. Since uptake of U-Bridge was relatively high compared to similar ICT platforms for political communication, at least in the first year, we can rule out the possibility that there was simply no demand for the service by citizens.1 Instead, we find that uptake of U-Bridge was highly uneven, concentrated in a handful of treatment villages, and that even in high-uptake villages a majority of citizens were not satisfied with the response of Arua district officials, which may have led to the dramatic drop in usage we observe over time. Low satisfaction may have resulted in part from a mismatch in expectations about what constituted a useful and appropriate response to problems reported. For example, many responses by district officials referred users to lower-level local government or local institutions rather than directly following up on the problem reported. We also find that a considerable number of messages did not contain sufficient information that could be acted upon by local bureaucrats, or concerned issues that could not have been addressed in the immediate term.

1 See Grossman, Humphreys, and Sacramone-Lutz (2014, 2016), who reach a similar conclusion with respect to ICT platforms for communicating political priorities and preferences (rather than service issues).

Our paper contributes to a growing literature on ICT for better governance by highlighting several important insights. First, we show that ICT innovations linking citizens to government officials have a genuine potential to improve local government services, at least in the water and education sectors. Second, we point to the key role that common knowledge likely plays in creating incentives for both government officials and service providers to exert greater effort. Third, we learn valuable lessons about the importance of citizens' perceptions and beliefs about what local government can and should do.

From a policy perspective, the paper points out that in order to affect the behavior of officials and service providers, it is important that messages from citizens are actionable and that officials' responses match user expectations. The first step in ensuring that ICT platforms are useful is to ensure that the crowd-sourced information is itself meaningful to message recipients. We demonstrate that in contexts such as rural Uganda, it is non-trivial for citizens to craft a message containing information with a level of specificity that is usable for government officials. One policy implication, therefore, is that for crowd-sourced platforms to be effective, potential users must receive substantial training on what kind of information they should share. Second, we discuss how officials' responses, even if accurate, may not have met users' expectations about how the program would work, leading to dissatisfaction and ultimately disengagement. Third, the paper points toward the need to better understand differential uptake of new political communication technologies; i.e., why seemingly similar communities have different capacities and/or willingness to take advantage of community reporting platforms. Understanding the drivers of individual- and village-level uptake is essential for the design and future implementation of similar ICT programs.

Our paper also contributes, more broadly, to the literature on information and accountability. This body of work is mostly concerned with the dearth of information in the hands of citizens. Much of the existing literature assumes, often correctly, that voters lack information about the quality of service provision that would otherwise allow them to better hold front-line service providers to account. While there are certainly features of service quality that are difficult for users to observe, there are others that are visible—and perhaps uniquely visible—to users.2 Even if citizens possess reliable targeted information, they may be reluctant to confront service providers due to collective action problems inherent in community monitoring. Instead, in our study, citizens themselves are the providers of information to actors that are in a better position to act upon that information: public officials.

2 Teacher absenteeism is an example of poor service quality that is visible to villagers, but not necessarily to local government officials in far-flung district capitals.

One possible benefit of an ICT platform like U-Bridge is to provide a way to transmit information from far-flung villages about one-off problems that can be resolved quickly or that require emergency resources. Examples of such issues include wind blowing the roof off of a classroom block, or an outbreak of a disease affecting farmers' produce or husbandry. In this way, such a program might solve problems in a piecemeal manner. However, a second set of issues revolves not around isolated or idiosyncratic problems, but rather around the more intractable challenge of monitoring and accountability – in particular, making sure teachers and health workers show up and perform their duties. Thus the second possible benefit of a program like U-Bridge is to generate a sea change in accountability – an equilibrium shift – via the production of common knowledge between citizens, bureaucrats, and service providers, such that local service providers improve their performance because they believe they are being monitored and might be punished for poor performance. Our study suggests that the second goal is substantially more difficult than the first.

2 Theoretical Framework: Bringing the State Back In – with Technology

Notwithstanding the almost ubiquitous adoption of political liberalization and decentralization reforms, public services in many low-income countries remain in an abysmal state. Teachers and nurses are often absent without authorization (Chaudhury et al., 2006), and many of those who do show up to work are unmotivated and exhibit low effort (Kremer, Duflo, and Dupas, 2011). Patients regularly receive incorrect diagnoses and treatments (Amin, Das, and Goldstein, 2008), and schools and clinics are often short of vital inputs (Glewwe, Kremer, and Moulin, 2009). Weak service provider effort may be due in part to poor remuneration and resources, but it is also partly the result of poorly functioning local governments that lack the capacity and/or the will to adequately monitor front-line service providers and hold them accountable.

Following the publication of the 2004 World Development Report, Making Services Work for Poor People, numerous programs have sought to improve front-line services by focusing on the "short route" of accountability – empowering citizens to hold service providers accountable directly rather than indirectly via public officials (Kosack and Fung, 2014). This has coincided with a growing perception that local government officials are hard to reach and may have interests and policy preferences that do not align with those of their constituents (Bardhan, 2002). Short-route accountability models are based on two key assumptions: first, that communities can identify service delivery problems, i.e., that they possess information on both the quality of existing services and the level and quality of services to which they are entitled; second, that communities can coordinate on their expectations of providers and on their own monitoring, and can do so in a sustainable manner.

Studies of programs designed explicitly to achieve short-route accountability, however, find mixed results. Alongside several notable successes (for example, Bjorkman and Svensson (2009) and Andrabi, Das, and Khwaja (2017)), other studies find no effect of community monitoring initiatives on service delivery and associated outcomes (Banerjee et al., 2010). Indeed, there is growing evidence that direct community monitoring is subject to non-trivial collective action problems (World Bank, 2016). Specifically, there is growing recognition of the power and status asymmetries that make rural villagers reluctant to confront service providers, partly due to fear of retribution (Kaawa-Mafigiri and Walakira, 2017).

Our research project revisits an old idea—bringing the state back in—while patching up holes in the logic of long-route accountability models with a new technology that reduces the costs of political communication. Long-route accountability models based on crowd-sourcing start with the idea that the direct principals to whom service provider agents answer are district officials, not citizens. If a health worker is routinely absent, community members have no means of firing, transferring, or otherwise penalizing the health worker, except through informal social sanctions, to which the health worker may or may not respond. However, contacting public officials to demand that they hold service providers accountable has traditionally been (a) expensive and thus infrequent, and (b) perceived as potentially risky for citizens due to fear of retribution, especially for those residing in small communities. The modal way of contacting public officials still entails traveling in person to the district headquarters. Due to poor roads and a dearth of personal and public transit options, transportation costs in sub-Saharan Africa are notoriously high—at least twice those of the typical Asian country (Kessides, 2005). The crowd-sourced model of accountability explicitly addresses these twin problems. ICT platforms can link citizens directly to local government officials and allow free, unmediated communication as frequently as citizens wish. If user anonymity can be secured, personal risk can be dramatically minimized or eliminated.

Reducing the monetary and social costs of political communication does not, however, guarantee that citizens will share information that is difficult for the local government to collect (Grossman, Michelitch, and Santamaria, 2016). Citizens will only use ICT platforms for service-related communication to the extent that they believe that message recipients are both resourceful and responsive (Grossman, Humphreys, and Sacramone-Lutz, 2016). Unlike similar ICT platforms linking citizens to public officials, the primary recipients of messages in our study were not local politicians—who commonly lack the knowledge and capacity to affect social services (Raffler, 2016)—but rather district bureaucrats who are directly responsible for implementing public services, who control resources, and who have the mandate to promote, sanction, hire, and fire service providers.

The crowd-sourced model assumes that public officials are interested in improving service provision, but are constrained by a dearth of reliable information on targeted problems. This may be a reasonable assumption in meritocratic bureaucracies that cultivate an esprit de corps among civil servants. It may, however, be overly optimistic in patronage-ridden systems where citizens have low expectations of public officials. We argue that crowd-sourced platforms might be positioned to increase public officials' motivation by creating common knowledge around the reporting of service delivery problems. In the status quo, citizens know that distant public officials lack reliable information on specific problems, which reduces citizens' expectations that officials will address service delivery deficiencies. Low expectations from citizens further reduce the pressure on public officials to perform (Gottlieb, 2016). By allowing citizens to easily report problems, ICT crowd-sourcing platforms may also increase citizens' expectations, since citizens now know that public officials know about problems, and public officials know that citizens know that public officials possess better and more accurate information. Such common knowledge—if created—can serve to increase citizens' expectations and, in turn, public officials' incentives to address problems.

Importantly, if citizens choose to use the platform to report service delivery problems, and if public officials follow up on citizens' reports, then service providers might increase their efforts, endogenously, in order to avoid sanctioning. In this case, the possibility of reporting 'bad behavior' (rather than actual reporting) may sustain a relatively high level of provision over time. Our study is thus designed to compare both short-term (year-1) and longer-term (year-2) effects.

We expect that a set of preconditions must be met in order for either of these two pathways to operate. These preconditions include the following:

1. Underlying demand for a means of communicating with local government.

2. Potential users do not fear retribution for reporting (e.g., they believe that messages are anonymous).

3. Potential users believe the local government has the mandate, capability, and interest to solve service delivery problems.

4. Messages contain actionable information – that is, information that is (a) new to the local government, (b) detailed enough to allow for a response, and (c) about a problem the local government has the resources and capacity to address.

5. Local government has the capability and interest to solve problems reported by users.

The data we present below allow us to evaluate the extent to which these preconditions were met, and to determine whether U-Bridge was able to affect service delivery outcomes.


Domains of Service Delivery and Hypotheses

Building on the above discussion, we developed the expectation that an ICT platform facilitating communication between citizens and district officials would affect three distinct domains of service delivery: (1) monitoring, (2) effort, and (3) inputs, described below.

Monitoring: If the new communication platform is being used as expected, the first-order change we anticipate is that local government officials increase their monitoring of facilities. The district headquarters, and thus district officials, are often located quite far from facilities (in our study, the mean distance between Arua town and study villages is 25 kilometers as the crow flies, and longer by road), and officials have limited time and resources to spend physically visiting facilities. Though district officials generally know that there are problems with service delivery, as mentioned above, they commonly lack specific and timely information on delivery deficiencies. Thus, we expect that if local government officials hear complaints from a particular area via the ICT platform, they will be more likely to exert effort on monitoring and supervising these facilities, for example by making phone calls to or visiting the facility.

Effort: We expect that a secondary outcome may be an increase in effort at the level of the facility to improve services. Here we focus on activities that are under the direct control of individuals employed in the facilities: for instance, making requests to the district for additional goods and staff, and conducting outreach activities. We also consider direct measures of effort in schools and health clinics, including absenteeism among teachers and health care workers. Increased effort at the facility level might emerge as a response to increased monitoring from district-level officials, the expectation thereof, or the anticipation of greater direct local involvement in the monitoring of service provision.

Inputs: If district-level officials grow accustomed to receiving feedback from citizens in the areas where U-Bridge was implemented and thereby become more sensitive to the needs of the corresponding facilities, we should expect to see improvements in inputs. For health clinics, these inputs include the number of staff and supplies such as drugs; for schools, they include the number of teachers and school supplies; for water, they include parts and repair services.

We thus hypothesize that access to an ICT platform for political communication will:

H1 increase monitoring in health and education.

H2 increase effort in health and education.

H3 increase the availability of inputs in health, education, and water.


3 Research Design

To test whether a crowd-sourced monitoring and reporting platform facilitates the long route of accountability and improves service delivery, we employ a field experimental research design. Specifically, we test the effect of encouraging (randomly selected) villages in Arua district in northern Uganda to report service delivery problems via U-Bridge. U-Bridge, an open-source software package, is designed to open a new channel of communication from citizens to local government officials.3 Citizens are able to contact district officials via U-Bridge by sending a text message to a short-code number at no cost. District officials, in both technical and political positions, are equipped with 3G tablets that enable them to access the messages anywhere, provided they have Internet access. Once logged into the platform's dashboard, government officials can read tagged text messages from Arua residents, and reply to messages from within the system.4 Importantly, incoming text messages from citizens are anonymized such that government officials can only use a unique case ID (and not a phone number) to track the status of complaints over time.

3 The platform was developed as part of a collaboration between UNICEF Uganda and RTI International. RTI is the prime contractor for the United States Agency for International Development (USAID)-funded program, Governance, Accountability, Participation, and Performance (GAPP), of which U-Bridge is a part.

4 GAPP employed a full-time intern responsible for training public officials in Arua on how to log onto the platform.
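To make the case-ID mechanism described above concrete, the sketch below shows one way a platform can replace senders' phone numbers with stable case identifiers before officials see the messages. The hashing scheme, the salt, and the function name are our own illustrative assumptions; this is not a description of U-Bridge's actual implementation.

```python
import hashlib

SALT = "example-salt"  # hypothetical secret held only by the platform operator

def case_id(phone_number: str) -> str:
    """Map a sender's phone number to a stable, non-reversible case identifier."""
    digest = hashlib.sha256((SALT + phone_number).encode()).hexdigest()
    return f"CASE-{digest[:8].upper()}"

# The same sender always maps to the same case ID, so officials can track the
# status of a complaint over time without ever seeing the phone number itself.
print(case_id("+256700000000"))
```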

The U-Bridge Program

Receiving information and implementing a response in the line ministries is a district-wide activity. The U-Bridge program also included three additional activities that were implemented only in treatment villages:

1. Community meetings: GAPP organized both an inception meeting and periodic community dialogue meetings in treatment areas. The inception meeting was used to introduce the ICT platform, and to provide attendees with information, based on codified service delivery standards, on the level and quality of services to which citizens are entitled. Community dialogue meetings took place on a quarterly basis, allowing citizens to interact directly with district officials who attended the meetings, and providing an overview of actions that the district had taken around service delivery. Community dialogue meetings were also intended to help create common knowledge between district officials and villagers, the importance of which is detailed in the theory of change above. Finally, these meetings shared feedback with facilities about the messages that users had sent about service delivery problems. The first round of meetings was held by GAPP in the last quarter of 2014 as part of the launch of the U-Bridge service. The last round of community dialogue meetings took place in early 2016.

2. User registration: GAPP created a database of potential users, which entailed registering a phone number and basic demographic information.5 Registration took place at community meetings, but also through a door-to-door registration drive implemented by the research team. While registration is not necessary for sending or receiving messages via the U-Bridge platform, it allowed GAPP to contact registered users with reminders, or to share information about the services to which they were entitled as Ugandan citizens.

3. Periodic polls: Registered U-Bridge members were also invited to respond to short periodic polls, usually a single question, administered by the research team. The polls were conducted on weekends via a robocall system operated by VotoMobile, a Ghana-based social enterprise. At the height of the program, in mid-2015, there were 3,062 registered and verified U-Bridge users who consented to respond to such polls.6

The U-Bridge program was rolled out beginning in August 2014, and several adjustments were made to the program along the way. Perhaps most importantly, a few months into the program it became clear that, in the absence of a fully dedicated community liaison officer, district officials were unable, or had insufficient time, to manage the organization of the platform. Specifically, the filtering of relevant messages to the appropriate district official was time consuming. Filtering, which reduces the ratio of noise to signal, included deciding which messages were relevant and thus "worth" being forwarded to district officials, as well as which office was most appropriate to deal with the request. In December 2014 the implementing partner GAPP hired a full-time staff member to work with district officials to filter messages and help follow up on user messages. This change is important to note, as it suggests that the platform would not have functioned as well in the absence of a third party to facilitate district officials' responses.

5 The process entailed texting "Join" to 8500, after which the sender would receive a series of messages to collect basic demographic information, such as age, gender, and village name.

6 The majority of users (2,591) entered the program through a door-to-door registration campaign that took place between October and November 2014 in treatment villages. An additional 251 users registered following GAPP's community meetings, where sign-up sheets were passed among attendees. Finally, 220 users from Arua registered with U-Bridge independently, for example by following registration instructions on fliers that were distributed during the community meetings.


Randomization to Treatment

Our study focuses primarily on two salient public services—health and education—and as such, the research team randomized treatment around the places where these public services are delivered. While there are hundreds of government primary schools in Arua, there are only 48 mid-level government health centers.7 We therefore use health centers as the unit of randomization. Specifically, we constructed 48 village-clusters, defined as the group of nearby villages that are serviced by the same public health center. Half of Arua's 48 clusters were randomly assigned to treatment (U-Bridge access) and half to control. All villages within a cluster are assigned to the same treatment status.

In each cluster, about 5 villages were selected to be included in the study. These villages included (a) the village in which the health center was located (the primary village), and (b) three or four additional nearby villages. The selection of additional villages was determined by the presence of a government primary school and by the location of the village relative to other clusters. Whenever possible, we selected villages that had a government primary school within their administrative boundaries and/or that were located near the health center. We excluded villages that were located close to other clusters in order to limit the likelihood of spillovers. In total, 131 villages in 24 clusters across Arua were assigned to the U-Bridge treatment group, and 112 villages to the control group. The mean number of villages per cluster is 5; the minimum number of villages per cluster is 2 (Yinga) and the maximum is 9 (Bondo). Figure 2 illustrates our randomization scheme.
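
As an illustration of the assignment procedure, the sketch below implements cluster-level randomization of the 48 health-center clusters, with every study village inheriting the status of its cluster. The cluster labels, the seed, and the helper function are our own assumptions for the example, not the team's actual randomization code.

```python
import random

random.seed(2014)  # hypothetical seed, only so the sketch is reproducible

clusters = [f"cluster_{i:02d}" for i in range(1, 49)]  # the 48 health-center clusters
shuffled = random.sample(clusters, k=len(clusters))    # random permutation
treated = set(shuffled[:24])                           # half assigned to U-Bridge access

assignment = {c: ("treatment" if c in treated else "control") for c in clusters}

def village_status(village_cluster: str) -> str:
    """Every village takes the treatment status of the cluster it belongs to."""
    return assignment[village_cluster]

print(sum(v == "treatment" for v in assignment.values()))  # 24 treated clusters
```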

A balance test at the cluster level, derived from the 2014 Ugandan census, reveals that the treatment and control clusters are balanced on a number of dimensions commonly shown to affect service delivery outcomes and collective action, including ethnolinguistic fractionalization (ELF), the share employed in the formal sector, poverty, and distance to the local government headquarters (Table 2).8 The clusters are slightly unbalanced with respect to the share of literate residents, but the difference in means is substantively small and probably not meaningful.

7 Mid-level health centers include the second and third tiers (Health Center II and III) of a four-tier system. We excluded Health Center IV, district, and regional hospitals, as these have quite different mandates, focusing on more specialized care, and often different management structures as well.

8 We report covariate balance at the village level in the online appendix.

Figure 2: Treatment assignment in Arua district; treatment clusters are in yellow while control clusters are in blue.

4 Data and estimation strategy

We measure outcomes in service delivery in the three domains outlined above using two primary data sources: (1) unannounced audits of schools and clinics conducted by the research team, and (2) administrative data culled from the district education office (DEO), the district health office (DHO), the district finance office (DFO), and the district water office, compiled and entered by research assistants hired and supervised by the research team.

Both of these data sources have strengths and weaknesses. On the one hand, a major benefit of the audits is that they are conducted by a source independent of the government or facility, and are therefore less subject (though not immune) to manipulation. On the other hand, some measures collected during the audits capture data from a specific day, producing somewhat noisy measures. Administrative data, by contrast, may be subject to manipulation and misreporting (a head teacher or health center in-charge, for example, may have incentives to report that the facility is performing well on a number of dimensions even if it is not in fact), but are collected more regularly. Notwithstanding this limitation, there is no reason to believe that biases in reporting from the facility level to the district level are correlated with treatment. In the analysis, we combine data from both the audits and the administrative records into indices organized by domain.

Unannounced audits

Unannounced audits were conducted in treatment and control schools and clinics at three points in time: baseline (June and July 2014), before U-Bridge was implemented; midline (June 2015), a year after the program launched; and endline (March 2016), almost two years after inception and after all GAPP meetings and all poll messaging had concluded.9 Audits took place in each of the 48 health centers and in 134 primary schools, the latter of which were divided among 46 in treatment clusters, 44 in control clusters, and the rest in quasi-control facilities in neighboring districts. The audits aimed to collect comprehensive information on all three domains: monitoring, effort, and inputs. For example, we collected data on communication with district officials (monitoring), on absenteeism on the day we visited the facility (effort), and on the stock of supplies (inputs).

Administrative data

Arua local district government collects regular reports from public facilities. Enumerators from the research team compiled data collected by the district in order to supplement the data collected via the unannounced audits. Specifically, we assembled data at the district line ministries that can help determine whether more resources (and attention) have been allocated toward service provision in treatment communities, i.e., those that have access to U-Bridge. For example, the DHO received monthly reports from each facility in which the facility reports the number of patients treated and procedures undertaken. The research team extracted performance trends directly from these reports. Both the DHO and DEO keep records of inspectors' visits to schools and clinics; these data allow testing whether greater administrative attention is being directed at service units in treatment areas. We also assembled data on stockouts and staffing from the DHO, and staffing information from the DEO. These data thus provide a secondary source, in addition to the audits discussed above, on absenteeism rates and stockouts. Descriptive statistics for all outcomes measured can be found in the SI, Tables 1-3.

9 The system was still operational at this time, but there were no supplementary activities.

Analysis

The key outcome variables are organized by sector (health, education, water) and domain (monitoring, effort, inputs and utilization). We combine outcomes from each sector-domain into summary indices using two methods. First, following Kling, Liebman, and Katz (2007) and Casey, Glennerster, and Miguel (2012), we estimate mean treatment effects, which entails (1) recoding outcome variables so that higher values always indicate "better" outcomes; (2) standardizing those variables to allow comparability of effect magnitudes;10 (3) imputing missing values at the treatment assignment group mean; and (4) compiling a summary index that gives equal weight to each outcome component. The second method follows Anderson (2008), who recommends constructing the summary index at stage (4) as a weighted mean of the standardized outcome components, where the weights—the inverse of the covariance matrix—are used to maximize the amount of information captured by the index.11

10 Standardization is obtained by subtracting the mean and dividing by the standard deviation in the control group.

11 Inverse covariance weighting applies the assumption that there is one latent trait of interest, and constructs an optimal weighted average on the basis of that assumption.
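
As an illustration of the two aggregation methods, the sketch below standardizes a set of outcome components against the control group, imputes missing values at the treatment-assignment group mean, and then forms both an equally weighted index (Kling, Liebman, and Katz, 2007) and an inverse-covariance-weighted index (Anderson, 2008). It is a simplified reconstruction under our own assumptions about variable names and array shapes, not the study's replication code.

```python
import numpy as np

def summary_indices(Y, treat):
    """Y: (n_units, n_components) array, recoded so higher = better.
    treat: boolean array of treatment assignment, used for group-mean
    imputation and for standardizing against the control distribution."""
    Y = np.asarray(Y, dtype=float).copy()
    treat = np.asarray(treat, dtype=bool)

    # (3) impute missing values at the treatment-assignment group mean
    for g in (treat, ~treat):
        col_means = np.nanmean(Y[g], axis=0)
        rows, cols = np.where(np.isnan(Y) & g[:, None])
        Y[rows, cols] = col_means[cols]

    # (2) standardize each component by the control-group mean and SD
    mu, sd = Y[~treat].mean(axis=0), Y[~treat].std(axis=0, ddof=1)
    Z = (Y - mu) / sd

    # (4a) Kling-Liebman-Katz mean effects index: equal weights
    klk_index = Z.mean(axis=1)

    # (4b) Anderson (2008): weights from the inverse covariance matrix, so
    # highly correlated components contribute less redundant information
    Sigma_inv = np.linalg.inv(np.cov(Z[~treat], rowvar=False))
    w = Sigma_inv.sum(axis=1)
    anderson_index = (Z @ w) / w.sum()

    return klk_index, anderson_index
```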

We measure all outcomes in two periods, short-term and long-term. Short-term outcomes are measured approximately one year into the program, while long-term outcomes are measured after the program activities, including village meetings, have concluded, close to two years after the program launched. Following Banerjee et al. (2007), we report results for each time period separately. We further follow Bruhn and McKenzie (2009) in using the following ordinary least squares (OLS) regression model:

Y_ijt = α + τ T + γ Y_ij0 + β X + ε_ijt

where Y_ijt is the outcome for service point i (a health center or school) in cluster j at time t (short or long term), modeled as a function of the treatment indicator T and the value of the dependent variable at baseline (t = 0), Y_ij0. We report all outcomes with and without our pre-specified set of covariates X, cluster standard errors at the cluster level (the level of randomization), and compute one-sided p-values using randomization inference.
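
The sketch below illustrates one way such randomization-inference p-values can be computed for this design: repeatedly re-assign the same number of clusters to a placebo treatment, re-estimate the coefficient on the treatment indicator each time, and take the share of placebo estimates at least as large as the observed one. The function name, the number of permutations, and the use of numpy's least-squares routine are illustrative assumptions, not the study's code.

```python
import numpy as np

def ri_one_sided_pvalue(y, y0, X, cluster, treated_clusters, n_perm=2000, seed=1):
    """One-sided randomization-inference p-value for a cluster-level treatment effect.
    y: endline outcome; y0: baseline outcome; X: (n, k) covariates;
    cluster: cluster id per service point; treated_clusters: ids assigned to treatment."""
    rng = np.random.default_rng(seed)
    y, y0, cluster = map(np.asarray, (y, y0, cluster))
    X = np.asarray(X, dtype=float).reshape(len(y), -1)
    all_clusters = np.unique(cluster)
    n_treated = len(treated_clusters)

    def tau_hat(treated_set):
        T = np.isin(cluster, list(treated_set)).astype(float)
        # OLS of y on [1, T, y0, X]; tau is the coefficient on T
        D = np.column_stack([np.ones(len(y)), T, y0, X])
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        return beta[1]

    observed = tau_hat(treated_clusters)

    # Null distribution: re-assign the same number of clusters to a placebo treatment
    draws = np.array([
        tau_hat(rng.choice(all_clusters, size=n_treated, replace=False))
        for _ in range(n_perm)
    ])
    # One-sided p-value: share of placebo estimates at least as large as observed
    return observed, float(np.mean(draws >= observed))
```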

5 Uptake

Prior to reporting the study's core findings, we begin by reporting overall uptake, which is not an experimental quantity. Data culled from the U-Bridge system suggest high levels of adoption in comparison to similar ICT programs studied elsewhere (Blair, Littman, and Paluck, 2017; Grossman, Humphreys, and Sacramone-Lutz, 2016). Between August 2014 and November 2015, over 10,000 messages were sent via the platform. However, the content of the messages was highly variable in quality, and in the majority of cases did not report a specific problem that could plausibly have been addressed by the local government. The vast majority of irrelevant messages were screened out by the implementing partner and were never seen by district officials.

We hand-coded the messages sent by users in this period into three types: actionable, relevant, and irrelevant. Actionable messages were those we determined to contain information about a specific problem in a specific location that could plausibly have been addressed by the district without further information. Relevant messages were about service delivery, but were vague or contained insufficient information for the local government to respond to directly. Finally, irrelevant messages did not contain content related to service delivery.12 Figure 3 provides information on the monthly (left panel) and cumulative (right panel) number of relevant and actionable messages over time. The figure underscores the fact that rural Ugandans were willing to use the ICT system, but their efficacy in reporting problems—knowing how to send an actionable message—has been low. The figure also suggests that the monthly number of messages peaked after about six months (May 2015) and then dropped, though still registering over 100 monthly messages even 15 months after the program launch.

12 Table 7 provides a sample of messages that were coded as actionable, relevant (a subset of the not actionable but relevant messages shown here), and irrelevant. Any identifying information has been removed. Some content has been slightly edited to correct spelling mistakes or explain acronyms.

6 Results

We now turn to our main findings with respect to service delivery outcomes in health, education and water. The results, capturing both the short- and long-term effects of the U-Bridge encouragement, are reported in Figures 4-9. Specifically, we report models with covariate adjustment for the weighted mean indices. Results in tabular form, with and without covariate adjustment, can be found in Tables 3-6; in these tables we also report the effect of the program on the variables that make up each family of outcomes. Finally, for robustness, we report similar results using mean effect (unweighted) indices (with and without covariate adjustment) in the online appendix.

First, we find that the U-Bridge program had no effect on any domain of health services in either the short (Figure 4) or the long term (Figure 5). The coefficients on the treatment indicator (τ) are negative across all health outcome indices, suggesting that this null is not simply due to relatively low statistical power.


Figure 3: The monthly (left panel) and cumulative (right panel) number of relevant and actionable messages over time. Lines are derived from locally weighted regression (lowess).

Second, we find suggestive evidence of positive effects for education, at least in the short term (Figure 6). There is a positive treatment effect on school inputs, where the effect size is both large (τ = .35) and significant (p = .042), and to a lesser extent on both school monitoring and effort, where the effect sizes are about a quarter of a standard deviation, with p-values slightly above the conventional 10% threshold (.134 and .115, respectively). Notably, teachers in primary schools in treatment clusters exhibit greater effort—they were more likely to be present during our unannounced audits and engaged in meaningful teaching—and thus treated schools seem to have received greater supervision and somewhat greater inputs. These efforts did not translate into higher standardized test scores for students taking the Primary Leaving Exam (Tables 3-4); however, it is important to recognize that these scores come at the culmination of seven years of schooling, of which the treatment occurred in only the last.

Even if they are genuine, these suggestive positive effects in the education sector, measured after the first year, were not sustained into the second year (Figure 7). While the coefficients on all three indices are positive, they are smaller than the coefficients in year 1, and all are insignificant.


Figure 4: The effect of U-Bridge on health services – year 1 (short-term). Panel estimates (randomization inference): monitoring τ = -.331 (p = .86); effort τ = -.486 (p = .91); inputs τ = -.199 (p = .774); utilization τ = -.087 (p = .634). Indices constructed as the weighted mean of the standardized outcomes. Models control for the baseline level of the dependent variable and adjust for a pre-specified set of cluster-level covariates.

Figure 5: The effect of U-Bridge on health services – year 2 (long-term). Panel estimates (randomization inference): monitoring τ = -.312 (p = .829); effort τ = -.159 (p = .67); inputs τ = .074 (p = .398); utilization τ = -.122 (p = .675). Indices constructed as the weighted mean of the standardized outcomes. Models control for the baseline level of the dependent variable and adjust for a pre-specified set of cluster-level covariates.


Figure 6: The effect of U-Bridge on schooling – year 1 (short-term). Panel estimates (randomization inference): monitoring τ = .276 (p = .134); effort τ = .268 (p = .115); inputs τ = .352 (p = .042); outcomes τ = -.174 (p = .789). Indices constructed as the weighted mean of the standardized outcomes. Models control for the baseline level of the dependent variable, adjust for a pre-specified set of village-level covariates, and cluster standard errors at the cluster level.

Figure 7: The effect of U-Bridge on schooling – year 2 (long-term). Panel estimates (randomization inference): monitoring τ = .124 (p = .333); effort τ = -.08 (p = .621); inputs τ = .116 (p = .285). Indices constructed as the weighted mean of the standardized outcomes. Models control for the baseline level of the dependent variable, adjust for a pre-specified set of village-level covariates, and cluster standard errors at the cluster level.


Turning to the water sector (Figure 9 and Table 1), where the post-treatment village-level data are aggregated across two years (2015-2016), we find positive coefficients on the treatment indicator for parts and services provided (τ = .427 standard deviations) and for the number of village requests for any water service (τ = .122). As in the education sector, these treatment effects fall slightly short of significance at the 90% level (p = .107 and .185, respectively). To gauge the substantive implications of these coefficients, Figure 8 reports, for the two outcome measures, the number of villages by treatment group and period. Ultimately, five more treatment-cluster villages than control villages made requests for water parts and services in the post-treatment period (13 against 8), and five more received some water service in that period (9 treatment villages as compared to 4 control villages).

Figure 8: Number of villages receiving parts and services (left panel) and making requests (right panel), by treatment status and period. Data obtained from the Arua district local government works office.


Figure 9: The effect of U-Bridge on water: parts and services, and village requests. Panel estimates (randomization inference): parts and services τ = .427 (p = .107); village requests τ = .122 (p = .185). Indices created using the weighted mean of standardized outcomes. All models control for the baseline level of the dependent variable, adjust for a pre-specified set of village-level covariates, and cluster standard errors at the cluster level.

Table 1: Water services and requests

                   Parts and services        Village requests
                   (1)        (2)            (3)        (4)
Treatment          0.386      0.427          0.079      0.122
                   (0.318)    (0.330)        (0.107)    (0.120)
Constant           0.000      -0.043         -0.000     -0.047
                   (0.083)    (0.118)        (0.080)    (0.078)
Controls           no         yes            no         yes
Period             post       post           post       post
R2                 0.17       0.19           0.03       0.07
N                  243        243            243        243

Notes: DV: Water services. * p<0.10, ** p<0.05, *** p<0.01


7 Discussion: exploring mixed results

In this section we explore our mixed findings. In particular, why is there suggestive evidence, albeit slightly short of conventional levels of statistical significance, of positive effects of U-Bridge on education and water but not on health? Why do these effects, if in fact they exist, disappear by year two? In exploring these questions, we further evaluate the extent to which the preconditions for change, discussed above, were met.

Specifically, in addition to the audits and administrative data, we analyze the response messages sent by district officials, an endline survey of a subset of treatment villages, and qualitative interviews and focus group discussions with local government officials and U-Bridge users. The evidence at hand suggests that some but not all preconditions were met, and that any effects of U-Bridge on service outcomes are mostly due to piecemeal change. That is, it appears U-Bridge was able to resolve isolated incidents, but the evidence is not consistent with the creation of common knowledge that would allow for an equilibrium shift away from underperformance by service providers.

Why no effects in health?

Why did we find evidence of (short-term) effects in education and water, but none whatsoever in health? There are several possible explanations. First, it could be that there was less demand for improvements in health than in the other sectors, perhaps because most citizens use schools and water points far more frequently than health clinics. Relatedly, perhaps citizens are less able to construct useful messages about health than about education, as health may require a higher degree of comprehension or expertise. Second, perhaps the local government has greater control over the education and water sectors than over the health sector, and thus is better able to affect outcomes in the former than in the latter. This could occur if inputs and human resources are managed to a greater extent by the central government in the case of health, or if health workers are more specialized and thus in shorter supply than in education, which would give health workers relatively more bargaining power and job security.

To address the first possibility, we examine whether there are differences in the number of messages sent by sector over time. Messaging intensity can serve as a good proxy for citizens' demand for change. While many citizens interact with schools on a nearly daily basis, are often reminded of problems at their children's schools, and are confronted every day with the struggle associated with dysfunctional water sources, most people do not interact with medical care on a regular basis. Thus, it may be that our mixed findings are simply a result of a difference in the underlying demand for, or salience of, health care. Figure 10


suggests that this is not a plausible explanation. Consistent with public opinion data on Ugandans' priorities over public services (Gottlieb, Grossman, and Robinson, 2016), the number of incoming messages (both relevant and actionable) exhibits similar patterns across public service types. It is not the case that there are more messages about problems in education and water than about problems in health. Nor are health-related messages less likely to be relevant or actionable.

[Figure 10 panels (monthly message counts, July 2014 through October 2015, for education, health, and water): relevant messages (left); actionable messages (right). Panel group titled "Messages over time, by type".]

Figure 10: The number of relevant (left panel) and actionable (right) messages by service sector, over time. Lines are derived from locally weighted regression (lowess).
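For readers interested in reproducing this kind of plot, the following is a minimal sketch (hypothetical file and column names, not the authors' code) of how monthly message counts per sector can be smoothed with lowess:

```python
# Minimal sketch: monthly counts of relevant messages per sector, lowess-smoothed.
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

msgs = pd.read_csv("messages.csv", parse_dates=["date"])  # hypothetical message log
monthly = (
    msgs[msgs["relevant"] == 1]
    .groupby([pd.Grouper(key="date", freq="M"), "sector"])
    .size()
    .reset_index(name="n")
)
for sector, grp in monthly.groupby("sector"):
    x = grp["date"].map(pd.Timestamp.toordinal)  # numeric time axis for lowess
    smoothed = lowess(grp["n"], x, frac=0.6, return_sorted=False)
    print(sector, smoothed[:3])  # first few smoothed monthly counts
```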

Next we explore whether it is easier for the district to address complaints in the education sector than in the health sector. This may be because health inputs are expensive and many aspects of health services, such as hiring doctors or stocking medicines, involve the central government, while education and water services are more decentralized. We thus plot the change in sectoral outcome indices between the baseline and the midline (year 1) against the number of relevant messages coming from each village. The data at hand do not offer support for this alternative. At least in the case of the Arua district local government, the district education office is not more effective in translating messages into better outcomes (Figure 11) than the district health office (Figure 12). Note, however, that this analysis is not definitive because (a) not all incoming messages could be assigned to a specific village or cluster, and (b) not all actionable complaints were captured by U-Bridge's platform.13

13 GAPP, for example, organized quarterly community meetings in two villages in treatment clusters. Meeting attendees have certainly raised service delivery issues with district representatives, and these may or may not have been captured as complaints in the system's platform.


[Figure 11 panels (scatter plots of change in the education outcome indices against messages sent): school monitoring, school effort, and school input, each plotted against all relevant messages (top row) and against education messages only (bottom row).]

Figure 11: Education: change from baseline to midline (short-term) against messages sent

[Figure 12 panels (scatter plots of change in the health outcome indices against messages sent): health monitoring, health effort, and health input, each plotted against all relevant messages (top row) and against health messages only (bottom row).]

Figure 12: Health: change from baseline to midline (short-term) against messages sent



More generally, Figure 11 suggests that the positive effects in the first year for education were not driven disproportionately by villages with a higher volume of messages. Both of the above explanations implicitly assume that change occurs due to the responsiveness of district officials to specific citizen complaints. However, another possibility identified in the theory of change outlined above is that following the introduction of the ICT platform, service providers internalize the possibility of sanctioning and increase effort independently of actual messages. This mechanism is consistent with Figure 11, and could explain the divergence across sectors if health workers were somehow more secure in their positions than teachers, perhaps due to shortages of qualified workers or less responsive management in the district health office. Unfortunately, we have no data that would allow us to test this claim.

Why don’t the e↵ects in education persist?

Perhaps the most exciting possibility of a program like U-Bridge is that it can generate common knowledge among service providers, citizens, and district officials about monitoring, thus generating incentives for better performance by teachers and headmasters in the long run. If common knowledge of this kind were being created in treatment clusters in Arua, however, we would expect the positive effects to strengthen over time, even as the volume of messages declines. For example, the effect of U-Bridge could be sustained as teachers gained personal experience of being monitored, or perhaps heard that colleagues had been sanctioned for poor performance identified via the program, or that district education officials were visiting more often or asking harder questions.

Since the positive effects disappeared by the second year, it seems unlikely that this internalization took place. In our interviews with district officials, we learned of some examples in which information gleaned from messages was indeed used for disciplinary action, but this seems not to have been enough to generate sustainable changes in the outcomes we could track. Moreover, in some cases, the disciplinary action taken was not very costly to the service provider. For example, district officials cited the transfer of a poorly performing teacher from one school to another as evidence of the program's success. Such a transfer would obviously have been a relief for the school where the complaint had been made, but a burden for the recipient school, and not terribly costly for the teacher in question. Thus, it could be that the sanctions the district is able or willing to impose on poorly performing service providers simply are not costly enough to deter poor performance.

The results are consistent with a scenario in which initial excitement about and engagement with the program led to a temporary increase in monitoring activity from district officials, as well as short-term improvements in performance and physical inputs. However,


it appears that this is difficult to sustain. As demonstrated above, the volume of education (and other) messages declined substantially over time, and it is possible that service providers learned over time that the threat of vigorous monitoring and sanctioning was minimal. These possibilities are addressed in further detail below.

Why did the number of messages decline over time?

Although, as discussed above, there does not appear to be a relationship between the number of messages sent and a change in outcomes, it is still worth exploring why the number of messages sent declined over time. First, it could be the case that after an initial registration drive and encouragement to use the system, which drove initial usage, users found the district government's responses unsatisfactory or ineffective in fixing the problems they had reported. Discouraged, they stopped using the platform.

A second possibility is that after a flurry of initial activity, people forgot about U-Bridge and simply returned to their busy lives. U-Bridge was nowhere near as interactive or entertaining as the most globally successful applications. Users would have to remember to use it, and remember the shortcode required to submit a message. Registered users received frequent informational messages from U-Bridge, but, like email spam, users could easily have dismissed these messages. A third possibility is that the initial messages, combined with the quarterly community meetings, satisfied users' needs, such that there were fewer problems they wanted to report.

In order to learn about citizens' attitudes toward and knowledge of the U-Bridge system, we conducted a number of interviews and focus groups with users and government officials. In addition, we conducted an endline survey in spring 2016, interviewing all available adults (3,192 respondents) in sixteen treatment villages: eight with high uptake of U-Bridge, and eight with low uptake. This exercise included a set of basic demographic questions, a set of questions that allow us to construct individuals' social networks, and questions on knowledge and use of U-Bridge. This survey allows us to probe questions about variable uptake of U-Bridge, which we address in a companion paper (Ferrali et al., 2017), as well as to assess the extent to which the assumptions required for U-Bridge to affect service delivery outcomes were met. Specifically, the endline survey sheds further light on the extent to which citizens expect that the district government has the capacity to affect service delivery, and on their experiences in using U-Bridge.

The endline survey reveals important findings. First, despite an intensive door-to-door registration campaign and a series of community meetings about the program, less than a third (31%) of respondents reported having heard of U-Bridge. Of those, 15 percent


reported that they had sent at least one message. Thus, only a minority of potential users

had in fact used the program, making it impossible to create the kind of knowledge that

might sustain an initial response.

Thirty-five percent of users reported that they were somewhat or very unsatisfied with the district's response to their message (compared to 39% who were satisfied, with the remainder neither satisfied nor unsatisfied). The majority of unsatisfied users were also those who reported that they never heard back from district officials. While attitudes toward the program were not entirely negative, neither do they indicate an overwhelming endorsement of the program. It is possible that the district did not respond to messages whose content was not relevant, and that these users were unsatisfied through no fault of the district. Nevertheless, the relatively low levels of satisfaction lend support to the idea that messaging dropped off after citizens realized the program did not live up to their expectations.

On the other hand, 62 percent of users said they heard back from the district most or all of the time after sending a message, and the vast majority of respondents, 84 percent, stated that they believed the local government was "somewhat capable" or "very capable" of improving public service delivery. Thus, it is unlikely that users stopped sending messages because they came to believe district officials were unable to address service delivery problems. But perhaps, despite hearing back from district officials, users did not see much change on the ground, and thus came to believe the system would not solve the problems they had hoped it would. The drop in messaging after six months, combined with low satisfaction rates at endline, is consistent with users becoming disillusioned over time.

How did officials respond to the program?

We now turn to how government officials responded to the program. Did they find it helpful? Did it provide them with new information? Did they provide regular and informative responses to users? Did they use the information from the messages they received to solve problems? To answer some of these questions, we conducted a focus group with district officials in late 2015 and a second round of interviews in mid-2017.

First, these discussions suggest that district officials were generally positive about the program, and pointed to specific instances in which they received new information that they were then able to act upon. Some illustrative comments from these focus groups are provided below:

- "U-Bridge allows for responses in near real-time. Example: [date] Community Dialogue Meeting showed that the In-Charge of [XXX] Health Centre had been absent for several months. Following the meeting, the DS reported to both the CAO and DHO.


This resulted in district action and the In-Charge returned to his post, apologized and explained his absence to constituents."

- "While we remain constrained by resources, the platform improves the ability of councilors to reach out to constituents."

- "Additionally, the platform has catalyzed how quickly the district responds to issues. For instance, community members reported that the Youth Chairperson was embezzling millions of shillings. They communicated to the district and police action was taken immediately."

- "Some messages require drastic changes in budgetary allocations. These are the hardest to respond to (e.g. "there are not enough health workers"), or those messages that require action at the central level."

- "U-Bridge also allows the District to broaden our mandate more fully. For instance, issues of domestic violence, child abuse, family matters, etc. can also be addressed. The U-Bridge allows district officials to refer issues more effectively (e.g. to the police)."

These comments and other input provided by district officials shed additional light on the kinds of outcomes services such as U-Bridge could plausibly have affected and on how officials perceived the program. First, district officials reported many requests that could not have been implemented in a short time period, and others that were unrealistic given their budget. Thus, some of the information shared by users was not actionable due to binding resource constraints. Second, it appears that a non-trivial number of individuals used U-Bridge to report problems not necessarily related to service delivery, such as corruption by politicians and crime. The message log also contains a number of complaints about issues like noise and excessive drinking. As these outcomes were not among the targeted goals of the project for either the implementer or the research team, we did not collect systematic data on them.

Third, district officials do report receiving information that they did not previously have, and that they subsequently acted upon, as the anecdote about the absent health center in-charge suggests. Several of these instances can be identified in the message logs, in which U-Bridge users thanked government officials for satisfactory responses to complaints, especially about water infrastructure. Thus, there is evidence that U-Bridge did help officials resolve at least some service delivery problems. Perhaps these were simply few and far between in comparison to the problems that the district could not address, or that were beyond the purview of this study.

While we do not have a full log of all the responses sent by district officials to users through the system, we have responses for the period from the start of the program through August 2015. An analysis of these messages reveals that many responses given by district


officials referred users to lower-level local government or other local institutions or bodies. For example, in response to concerns reported about the number of latrines, discipline of teachers, quality of teacher housing, and misuse of school funds, the district education office frequently referred users to consult the Parent Teacher Associations (PTAs), School Management Committees (SMCs), or head teachers. Indeed, these institutions may be the first port of call for parents, but it is possible that the reason users were reporting directly to the district was that efforts at lower levels to resolve problems had failed. Similar responses were often given to users reporting issues in the water and health sectors.

It is therefore possible that part of the disappointment expressed by users resulted from a mismatch between their expectations about the type of response district officials would provide and the response actually provided. Relatedly, the relatively weak effects of the program on service delivery outcomes could in part be the result of district officials' responses, which in many cases did not directly solve problems but rather provided users with additional information about the service delivery process and actors.

These insights yield potential lessons for future ICT programs like U-Bridge in settings with multiple overlapping layers of local government. Officials at the level of government receiving the messages might respond, perhaps truthfully, that the solution to the problem lies most clearly within the purview of another layer of government. Such responses are unlikely to solve the problem or satisfy the sender. Arua district-level officials told us that in the future, tablets should also be made available to lower-level (sub-county) officials. This would have the beneficial impact of improving not only the flow of information from citizens to officials, but also between layers of government, for whom communication and coordination are often difficult.

Together, we can rule out some possible explanations for the lack of lasting impact, while providing suggestive evidence of others. Many but not all of the preconditions for the efficacy of U-Bridge were met. First, we can rule out the possibility that there is no or low demand for a program like U-Bridge. Uptake was high in comparison to similar programs, despite the fact that many of those living in treatment areas had never heard about the program. We expect uptake would be even higher with a more extensive outreach campaign. Second, we have evidence that many of the assumptions about the conditions under which citizens would use U-Bridge were met: people believed the district had the capacity to respond, highlighted the importance of the anonymity of the program, and do not seem to have been free-riding, at least with respect to messages sent by friends and peers (Ferrali et al., 2017). Third, the evidence suggests that the content of messages was often insufficient to allow district officials to respond, and perhaps that even when messages were actionable, the district did not have the resources to respond to many of them, at least in the short term.


This precondition, that messages are actionable, was therefore only partially met, at best. Self-reports from district officials as well as logs of their outgoing messages suggest that the cases in which officials were directly involved in resolving service delivery problems were relatively few. More often, officials referred users to other institutions or individuals. Users may or may not have taken this advice, and those institutions and individuals to whom they were referred may or may not have helped solve the problem. We further find no sustained evidence of an increase in the monitoring activities by district officials that we measured, such as calls or visits to facilities, though it is conceivable that district officials employed forms of communication or monitoring that were not recorded, such as calling the subcounty local government or a member of a local committee for a particular facility.

We thus cannot rule out the possibility that district officials had insufficient incentives to act upon all of the information they received. The primary recipients of messages were bureaucrats holding unelected positions. We had positive and encouraging interactions with them in our numerous visits to the district headquarters, and, as noted, they seemed interested in and enthusiastic about the program. Many bureaucrats are surely motivated by a desire to improve the work of government, but they are not directly accountable to the beneficiaries of government programs. It is possible that for a program such as U-Bridge to be effective, closer "monitoring of the monitors" is necessary, but we cannot say definitively whether district officials' incentives, or lack thereof, were consequential in this particular case. The program may also have been more effective if lower levels of local government, such as subcounties, had also been involved in receiving and responding to messages.

8 Conclusion

Circumventing government actors has become a popular strategy to improve service delivery in recent years, as researchers have tested whether the short route of accountability, empowering communities to directly monitor providers, can be effective in improving services, particularly in rural and low-income settings. A key barrier in the long route of accountability has always been the cost of sharing information between service users and the government officials responsible for overseeing providers of those services. We evaluated a program designed to overcome this barrier by providing a free and anonymous text messaging service through which citizens could quickly and cheaply share information about service delivery problems they witnessed on the ground.

We examined the extent to which the U-Bridge program, operating in Arua district in northern Uganda, could improve monitoring, effort, and inputs in health, education and water. Over the course of the period for which we have data, close to 10,000 messages were


sent, and district officials reacted positively to the program. While anecdotes of service improvements were plentiful, when we compared a variety of service outcomes in treatment villages, where the program was actively introduced and facilitated and where service requests were made, with a group of control villages, we do not find evidence of significant, sustained improvements across a variety of (pre-specified) outcomes. We find short-term improvements in the education sector, but these were not sustained in the second year of the program. We also find evidence of greater activity in the water sector.

We evaluate a variety of reasons why we may not have found a robust effect of the program on service delivery. Importantly, we find that most of the preconditions for program success were met, with the (important) exception that a relatively low share of messages contained sufficiently specific information to act upon. We are able to rule out the possibility that there was no demand for such a program, that citizens did not believe the district would respond, or that potential users feared retribution.

We also provide suggestive evidence for some possible explanations of our mixed results. First, despite relatively high levels of uptake, knowledge about the program was relatively low in treatment areas, such that usage was likely lower than it might have been if all potential users had been fully aware of the program. Second, those who did send messages often sent messages that did not contain information the local government could act upon; in other words, messages that were not actionable. Even when information was actionable, the district did not always have the resources or ability to respond, at least in the short term, and this may have been frustrating for some users. Third, some messages that were actionable were not related to service delivery, such as reports of crime, and thus their impact was not measured. Fourth, users who had sent actionable messages were frequently referred to lower-level local government or institutions, which they may or may not have contacted and which may or may not have been responsive. Indeed, it is entirely likely that the reason users sent a message through U-Bridge in the first place was that local institutions had failed or that users feared retribution from service providers. A related concern is that in a multi-layered decentralized system, a direct line of communication from citizens to one layer of government might simply allow officials at that layer to shift responsibility or blame for poor service provision to other layers.

Perhaps because so many messages did not result in problem-solving on the ground, user satisfaction with the program was relatively low and declined over time. It is possible that, as a result of the decline in messages or the decline in the salience of the program in communities and government facilities, the already weak short-term treatment effects were not sustained over time.

A program like U-Bridge can provide information that is extremely useful in specific


instances. For example, if an important bridge is washed out in a storm, a roof is blown off a classroom, a teacher is abusive, or there is a disease outbreak, word will probably eventually travel to the district headquarters. But a program like U-Bridge allows the information to travel to district officials much more quickly, potentially facilitating a quicker response. In this respect, it can function much like a hotline. This kind of piecemeal impact, however, differs considerably from a situation in which the program ushers in a new equilibrium in monitoring and accountability, such that local service providers, when tempted to request extra fees from users or take the day off, are deterred by an enhanced concern that monitoring will facilitate sanctioning. We are confident that U-Bridge achieved the first goal in some instances, but we were unable to find evidence for the second.

Several policy implications emerge from this work. First, there appears to be significant demand for programs like U-Bridge, and it is thus worth investigating ways in which such programs can be improved to better achieve the intended outcomes. Second, such programs are unlikely to bring about a substantial change in the incentives of service providers, and hence discernible long-term improvements in service provision, unless large numbers of people learn about the program and learn how to send actionable messages that focus on service provision outcomes that governments have the resources and ability to improve. Third, even if citizens report actionable problems, local government officials must have the interest and mandate to resolve them, and coordination problems between layers of local government must be resolved.


References

Amin, Samia, Jishnu Das, and Markus P. Goldstein. 2008. Are You Being Served? New Tools for Measuring Service Delivery. Washington, D.C.: World Bank Publications.

Anderson, Michael L. 2008. "Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects." Journal of the American Statistical Association 103 (484): 1481–1495.

Andrabi, Tahir, Jishnu Das, and Asim Ijaz Khwaja. 2017. "Report Cards: The Impact of Providing School and Child Test-scores on Educational Markets." American Economic Review 107 (6): 1535–1563.

Banerjee, Abhijit V., Rukmini Banerji, Esther Duflo, Rachel Glennerster, and Stuti Khemani. 2010. "Pitfalls of Participatory Programs: Evidence from a Randomized Evaluation in Education in India." American Economic Journal: Economic Policy pp. 1–30.

Banerjee, Abhijit V., Shawn Cole, Esther Duflo, and Leigh Linden. 2007. "Remedying Education: Evidence from Two Randomized Experiments in India." The Quarterly Journal of Economics 122 (3): 1235–1264.

Bardhan, Pranab. 2002. "Decentralization of Governance and Development." The Journal of Economic Perspectives 16 (4): 185–205.

Bjorkman, Martina, and Jakob Svensson. 2009. "Power to the People: Evidence from a Randomized Field Experiment on Community-Based Monitoring in Uganda." Quarterly Journal of Economics 124 (2): 735–769.

Blair, Graeme, Rebecca Littman, and Elizabeth Levy Paluck. 2017. "Motivating the Adoption of New Community-Minded Behaviors: An Empirical Test in Nigeria." Unpublished manuscript.

Bruhn, Miriam, and David McKenzie. 2009. "In Pursuit of Balance: Randomization in Practice in Development Field Experiments." American Economic Journal: Applied Economics 1 (4): 200–232.

Casey, Katherine, Rachel Glennerster, and Edward Miguel. 2012. "Reshaping Institutions: Evidence on External Aid and Local Collective Action." Quarterly Journal of Economics 127 (4): 1755–1812.


Chaudhury, Nazmul, Jeffrey Hammer, Michael Kremer, Karthik Muralidharan, and F. Halsey Rogers. 2006. "Missing in Action: Teacher and Health Worker Absence in Developing Countries." The Journal of Economic Perspectives 20 (1): 91–116.

de Laat, Joost, Michael Kremer, and Christel Vermeersch. 2008. "Local Participation and Teacher Incentives: Evidence from a Randomized Experiment."

Duflo, Esther, Rema Hanna, and Stephen P. Ryan. 2012. "Incentives Work: Getting Teachers to Come to School." The American Economic Review 102 (4): 1241–1278.

Ferrali, Romain, Guy Grossman, Melina Platas, and Jonathan Rodden. 2017. "Social Networks and Community Reporting." Working Paper.

Glewwe, Paul, Michael Kremer, and Sylvie Moulin. 2009. "Many Children Left Behind? Textbooks and Test Scores in Kenya." American Economic Journal: Applied Economics 1 (1): 112–135.

Gottlieb, Jessica. 2016. "Greater Expectations: A Field Experiment to Improve Accountability in Mali." American Journal of Political Science 60 (1): 143–157.

Gottlieb, Jessica, Guy Grossman, and Amanda Lea Robinson. 2016. "Do Men and Women Have Different Policy Preferences in Africa? Determinants and Implications of Gender Gaps in Policy Prioritization." British Journal of Political Science pp. 1–26.

Grossman, Guy, Kristin Michelitch, and Marta Santamaria. 2016. "Texting Complaints to Politicians: Name Personalization and Politicians' Encouragement in Citizen Mobilization." Comparative Political Studies.

Grossman, Guy, Macartan Humphreys, and Gabriella Sacramone-Lutz. 2014. ""I wld like u WMP to extend electricity 2 our village": On Information Technology and Interest Articulation." American Political Science Review 108 (3): 688–705.

Grossman, Guy, Macartan Humphreys, and Gabriella Sacramone-Lutz. 2016. "Information Technology and Political Engagement: Mixed Evidence from Uganda." Unpublished manuscript.

Kaawa-Mafigiri, David, and Eddy Walakira, eds. 2017. Child Abuse and Neglect in Uganda. Springer.

Kessides, Christine. 2005. "The Urban Transition in Sub-Saharan Africa: Implications for Economic Growth and Poverty Reduction." The World Bank, Africa Region Working Paper Series No. 97.


Kling, Jeffrey R., Jeffrey B. Liebman, and Lawrence F. Katz. 2007. "Experimental Analysis of Neighborhood Effects." Econometrica 75 (1): 83–119.

Kosack, Stephen, and Archon Fung. 2014. "Does Transparency Improve Governance?" Annual Review of Political Science 17: 65–87.

Kremer, Michael, Esther Duflo, and Pascaline Dupas. 2011. "Peer Effects, Teacher Incentives, and the Impact of Tracking." American Economic Review 101 (5).

Meier, Patrick, and Rob Munro. 2010. "The Unprecedented Role of SMS in Disaster Response: Learning from Haiti." SAIS Review 30: 91–103.

Raffler, Pia. 2016. "Does Political Oversight of the Bureaucracy Increase Accountability? Field Experimental Evidence from an Electoral Autocracy." Working paper, Yale University.

World Bank, The. 2016. Making Politics Work for Development: Harnessing Transparency and Citizen Engagement. Conference edition. Washington, DC: World Bank Policy Research Report.


Appendix

Table 2: Balance test (cluster)

Variable                          Control Mean   Treatment Mean   Diff. of Means   p-value
Adult population (>= 16)          1460.292       1602.917         -142.625         0.464
                                  (116.970)      (153.786)        (193.215)
Mean age                          20.575         21.059           -0.484           0.066
                                  (0.197)        (0.166)          (0.257)
Share Lugbara tribe               0.895          0.971            -0.076           0.162
                                  (0.052)        (0.010)          (0.053)
Ethnic polarization               0.501          0.514            -0.013           0.836
                                  (0.049)        (0.041)          (0.064)
Ethnic fractionalization (ELF)    0.060          0.049            0.011            0.669
                                  (0.019)        (0.017)          (0.025)
Religion fractionalization        0.289          0.282            0.007            0.860
                                  (0.031)        (0.025)          (0.039)
Share literate                    0.595          0.636            -0.041           0.098
                                  (0.016)        (0.018)          (0.024)
Mean education (0-4 scale)        1.166          1.182            -0.016           0.744
                                  (0.025)        (0.042)          (0.049)
Share with secondary education    0.235          0.241            -0.007           0.784
                                  (0.011)        (0.021)          (0.024)
Share employed                    0.845          0.855            -0.009           0.722
                                  (0.019)        (0.018)          (0.026)
Share employed non-agri sectors   0.226          0.255            -0.028           0.518
                                  (0.021)        (0.038)          (0.043)
Poverty Index                     -0.086         -0.119           0.033            0.468
                                  (0.023)        (0.039)          (0.045)
Distance to Arua (kms)            36.708         28.083           8.625            0.162
                                  (5.090)        (3.309)          (6.071)
N                                 24             24               48


Table 3: Education Outcomes Analysis (no covariates)

Short-term Long-term

Variable Coef SE P-val CI Coef SE P-val CI

Monitoring weighted index 0.170 (0.222) 0.442 [-0.264 ,0.605]

-0.002 (0.230) 0.994 [-0.453 ,0.449]

DEO calls 0.307 (0.202) 0.128 [-0.089 ,0.702]

0.025 (0.222) 0.911 [-0.411 ,0.460]

Inspector calls 0.432 (0.227) 0.057 [-0.013 ,0.876]

-0.177 (0.211) 0.401 [-0.590 ,0.236]

DEO visits 0.048 (0.217) 0.826 [-0.378 ,0.473]

0.257 (0.239) 0.281 [-0.210 ,0.725]

Inspector visits -0.218 (0.216) 0.313 [-0.641 ,0.205]

0.034 (0.215) 0.874 [-0.386 ,0.455]

Effort weighted index 0.240 (0.181) 0.186 [-0.116 ,0.596]

0.106 (0.237) 0.656 [-0.360 ,0.571]

% Teachers present (records) -0.186 (0.192) 0.333 [-0.563 ,0.191]

0.048 (0.225) 0.830 [-0.392 ,0.489]

% Teachers present (observed) 0.317 (0.202) 0.116 [-0.079 ,0.713]

0.317 (0.222) 0.153 [-0.118 ,0.752]

Meaningful board 0.201 (0.178) 0.258 [-0.147 ,0.550]

-0.034 (0.245) 0.889 [-0.515 ,0.446]

Teacher engaged 0.423 (0.215) 0.049 [0.001 ,0.845]

0.310 (0.227) 0.172 [-0.135 ,0.756]

Staff meetings 0.024 (0.209) 0.910 [-0.386 ,0.434]

Input weighted index 0.372 (0.223) 0.096 [-0.066 ,0.810]

0.244 (0.194) 0.209 [-0.137 ,0.624]

N. teachers employed 0.250 (0.129) 0.052 [-0.002 ,0.502]

0.142 (0.171) 0.408 [-0.194 ,0.478]

Teachers transferred to school 0.394 (0.288) 0.171 [-0.170 ,0.958]

0.223 (0.214) 0.297 [-0.196 ,0.641]

Students per uniform 0.031 (0.243) 0.898 [-0.445 ,0.508]

-0.214 (0.211) 0.310 [-0.626 ,0.199]

Students per book 0.276 (0.185) 0.135 [-0.086 ,0.638]

0.124 (0.231) 0.593 [-0.330 ,0.577]

Students per pencil 0.099 (0.254) 0.696 [-0.398 ,0.596]

0.226 (0.203) 0.266 [-0.172 ,0.623]

Performance weighted index -0.180 (0.184) 0.329 [-0.541 ,0.181]

Enrollment 0.018 (0.116) 0.878 [-0.209 ,0.244]

% PLE Grade 1 0.052 (0.370) 0.889 [-0.673 ,0.776]

% PLE Grade 2 -0.304 (0.134) 0.023 [-0.567 ,-0.042]

PLE pass rate -0.106 (0.165) 0.520 [-0.429 ,0.217]

In columns 2-5 we report short-term effects (1 year), and in columns 6-9, long-term effects (year 2). Indices are constructed using the approach developed by Anderson (2008). Standard errors are corrected by combining estimation results using Seemingly Unrelated Regression estimation; p-values are two-sided.
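For concreteness, the following is a minimal sketch, not the authors' code, of the index construction these notes refer to: each component outcome is standardized against the control group and the components are combined with inverse-covariance weights, in the spirit of Anderson (2008). The column names in the usage line are hypothetical.

```python
# Minimal sketch of an inverse-covariance-weighted summary index (Anderson 2008).
import numpy as np
import pandas as pd

def anderson_index(df, outcomes, control):
    # Standardize each outcome using the control-group mean and SD.
    z = pd.DataFrame(index=df.index)
    for col in outcomes:
        mu, sd = df.loc[control, col].mean(), df.loc[control, col].std()
        z[col] = (df[col] - mu) / sd
    # Weights are the row sums of the inverse covariance matrix of the z-scores.
    weights = np.linalg.inv(z.cov().values).sum(axis=1)
    return pd.Series(z.values @ weights / weights.sum(), index=df.index)

# Hypothetical usage:
# index = anderson_index(df, ["deo_calls", "deo_visits"], df["treat"] == 0)
```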


Table 4: Education Outcomes Analysis (with covariates adjustment)

Short-term Long-term

Variable Coef SE P-val CI Coef SE P-val CI

Monitoring weighted index 0.276 (0.208) 0.185 [-0.132 ,0.684]

0.124 (0.289) 0.666 [-0.441 ,0.690]

DEO calls 0.240 (0.222) 0.279 [-0.194 ,0.675]

-0.010 (0.221) 0.963 [-0.444 ,0.424]

Inspector calls 0.402 (0.245) 0.101 [-0.078 ,0.882]

-0.189 (0.213) 0.375 [-0.607 ,0.229]

DEO visits 0.097 (0.188) 0.606 [-0.272 ,0.465]

0.241 (0.290) 0.405 [-0.327 ,0.809]

Inspector visits -0.152 (0.190) 0.425 [-0.525 ,0.221]

-0.064 (0.244) 0.792 [-0.543 ,0.415]

Effort weighted index 0.268 (0.195) 0.170 [-0.115 ,0.651]

-0.080 (0.224) 0.720 [-0.520 ,0.359]

% Teachers present (records) -0.330 (0.180) 0.066 [-0.683 ,0.022]

0.021 (0.193) 0.915 [-0.357 ,0.398]

% Teachers present (observed) 0.202 (0.212) 0.340 [-0.213 ,0.617]

0.241 (0.218) 0.267 [-0.185 ,0.668]

Meaningful board 0.324 (0.195) 0.097 [-0.059 ,0.708]

-0.199 (0.256) 0.437 [-0.700 ,0.303]

Teacher engaged 0.340 (0.214) 0.111 [-0.078 ,0.759]

0.202 (0.215) 0.348 [-0.219 ,0.622]

Staff meetings -0.145 (0.178) 0.415 [-0.495 ,0.204]

Input weighted index 0.352 (0.172) 0.040 [0.015 ,0.689]

0.116 (0.205) 0.570 [-0.285 ,0.518]

N. teachers employed 0.269 (0.116) 0.021 [0.041 ,0.497]

0.120 (0.173) 0.487 [-0.219 ,0.459]

Teachers transferred to school 0.422 (0.266) 0.112 [-0.099 ,0.942]

0.102 (0.205) 0.617 [-0.299 ,0.504]

Students per uniform 0.116 (0.241) 0.629 [-0.355 ,0.588]

-0.163 (0.182) 0.372 [-0.520 ,0.194]

Students per book 0.215 (0.205) 0.295 [-0.187 ,0.617]

0.029 (0.230) 0.900 [-0.422 ,0.480]

Students per pencil 0.002 (0.234) 0.994 [-0.456 ,0.460]

0.010 (0.191) 0.959 [-0.364 ,0.384]

Performance weighted index -0.174 (0.160) 0.277 [-0.487 ,0.139]

Enrollment 0.034 (0.109) 0.755 [-0.180 ,0.248]

% PLE Grade 1 -0.111 (0.238) 0.641 [-0.577 ,0.355]

% PLE Grade 2 -0.210 (0.157) 0.182 [-0.517 ,0.098]

PLE pass rate -0.086 (0.170) 0.614 [-0.420 ,0.248]

In columns 2-5 we report short-term effects (1 year), and in columns 6-9, long-term effects (year 2). Indices are constructed using the approach developed by Anderson (2008). Standard errors are corrected by combining estimation results using Seemingly Unrelated Regression estimation; p-values are two-sided.


Table 5: Health Outcomes Analysis (no covariates)

Short-term Long-term

Variable Coef SE P-val CI Coef SE P-val CI

Monitoring weighted index -0.311 (0.246) 0.207 [-0.794 ,0.172]

-0.296 (0.253) 0.242 [-0.791 ,0.199]

DHO visits -0.122 (0.262) 0.641 [-0.635 ,0.391]

-0.128 (0.256) 0.617 [-0.629 ,0.373]

DHO calls -0.361 (0.257) 0.161 [-0.865 ,0.144]

-0.227 (0.125) 0.071 [-0.472 ,0.019]

Inspector calls -0.257 (0.254) 0.312 [-0.756 ,0.242]

-0.124 (0.276) 0.652 [-0.665 ,0.416]

Inspector visits -0.057 (0.279) 0.839 [-0.603 ,0.490]

-0.146 (0.277) 0.599 [-0.688 ,0.397]

Inspection reports -0.019 (0.055) 0.731 [-0.127 ,0.089]

-0.033 (0.095) 0.731 [-0.218 ,0.153]

Effort weighted index -0.331 (0.271) 0.222 [-0.862 ,0.201]

-0.154 (0.291) 0.597 [-0.723 ,0.416]

% Staff Present 0.063 (0.280) 0.822 [-0.486 ,0.613]

0.298 (0.282) 0.290 [-0.255 ,0.852]

% unauthorized absent -0.014 (0.276) 0.960 [-0.555 ,0.527]

0.185 (0.243) 0.447 [-0.292 ,0.662]

Register book -0.301 (0.283) 0.287 [-0.856 ,0.254]

-0.301 (0.283) 0.287 [-0.856 ,0.254]

N. Outreach events -0.527 (0.282) 0.062 [-1.079 ,0.026]

-0.379 (0.304) 0.212 [-0.974 ,0.216]

Input weighted index 0.085 (0.244) 0.727 [-0.393 ,0.564]

0.148 (0.221) 0.503 [-0.285 ,0.582]

Days w/o antimalarials 0.335 (0.247) 0.174 [-0.148 ,0.819]

0.962 (0.227) 0.000 [0.518 ,1.407]

Days w/o ORS 0.151 (0.275) 0.583 [-0.388 ,0.691]

0.219 (0.246) 0.374 [-0.264 ,0.701]

Funds received (millions) 0.005 (0.370) 0.989 [-0.720 ,0.731]

0.093 (0.375) 0.804 [-0.641 ,0.828]

Antimalaria SO 0.056 (0.280) 0.841 [-0.493 ,0.605]

ORS SO -0.174 (0.268) 0.516 [-0.699 ,0.351]

Total SO -0.203 (0.241) 0.398 [-0.675 ,0.268]

Utilization weighted index -0.227 (0.260) 0.384 [-0.737 ,0.283]

-0.209 (0.263) 0.427 [-0.724 ,0.307]

Out patients 0.256 (0.205) 0.212 [-0.146 ,0.658]

-0.253 (0.306) 0.410 [-0.853 ,0.348]

N. patients visiting clinic -0.125 (0.094) 0.183 [-0.309 ,0.059]

0.070 (0.142) 0.619 [-0.207 ,0.348]

OP referrals -0.106 (0.176) 0.545 [-0.450 ,0.238]

-0.005 (0.302) 0.988 [-0.596 ,0.586]

OP diagnoses 0.040 (0.104) 0.702 [-0.164 ,0.244]

0.189 (0.164) 0.247 [-0.131 ,0.510]

Tetanus doses 0.159 (0.184) 0.388 [-0.202 ,0.520]

-0.073 (0.195) 0.706 [-0.455 ,0.308]

Children immunized 0.411 (0.329) 0.212 [-0.235 ,1.056]

-0.119 (0.212) 0.573 [-0.534 ,0.295]

IPT doses 0.122 (0.225) 0.588 [-0.319 ,0.563]

-0.047 (0.163) 0.771 [-0.367 ,0.272]

Iron acid -0.013 (0.280) 0.962 [-0.562 ,0.535]

-0.130 (0.217) 0.548 [-0.556 ,0.295]

Free ITN -0.001 (0.247) 0.998 [-0.485 ,0.484]

-0.370 (0.237) 0.119 [-0.834 ,0.095]

Syphilis test -0.268 (0.214) 0.210 [-0.687 ,0.151]

-0.162 (0.295) 0.583 [-0.741 ,0.417]

Maternity admission -0.180 (0.194) 0.353 [-0.560 ,0.200]

-0.144 (0.139) 0.299 [-0.417 ,0.128]

Maternity delivery -0.248 (0.174) 0.155 [-0.590 ,0.093]

0.175 (0.332) 0.597 [-0.475 ,0.825]

Vitamin A (mothers) -0.398 (0.165) 0.016 [-0.722 ,-0.074]

-0.216 (0.189) 0.253 [-0.585 ,0.154]

Post natal -0.127 (0.132) 0.337 [-0.385 ,0.132]

0.087 (0.246) 0.724 [-0.395 ,0.569]

Vitamin A (children) 0.036 (0.306) 0.907 [-0.564 ,0.636]

0.235 (0.303) 0.438 [-0.358 ,0.829]

Deworming doses 0.090 (0.378) 0.812 [-0.651 ,0.830]

-0.215 (0.175) 0.218 [-0.557 ,0.127]

In columns 2-5 we report short-term effects (1 year), and in columns 6-9, long-term effects (year 2). Indices are constructed using the approach developed by Anderson (2008). Standard errors are corrected by combining estimation results using Seemingly Unrelated Regression (SUR) estimation.


Table 6: Health Outcomes Analysis (with covariates adjustment)

Short-term Long-term

Variable Coef SE P-val CI Coef SE P-val CI

Monitoring weighted index -0.331 (0.265) 0.211 [-0.850 ,0.187]

-0.312 (0.363) 0.390 [-1.024 ,0.399]

DHO visits -0.104 (0.350) 0.767 [-0.791 ,0.583]

-0.142 (0.313) 0.649 [-0.755 ,0.471]

DHO calls -0.495 (0.353) 0.161 [-1.187 ,0.196]

-0.407 (0.184) 0.027 [-0.768 ,-0.046]

Inspector calls -0.566 (0.314) 0.071 [-1.182 ,0.049]

-0.327 (0.423) 0.438 [-1.156 ,0.501]

Inspector visits -0.454 (0.362) 0.209 [-1.164 ,0.255]

-0.337 (0.366) 0.357 [-1.054 ,0.381]

Inspection reports 0.010 (0.053) 0.846 [-0.094 ,0.114]

0.018 (0.092) 0.846 [-0.162 ,0.197]

Effort weighted index -0.486 (0.419) 0.246 [-1.308 ,0.335]

-0.159 (0.276) 0.564 [-0.699 ,0.381]

% Staff Present 0.063 (0.391) 0.872 [-0.703 ,0.830]

0.294 (0.272) 0.280 [-0.239 ,0.827]

% unauthorized absent 0.047 (0.504) 0.926 [-0.942 ,1.035]

0.202 (0.248) 0.415 [-0.283 ,0.688]

Register book -0.363 (0.351) 0.302 [-1.052 ,0.326]

-0.192 (0.254) 0.450 [-0.689 ,0.306]

N. Outreach events -0.892 (0.353) 0.012 [-1.585 ,-0.200]

-0.552 (0.346) 0.111 [-1.231 ,0.126]

Input weighted index -0.199 (0.222) 0.369 [-0.635 ,0.236]

0.074 (0.270) 0.784 [-0.455 ,0.603]

Days w/o antimalarials -0.011 (0.235) 0.963 [-0.472 ,0.450]

0.770 (0.258) 0.003 [0.265 ,1.275]

Days w/o ORS -0.082 (0.363) 0.821 [-0.794 ,0.630]

0.247 (0.246) 0.317 [-0.236 ,0.730]

Funds received (millions) -0.223 (0.389) 0.566 [-0.987 ,0.540]

-0.131 (0.514) 0.799 [-1.138 ,0.876]

Antimalaria SO -0.244 (0.377) 0.518 [-0.983 ,0.495]

ORS SO -0.241 (0.246) 0.327 [-0.723 ,0.241]

Total SO -0.507 (0.264) 0.055 [-1.025 ,0.011]

Utilization weighted index -0.087 (0.228) 0.703 [-0.534 ,0.360]

-0.122 (0.272) 0.653 [-0.655 ,0.411]

Out patients 0.490 (0.212) 0.021 [0.075 ,0.906]

-0.024 (0.298) 0.936 [-0.608 ,0.560]

N. patients visiting clinic -0.235 (0.108) 0.029 [-0.446 ,-0.024]

0.061 (0.155) 0.693 [-0.242 ,0.364]

OP referrals -0.091 (0.148) 0.541 [-0.382 ,0.200]

0.018 (0.307) 0.953 [-0.583 ,0.619]

OP diagnoses -0.013 (0.115) 0.908 [-0.238 ,0.212]

0.161 (0.182) 0.376 [-0.195 ,0.517]

Tetanus doses 0.173 (0.220) 0.431 [-0.258 ,0.605]

-0.208 (0.201) 0.299 [-0.601 ,0.185]

Children immunized 0.337 (0.418) 0.420 [-0.483 ,1.156]

0.019 (0.211) 0.927 [-0.394 ,0.433]

IPT doses 0.410 (0.244) 0.092 [-0.067 ,0.888]

-0.084 (0.139) 0.544 [-0.356 ,0.188]

Iron acid 0.380 (0.304) 0.212 [-0.216 ,0.976]

0.038 (0.205) 0.853 [-0.365 ,0.441]

Free ITN 0.243 (0.301) 0.419 [-0.346 ,0.832]

-0.116 (0.265) 0.663 [-0.635 ,0.404]

Syphilis test 0.029 (0.187) 0.878 [-0.338 ,0.395]

-0.143 (0.350) 0.682 [-0.830 ,0.543]

Maternity admission 0.079 (0.202) 0.695 [-0.317 ,0.476]

-0.334 (0.131) 0.011 [-0.591 ,-0.078]

Maternity delivery 0.001 (0.176) 0.996 [-0.344 ,0.346]

0.035 (0.345) 0.919 [-0.642 ,0.712]

Vitamin A (mothers) -0.183 (0.139) 0.187 [-0.456 ,0.089]

-0.506 (0.158) 0.001 [-0.815 ,-0.196]

Post natal 0.176 (0.130) 0.178 [-0.080 ,0.431]

0.180 (0.232) 0.437 [-0.274 ,0.634]

Vitamin A (children) -0.212 (0.337) 0.529 [-0.872 ,0.448]

0.104 (0.386) 0.788 [-0.652 ,0.860]

Deworming doses -0.110 (0.277) 0.692 [-0.653 ,0.433]

-0.430 (0.260) 0.098 [-0.940 ,0.080]

In columns 2-5 we report short-term effects (1 year), and in columns 6-9, long-term effects (year 2). Indices are constructed using the approach developed by Anderson (2008). Standard errors are corrected by combining estimation results using Seemingly Unrelated Regression (SUR) estimation.


Table 7: Example messages

Actionable:
- "In [XXX] health Centre, they have given us staff but they are not at the work place"
- "we have problem of water source in [XXX] village"
- "In [XXX] village, [XXX] parish, [XXX] subcounty there is no road, no teachers in schools, and no staff in the health centres please help us from these situations."
- "Too much consumption of alcohol by the head teacher and some teachers has spoilt [XXX] primary school please help us."
- "We in [XXX] Health Center are suffering from lack of nurses we lost one of our nurses so can they help us with some nurses we are just left with one nurse here"
- "In our village [XXX], we have nothing like borehole. women go & fetch water for 1km away from the village. How can government help our village?"
- "There are no drugs in [XXX] HEALTH CENTER. In which ways can U-Report help us?"
- "[XXX] Health Centre charges women for delivery at Health Unit and demand cash for visits"

Relevant (not actionable):
- "Help us to eradicate poverty, please!"
- "those who monitor Government programs, are those children important or not?"
- "what do you think about Education"
- "I want to discus about education."

Irrelevant:
- "HAPPY XMAS FOR YOU."
- "Its pleasure greeting you, thank you"
- "I have good papers and commitment,punctuality,trustworth."
- "Where is the game won?"


SUPPLEMENTARY INFORMATION

August 27, 2017

Contents

1 Variable description
2 Descriptive statistics
3 Outcomes at baseline
4 Results in tabular form
5 Change in outcome variables over time
6 Treatment effects and messaging intensity
7 Heterogeneous effects by distance to district HQs
8 Heterogeneous effects by community's wealth

List of Tables

1 Education outcomes (descriptive statistics)
2 Water outcomes (descriptive statistics)
3 Health outcomes (descriptive statistics)
4 Balance test (village)
5 Health Outcomes at Baseline
6 Education at Baseline


7 Education Outcomes Analysis (no covariates)
8 Education Outcomes Analysis (with covariates adjustment)
9 Health Outcomes Analysis (no covariates)
10 Health Outcomes Analysis (with covariates adjustment)

List of Figures

1 School monitoring: bar plots of outcome variables over time
2 Health monitoring: bar plots of outcome variables over time
3 School effort: bar plots of outcome variables over time
4 Health effort: bar plots of outcome variables over time
5 School inputs: bar plots of outcome variables over time
6 Health inputs: bar plots of outcome variables over time
7 Education: change from baseline to midline (short-term) against messages sent
8 Health: change from baseline to midline (short-term) against messages sent
9 Education: outcome indices at baseline against messages sent
10 Health: outcome indices at baseline against messages sent
11 Cluster-level: SMS messaging by distance to Arua
12 Village-level: SMS messaging by distance to Arua
13 Education services: Heterogeneous treatment effect by distance to Arua
14 Health services: Heterogeneous treatment effect by distance to Arua
15 Water: Heterogeneous treatment effect by distance to Arua
16 Cluster-level: SMS messaging by poverty
17 Village-level: SMS messaging by wealth
18 Education services: Heterogeneous treatment effect by poverty
19 Health services: Heterogeneous treatment effect by poverty
20 Water: Heterogeneous treatment effect by poverty


1 Variable description

Below we describe the variables included in the indices for which we have both baseline and endline data. For these outcomes, we conduct a difference-in-differences analysis; for outcome measures where we have endline data only, we conduct a cross-sectional analysis. For education and water, where the data is at the village level, we run two regressions without controls, one with and one without cluster fixed effects. For health we did not include cluster fixed effects, as there is only one health center per cluster by design.
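To fix ideas, one standard way to write the village-level difference-in-differences specification described here (our notation, not taken from the paper; the cluster fixed effects are the optional \alpha_c term and absorb the treatment main effect when included) is:

Y_{vct} = \alpha_c + \beta_1\,\mathrm{Treat}_c + \beta_2\,\mathrm{Post}_t + \tau\,(\mathrm{Treat}_c \times \mathrm{Post}_t) + \varepsilon_{vct},

where v indexes villages, c randomization clusters, and t the baseline/endline period; \tau is the difference-in-differences estimate of the treatment effect, with standard errors clustered at the cluster level.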

1. Health

(a) Monitoring:

• DHO visits: number of visits to the facility by the DHO in the past three months, as recorded in the facility registration book. Count variable (Audit variable, V dho visit)

• DHO calls: frequency of calls made by the DHO to the facility in the past three months, categorical variable. 1 = Never, 2 = about once a month, 3 = about once a week, 4 = several times a week, 5 = every day (Audit variable, V dho call)

• Health inspector visits: number of visits to the facility by the health inspector in the past three months, as recorded in the facility registration book, count variable (Audit variable, V BE q67 q62)

• Health inspector calls: frequency of calls made by the health inspector to the facility in the past three months, categorical variable. 1 = Never, 2 = about once a month, 3 = about once a week, 4 = several times a week, 5 = every day (Audit variable, V freq calls)

• Inspection reports (counted at the district health office): number of inspection reports completed by the district/health inspector (Administrative data variable, A insp qtr)

• Inspection (yr.): number of times the health center was inspected according to inspector summary reports for the financial year (Administrative variable, A insp yr)

(b) E↵ort:

• Outreach: frequency of outreach campaigns/events the clinic undertook in previousthree months, like a visit to a rural village or an immunization drive (Audit variable,V outreach)

• Sta↵ present: average of total sta↵ present across four days, based on registerbook (Audit variable, V sta↵present

• Sta↵ attendance rate: four day average of total employees present over totalemployees expected to be present (Audit variable, V perc employ pres)

• Register book: whether or not health center has attendance register book (Auditvariable, RegisterBook)

(c) Inputs:

• Days w/o antimalarials: number of days in the past 30 days that anti-malarialmedication has been out of stock (Audit variable, V inv so antimalarials)

• Oral rehydration stockout month: number of days in the past 30 days that oralrehydration salts (ORS) have been out of stock (Audit variable, V inv so ors analysis)


• Anti-malarial stockout half-year (HMIS Form 105, Section 5.1): number of days in the 7-month period before/after program initiation that anti-malarial medication has been out of stock (Admin variable, A so antimalarial)
• Oral rehydration stockout half-year (HMIS Form 105, Section 5.1): number of days in the 7-month period before/after program initiation that oral rehydration salts (ORS) have been out of stock (Admin variable, A so ors)
• Total stockout: the average number of days in a month that a preventative medication was out of stock (Admin variable, A total so rate)
• Funds received: the total amount of funds that a health center received over one financial year, in millions of shillings (Admin variable, HMIS Form 105 Section 7, A Funds Received)

(d) Utilization:

• Out patient: total out-patient attendance at the health clinic, including new attendance and re-attendance (Admin variable, A OutPat Att Total)
• OP referral: total out-patient referrals at the health clinic (Admin variable, A OutPat Ref Total)
• OP diagnoses: total diagnoses for out-patients at the health clinic (Admin variable, A OutPat Diag total)
• Attendance: total number of new attendances at the health clinic (Admin variable, A total attendence)
• Tetanus doses: total doses of tetanus vaccine administered to women, pregnant women included; patients are administered 1 to 5 doses (Admin variable, A TI Total)
• Children immunized: total number of children aged 0 to 4 years who are immunized (Admin variable, A CI Tot)
• IPT doses: total number of first and second doses of IPT (IPT1/IPT2) administered at the antenatal clinic (Admin variable, A ANC IPT)
• Iron/acid: number of pregnant women receiving iron/folic acid at the first antenatal clinic visit (Admin variable, A ANC Iron)
• Free ITN: number of pregnant women receiving free ITNs at the antenatal clinic (Admin variable, A ANC ITN)
• Syphilis test: number of pregnant women tested for syphilis at the antenatal clinic (Admin variable, A ANC Test Syph)
• Maternity admission: number of admissions to the maternity unit (Admin variable, A Mat Admit)
• Maternity delivery: number of deliveries performed in the maternity unit (Admin variable, A Mat Deliver InUnit)
• Vitamin A (mothers): number of mothers given a Vitamin A supplement (Admin variable, A Mat Moth VitA)
• Post natal: total number of post-natal attendances (Admin variable, A PN Att)
• Vitamin A (children): total first and second doses of Vitamin A given to children, including infants aged 6-11 months and children aged 12-59 months (Admin variable, A CH VitA totl)
• Deworming doses: total first and second doses of deworming medication administered to children aged 1 to 14 years (Admin variable, A CH Deworm Tot)


2. Schools

(a) Monitoring:

• DEO visits: Number of visits to the school by the DEO in the past three months, as recorded in the facility registration book (Audit variable, V deo visit rec)
• DEO calls: Frequency of calls made by the DEO to the school in the past three months; dichotomous variable. 0 = Never, 1 = Ever called in the previous three months (Audit variable, V deo ever call)
• School inspector visits: Number of visits to the school by the school inspector in the past three months, as recorded in the facility registration book (Audit variable, V isp visit rec)
• School inspector calls: Frequency of calls made by the school inspector to the school in the past three months; categorical variable. 1 = Never, 2 = about once a month, 3 = about once a week, 4 = several times a week, 5 = every day (Audit variable, V insp calls)
• School inspector reports: Number of reports written by the school inspector; count variable (Administrative variable, A Q3 insprep)

(b) Effort:

• Teacher absenteeism: Number of teachers present over total number of teachers expected to be present, averaged over four days: the day of the audit and the previous Monday, Wednesday, and Friday. Measured by the enumerator using the school attendance record book (Audit variable, Teacher Absent)
• Teachers present (day): Fraction of teachers present during the audit, as measured by the enumerator across the 3-4 classes observed (Audit variable, V present teach e)
• Meaningful board: Extent to which something meaningful is written on the board in observed classrooms (average over classrooms), as measured by the enumerator. 0 = nothing meaningful written, 1 = something meaningful written (Audit variable, V perc alotwritten)
• Teacher engaged: Average across observed classrooms of teacher engagement, as measured by the enumerator. 0 = absent, 1 = present and disengaged, 2 = present and engaged (Audit variable, V perc Engaged). Only for endline results
• Staff meetings: Number of staff meetings in the past three months; categorical variable. 1 = None, 2 = between 1 and 3, 3 = more than 3 (Audit variable, V school staff meet)

(c) Inputs:

• Number of teachers: Number of teachers in the school, according to school records (Audit variable, V n teachers)
• Teacher transfers: Number of teachers transferred to the school, according to school records (Audit variable, V teach transf to)
• Students per uniform: Ratio of students to uniforms, observed by the enumerator (Audit variable, V students per supply1)
• Students per book: Ratio of students to books, observed by the enumerator (Audit variable, V students per supply2)
• Students per pencil: Ratio of students to pencils, observed by the enumerator (Audit variable, V students per supply3)

(d) Outcome:


• Enrollment: Total student enrollment in the school (Admin variable, A enrollment)

• PLE Grade 1: Fraction of students who scored a 1 (the best score) on the primary leaving examination (Admin variable, A PLE Grade1rate)
• PLE Grade 2: Fraction of students who scored a 2 (the second-best score) on the primary leaving examination (Admin variable, A PLE Grade2rate)
• PLE pass rate: Fraction of students who passed the primary leaving examination, among those who sat for the exam (Admin variable, A PLE passrate)

3. Water

(a) • Parts and Services: sum of water-related parts distributed and services completed (Admin variables, parts services 14 16, parts services 13 14)
• Village Requests: sum of water-related requests in villages (Admin variables, village requests1 14 16, village requests 13 14, A2 LC1Request0, A2 LC1Request1, A2 LC1Request2)


2 Descriptive statistics

Table 1: Education outcomes (descriptive statistics)

Variable Mean (Std. Dev.) Min. Max. N
Monitoring
DEO calls 0.47 (0.5) 0 1 269
Inspector calls 1.96 (0.88) 1 4 249
DEO visits 0.16 (0.49) 0 5 267
Inspector visits 0.70 (1.02) 0 7 267
Inspector reports 1.93 (1.72) 0 6 258
Effort
% Teachers present (records) 0.71 (0.18) 0 1 270
% Teachers present (observed) 0.4 (0.29) 0 1 267
Meaningful board 0.26 (0.33) 0 1 267
Teacher engaged 0.49 (0.35) 0 1 255
Staff meetings 1.82 (0.5) 1 3 179
Inputs
N. teachers employed 13.72 (5.77) 2 34 269
Teachers transferred to school 0.62 (1.11) 0 10 269
Students per uniform 0.57 (0.27) 0 3.05 266
Students per book 0.9 (0.16) 0.01 1.45 266
Students per pencil 0.81 (0.19) 0.06 1.13 266
Performance
Enrollment 1020.27 (383.42) 352 4086 245
% PLE Grade 1 0.01 (0.03) 0 0.25 144
% PLE Grade 2 0.29 (0.2) 0 0.85 144
PLE pass rate 0.85 (0.15) 0.36 1 144

Table 2: Water outcomes (descriptive statistics)

Variable Mean (Std. Dev.) Min. Max. N
Parts and services post-treatment 0.08 (0.51) 0 10 2022
Parts and services baseline 0.05 (0.39) 0 9 2022
Village requests post-treatment 0.06 (0.28) 0 3 2022
Village requests baseline 0.13 (0.42) 0 2 2021
Village requests (2nd round data) baseline 0.12 (0.4) 0 3 331
Village requests (2nd round data) midline 0.2 (0.49) 0 3 331
Village requests (2nd round data) endline 0.2 (0.49) 0 3 331


Table 3: Health outcomes (descriptive statistics)

Variable Mean (Std. Dev.) Min. Max. N
Monitoring
DHO visits 0.1 (0.3) 0 1 141
DHO calls 1.44 (0.85) 1 5 120
Inspector calls 1.62 (1.08) 0 5 120
Inspector visits 0.61 (1.23) 0 8 140
Inspection reports 3.05 (0.96) 1 5 144
Inspection (yr.) 0.81 (0.63) 0 4 144
Effort
Health staff present 0.56 (0.23) 0.1 1 94
Unauthorized health staff absence (inverse) 0.92 (0.15) 0.3 1 94
Register book 0.69 (0.46) 0 1 144
Outreach 2.91 (1.76) 0 5 141
Inputs
Days w/o antimalarials -6.58 (10.04) -30 0 137
Days w/o ORS -7.92 (11.11) -30 0 136
Antimalaria SO -9.35 (20.01) -90 0 144
ORS SO -10.14 (23.41) -152 0 144
Total SO -0.23 (0.46) -2.98 0 144
Funds received (millions) 3.1 (3.69) 0 30.24 144
Utilization
Out patient 6706.57 (5672.24) 0 37256 144
OP referral 56.23 (83.37) 0 547 144
New patients attendance 90.63 (54.55) 8.31 337.99 141
OP diagnoses 6032.1 (4532.93) 421 31236 144
Tetanus doses 603.84 (494.93) 0 2821 144
Children immunized 4673.39 (3654.15) 0 22706 144
IPT doses 487.1 (388.79) 0 1879 144
Iron acid 311.72 (239.68) 0 1526 144
Free ITN 111.37 (140.85) 0 759 144
Syphilis test 156.63 (228.82) 0 1265 144
Maternity admission 281.11 (267.56) 0 1346 144
Maternity delivery 243.51 (271.9) 0 2231 144
Vitamin A (mothers) 171.23 (177.29) 0 976 144
Post natal 259.54 (402.88) 0 2451 144
Vitamin A (children) 917.54 (994.34) 0 4502 144
Deworming doses 2138.58 (2873.7) 0 17859 144


Table 4: Balance test (village)

Variable  Control mean (SE)  Treatment mean (SE)  Difference of means (SE)  p-value for difference
Adult population (>= 16) 312.920 (15.764) 295.923 (17.208) 16.997 (23.618) 0.472
Mean age 20.641 (0.124) 21.137 (0.107) -0.495 (0.163) 0.003
Share Lugbara tribe 0.910 (0.024) 0.970 (0.007) -0.060 (0.023) 0.009
Ethnic polarization 0.480 (0.028) 0.522 (0.026) -0.042 (0.038) 0.267
Ethnic fractionalization (ELF) 0.055 (0.010) 0.048 (0.009) 0.007 (0.013) 0.624
Religion fractionalization 0.278 (0.018) 0.284 (0.015) -0.006 (0.023) 0.777
Share literate 0.598 (0.013) 0.642 (0.013) -0.044 (0.018) 0.017
Mean education (0-4 scale) 1.165 (0.016) 1.192 (0.025) -0.026 (0.031) 0.392
Share with secondary education 0.234 (0.008) 0.250 (0.012) -0.016 (0.015) 0.307
Share employed 0.840 (0.016) 0.856 (0.012) -0.016 (0.020) 0.436
Share employed non-agri sectors 0.241 (0.019) 0.263 (0.020) -0.023 (0.028) 0.416
Poverty Index -0.084 (0.018) -0.103 (0.023) 0.019 (0.030) 0.526
Distance to Arua (kms) 31.840 (1.592) 25.211 (1.417) 6.629 (2.124) 0.002
Health center in village 0.205 (0.038) 0.183 (0.034) 0.022 (0.051) 0.665
Primary school in village 0.384 (0.046) 0.305 (0.040) 0.079 (0.061) 0.199
N 112 131 243


3 Outcomes at baseline

Table 5: Health Outcomes at Baseline

C mean T-C P-val N

DHO visits (baseline) 0.08 0.00 0.97 47
DHO calls (baseline) 1.43 -0.32 0.12 40
Inspector calls (baseline) 1.48 0.05 0.88 42
Inspector visits (baseline) 0.21 0.06 0.65 46
Inspection reports (baseline) 2.79 -0.33 0.13 48
Inspection (yr.) (baseline) 1.04 0.00 1.00 48
Register book (baseline) 0.62 0.00 1.00 48
Outreach (baseline) 3.73 -0.62 0.19 47
Days w/o antimalarials (baseline) -6.82 2.58 0.42 43
Days w/o ORS (baseline) -10.67 4.76 0.19 42
Antimalaria SO (baseline) -14.29 5.12 0.31 48
ORS SO (baseline) -16.04 9.71 0.05 48
Total SO (baseline) -0.35 0.17 0.05 48
Funds received (millions) (baseline) 1.50 0.50 0.43 48

In this table we explore the extent to which the outcome variables used to construct the summary indices (monitoring, effort and inputs) were balanced at baseline. Estimation results are derived from OLS regressions; p-values are two-sided.


Table 6: Education at Baseline

C mean T-C P-val N

DEO calls (baseline) 0.22 0.07 0.44 89
Inspector calls (baseline) 2.07 -0.17 0.43 84
DEO visits (baseline) 0.02 0.11 0.08 87
Inspector visits (baseline) 0.45 0.20 0.32 87
Inspector reports (baseline) 1.57 -0.35 0.29 86
% Teachers present (records) (baseline) 0.67 0.03 0.49 90
% Teachers present (observed) (baseline) 0.35 -0.05 0.47 89
Meaningful board (baseline) 0.36 0.13 0.19 89
Teacher engaged (baseline) 0.60 0.01 0.94 77
Staff meetings (baseline) 1.84 0.02 0.89 89
N. teachers employed (baseline) 13.80 0.65 0.67 89
Teachers transferred to school (baseline) 0.53 -0.19 0.48 89
Students per uniform (baseline) 0.70 0.08 0.33 88
Students per book (baseline) 0.92 0.06 0.10 88
Students per pencil (baseline) 0.85 -0.01 0.91 88
Enrollment (baseline) 985.70 19.20 0.82 86
% PLE Grade 1 (baseline) 0.00 0.01 0.13 72
% PLE Grade 2 (baseline) 0.32 0.07 0.26 72
PLE pass rate (baseline) 0.87 0.01 0.72 72

In this table we explore the extent to which the outcome variables used to construct the summary indices (monitoring, effort and inputs) were balanced at baseline. Estimation results are derived from OLS regressions with standard errors clustered at the cluster level; p-values are two-sided.
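For concreteness, a minimal sketch of the kind of regression behind the T-C and P-val columns, using hypothetical column names (y_baseline, treatment, cluster_id) in a pandas data frame; the exact variable names and software the authors used are not specified here.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical input: one row per school, with the baseline value of an
    # outcome, a 0/1 treatment indicator, and the randomization cluster.
    df = pd.read_csv("education_baseline.csv")

    # OLS of the baseline outcome on treatment with cluster-robust standard
    # errors; the 'treatment' coefficient corresponds to the T-C column and
    # its two-sided p-value to the P-val column.
    fit = smf.ols("y_baseline ~ treatment", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["cluster_id"]}
    )
    print(fit.params["treatment"], fit.pvalues["treatment"])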


4 Results in tabular form

In the manuscript we report estimation results using indices constructed as a weighted mean of the standardized outcome components (Anderson, 2008). Here we report, for robustness, equivalent results where the summary indices, following Kling, Liebman, and Katz (2007), were constructed as an unweighted mean of the same set of standardized outcome variables.
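As a compressed sketch in notation of our own: let z_{ik} denote outcome k for unit i, standardized against the control group and signed so that higher values indicate better outcomes. The two index constructions are then, roughly,

Kling et al. (2007):  \bar{z}_i = \tfrac{1}{K}\sum_{k=1}^{K} z_{ik}
Anderson (2008):      \bar{z}^{w}_i = (\mathbf{1}'\hat{\Sigma}^{-1}\mathbf{1})^{-1}\,\mathbf{1}'\hat{\Sigma}^{-1} z_i

where \hat{\Sigma} is the sample covariance matrix of the standardized components, so the Anderson index down-weights components that are highly correlated with one another. The tables in this section use the unweighted (Kling et al.) version.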

Table 7: Education Outcomes Analysis (no covariates)

Variable | Coef (SE) P-val CI (short-term) | Coef (SE) P-val CI (long-term)
Monitoring index | 0.154 (0.123) 0.211 [-0.087, 0.394] | 0.025 (0.157) 0.871 [-0.282, 0.333]
DEO calls | 0.307 (0.202) 0.128 [-0.089, 0.702] | 0.025 (0.222) 0.911 [-0.411, 0.460]
Inspector calls | 0.432 (0.227) 0.057 [-0.013, 0.876] | -0.177 (0.211) 0.401 [-0.590, 0.236]
DEO visits | 0.048 (0.217) 0.826 [-0.378, 0.473] | 0.257 (0.239) 0.281 [-0.210, 0.725]
Inspector visits | -0.218 (0.216) 0.313 [-0.641, 0.205] | 0.034 (0.215) 0.874 [-0.386, 0.455]
Effort index | 0.171 (0.124) 0.167 [-0.072, 0.414] | 0.120 (0.147) 0.417 [-0.169, 0.408]
% Teachers present (records) | -0.186 (0.192) 0.333 [-0.563, 0.191] | 0.048 (0.225) 0.830 [-0.392, 0.489]
% Teachers present (observed) | 0.317 (0.202) 0.116 [-0.079, 0.713] | 0.317 (0.222) 0.153 [-0.118, 0.752]
Meaningful board | 0.201 (0.178) 0.258 [-0.147, 0.550] | -0.034 (0.245) 0.889 [-0.515, 0.446]
Teacher engaged | 0.423 (0.215) 0.049 [0.001, 0.845] | 0.310 (0.227) 0.172 [-0.135, 0.756]
Staff meetings | 0.024 (0.209) 0.910 [-0.386, 0.434]
Input index | 0.193 (0.136) 0.155 [-0.073, 0.458] | 0.086 (0.141) 0.540 [-0.190, 0.362]
N. teachers employed | 0.250 (0.129) 0.052 [-0.002, 0.502] | 0.142 (0.171) 0.408 [-0.194, 0.478]
Teachers transferred to school | 0.394 (0.288) 0.171 [-0.170, 0.958] | 0.223 (0.214) 0.297 [-0.196, 0.641]
Students per uniform | 0.031 (0.243) 0.898 [-0.445, 0.508] | -0.214 (0.211) 0.310 [-0.626, 0.199]
Students per book | 0.276 (0.185) 0.135 [-0.086, 0.638] | 0.124 (0.231) 0.593 [-0.330, 0.577]
Students per pencil | 0.099 (0.254) 0.696 [-0.398, 0.596] | 0.226 (0.203) 0.266 [-0.172, 0.623]
Performance index | -0.143 (0.109) 0.189 [-0.356, 0.070]
Enrollment | 0.018 (0.116) 0.878 [-0.209, 0.244]
% PLE Grade 1 | 0.052 (0.370) 0.889 [-0.673, 0.776]
% PLE Grade 2 | -0.304 (0.134) 0.023 [-0.567, -0.042]
PLE pass rate | -0.106 (0.165) 0.520 [-0.429, 0.217]

In columns 2-5 we report short-term effects (1 year), and in columns 6-9, long-term effects (year 2). Indices are constructed using the approach developed by Kling et al. (2007). Standard errors are corrected by combining estimation results using Seemingly Unrelated Regression (SUR) estimation; p-values are two-sided.


Table 8: Education Outcomes Analysis (with covariate adjustment)

Variable | Coef (SE) P-val CI (short-term) | Coef (SE) P-val CI (long-term)
Monitoring index | 0.156 (0.129) 0.224 [-0.096, 0.408] | 0.004 (0.173) 0.982 [-0.335, 0.343]
DEO calls | 0.240 (0.222) 0.279 [-0.194, 0.675] | -0.010 (0.221) 0.963 [-0.444, 0.424]
Inspector calls | 0.402 (0.245) 0.101 [-0.078, 0.882] | -0.189 (0.213) 0.375 [-0.607, 0.229]
DEO visits | 0.097 (0.188) 0.606 [-0.272, 0.465] | 0.241 (0.290) 0.405 [-0.327, 0.809]
Inspector visits | -0.152 (0.190) 0.425 [-0.525, 0.221] | -0.064 (0.244) 0.792 [-0.543, 0.415]
Effort index | 0.133 (0.132) 0.313 [-0.125, 0.391] | 0.020 (0.145) 0.889 [-0.265, 0.305]
% Teachers present (records) | -0.330 (0.180) 0.066 [-0.683, 0.022] | 0.021 (0.193) 0.915 [-0.357, 0.398]
% Teachers present (observed) | 0.202 (0.212) 0.340 [-0.213, 0.617] | 0.241 (0.218) 0.267 [-0.185, 0.668]
Meaningful board | 0.324 (0.195) 0.097 [-0.059, 0.708] | -0.199 (0.256) 0.437 [-0.700, 0.303]
Teacher engaged | 0.340 (0.214) 0.111 [-0.078, 0.759] | 0.202 (0.215) 0.348 [-0.219, 0.622]
Staff meetings | -0.145 (0.178) 0.415 [-0.495, 0.204]
Input index | 0.184 (0.094) 0.051 [-0.000, 0.369] | 0.002 (0.115) 0.987 [-0.223, 0.227]
N. teachers employed | 0.269 (0.116) 0.021 [0.041, 0.497] | 0.120 (0.173) 0.487 [-0.219, 0.459]
Teachers transferred to school | 0.422 (0.266) 0.112 [-0.099, 0.942] | 0.102 (0.205) 0.617 [-0.299, 0.504]
Students per uniform | 0.116 (0.241) 0.629 [-0.355, 0.588] | -0.163 (0.182) 0.372 [-0.520, 0.194]
Students per book | 0.215 (0.205) 0.295 [-0.187, 0.617] | 0.029 (0.230) 0.900 [-0.422, 0.480]
Students per pencil | 0.002 (0.234) 0.994 [-0.456, 0.460] | 0.010 (0.191) 0.959 [-0.364, 0.384]
Performance index | -0.133 (0.096) 0.163 [-0.321, 0.054]
Enrollment | 0.034 (0.109) 0.755 [-0.180, 0.248]
% PLE Grade 1 | -0.111 (0.238) 0.641 [-0.577, 0.355]
% PLE Grade 2 | -0.210 (0.157) 0.182 [-0.517, 0.098]
PLE pass rate | -0.086 (0.170) 0.614 [-0.420, 0.248]

In columns 2-5 we report short-term effects (1 year), and in columns 6-9, long-term effects (year 2). Indices are constructed using the approach developed by Kling et al. (2007). Standard errors are corrected by combining estimation results using Seemingly Unrelated Regression (SUR) estimation; p-values are two-sided.


Table 9: Health Outcomes Analysis (no covariates)

Variable | Coef (SE) P-val CI (short-term) | Coef (SE) P-val CI (long-term)
Monitoring index | -0.140 (0.127) 0.272 [-0.389, 0.110] | -0.175 (0.122) 0.149 [-0.414, 0.063]
DHO visits | -0.122 (0.262) 0.641 [-0.635, 0.391] | -0.128 (0.256) 0.617 [-0.629, 0.373]
DHO calls | -0.361 (0.257) 0.161 [-0.865, 0.144] | -0.227 (0.125) 0.071 [-0.472, 0.019]
Inspector calls | -0.257 (0.254) 0.312 [-0.756, 0.242] | -0.124 (0.276) 0.652 [-0.665, 0.416]
Inspector visits | -0.057 (0.279) 0.839 [-0.603, 0.490] | -0.146 (0.277) 0.599 [-0.688, 0.397]
Inspection reports | -0.019 (0.055) 0.731 [-0.127, 0.089] | -0.033 (0.095) 0.731 [-0.218, 0.153]
Effort index | -0.184 (0.167) 0.270 [-0.510, 0.143] | -0.060 (0.138) 0.667 [-0.331, 0.212]
% Staff Present | 0.063 (0.280) 0.822 [-0.486, 0.613] | 0.298 (0.282) 0.290 [-0.255, 0.852]
% unauthorized absent | -0.014 (0.276) 0.960 [-0.555, 0.527] | 0.185 (0.243) 0.447 [-0.292, 0.662]
Register book | -0.301 (0.283) 0.287 [-0.856, 0.254] | -0.301 (0.283) 0.287 [-0.856, 0.254]
N. Outreach events | -0.527 (0.282) 0.062 [-1.079, 0.026] | -0.379 (0.304) 0.212 [-0.974, 0.216]
Input index | 0.064 (0.148) 0.668 [-0.227, 0.354] | 0.055 (0.135) 0.686 [-0.210, 0.319]
Days w/o antimalarials | 0.335 (0.247) 0.174 [-0.148, 0.819] | 0.962 (0.227) 0.000 [0.518, 1.407]
Days w/o ORS | 0.151 (0.275) 0.583 [-0.388, 0.691] | 0.219 (0.246) 0.374 [-0.264, 0.701]
Funds received (millions) | 0.005 (0.370) 0.989 [-0.720, 0.731] | 0.093 (0.375) 0.804 [-0.641, 0.828]
Antimalaria SO | 0.056 (0.280) 0.841 [-0.493, 0.605]
ORS SO | -0.174 (0.268) 0.516 [-0.699, 0.351]
Total SO | -0.203 (0.241) 0.398 [-0.675, 0.268]
Utilization index | 0.005 (0.127) 0.967 [-0.243, 0.253] | 0.015 (0.093) 0.875 [-0.168, 0.198]
Out patients | 0.256 (0.205) 0.212 [-0.146, 0.658] | -0.253 (0.306) 0.410 [-0.853, 0.348]
N. patients visiting clinic | -0.125 (0.094) 0.183 [-0.309, 0.059] | 0.070 (0.142) 0.619 [-0.207, 0.348]
OP referrals | -0.106 (0.176) 0.545 [-0.450, 0.238] | -0.005 (0.302) 0.988 [-0.596, 0.586]
OP diagnoses | 0.040 (0.104) 0.702 [-0.164, 0.244] | 0.189 (0.164) 0.247 [-0.131, 0.510]
Tetanus doses | 0.159 (0.184) 0.388 [-0.202, 0.520] | -0.073 (0.195) 0.706 [-0.455, 0.308]
Children immunized | 0.411 (0.329) 0.212 [-0.235, 1.056] | -0.119 (0.212) 0.573 [-0.534, 0.295]
IPT doses | 0.122 (0.225) 0.588 [-0.319, 0.563] | -0.047 (0.163) 0.771 [-0.367, 0.272]
Iron acid | -0.013 (0.280) 0.962 [-0.562, 0.535] | -0.130 (0.217) 0.548 [-0.556, 0.295]
Free ITN | -0.001 (0.247) 0.998 [-0.485, 0.484] | -0.370 (0.237) 0.119 [-0.834, 0.095]
Syphilis test | -0.268 (0.214) 0.210 [-0.687, 0.151] | -0.162 (0.295) 0.583 [-0.741, 0.417]
Maternity admission | -0.180 (0.194) 0.353 [-0.560, 0.200] | -0.144 (0.139) 0.299 [-0.417, 0.128]
Maternity delivery | -0.248 (0.174) 0.155 [-0.590, 0.093] | 0.175 (0.332) 0.597 [-0.475, 0.825]
Vitamin A (mothers) | -0.398 (0.165) 0.016 [-0.722, -0.074] | -0.216 (0.189) 0.253 [-0.585, 0.154]
Post natal | -0.127 (0.132) 0.337 [-0.385, 0.132] | 0.087 (0.246) 0.724 [-0.395, 0.569]
Vitamin A (children) | 0.036 (0.306) 0.907 [-0.564, 0.636] | 0.235 (0.303) 0.438 [-0.358, 0.829]
Deworming doses | 0.090 (0.378) 0.812 [-0.651, 0.830] | -0.215 (0.175) 0.218 [-0.557, 0.127]

In columns 2-5 we report short-term effects (1 year), and in columns 6-9, long-term effects (year 2). Indices are constructed using the approach developed by Kling et al. (2007). Standard errors are corrected by combining estimation results using Seemingly Unrelated Regression (SUR) estimation.


Table 10: Health Outcomes Analysis (with covariate adjustment)

Variable | Coef (SE) P-val CI (short-term) | Coef (SE) P-val CI (long-term)
Monitoring index | -0.220 (0.158) 0.163 [-0.529, 0.089] | -0.231 (0.192) 0.228 [-0.606, 0.145]
DHO visits | -0.104 (0.350) 0.767 [-0.791, 0.583] | -0.142 (0.313) 0.649 [-0.755, 0.471]
DHO calls | -0.495 (0.353) 0.161 [-1.187, 0.196] | -0.407 (0.184) 0.027 [-0.768, -0.046]
Inspector calls | -0.566 (0.314) 0.071 [-1.182, 0.049] | -0.327 (0.423) 0.438 [-1.156, 0.501]
Inspector visits | -0.454 (0.362) 0.209 [-1.164, 0.255] | -0.337 (0.366) 0.357 [-1.054, 0.381]
Inspection reports | 0.010 (0.053) 0.846 [-0.094, 0.114] | 0.018 (0.092) 0.846 [-0.162, 0.197]
Effort index | -0.274 (0.262) 0.296 [-0.788, 0.240] | -0.059 (0.131) 0.654 [-0.315, 0.197]
% Staff Present | 0.063 (0.391) 0.872 [-0.703, 0.830] | 0.294 (0.272) 0.280 [-0.239, 0.827]
% unauthorized absent | 0.047 (0.504) 0.926 [-0.942, 1.035] | 0.202 (0.248) 0.415 [-0.283, 0.688]
Register book | -0.363 (0.351) 0.302 [-1.052, 0.326] | -0.192 (0.254) 0.450 [-0.689, 0.306]
N. Outreach events | -0.892 (0.353) 0.012 [-1.585, -0.200] | -0.552 (0.346) 0.111 [-1.231, 0.126]
Input index | -0.059 (0.143) 0.681 [-0.339, 0.221] | -0.029 (0.157) 0.855 [-0.336, 0.279]
Days w/o antimalarials | -0.011 (0.235) 0.963 [-0.472, 0.450] | 0.770 (0.258) 0.003 [0.265, 1.275]
Days w/o ORS | -0.082 (0.363) 0.821 [-0.794, 0.630] | 0.247 (0.246) 0.317 [-0.236, 0.730]
Funds received (millions) | -0.223 (0.389) 0.566 [-0.987, 0.540] | -0.131 (0.514) 0.799 [-1.138, 0.876]
Antimalaria SO | -0.244 (0.377) 0.518 [-0.983, 0.495]
ORS SO | -0.241 (0.246) 0.327 [-0.723, 0.241]
Total SO | -0.507 (0.264) 0.055 [-1.025, 0.011]
Utilization index | 0.123 (0.129) 0.343 [-0.131, 0.376] | -0.022 (0.096) 0.815 [-0.210, 0.165]
Out patients | 0.490 (0.212) 0.021 [0.075, 0.906] | -0.024 (0.298) 0.936 [-0.608, 0.560]
N. patients visiting clinic | -0.235 (0.108) 0.029 [-0.446, -0.024] | 0.061 (0.155) 0.693 [-0.242, 0.364]
OP referrals | -0.091 (0.148) 0.541 [-0.382, 0.200] | 0.018 (0.307) 0.953 [-0.583, 0.619]
OP diagnoses | -0.013 (0.115) 0.908 [-0.238, 0.212] | 0.161 (0.182) 0.376 [-0.195, 0.517]
Tetanus doses | 0.173 (0.220) 0.431 [-0.258, 0.605] | -0.208 (0.201) 0.299 [-0.601, 0.185]
Children immunized | 0.337 (0.418) 0.420 [-0.483, 1.156] | 0.019 (0.211) 0.927 [-0.394, 0.433]
IPT doses | 0.410 (0.244) 0.092 [-0.067, 0.888] | -0.084 (0.139) 0.544 [-0.356, 0.188]
Iron acid | 0.380 (0.304) 0.212 [-0.216, 0.976] | 0.038 (0.205) 0.853 [-0.365, 0.441]
Free ITN | 0.243 (0.301) 0.419 [-0.346, 0.832] | -0.116 (0.265) 0.663 [-0.635, 0.404]
Syphilis test | 0.029 (0.187) 0.878 [-0.338, 0.395] | -0.143 (0.350) 0.682 [-0.830, 0.543]
Maternity admission | 0.079 (0.202) 0.695 [-0.317, 0.476] | -0.334 (0.131) 0.011 [-0.591, -0.078]
Maternity delivery | 0.001 (0.176) 0.996 [-0.344, 0.346] | 0.035 (0.345) 0.919 [-0.642, 0.712]
Vitamin A (mothers) | -0.183 (0.139) 0.187 [-0.456, 0.089] | -0.506 (0.158) 0.001 [-0.815, -0.196]
Post natal | 0.176 (0.130) 0.178 [-0.080, 0.431] | 0.180 (0.232) 0.437 [-0.274, 0.634]
Vitamin A (children) | -0.212 (0.337) 0.529 [-0.872, 0.448] | 0.104 (0.386) 0.788 [-0.652, 0.860]
Deworming doses | -0.110 (0.277) 0.692 [-0.653, 0.433] | -0.430 (0.260) 0.098 [-0.940, 0.080]

In columns 2-5 we report short-term effects (1 year), and in columns 6-9, long-term effects (year 2). Indices are constructed using the approach developed by Kling et al. (2007). Standard errors are corrected by combining estimation results using Seemingly Unrelated Regression (SUR) estimation.


5 Change in outcome variables over time

[Figure: bar plots of mean DEO calls, Inspector calls, DEO visits, Inspector visits, and Inspection reports at baseline, midline, and endline, separately for control and treatment.]
Figure 1: School monitoring: bar plots of outcome variables over time

[Figure: bar plots of mean DHO visits, DHO calls, Inspector calls, Inspector visits, Inspection reports, and Inspection (yr.) at baseline, midline, and endline, separately for control and treatment.]
Figure 2: Health monitoring: bar plots of outcome variables over time


[Figure: bar plots of mean % Teachers present (records), % Teachers present (observed), Meaningful board, Teacher engaged, and Staff meetings at baseline, midline, and endline, separately for control and treatment.]
Figure 3: School effort: bar plots of outcome variables over time

[Figure: bar plots of mean Health staff present, Unauthorized health staff absence (inverse), Register book, and Outreach at baseline, midline, and endline, separately for control and treatment.]
Figure 4: Health effort: bar plots of outcome variables over time


[Figure: bar plots of mean N. teachers employed, Teachers transferred to school, Students per uniform, Students per book, and Students per pencil at baseline, midline, and endline, separately for control and treatment.]
Figure 5: School inputs: bar plots of outcome variables over time

[Figure: bar plots of mean Days w/o antimalarials, Days w/o ORS, Antimalaria SO, ORS SO, Total SO, and Funds received (millions) at baseline, midline, and endline, separately for control and treatment.]
Figure 6: Health inputs: bar plots of outcome variables over time


6 Treatment effects and messaging intensity

[Figure: change in the school monitoring, effort, and input indices from baseline to midline plotted against messages sent, using all relevant messages (top row) and education messages only (bottom row).]
Figure 7: Education: change from baseline to midline (short-term) against messages sent

[Figure: change in the health monitoring, effort, and input indices from baseline to midline plotted against messages sent, using all relevant messages (top row) and health messages only (bottom row).]
Figure 8: Health: change from baseline to midline (short-term) against messages sent


[Figure: baseline values of the school monitoring, effort, input, and outcome indices plotted against the number of education messages sent; panel title: Messaging and baseline conditions.]
Figure 9: Education: outcome indices at baseline against messages sent. The figure shows that places that were worse off to begin with did not necessarily send a larger number of messages via U-Bridge.

[Figure: baseline values of the health monitoring, effort, input, and utilization indices plotted against the number of health messages sent.]
Figure 10: Health: outcome indices at baseline against messages sent. The figure shows that places that were worse off to begin with did not necessarily send a larger number of messages via U-Bridge.


7 Heterogeneous effects by distance to district HQs

[Figure: total, education, health, and water messages per cluster plotted against distance to Arua (log); panel title: Messaging and distance to district HQs.]
Figure 11: Cluster-level: SMS messaging by distance to Arua

[Figure: total, education, health, and water messages plotted against distance to Arua at the village level; panel title: Messaging and distance to district HQs.]
Figure 12: Village-level: SMS messaging by distance to Arua


[Figure: marginal effect of U-Bridge on the school monitoring, effort, input, and outcome indices, by distance to Arua (moderator shown at low/medium/high values).]
Figure 13: Education services: Heterogeneous treatment effect by distance to Arua

[Figure: marginal effect of U-Bridge on the health monitoring, effort, input, and utilization indices, by distance to Arua (moderator shown at low/medium/high values).]
Figure 14: Health services: Heterogeneous treatment effect by distance to Arua


[Figure: marginal effect of U-Bridge on water services and water requests, by distance to Arua (moderator shown at low/medium/high values).]
Figure 15: Water parts & services: Heterogeneous treatment effect by distance to Arua


8 Heterogeneous effects by community wealth

[Figure: total, education, health, and water messages per cluster plotted against wealth; panel title: Messaging and cluster wealth.]
Figure 16: Cluster-level: SMS messaging by community wealth

[Figure: total, education, health, and water messages plotted against wealth at the village level; panel title: Messaging and village wealth.]
Figure 17: Village-level: SMS messaging by community wealth


[Figure: marginal effect of U-Bridge on the school monitoring, effort, input, and outcome indices, by wealth (moderator shown at low/medium/high values).]
Figure 18: Education services: Heterogeneous treatment effect by village wealth; lower values are poorer villages

[Figure: marginal effect of U-Bridge on the health monitoring, effort, input, and utilization indices, by wealth (moderator shown at low/medium/high values).]
Figure 19: Health services: Heterogeneous treatment effect by cluster wealth; lower values are poorer clusters


[Figure: marginal effect of U-Bridge on water services and water requests, by poverty (moderator shown at low/medium/high values).]
Figure 20: Water village requests: Heterogeneous treatment effect by poverty


References

Anderson, Michael L. 2008. "Multiple Inference and Gender Differences in the Effects of Early Intervention: A Reevaluation of the Abecedarian, Perry Preschool, and Early Training Projects." Journal of the American Statistical Association 103 (484): 1481-1495.

Kling, Jeffrey R., Jeffrey B. Liebman, and Lawrence F. Katz. 2007. "Experimental Analysis of Neighborhood Effects." Econometrica 75 (1): 83-119.
