
Technology and the New Economy

edited by Chong-En Bai and Chi-Wa Yuen

Foreword by Robert E. Lucas Jr.

The MIT Press

Cambridge, Massachusetts

London, England


© 2002 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was set in Sabon on 3B2 by Asco Typesetters, Hong Kong, and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Technology and the new economy / edited by Chong-En Bai and Chi-Wa Yuen; foreword by Robert E. Lucas Jr.
p. cm.

"Lectures . . . originally delivered . . . at the University of Hong Kong in 2001–2002 in celebration of its 90th birthday"—Introd.
Includes bibliographical references and index.
ISBN 0-262-02534-5 (alk. paper)
1. Technological innovations—Economic aspects—United States—Congresses. 2. Technological innovations—Economic aspects—Congresses. 3. Information technology—Congresses. I. Bai, Chong-En. II. Yuen, Chi-Wa, 1960–
HC110.T4 T3928 2003
338'.064—dc21    2002026322


Contents

Foreword by Robert E. Lucas Jr.

Introduction
Chong-En Bai and Chi-Wa Yuen

1 Stock Markets in the New Economy
Boyan Jovanovic and Peter L. Rousseau

2 The Value of Competitive Innovation and U.S. Policy toward the Computer Industry
Timothy F. Bresnahan and Franco Malerba

3 Technology Dissemination and Economic Growth: Some Lessons for the New Economy
Danny Quah

4 Technological Advancement and Long-Term Economic Growth in Asia
Jeffrey D. Sachs and John W. McArthur

5 Monetary Policy in the Information Economy
Michael Woodford

Postscript
Chong-En Bai and Chi-Wa Yuen

Index


Foreword

A public lecture series in which distinguished economic

scholars discuss technology and the new economy seems a fine

way to celebrate the ninetieth anniversary of the University of

Hong Kong (HKU). The Hong Kong economy—that glorious

symbol of the possibilities for economic growth that are avail-

able to any society, no matter how modest its resources—is

just the right place to have such a series of lectures. I take the

quality of the lectures collected in this volume as evidence of

the rightness of the location, of the agenda, and of the people

invited by HKU to speak and write on various aspects of this

topic.

Even in this setting, though, it seems that economists are

mistrustful of the novelty of the ‘‘new economy.’’ Is it really

new? Is new technology based on micro-circuitry fundamen-

tally different economically from new technology based on

small electric motors or hydrocarbon molecules? Does in-

formation technology really affect productivity? I find myself

entirely out of sympathy with such guarded reactions. I remem-

ber looking across the airplane aisle last summer and think-

ing that I had never imagined I would live to see something

as beautiful as the notebook computer on which another pas-

senger was working, such an elegant and functional solution


to such a tightly constrained design problem. How can anyone

doubt its novelty and importance?

My fellow passenger was working on color graphics, and

I thought how common it has become to see graphics

everywhere, and how much they have improved: axes labeled,

units specified, sources cited, color imaginatively used. Michael

Bloomberg made a fortune on this idea, and his firm produces

only a few drops in the ocean of graphically presented infor-

mation. Does this represent improvements in production pos-

sibilities? How can an economist ask such a question? People

who can read, interpret, and construct graphs can think better

than people who cannot. We know this is true for thinking

about economics, and of course it is true for other subjects as

well. We also know that people who think better are more

productive, indeed, that better thinking is what productivity

growth is. And graphics is just one side effect of the infor-

mation technology (IT) revolution. Of course it can be hard

to pick up such effects in aggregate time series, but we know

from everyday experience that they are important, that they

are changing our lives.

But what kind of economic analysis is needed to think about

the new economy? The chapters in this volume take this ques-

tion in a variety of interesting directions. My reactions, like

those of these contributors, are idiosyncratic, based on my in-

terests and my economic instincts.

The chapters by Bresnahan and Malerba and by Jovanovic

and Rousseau, and some of the discussion by Bai and Yuen in

their postscript, raise some hard questions concerning indus-

trial organization. We know from the Microsoft case that the

new technology raises novel issues within the framework of

American antitrust law and new possibilities for legal action.


But I cringed at the list of questions for oligopoly theory that

Bai and Yuen provide: Do we have even a start when it comes

to understanding any one of them? But we have lived without a

workable oligopoly theory for a long time, and I take Jova-

novic and Rousseau as proposing to seek regularity in com-

petitively determined asset prices rather than in goods prices

determined . . . who knows how?

There is an undeniable cost of doing without a theory of

oligopoly pricing. We have a body of regulatory practices and

antitrust laws that are so arbitrary and so loosely connected to

modern economic theory and evidence that economic analysis

seems almost beside the point. Increasingly, no one even pre-

tends to be able to measure the effect of legal actions and reg-

ulations on consumer welfare. What would be the consequence

for economic growth and individual welfare if the antitrust

laws were repealed? The whole issue of monopoly power,

with the important exception of government or government-

supported monopoly, seems to me little more than a ripple on

the great tide of economic growth.

The possible implications of IT for international trade and

growth, as touched on in the postscript, seem especially in-

teresting to me. I agree with Bai and Yuen that it is far from

clear what the implications of IT for world trade flows will be.

But from the point of view of growth theory, it is the diffusion

of ideas that is important, and goods flows are important

mainly because we think they are related to the flow of ideas.

For example, in the course of becoming manufacturers of cars

that succeeded in world markets, the Japanese absorbed and

became leading contributors to the frontier technology for

producing cars. Could they not have done this by obtaining

blueprints from Detroit and Turin and using the ideas so


acquired to produce cars for domestic sale only? Maybe, but

the diffusion of ideas in this disembodied way always seems to

come up short.

Why should this be surprising? We learn how to play the

piano by playing for our teacher and getting our teacher’s crit-

icism and by listening to him or her play the same pieces we

have attempted. By such a trial-and-error process, in addition

to our study of the score (the musical blueprint), we bring our

playing closer to his or her standard. By exporting our music

to a more sophisticated listener we improve our ability to pro-

duce it. I think the learning process described in this example is

typical of the way trade fosters—is essential to—the diffusion

of ideas, and why countries that have shifted their workforce

to exports that compete with products from other, more so-

phisticated economies have been so much more successful than

those that have closed themselves off.

Bai and Yuen cite the exciting example of Indian software

exports. Another favorite of mine is the processing of New

York traffic tickets in Ghana. They ask what ‘‘the implications

of these developments for the overall pattern of international

trade’’ might be. Surely one benefit of these new exports must

be that they sidestep (for a while, at least!) some of the diabol-

ical trade barriers that have long been in place in Ghana and

India. But it must also be the case that such exports of services

foster learning and the diffusion of technology, just as does the

growth in exports of manufactured goods. Indian computer

code must come up to American quality standards or its ex-

port will not be sustained. For economic growth, international

flows of goods are important mainly as a means to the inter-

national flows of ideas, and it may be that new technology

weakens the link between these flows: The ideas can travel,

with or without the goods.


Michael Woodford considers how information technology

may affect the workings of the monetary system. Certainly we

can see it in details. I remember spending an entire afternoon in

a bank in Bar Harbor, Maine, back in the 1960s: I had run out

of cash while on vacation and needed more sent from my bank

in Pittsburgh. Now I can get dollars anywhere in the world in

seconds (in the unlikely event that I need cash at all)! How

do such changes affect aggregate behavior? This is an even

harder question than the one Solow asked, I think, but no less

important.

These scattered reactions are hardly a substitute for the

thoughtful essays contained in this volume. But I hope they will

serve as an advertisement, or perhaps as an appetizer.

Robert E. Lucas Jr.

October 2, 2002


Introduction

Chong-En Bai and Chi-Wa Yuen

One of the most important driving forces behind the rapid

economic expansion in the United States and the world at large

in the 1990s is the development of information technology

(IT). The technology has made a significant impact on many as-

pects of the economy, to the extent that ‘‘new economy’’ has

emerged as a popular term both in the media and in academia.

What is truly new about our economy today? What has con-

tributed to the IT revolution? Has it been driven more by

supply-side forces or demand-side forces? What kinds of gov-

ernment policies have contributed to it? What other institu-

tions have contributed to it? Is it any different in its nature

from other types of technological progress? What are the im-

plications of such technological changes for output growth and

macroeconomic fluctuations as well as for the design and im-

plementation of growth and stabilization policies?

Believing that these are questions that would be interesting

to people from different walks of life, we took advantage of

a special occasion—namely, the ninetieth anniversary of our

university—to invite some leading experts in various fields of

economics to offer their perspectives on these issues. This book

contains edited versions of lectures originally delivered by

Boyan Jovanovic, Timothy Bresnahan, Danny Quah, Jeffrey


Sachs, and Michael Woodford at the University of Hong Kong

in 2001–2002 in celebration of its ninetieth birthday. Together,

these papers provide important clues to some of the most fun-

damental questions about the development of informa-

tion technology and its effects on the economy, ranging from

such elements as competition policy (Bresnahan and Malerba),

innovation-related institutions (Sachs and McArthur), and de-

mand factors (Quah) to the long-run values of leading in-

novating firms (Jovanovic and Rousseau) and the effectiveness

of monetary policy in stabilizing the economy (Woodford).

Written in accessible language, the book is valuable to a wide

audience, including academics, undergraduate and graduate

students, and the general public with some basic knowledge

in economics. In this introduction, we provide a summary of

these essays. Some related issues are discussed in the postscript.

Boyan Jovanovic and Peter L. Rousseau (chapter 1) examine

the relation between innovation and the stock market value of

the innovating firm. They identify three waves of technological

innovation that occurred at the beginning, the middle, and the

end of the twentieth century, namely, electricity and internal

combustion, chemicals and pharmaceuticals, and the computer

and the Internet. They find that each wave of innovation is

followed by a vintage of stock market listings and that firms in

each of the vintages have produced a higher-than-average rate

of return to investment. The stock market values of these vin-

tage firms have been highly stable over time, thus suggesting

that their high valuation is not due to bubbles and not based

on specific technologies that would tend to become obsolete

over time. Rather, they are based on a superior organizational

capital of the firms, which may include the quality of manage-

ment and the corporate culture that encourage innovation and

entrepreneurship.


The current third wave of innovation in IT is found to be

more similar to the first wave than to the second. The age of

the entrant in the stock market is lower in the first and third

waves than in the second wave, implying that innovation is

carried out by young firms in the first and third waves but by

older firms in the second. One possibility is that innovation in

the first and third waves requires lower fixed costs than in the

second wave. This appears to be confirmed by the count of

patents over the years, which exhibits a U shape. The low cost

of innovation seems to be more salient with IT than with elec-

trification: IT represents an ‘‘invention in the method of in-

venting’’ and is also associated with strong spillover effects.

This value of IT is evidenced by the surge in patenting in the

last six years. Very likely, the wave of IT innovation is far from

over. The recent setbacks in the IT sector can be understood in

light of the fact that it is not necessarily the first users of a

technology that reap the greatest benefits, as was the case in

the electrification wave.

Timothy F. Bresnahan and Franco Malerba (chapter 2) con-

sider conditions for sustained innovation in terms of the in-

stitutional environment, particularly government policy. Based

on a detailed investigation of the five eras of the computer in-

dustry (namely, the mainframe, minicomputer, PC, supermini

and client-server computing, and the Internet), their analysis

centers on two questions. In the short term, what explains the

concentrated location of rent-generating supply within each

segment of the computer industry in a single country? In the

long term, what explains the persistent U.S. success in all the

segments?

In the short run, concentration in each segment has a lot

to do with scale economies. This seems to suggest the validity

of the ‘‘new trade theory’’: The first-mover advantage is


substantial, and government intervention is desirable in ensur-

ing the emergence of the first mover from within the country.

However, the long-term history suggests otherwise. New trade

theory cannot explain why the United States has maintained

persistent dominance in all the segments in spite of dramatic

discontinuity between various eras of the computer industry.

The transition from one era to the next in the computer

industry has experienced dramatic changes in the technology,

the market structure, and the dominant players (including the

customers). Therefore, for individual firms and for a country,

success in one segment of the industry does not imply success

in other segments. Since the origins of various segments were

characterized by high degrees of uncertainty, it would be im-

possible for the government to pick the winner. Instead, the

market is the best selection mechanism, where the winner can

be picked after numerous approaches to experimentation and

exploration have been taken by various parties. The United

States provides an excellent environment for such experimen-

tation and selection. First, the U.S. government allows mar-

ket selection to work without intervention, which levels the

playing field for participants in the selection process. Second,

market selection is strengthened by competition policies that

enhance the influence of demanders on the selection mecha-

nism. The low barrier to exit also reinforces the mechanism.

Finally, institutions exist that increase the variety of experi-

ments from which the market selects. Universities are fertile

breeding grounds for new ideas and entrepreneurship. It is

easier for new businesses to get started, get funding, and grow

in the United States than in other parts of the world. All

these factors have not only facilitated the efficient emergence of

concentration within each segment, but also helped the United

States maintain its dominance through various eras of the

computer industry.


Ever since Solow (1956), we have understood that techni-

cal progress (rather than physical capital accumulation) is the

ultimate engine of economic growth, and technology dissem-

ination is an important channel of equalizing income differ-

ences across countries in the world. On this basis, Danny Quah

(chapter 3) argues that there is nothing new in the new econ-

omy if the proliferation of information and communications

technology (ICT) is interpreted as merely ‘‘the most recent

manifestation of an ongoing sequence of technical progress.’’

Besides, such supply-side interpretation fails to resolve three

paradoxes in the new economy, namely, the Solow produc-

tivity paradox (that IT investment has not been accompanied

by significant improvement in measured labor productivity),

the falling deployment of human capital in science and tech-

nology in the face of output growth, and the trade deficits in

ICT products experienced by technology leaders such as the

United States.

In addition to changes in the supply-side (or cost) character-

istics of the economy, the ICT revolution has also brought

about changes in the nature of goods and services consumed

that make them more and more like knowledge, namely, being

nonrival (or infinitely expansible) and aspatial. Quah pro-

poses that this change in the ‘‘knowledge content’’ of goods

and services especially on the demand/consumption side—the

technology/final consumer linkage—is what really constitutes

the ‘‘newness’’ in the new economy. To illustrate the impor-

tance of demand considerations as determinants of the sus-

tainability of economic growth, he cites the example of ancient

China to highlight the possibility of growth being bogged

down by inadequate demand. This possibility could be much

higher in the new economy because the consumer has to

incur some learning cost before he or she can truly enjoy the


consumption of these new knowledge products. Contrary to

Say’s Law, therefore, supply may not be able to create its own

demand. This demand-side hypothesis can potentially help re-

solve the three productivity puzzles.

Like Quah, Jeffrey D. Sachs and John W. McArthur (chapter

4) cite Solow’s contribution to introduce the ‘‘old’’ economy

view of the unimportant role of savings or capital accumula-

tion and the indispensable role of (endogenous) technological

production/innovation and diffusion as engines of sustained,

long-run growth. They also explain why technology adopters

can never ‘‘catch up’’ with technology innovators.

Based on evidence from patenting data, they classify coun-

tries into three tiers of technological capacity: the high inno-

vators (the U.S., Japan, Germany, Korea, Taiwan, Israel, etc.),

the technology users (most other countries, China included),

and the technologically excluded. Most countries in Asia are

found to belong to the second category, although some of them

are undergoing a transition from being a technology borrower

to becoming a technology innovator.

Sachs and McArthur then discuss how the success of inno-

vation hinges crucially on the government’s choice of strategies/

processes and the underlying economic systems. Eight basic

characteristics of the innovation process, both market and

nonmarket based, are identified—ranging from its general scale

economies and creative-cum-destructive nature to site, organi-

zation, and financing specificity. The experience of the United

States, the most innovative country in the world, is then used

to explain how nine characteristics of its innovation system—

again both market and nonmarket based, ranging from its

heavy investment in basic science to its effective higher edu-

cation and patent systems—have helped the United States

achieve such high and sustained rates of innovation.


Finally, they use these characteristics to shed light on the

challenges facing Asia, concluding that Asia’s growth prospects

depend on the emergence of technological innovation (rather

than pure adoption/imitation) induced endogenously by a well-

structured institutional and policy framework.

Michael Woodford (chapter 5) addresses the concern that,

with improvement in information technology and hence effi-

ciency of financial markets, central banks may be less able

to stabilize the macroeconomy through monetary policy—

because (a) the ability of central banks to ‘‘surprise’’ the mar-

kets will be reduced as economic agents become better in-

formed about monetary policy decisions and actions, and (b)

private-sector demand for base money will shrink as a result of

such financial innovations as e-money and more efficient clear-

ing systems.

Woodford explains why the result that ‘‘under rational ex-

pectations, only unanticipated policy matters’’ does not imply

that the effectiveness of monetary policy hinges on the ability

of central banks to fool the markets about what they do. In-

stead, by allowing the central banks to signal more precisely

their future policy plans and by tightening the link between the

interest rates they directly control and other market rates,

monetary policy can be even more effective—in affecting in a

desired way the evolution of market expectations about inter-

est rates and inflation and in strengthening the intended effects

of such policies—in the information economy.

Woodford also dismisses the relevance of the size and the

stability of the demand for base money to the implementation

of monetary policy. By reducing an important source of dis-

turbance, the erosion of currency would actually help simplify

the central bank’s problem. Instead of targeting the monetary

base, what really matters for the effectiveness of monetary


policy is central bank control of overnight interest rates, which

will not be affected much by the erosion of base money.

He acknowledges, though, that improvements in information

technology, hence efficiency of the financial system, may have

important consequences for some specific operating and deci-

sion procedures that the central banks have to follow in rela-

tion to the choice and implementation of their policy targets.

These essays address a selection of important topics about

technology and the new economy. A brief discussion of some

other topics relevant to the IT revolution that are nonetheless

left out in these essays is relegated to the postscript.

Reference

Solow, Robert. 1956. "A contribution to the theory of economic growth." Quarterly Journal of Economics 70 (February): 65–94.


1 Stock Markets in the New Economy

Boyan Jovanovic and Peter L. Rousseau

1.1 Introduction

The term ‘‘new economy’’ has, more than anything, come to

mean a technological transformation, and in particular its em-

bodiment in the computer and on the Internet. These tech-

nologies are more human capital intensive than earlier ones

and have probably hastened the pace of the shift in the U.S.

economy toward the service industries. The new economy is

often also linked to economic ‘‘globalization’’ as reflected in

the expansion of trade and the integration of capital markets,

but this can be viewed as much as a result of technological

change as an independent phenomenon.1

Upon reflection, however, it is clear that the new economy is

not entirely ‘‘new.’’ There have always been new technologies,

and each has, on the whole, demanded new skills. Technol-

ogies that have driven new economies of the past include

steam, electricity, the internal combustion engine, antibiotics,

and chemicals, and these were in turn refined in a host of

smaller innovations. Here we will draw upon this rich past to

see what today’s new economy may hold in store.

To do this, we use today’s value of various vintages of

stock market entrants as a barometer of the quality of the


new technological developments that they brought with them

to the market. We find that as new technologies emerge and

see widespread adoption, the vintages of firms at the time of

adoption become extremely valuable in terms of market capi-

talization, and that this value comes at the expense of older

firms. In the case of electrification, the new technology gen-

erated a high flow of new products that persisted over an

extended period and created lasting value for 1920s entrants.

Recently the information technology (IT) vintage firms have

also become extremely valuable and there has been an asso-

ciated high flow of new products.

This evidence strongly suggests that we are in the midst of

a major episode of Schumpeterian-style creative destruction.

Briefly, the data show:

1. Direct indicators of technological change such as patents

have surged, just as they did in the early part of the twentieth

century.

2. The largest firms today are younger than they have ever

been. In the past, the major new technologies like electricity

and internal combustion were introduced by young firms. The

dominance of young firms therefore signals the presence of

major technological change.

3. At those times when entrants do account for a lot of value,

like the 1920s, they manage to hold on to it. This resilience of

the successful vintages of the past suggests that the enormous

value created by the entrants of the last fifteen years is likely to

last.

Moreover, far more than electricity, we believe that IT rep-

resents an ‘‘invention in the method of inventing,’’ as Griliches

(1957, 502) put it when describing the advent of hybridization.

Just as hybridization raised the rate of growth of agricultural


productivity seemingly permanently, so IT may permanently

raise the rate of the world’s productivity growth.

1.2 Technology, Entry, and Today’s Giants

The flagship technologies of the most recent wave, the com-

puter and the Internet, were brought into the market mainly by

small young firms. This suggests that the story of the IT revo-

lution is, to a large extent, about entrants. Can the same be

said for the great technologies of the past, such as electricity

and the internal combustion engine? How many of today’s

stock market giants entered the stock market bearing an elec-

trically powered or diesel-driven product or process?

Table 1.1 lists the first product or process innovation for

some well-known companies, along with their dates of found-

ing, incorporation, and stock exchange listing. It also includes

the share of total market capitalization that can be attributed

to each firm’s common stock at the end of 2000. The informa-

tion is based upon our reading of individual company histories

and an extension of the stock files distributed by the University

of Chicago’s Center for Research in Securities Prices (CRSP)

from its 1925 starting date back through 1885.2 The firms ap-

pearing in the table separate into roughly three groups: those

based upon electricity and internal combustion, those based

upon chemicals and pharmaceuticals, and those based upon

the computer and Internet. Let us consider a few of the entries

more closely.

1.2.1 Electricity/Internal Combustion Engine

Two of the largest companies in the United States today are Gen-

eral Electric (GE) and AT&T. Founded in 1878, GE now ac-

counts for 3.1 percent of total stock market value, and had


Table 1.1  Key dates in selected company histories

Company name | Founding date | First major product or process innovation | Incorporation date | Listing date | % of stock market in 2000
General Electric | 1878 | 1880 | 1892 | 1892 | 3.10
AT&T | 1885 | 1892 | 1885 | 1901 | 0.42
Detroit Edison | 1886 | 1904 | 1903 | 1909 | 0.04
General Motors | 1908 | 1912 | 1908 | 1917 | 0.19
Coca Cola | 1886 | 1893 | 1919 | 1919 | 0.99
Pacific Gas & Electric | 1879 | 1879 | 1905 | 1919 | 0.05
Burroughs/Unisys | 1886 | 1886 | 1886 | 1924 | 0.03
Caterpillar | 1869 | 1904 | 1925 | 1929 | 0.11
Kimberly-Clark | 1872 | 1914 | 1880 | 1929 | 0.25
Procter & Gamble | 1837 | 1879 | 1890 | 1929 | 0.67
Bristol-Myers Squibb | 1887 | 1903 | 1887 | 1933 | 0.94
Boeing | 1916 | 1917 | 1916 | 1934 | 0.38
Pfizer | 1849 | 1944 | 1900 | 1944 | 1.90
Merck | 1891 | 1944 | 1934 | 1946 | 1.41
Disney | 1923 | 1929 | 1940 | 1957 | 0.39
Hewlett Packard | 1938 | 1938 | 1947 | 1961 | 0.41
Time Warner | 1922 | 1942 | 1922 | 1964 | 0.41
McDonalds | 1948 | 1955 | 1965 | 1966 | 0.29
Intel | 1968 | 1971 | 1969 | 1972 | 1.32
Compaq | 1982 | 1982 | 1982 | 1983 | 0.17
Micron | 1978 | 1982 | 1978 | 1984 | 0.13
Microsoft | 1975 | 1980 | 1981 | 1986 | 1.51
America Online | 1985 | 1988 | 1985 | 1992 | 0.53
Amazon | 1994 | 1995 | 1994 | 1997 | 0.04
eBay | 1995 | 1995 | 1996 | 1998 | 0.06

Source: Data from Hoover's Online (2000), Kelley (1954), and company Web sites.

Note: The first major products or innovations for the firms listed in the table are: GE 1880, Edison patents incandescent light bulb; AT&T 1892, completes phone line from New York to Chicago; DTE 1904, increases Detroit's electric capacity six-fold with new facilities; GM 1912, electric self-starter; Coca Cola 1893, patents soft drink formula; PG&E 1879, first electric utility; Burroughs/Unisys 1886, first adding machine; CAT 1904, gas driven tractor; Kimberly-Clark 1914, celu-cotton, a cotton substitute used in WWI; P&G 1879, Ivory soap; Bristol-Myers Squibb 1903, Sal Hepatica, a laxative mineral salt; Boeing 1917, designs Model C seaplane; Pfizer 1944, deep tank fermentation to mass produce penicillin; Merck 1944, cortisone (first steroid); Disney 1929, cartoon with soundtrack; HP 1938, audio oscillator; Time-Warner 1942, "Casablanca"; McDonalds 1955, fast food franchising begins; Intel 1971, 4004 microprocessor (8088 microprocessor in 1978); Microsoft 1980, develops DOS; Compaq 1982, portable IBM-compatible computer; Micron 1982, computer "eye" camera; AOL 1988, "PC-Link"; Amazon 1995, first on-line bookstore; eBay 1995, first on-line auction house.


already established a share of over 2 percent by 1910. AT&T,

founded in 1885, contributed 4.6 percent to total market value

by 1928, and more than 8.5 percent at the time of its forced

breakup in 1984. Both were early entrants of the electricity era.

GE came to life with the invention of the incandescent light

bulb by Thomas Edison in 1880, while AT&T established a

long-distance telephone line from New York to Chicago in

1892 to make use of Bell’s 1876 invention of the telephone.

Both technologies represented quantum leaps in the modern-

ization of industry and communications, and would come to

improve greatly the quality of household life. Both firms were

listed on the New York Stock Exchange (NYSE) about fifteen

years after founding. The film industry emerged later in the

electrification process with the founding of the Warner Bros.

Motion Picture Company (the antecedent of today’s Time-

Warner) in 1922. And though the company did not formally

list on the NYSE until 1964, its commanding position in the

U.S. entertainment industry was established shortly after found-

ing with movie classics such as the ‘‘Jazz Singer’’ in 1927 and

‘‘Casablanca’’ in 1942. General Motors (GM) was an early en-

trant to the automobile industry, listing on the New York Stock

Exchange (NYSE) in 1917—nine years after its founding. By

1931 it accounted for more than 4 percent of stock market

value, and its share would hover between 4 and 6.5 percent

until 1965, when it began to decline gradually to its current

share of only 0.2 percent. These examples suggest that many of

the leading entrants of the turn of the twentieth century created

lasting market value. Further, the ideas that sparked their emer-

gence were brought to market relatively quickly.

1.2.2 Chemicals/Pharmaceuticals

Procter and Gamble (P&G), Bristol-Myers Squibb, and Pfizer

are now all leaders in their respective industries but took much


longer to list on the NYSE than the electrification-era firms. In

fact, both Pfizer and P&G were established before 1850 and

thus predate all of them. Despite P&G’s early start and cre-

ation of the Ivory soap brand in 1879, it was not until 1932

that the company took its place among the largest U.S. firms by

exploiting advances in radio transmission to sponsor the first

‘‘soap opera.’’ Pfizer’s defining moment came when it devel-

oped a process for mass-producing the breakthrough drug

penicillin during World War II, and the good reputation that

the firm earned at that time later helped it to become the main

producer of the Salk and Sabin polio vaccines. In Pfizer’s case,

like that of P&G, the company’s management and culture had

been in place for some time when a new technology (in Pfizer’s

case antibiotics) presented a great opportunity.

1.2.3 Computer/IT

Firms at the core of the recent IT revolution, such as Intel,

Microsoft, and Amazon, came to market shortly after found-

ing. Intel listed in 1972, only four years after starting, and now

accounts for 1.3 percent of total stock market value. Microsoft

took eleven years to go public. Conceived in an Albuquerque

hotel room by Bill Gates in 1975, the company, with its new

disk operating system (MS-DOS), was perhaps ahead of its

time, but later joined the ranks of today’s corporate giants with

the proliferation of the personal computer. In 1998, Microsoft

accounted for more than 2.5 percent of the stock market, but

this share fell to 1.5 percent over the next two years in the

midst of antitrust action. Amazon caught the internet wave

from the outset to become the world’s first on-line bookstore,

going public in 1997—only three years after its founding.

As the complexities of integrating goods distribution with an

Internet front end came into sharper focus over the ensuing

years, however, and as competition among Internet retailers


continued to grow, Amazon’s market capitalization by 2001

had been cut in half to less than 0.1 percent of total stock

market value.

These firms, as well as the others listed in table 1.1, brought

new technologies into the stock market and accounted for

nearly 16 percent of its value at the close of 2000. The firms

themselves also seem to have entered the stock market sooner

during the electricity and computer/Internet revolutions, at op-

posite ends of the twentieth century, than firms based on mid-

century technologies. In the next two sections, we examine

these observations more systematically in a universe that in-

cludes all exchange-listed firms.

1.3 How Much Value Does Each Technological Vintage Command Today?

The examples in the final column of table 1.1 suggest that firms

entering the stock market with a new technology seem to cre-

ate lasting value. Is this just a characteristic of today’s largest

companies, or does it apply more generally? One measure of

the importance of a past technology is how long the firms that

carried it to market have survived and how much value they

have created. Jovanovic and Rousseau (2001a) show that a

firm’s organizational imprint, which in their model is created

upon entry to the stock market, is shaped largely by the avail-

able technologies, and that the quality of this imprint relates

closely to market value even today. The solid line in figure 1.1

provides an accounting of the value in 1998 of all firms that

were then listed on the three major U.S. stock exchanges—the

NYSE, the American Stock Exchange (AMEX), and Nasdaq—

by year of listing, and it offers strong evidence in favor of this

view.3


The leading vintages in the figure retain a strong presence in

1998 even per unit of investment. The dashed line accounts for

all cumulative real investment by the year of that investment.4

Relative to investment, the 1950s and even the 1960s—which

saw the Dow and the Standard and Poor (S&P) 500 indexes do

very well and which some economists refer to as a golden

age—did not create as much lasting value as the 1920s.5
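
To make the vintage accounting concrete, the following is a minimal sketch, not the authors' code, of the calculation behind the solid line in figure 1.1: sum the 1998 market value of every listed firm by the year in which the firm first listed. The pandas column names and the toy numbers are hypothetical placeholders rather than fields or figures from the chapter; the dashed line would be built analogously by cumulating real investment by year.

```python
# A minimal sketch, not the authors' code: total 1998 market value by
# listing vintage, the quantity plotted as the solid line in figure 1.1.
# Column names ("list_year", "mktcap_1998") are hypothetical placeholders.
import pandas as pd

def value_by_vintage(firms: pd.DataFrame) -> pd.Series:
    """Sum each firm's 1998 market capitalization by its year of listing."""
    return firms.groupby("list_year")["mktcap_1998"].sum().sort_index()

# Toy data, for illustration only (arbitrary units).
firms = pd.DataFrame({
    "firm":        ["A", "B", "C", "D"],
    "list_year":   [1919, 1929, 1972, 1997],
    "mktcap_1998": [250.0, 80.0, 120.0, 15.0],
})
print(value_by_vintage(firms))
```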

In a one-sector world in which every firm financed its start-

up investment with a stock issue and then simply kept up its

capital and paid for all parts and maintenance out of its profits,

each firm’s current value would be proportional to its initial

investment, and the dashed lines and the solid lines would co-

incide. Why, then, does the solid line deviate from the dashed

line? Why, for example, do the vintage-1920s firms account for

relatively more stock market value than they do for gross in-

vestment? Several explanations come to mind.
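
The benchmark described in this paragraph can be stated compactly. The notation below is ours, not the chapter's; it simply restates the idealization under which the two lines would coincide.

```latex
% Our notation, not the chapter's: V_i is firm i's 1998 market value,
% I_i its initial investment, and v(i) its listing year (vintage).
% If value were simply proportional to initial investment,
\[
  V_i = \kappa I_i \qquad \text{for every firm } i,
\]
% then summing over the firms of any vintage v gives
\[
  \sum_{i:\,v(i)=v} V_i \;=\; \kappa \sum_{i:\,v(i)=v} I_i ,
\]
% so the value-by-vintage profile (solid line) and the investment-by-vintage
% profile (dashed line) would differ only by the common scale factor kappa.
```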

Figure 1.1  U.S. gross investment and the 1998 value of listed firms by year of exchange listing


1.3.1 Technology

The entrants of the 1920s came in with technologies and

products that were better and therefore either (a) accounted for

a bigger-than-average share of all 1920s investment, (b) deliv-

ered a higher return per unit of investment or (c) invested more

than other firms in subsequent decades. The state of technol-

ogy prevailing at the firm’s birth affects that firm for a long

time, sort of like the weather affects a vintage of wine; some

vintages of wine are better than others, and the same seems to

be true of firms. In other words, the quality of the entering

firms is better in some periods than in others. Jovanovic and

Rousseau’s (2001a) model attributes the differences between

the solid and dashed lines in figure 1.1 to factors (a) and (b)

alone—a quality explanation as one would naturally use with

vintage wines. Implicitly, we appeal to the market power that a

firm derives from the patents that it may own on its inventions

and products. These innovations create ‘‘organization capital,’’

which can be defined as the intangible features of a firm that

make it more valuable than the simple sum of its assets. We

believe that organization capital depreciates more slowly than

physical capital because it can stay intact in the face of equip-

ment replacement and employee turnover. New members of a

firm acquire it from the older ones and the firm’s organization

capital thus survives. This intangible part of the firm’s capital

stock is the main reason why, in figure 1.1, we see lasting

effects of a firm’s vintage on market value.

1.3.2 Mergers and Spin-Offs

The dashed line is aggregate investment, not the investment of

entrants (on which we do not have data). The entrants of the

1920s were, perhaps, not new firms embodying new invest-

ment but, rather, existing firms that split or that merged with


other firms and relisted under new names, or privately held

firms that went public in the 1920s. We accordingly adjust fig-

ure 1.1 for mergers to the extent that is possible with available

data.6 Some mergers may reflect a decision by incumbents

to redirect investment and redeploy old capital to new uses.

Such mergers arise because of technological change. Others

may arise because of changes in antitrust law or its interpreta-

tion. Either way, some firms engage in mergers as a precursor

to exchange listing, and this means that a new listing may be a

pre-1920s entity disguised as a member of the 1920s cohort.7

1.3.3 Financing

The entrants of the 1920s may have financed a higher-than-

average share of their own investment by issuing shares, or

they later (e.g., in the 1990s) bought back more of their debt

or retained more earnings than other firms did. We can be

reasonably sure, however, that today’s successful firms did not

acquire their currently high stock-market valuations by con-

verting their debt into equity. Figure 1.2 presents the combined

market value of all firms in our sample as a share of gross

domestic product (GDP), as well as aggregate debt of U.S.

businesses, defined here as the sum of the market value of cor-

porate bonds and commercial and industrial bank loans.8 The

shaded areas denote periods of economic contraction as de-

fined by the National Bureau of Economic Research (NBER).

The figure indicates that around 1915, equities started to grow

faster than debt—indeed, while stocks rose ten times faster

than GDP, debt started and ended the period at about 50 percent

of GDP. Moreover, none of the four large humps in the value

of stocks were associated with a flight out of debt—in fact, the

two series are highly positively correlated at those frequencies,

with a correlation coefficient of 0.85. Even though we know


that the fraction of capital investment financed by stocks has

not been constant, there is no evidence in figure 1.2 to suggest

a substitution of debt finance into equity. Thus, such shifts

cannot be used to explain the departures of the solid line from

the dashed line in figure 1.1—not generally, and not for the

1920s in particular.
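
As a rough illustration of the kind of calculation behind the reported 0.85, the sketch below computes an ordinary Pearson correlation between two annual ratio-to-GDP series. It is our sketch, not the authors' procedure; the chapter does not spell out how the low-frequency movements were isolated, so no filtering is shown, and the series values are invented.

```python
# A minimal sketch, not the authors' procedure: Pearson correlation between
# two annual series (equity value / GDP and business debt / GDP).
# The values below are invented, for illustration only.
import numpy as np

def pearson_corr(x, y) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

equity_to_gdp = [0.5, 0.7, 1.2, 0.9, 1.5, 1.1]
debt_to_gdp   = [0.5, 0.6, 0.8, 0.7, 0.9, 0.8]
print(round(pearson_corr(equity_to_gdp, debt_to_gdp), 2))
```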

1.3.4 Bubbles

The 1920s cohort may be overvalued, as may be the high tech

stocks of the 1990s, while other vintages may be undervalued.

Note, however, that figure 1.1 is a cross-section plot of values

in 1998 and not a time-series plot. As we shall see, the cross-

vintage differences in value have been highly persistent over

time, and this is inconsistent with the crashes of stock-market

prices, such as Japan’s stock market crash in 1990 and Nas-

Figure 1.2  U.S. business debt and the market value of exchange-listed common stocks


daq’s post-2000 crash, that are often pointed to by adherents

of the bubbles view.

1.3.5 Market Power, Monitoring

The 1920s cohort may be in markets that are less competitive

or in activities for which shareholders can monitor manage-

ment more easily. For example, the very success of the internet

technology has lowered markups and increased the pace with

which Internet-based applications reach obsolescence. The first

effect is there for the old and new economy alike. But the sec-

ond is restricted for the most part to the high-tech sector. Such

an effect is likely to have become more serious recently and

may be partially responsible for the relative decline of technol-

ogy stocks.

1.4 How Stable Is the Value of a Vintage over Time?

How reliable a signal of long-term value is the value of a

collection of firms grouped by their vintage? Could there be

vintage-specific or technology-specific bubbles? Many analysts

believe that the March 2000 value of the Nasdaq firms was not

warranted by fundamentals. On this view, the Nasdaq index

contained a bubble that has since burst. Were the 1920s simi-

lar in this respect? We do not analyze Japan here, but for the

United States, while the market as a whole was probably over-

valued in early October 1929, the firms that entered the market

during the 1920s were not overvalued.

The stock market values of various vintages of firms have

been highly stable over time. That is, if a firm today is over-

valued relative to its fundamentals, it has always been over-

valued, and that seems highly improbable. This can be seen in


figure 1.3, which shows the evolution of market share for stock

market incumbents at ten-year intervals. It is not the retention

of ordering by vintage that is interesting, since this arises by

definition due to the figure’s focus on incumbents rather than

vintages, but rather the stability of the relative spacing between

lines that reflects a stability in the values of vintages over time.

The thickest decadal strip is for the firms of the 1920s. If the

market had overvalued these firms in 1929, the strip would

have gotten much thinner when divided by output, and this

evidently did not happen. Figure 1.4, which traces the value of

each vintage as a share of total stock market capitalization,

shows even more clearly that after the 1929 crash and into the

onset of the Great Depression, it was the pre-1910 vintages of

firms that permanently lost market share.

The stability of the vintages’ values shown in figures 1.3 and

1.4 suggests that organization capital depreciates slowly—so

Figure 1.3  Shares of market value retained by ten-year incumbent cohorts (ratio to GDP)


slowly that the imprints made by firms of various entering co-

horts seem to persist despite the entry of new firms and the

technologies that they carry into the stock market. Organiza-

tion capital is therefore not something that is necessarily em-

bodied in a particular technology or type of equipment, but

is rather a firm attribute that remains intact as other inputs to

the production process adjust. Perhaps firms that enter in the

midst of technological change are ones in which innovation

and entrepreneurship were not only encouraged but became

embedded in the quality of management and the corporate

culture generally. It is then easy to imagine that such firms

would be able to adjust their inputs and product mixes with

market conditions while maintaining their organization capital.

What does this cohort-specific stability imply for the IT co-

hort? The recent decline of Nasdaq-listed firms has dramati-

cally reduced the value that the 1990s entrants commanded in,

say, 1999. The old economy firms did not lose as much value

Figure 1.4  Shares of market value retained by ten-year incumbent cohorts


as did the new economy firms. In stark contrast, the crash of

1929 over the next several years affected the then-old vintages

more than the then-new ones; in other words, the then ‘‘old

economy’’ firms suffered more in the long run. In spite of these

differences between the aftermath of the 1929 crash and that of

Nasdaq, a lot of similarities between the IT and electrification

revolutions remain, and it is these similarities that we turn to

next.

1.5 Lessons from the Electrification Era

In this section, we show that the early entrants of the electri-

fication era were not the ones that ended up procuring the

largest market shares and that the diffusion of electricity was

much slower than we are currently seeing with IT. This sug-

gests that, despite the apparent similarities, it is important to be

cautious in directly comparing the two technological episodes

and extrapolating from the experience of electrification.

Paul David (1991) has claimed that the IT revolution looks a

lot like the electricity revolution did a hundred years ago, and

our data overall do support this claim. David argued that elec-

trification ushered in an era of fast productivity growth in

part because of the externalities associated with electrification.

Thus, it was not necessarily the firms that specifically invested

in electricity generation that reaped the benefit of electrifica-

tion, but rather the economy at large. David’s view is quite

consistent with evidence from the stock market valuations of

the leading firms of the era, which is our focus here. As we see

in what follows, this pattern repeats itself in the IT era. In spite

of the recent setbacks in the IT sector, experience so far sug-

gests that it is not necessarily the first users of a technology who

reap the greatest benefits. Can the same be said of electrifica-


tion? Perhaps so. After all, figure 1.1 shows that lasting value

was not really created until the 1920s. By then, if one considers

the opening of the hydroelectric dam at Niagara Falls in 1894

as the start, electrification had already been on the scene for a

quarter century. This suggests that the early entrants in the

electrification era (with the exceptions of GE and AT&T) were

not, generally speaking, the firms that exploited the new tech-

nology most effectively.

Figures 1.5 and 1.6 illustrate the slower diffusion of electric-

ity than computers. As figure 1.5 shows, factory electrification

started slowly at the turn of the twentieth century and did not

grow rapidly until after 1915, reaching its height only in the

late 1920s.9 In figure 1.6 we match up the spread of electricity

with that of personal computer use by consumers.10 Indeed,

electricity diffused more slowly than computers, but the paral-

lels between the penetration of home lighting and personal

computers that David emphasizes are also striking.11

Figure 1.5  Electrification of U.S. factories, 1899–1939


Why did electricity diffuse so slowly? In asking this question

we should remember that one hundred years ago, the financial

playing field favored the large, established firm much more

than it does today. The later rise of smaller firms may have

been due partly to changes in the law (such as the Sherman

Antitrust Act of 1890 and the transparency forced on the mar-

ket by the Securities Acts of 1933) but it probably stemmed

much more from a gradual but profound change in both tech-

nology and in the growth of expertise with which business is

financed.

The capital market was not nearly as deep in the 1920s as it

is today—some 50 percent of Americans own stock today,

whereas only 2 or 3 percent owned stocks in the 1920s, and

even less in the 1890s. Moreover, Wall Street’s financial ex-

pertise was concentrated in a few large banks. The market was

thus less well prepared to float shares of smaller firms, and the

Figure 1.6  The diffusion of electricity and personal computers among U.S. consumers


big bankers of the era as a rule shied away from new issues by

unknown companies. Navin and Sears (1955), for example,

discuss the formation of the industrial market in New York

around the turn of the century, and find that only large firms

and combines were usually able to capture the attention of

the nation’s early financiers. Nelson (1959) notes that only

19.6 percent of all consolidations during the turn-of-the-

century merger wave traded on the NYSE sometime in the

next three years. In addition, between 1897 and 1907 the total

value of cash issues to the general public ($392 million) was

only 11.6 percent of the value of securities that were ex-

changed for the assets and securities of other companies. It

appears, then, that the small company had a harder time a

century ago. We will see, however, that although the financial

market was probably less efficient a hundred years ago, it did

not prevent young firms from listing and, so, it cannot have

been the main reason why electrification did not spread faster

than it did.

Other factors, present a century ago but largely absent

today, played a role in slowing down the spread of electricity.

First, technological information did not spread as fast as it

does today. An indirect indicator is the spread of product in-

novations and the growth in the number of their producers.

Agarwal and Gort (1999) give evidence that a new product

diffuses through the economy much faster today than it would

have one hundred years ago, leading us to expect a more pro-

tracted playing out of events in the electricity era. Second,

the price of computing power is falling at a much faster

rate than the price of electricity did. Gates (1999, 118) pro-

vides evidence, similar to that in figure 1.6, that computers are

penetrating the household sector faster than other consumer

durables did early in the twentieth century. Third, the adoption


of electricity by factories seems to have gone through a peculiar

two-stage adoption process: Located to a large extent in New

England factory towns, textile firms around the turn of the

century readily adapted the new technology by using an elec-

tric motor rather than steam to drive the shafts that powered

looms, spinning machines and other equipment (see Devine

1983). This early and only partial adoption of electricity was

further delayed by lags in the distribution of the new power—

lags that made it more costly to electrify a new industrial plant

fully. It is only after 1915, when secondary motors begin to

receive widespread usage, that industrial listings take off on the

NYSE and outperform railroads. This is broadly similar to the

recent and more compressed pattern of decline, merger, and

gradual acceleration in IT-intensive industries since 1985, ex-

cept that the IT-intensive industries are the service industries,

not manufacturing.

1.6 Age of Incumbents

As Schumpeter emphasized, technological change destroys old

technologies and old businesses. New technologies and prod-

ucts are usually brought in by young companies and this

means that—with some delay—when a new technology comes

to market, an economy’s leading firms tend to get younger.

One signal, then, that a new technology has come on the scene

is a drop in the average age of the leading firms.

Figure 1.7 shows the average age of the largest firms whose

market value sums to 5 percent of GDP for each year since

1885 using both years since incorporation and years since ex-

change listing as measures of age.12 Some of the more promi-

nent entries and exits (denoted by an ‘‘X’’) to this elite group

are also labeled. The leading firms were getting older over the


first thirty years of our sample period and were largely rail-

roads, but manufacturing firms began to list rapidly on the

NYSE after 1914 as the use of electrified plants became wide-

spread. The Pullman Company, which manufactured railroad

cars and equipment until the 1980s, is a case in point, enter-

ing the 5 percent group in 1889 and remaining there until it

was replaced by GM in 1920. In fact, the average age of the

largest firms, based upon year of incorporation, dropped from

nearly fifty years to just under thirty years between 1914 and

1921.
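
A minimal sketch, not the authors' code, of the statistic plotted in figure 1.7: for each year, rank firms by market value, keep adding the largest ones until their combined value reaches 5 percent of GDP, and average the ages of the firms so selected, with age measured either from incorporation or from exchange listing. The function name and the toy numbers are hypothetical.

```python
# A minimal sketch, not the authors' code: average age of the largest firms
# whose combined market value reaches 5 percent of GDP (the figure 1.7
# statistic).  "age" can be years since incorporation or since listing.
from typing import List, Tuple

def avg_age_of_leaders(firms: List[Tuple[float, float]],
                       gdp: float, threshold: float = 0.05) -> float:
    """firms: (market_value, age) pairs for a single year."""
    target = threshold * gdp
    cum_value, ages = 0.0, []
    for value, age in sorted(firms, key=lambda f: f[0], reverse=True):
        ages.append(age)
        cum_value += value
        if cum_value >= target:
            break
    return sum(ages) / len(ages)

# Toy example: with GDP = 1000, the cutoff is a combined value of 50.
print(avg_age_of_leaders([(30.0, 40.0), (25.0, 10.0), (5.0, 60.0)],
                         gdp=1000.0))  # averages the two largest firms
```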

The two decades that followed the Great Depression saw

relatively few firms enter the stock market. Accordingly, the

largest firms, which in the vast majority of cases were able to

ride out the Depression, remained large. This is clear from the 45 degree slope of the average age lines in figure 1.7 between 1934 and 1954: with few entrants displacing them, the leaders’ average age rose one-for-one with calendar time. The leaders got younger in the 1990s, and

Figure 1.7 Average age of the largest firms whose market values sum to 5 percent of GDP


their average ages now lie well below the 45 degree line. We

attribute this shakeout to the computer and to the Internet.

A comparison of figure 1.7 with figure 1.1 reveals another

interesting fact—over the past 115 years, times when lasting

value was created correspond to periods when the market

leaders were replaced by younger firms. This is particularly

true of the 1920s and the 1990s. A widening of the gap be-

tween the market shares of the 1920s incumbents and those of

earlier incumbent cohorts over the course of the 1920s is also

apparent in figure 1.3, and offers further evidence of a reversal

of value from firms that existed at the start of the twentieth

century to those that entered in the 1920s.

We concluded earlier that the 1920s entrants held up pretty

well in the long run. Let us now consider the 1990s and the

IT industry more closely. Figure 1.8 shows the shares of total

market value that can be attributed to early IT entrants that

turned out to be the losers, and the later entrants that turned

out to be the winners. The losers include IBM, Burroughs/

Figure 1.8 Winners and losers in the IT industry


Unisys, Honeywell, NCR, Sperry-Rand, DEC, Data General,

Prime Computer, Scientific Data Systems, and Computer

Associates—all early providers of mainframe or minicomputer

products and services. The winners include Apple, Compaq,

Dell, Gateway, Informix, Microsoft, Novell, Oracle, People-

soft, AOL, Infoseek, Lycos, Netscape, and Yahoo—later pro-

viders of personal computers, software, and Internet services.

The early IT leaders produced and supported hardware that

was expensive to maintain and to use. Software for these

mainframes and minicomputers was for the most part home-

grown, either by a firm’s internal programmers or perhaps

with the assistance of the hardware provider. Migration of ap-

plications from older to newer computers was slow and prone

to error as programmers demonstrated considerable job mo-

bility and documentation for homegrown applications could

often be sparse. Many firms became ‘‘locked in’’ to their data

processing systems and were slow to change. The early leaders

were thus, in spite of the growing use of personal computers

in the mid-1980s, able to continue to service a variety of cus-

tomers and to maintain their market shares.

But firms did finally either change or disband. And when

they did, a second round of innovations, more sweeping than

the first, transformed the U.S. marketplace. Software became

more standardized, more easily customized, and easier to use.

Analysts had already solved most everyday business problems

(e.g., accounts payable, ordering, project planning) with appli-

cations during the first IT wave, and this combined expertise

led to new, generic software that could suit most businesses

directly off the shelf. The price of computers fell rapidly, as

did the demand for specialized programmers within the busi-

ness firm. The Internet provided new ways to advertise and sell

products. Firms that were able to adjust their organizations to


the second wave of IT began to phase out old systems and

hardware. Others, for which adjustment represented too large

a burden, exited. New firms, without the weight of older sys-

tems and workplace designs built around them, were able to

adopt the cheaper and better technology quickly. The older

IT providers, with their organization capital built around cus-

tomer dependence and reliable service, began to lose ground.

1.7 Age of Entrants

When considering table 1.1, we noted that some of today’s

larger firms were brought to market quickly both recently and

in the early part of the twentieth century, while firms that listed

in the middle of the century were considerably older. Is this too

a general characteristic of U.S. firms? Apparently so. Figure 1.9

shows that companies that first listed at the close of the nine-

teenth century were as young as the companies that are enter-

ing the NYSE, AMEX and Nasdaq today. The figure shows

Figure 1.9 Waiting times to exchange listing


average waiting times from founding and incorporation to ex-

change listing.13 While it is true that transactions costs were

lower at the beginning and end of the twentieth century than

they were in the middle (see Jones 2001), their absolute mag-

nitude and variation over time have been too small to account

for the decisions of so many firms in the middle part of the

century to delay their entry to the stock market.

The finance expert would attribute a rapid life cycle from founding to IPO to increasingly sophisticated financial markets, but the evidence in the data does not support

such a view. Firms took as long to list at the turn of the twen-

tieth century as they are taking today, and waiting times were

much longer in the 1940–1960 period. A part of this may be

the result of the Securities Act of 1933 that diverted some new

start-ups from the NYSE to the over-the-counter (OTC) mar-

ket where they could escape the more stringent listing require-

ments. This can explain only a part of the increase, however,

because the rise in age of listing firms is evident well before the

1929 crash and the 1933 act.

The debate continues on how much real effect the Securities

Act of 1933 did have—see Simon (1989)—but it seems safe to

conclude that neither legal changes nor financial regression can

explain the rise in listing ages. The natural candidate there-

fore seems to be the nature of the technologies that came along

during the three different epochs—early, middle, and late cen-

tury. As noted earlier, chemical and pharmaceutical firms were

the important entrants of the 1940–1960 period, and most had

existed for many decades prior to listing. Is it possible that the

need to be flexible is something especially true of these indus-

tries? In other words, does the midcentury listing pattern sug-

gest that it is not just the quality of the firms but the identity of

the sectors that determine how fast an idea can come to market?


1.8 Direct Technological Indicators

One indicator of innovative activity within a firm is the num-

ber of patents that it secures. Not all ideas that define a firm

are patented early in its life, but the level of patenting activity

in an economy is probably related to the number of new ideas

being generated there. It also reflects the entrepreneurial cli-

mate, since patents are often used to protect property rights to

products that have emerged from the research and develop-

ment (R&D) process, whether such R&D is recognized on a

company’s books or not. Moreover, it is the property rights of

the firm that define what the firm is about and what its orga-

nization capital will be built around.

Figure 1.10 shows the number of patents issued annually per million of population in the United States since 1885.14 This fig-

ure has a U shape, suggesting that the pace of innovation was

greater during times of rapid technological change, such as

the 1920s and the post-1985 period, while it was slower during

Figure 1.10 Patents per million in the population


the middle of the century, which was the age of the technology-

refining incumbent. This graph, though somewhat smoother

than the plot of market value by vintage in figure 1.1, has a

similar pattern after detrending. The rise over the past four

years has been remarkable, and Lerner and Kortum (1998)

argue that technological change has led to this surge.

Changes in patent legislation will affect the number of filings

and issues, and could account for some of the fluctuations in

figure 1.10. Nevertheless, changes in patent laws themselves

often arise due to technological change. For example, legisla-

tors may act to encourage innovation and competition by low-

ering fees and extending patent lengths when a new technology

is perceived as having the potential to transform industry even

though individual entrepreneurs are not yet ready to bear the

start-up costs. They might raise fees and shorten patent lengths

later in the technological cycle to offer protection to firms that

did bear the costs of bringing in a new technology. Either way,

patent laws are more likely to change during times of techno-

logical transformation.

When examining patent laws in a single country such as the

United States, it is often unclear whether changes are a result

of technology or some country-specific factor, such as a shift

in political leadership. Global patterns, however, can be more

plausibly linked to technological factors. Figure 1.11 presents

cross-country averages of changes in patent legislation at ten-

year intervals from 1850–1990 for as many as sixty countries

that were compiled by Josh Lerner (2001), and contrasts these

with the size of the U.S. stock market with respect to GDP.15

In the figure, a country with at least one change in patent

law in a given year counts once in the ‘‘policy reform index,’’

while multiple changes in a single year are all counted in

the measure of ‘‘distinct policy changes.’’ Lerner distinguishes


discretionary changes in government stance toward patent-

ing from changes associated with the establishment of a new

nation, a revolution or coup, or temporary measures during

times of war, and he excludes these more special cases from his

counts of policy changes. Both indexes are normalized by the

number of active countries in the sample at the beginning of

the decade to adjust for wide disparities in the country cover-

age over time.
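
As a concrete illustration of this bookkeeping, the two indexes can be built from a simple country-year count of legislative changes and a count of active sample countries. The sketch below is our own rendering, not Lerner's code; the input layout, column names, and decade aggregation are assumptions.

```python
# Illustrative construction of the two indexes plotted in figure 1.11,
# from a hypothetical table with columns country, year, n_changes.
import pandas as pd

def lerner_indexes(changes: pd.DataFrame, active_countries: pd.Series) -> pd.DataFrame:
    """active_countries: number of sample countries active at each decade's start."""
    per_country_year = changes.groupby(["country", "year"])["n_changes"].sum()
    yearly = pd.DataFrame({
        # a country with at least one change in a year counts once
        "policy_reform_index": (per_country_year > 0).groupby(level="year").sum(),
        # multiple changes in a single year are all counted
        "distinct_policy_changes": per_country_year.groupby(level="year").sum(),
    })
    decade = (yearly.index // 10) * 10
    by_decade = yearly.groupby(decade).sum()
    # normalize by the number of active countries at the beginning of the decade
    return by_decade.div(active_countries.reindex(by_decade.index), axis=0)
```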

The close relationship between patent policy changes and the

performance of the U.S. stock market is apparent in figure

1.11, with periods of policy reform often preceding increases

in the total value of the stock market. If Lerner’s indexes are

reasonable proxies for the state of technology, and we believe

that they are, the low-frequency correlation between the series

suggests that the stock market recognizes new technologies

quickly and values them accordingly. The lags that we observe

in the 1920s between patent law changes and market value

Figure 1.11 Worldwide changes in patent laws and U.S. stock market size


may just reflect changes in the ease with which new firms can

list, as today’s Nasdaq now stands ready to absorb innovative

firms.

In figure 1.12, we contrast Lerner’s cross-country measures

with the ratio of merger capital to stock market capitalization

in the United States from 1885 to 1998.16 Since we normalize

by stock market size in the figure, we include only mergers

in which both firms are listed in our extended CRSP data-

base. Despite this limitation, the five merger waves of the past

century all stand out, including that of the turn of the twentieth

century, the late 1920s, the late 1960s, the mid-1980s, and the

current wave that began around 1993. Like the size of the mar-

ket generally, increases in merger activity also occur at times

when changes in international patent laws occur frequently.

It is natural to think that mergers should be associated

with technology.17 Gort (1969), for example, argued that

Figure 1.12 Worldwide changes in patent laws and the ratio of merger to stock market value in the United States


technological change would raise the dispersion in how much

potential alternative owners would value a particular asset.

After the technological shock, the highest valuation of a firm’s

assets may shift to someone outside who then may try to

acquire that firm. A shock that was large enough could thus set

off a merger wave.18 The argument extends to any shock that

rearranges comparative managing advantage. Some firms will

react to the shock better than others. A firm that cannot adapt

will become a takeover target, or it may try to survive by

acquiring some other firm that does have the expertise needed

to cope in the new environment. The larger and wider ranging

the shock, the larger the resulting merger wave. Jovanovic and

Rousseau (2001c) formalize some of these themes in a model

of mergers as a reallocative mechanism that operates rapidly

during times of technological change. In the model, new tech-

nologies are carried in by entrants who are more efficient than

incumbent firms. These entrants combine with existing firms

who can adjust to the new technology to acquire the less effi-

cient and older firms. Acquisition occurs rather than exit because

mergers offer a means to acquire capital with at least part of its

organizational component intact. As a merger wave begins, the

demand for the capital of less efficient incumbents rises, caus-

ing their values to rise on the merger market, and encouraging

these firms to seek to be acquired rather than liquidated.

Figure 1.12 thus suggests that mergers are caused by factors

that transcend country-specific legal changes. It also appears

that merger waves have been quite synchronous in the few

countries where we have enough data to tell. McGowan’s

(1971) study of the United States, Canada, the United King-

dom, and France showed strong intercountry similarities in the

industries that experienced high merger activity. At the turn of

the twentieth century and in the 1960s both Great Britain and


the United States experienced bursts of merger activity (Nelson

1959), and in the 1960s so did Sweden, Canada, the Nether-

lands, and Japan (Singh 1975; Matsusaka 1996). Great Britain

and the United States both had merger waves in the 1980s

(Town 1992), and the merger wave of the 1990s affected many

advanced economies.

1.9 What Next? The Second Democratization of Knowledge

One difference, not yet discussed, between electricity and IT is

that, while both enable more outputs to be produced with the

same inputs, IT is probably much more valuable in the process

of invention. Computers are essential in the process of gather-

ing and disseminating the relevant information, in designing

complex new products, in simulating the outcomes of experi-

ments that are costly or time-consuming to perform, in coordi-

nating the research efforts of people who are often geographically

separated, in market research and identifying consumer wants,

and so on. We can, in other words, expect a faster stream of

new products than we saw following the mass adoption of

electricity. The surge in patenting during the last six years is an

indication of that.

But there are dissenting views. Looking largely at evidence

on the growth of productivity, Daniel Sichel (1997) and Robert

Gordon (2000) have suggested that the computer does not

measure up to the great inventions of the past. The debate will

go on, but, as we have argued (see Jovanovic and Rousseau

2002), nothing comparable to Moore’s Law has been seen in

any of the great technologies of the past, and, given that the

spread of the computer shows little sign of slowing down and

given that computer scientists expect Moore’s Law to continue

for at least another twenty years (Meindl, Chen, and Davis


2001), the long-run impact of the computer and Internet will,

we believe, far outstrip that of, say, the internal combustion

engine.

We also can expect further declines in the cost of computing

power and in software, components that, in spite of their fall-

ing cost, are absorbing an ever increasing share of U.S. firms’

investments. It is only a matter of time before world investment

follows suit, and when it does, computers and software will

be a real bargain even compared to today. Caselli and Cole-

man (2000, Table A.2) find that at the world level, the demand

for computers has an income elasticity of about two. As the

world’s incomes rise, we can expect a vast number of new

computers to be sold, and, through a process of learning by

doing, we can expect the costs of computing and information

management and dissemination to decline even more dramati-

cally. At least in the semiconductor industry, we know that

learning is essentially global; Irwin and Klenow (1994) have

found that learning spills over just as much between firms in

different countries as between firms within a given country.

They estimated that a doubling of cumulative output reduces

costs by 20 percent.
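
For concreteness, here is a back-of-the-envelope rendering of the two estimates just cited. The elasticity of two and the 20 percent learning rate come from the papers mentioned above; the functional forms and the numbers plugged in are purely illustrative.

```python
# Illustrative arithmetic for the demand-elasticity and learning-by-doing
# estimates cited in the text; everything beyond those two numbers is assumed.
import math

income_elasticity = 2.0   # world demand for computers (Caselli and Coleman)
learning_rate = 0.20      # cost drop per doubling of cumulative output (Irwin and Klenow)

def demand_growth(income_growth: float) -> float:
    """Proportional growth in computer demand for a given income growth."""
    return (1 + income_growth) ** income_elasticity - 1

def unit_cost(cumulative_output: float, c0: float = 1.0, q0: float = 1.0) -> float:
    """Learning curve: each doubling of cumulative output cuts unit cost by 20 percent."""
    b = math.log(1 - learning_rate, 2)   # elasticity of cost w.r.t. cumulative output
    return c0 * (cumulative_output / q0) ** b

print(demand_growth(0.10))   # ~0.21: a 10 percent rise in income -> ~21 percent more demand
print(unit_cost(8.0))        # three doublings -> cost falls to 0.8**3 = 0.512 of c0
```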

The availability of cheap computers, better software, and

faster Internet access does not eliminate or even reduce the

need for education in schools and colleges. The world will still

need to provide the other complementary resources before it can

take full advantage of information technology, and those other

resources—mainly human capital—will not become cheaper as

rapidly as computers will. Nevertheless, by eliminating many

of the diffusion lags that stem from informational barriers, the

computer and the Internet afford us the opportunity to do more

effective and faster research closer to the knowledge frontier

and to adopt frontier technologies much faster than before. In


a narrow sense, the speed of sharing information via the Inter-

net may seem no bigger a productive leap than the telephone,

the telegraph, mail by internal combustion engine and air (or

even fax), but in the long run it will probably draw worldwide

thinking together in a way comparable only to the printing

press back in the fifteenth century that made scribal copying

obsolete and gave access to written knowledge to many more

than the handful of monks and aristocrats who could access it

previously. This was the first democratization of knowledge,

and it had profound effects on human development. As with

the IT revolution, the scope of the printing press was limited by

human capital—that is, by the ability of people to read. But its

scope quickly widened from Germany to England and else-

where, and the printing press thus allowed science to grow

and spread faster and farther, and it provided the technol-

ogies for the Industrial Revolution of the eighteenth century

and beyond.

Notes

The authors thank the NSF for financial help.

1. It is of course important to avoid attributing the current wave of globalization solely to technological factors since technological regress did not cause the reversal of the globalization trend that occurred early in the twentieth century.

2. We extended the CRSP stock files backward from their 1925 starting year by collecting year-end observations from 1885 to 1925 for all common stocks traded on the NYSE. Prices and par values are from The Commercial and Financial Chronicle, which is also the source of firm-level data for the price indexes reported in the Cowles Commission’s Common Stock Price Indexes (1939). We obtained firm book capitalizations from Bradstreet’s, The New York Times, and The Annalist. The resulting dataset includes 21,516 firms, and is described in detail in Jovanovic and Rousseau 2001a. The companies included in table 1.1 were chosen subjectively based on their being


large and well known, and, not least, because the information we sought on them was available. The designation of a particular event as a ‘‘1st Product or Process Innovation’’ is based upon our reading of the company history, and in some cases represents difficult choices about which reasonable individuals could easily disagree.

3. AMEX firms enter CRSP in 1962 and Nasdaq firms in 1972. Since Nasdaq firms traded over the counter before 1972 and AMEX’s predecessor (the New York Curb Exchange) dates back to at least 1908, we adjust the entering capital in 1962 and 1972 by reassigning most of it to an approximation of the ‘‘true’’ entry years. We do this by using various issues of Standard and Poor’s Stock Reports and Stock Market Encyclopedia to obtain incorporation years for 117 of the 274 surviving Nasdaq firms that entered CRSP in 1972 and for 907 of the 5,213 firms that entered Nasdaq after 1972. We then use the sample distribution of differences between incorporation and listing years of the post-1972 entrants to assign the 1972 firms into proper initial public offering (IPO) years. See Jovanovic and Rousseau 2001a for a more detailed description of these adjustments.
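
A stylized version of this reassignment might look as follows; the variable names are ours, and drawing randomly from the post-1972 gap distribution is just one way of carrying out the assignment described in the note.

```python
# Hypothetical sketch of assigning 1972 Nasdaq entrants to earlier IPO years
# using the observed listing-minus-incorporation gaps of post-1972 entrants.
import numpy as np

def impute_ipo_years(incorporation_years, post1972_gaps, seed=0):
    """incorporation_years: incorporation years of firms entering CRSP in 1972;
    post1972_gaps: listing-minus-incorporation gaps observed among later entrants."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(post1972_gaps, size=len(incorporation_years), replace=True)
    return [year + int(gap) for year, gap in zip(incorporation_years, draws)]
```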

4. The cumulative investment series is private domestic investment from Kendrick 1961, Table A-IIa for 1885–1953, joined with estimates for more recent years from the National Income and Product Accounts. We construct the series by inflating the annual investment series to represent 1998 dollars, summing across the years, and then assigning each year its percentage of the total.
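
The construction amounts to a two-line calculation once the series are in hand. A sketch, with placeholder series names rather than the Kendrick and NIPA data themselves, is:

```python
# Sketch of the cumulative-investment shares described in this note.
import pandas as pd

def investment_shares(nominal_investment: pd.Series, price_index_1998: pd.Series) -> pd.Series:
    """Both series are indexed by year; price_index_1998 equals 1.0 in 1998."""
    real_1998 = nominal_investment / price_index_1998   # restate in 1998 dollars
    return real_1998 / real_1998.sum()                  # each year's share of the total
```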

5. In terms of 1998 market value, the 1920s entrants as a group account for 9.2 percent, while the entrants of the 1950s and 1960s account for 5.4 percent and 15.8 percent, respectively. Our emphasis, however, is not so much on the contributions of these cohorts to 1998 value as on the gap between market shares and the shares of cumulative real investment that can be attributed to these decades. Using the ratio of the areas under the solid and dashed lines in figure 1.1 as an estimate of the relative size of this gap, we find that the ratio of 2.75 in the 1920s far exceeds the ratios of 0.80 and 1.51 that correspond to the 1950s and 1960s. Indeed, the ratio in the 1920s exceeds that of any other decade in our sample.

6. The merger adjustment uses several sources. CRSP itself identifies 7,455 firms that exited the database by merger between 1926 and 1998 but links only 3,488 (46.8%) of them to acquirers. Our examination of the 2000 edition of Financial Information Inc.’s Annual Guide to Stocks: Directory of Obsolete Securities and every issue of


Predicasts Inc.’s F&S Index of Corporate Change between 1969 and 1989 uncovered the acquirers for 3,646 (91.9%) of these unlinked mergers, 1,803 of which turned out to be CRSP firms. We also recorded all mergers from 1895 to 1930 in the manufacturing and mining sectors from the original worksheets underlying Nelson (1959) and collected information on mergers from 1885 to 1894 from the financial news section of weekly issues of The Commercial and Financial Chronicle. We then recursively traced backward the merger history of every 1998 CRSP survivor and its targets, apportioning the 1998 capital of the survivor to its own entry year and those of its merger partners using the share of combined market value attributable to each in the year immediately preceding the merger. The process of adjusting figure 1.1 ended up involving 5,422 mergers.
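
The backward tracing described here is in effect a recursion over the merger tree of each 1998 survivor. The sketch below is a stylized rendering of that idea, with hypothetical data structures standing in for the CRSP-based records; it unwinds mergers from the most recent one backward, which is one reasonable way to implement the apportionment.

```python
# Stylized sketch of apportioning a surviving firm's 1998 capital across the
# entry years of the firm and of the targets it absorbed. Data structures are hypothetical.
from collections import defaultdict

def apportion(firm, capital, entry_year, mergers, premerger_value, shares):
    """mergers: firm -> list of (target, merger_year);
    premerger_value: (firm, merger_year) -> market value in the year before the merger;
    shares: entry_year -> apportioned capital (accumulated in place)."""
    for target, year in sorted(mergers.get(firm, []), key=lambda m: m[1], reverse=True):
        total = premerger_value[(firm, year)] + premerger_value[(target, year)]
        target_share = capital * premerger_value[(target, year)] / total
        capital -= target_share
        # the target may itself be the survivor of still earlier mergers
        apportion(target, target_share, entry_year, mergers, premerger_value, shares)
    shares[entry_year[firm]] += capital

# usage: shares = defaultdict(float), then call apportion once per 1998 survivor.
```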

7. An analysis of mergers in the manufacturing and mining sectors in the 1920s, however, suggests that capital brought into the market by entering firms shortly after a merger cannot account for very much of the entry in figure 1.1. We reached this conclusion after examining all 2,701 mergers recorded for the 1920s in the worksheets underlying Nelson 1959. Many mergers involved a single acquirer procuring multiple targets in the course of consolidation. We included the value of acquirers that entered the NYSE anytime in the next two years and remained listed in 1998 as part of value brought into the market via a 1920s merger. We also checked delisted 1920s acquirers to determine if they were predecessors (through a later acquisition or sequence of acquisitions) to a CRSP firm that was listed in 1998, and treated these mergers similarly. The percentages obtained by dividing the 1998 value of all entering postmerger capital by the 1998 capital implied by the solid line in figure 1.1 for each year of the 1920s were 6.81 in 1920, 0.53 in 1921, 0.67 in 1922, 1.77 in 1923, 0.02 in 1924, 1.91 in 1925, 7.32 in 1926, 2.07 in 1927, 5.95 in 1928, 0.41 in 1929, and 1.59 in 1930. Since the method attributes all entering capital to the merger targets even though much of it probably resided with the acquiring firm prior to merger and some may reflect postmerger appreciation of market value, these figures are likely to overstate the actual amounts of entering capital associated with mergers. This was necessary because we have no record of the value of unlisted targets prior to merger and the subsequent entry of the acquirers.

8. We obtain business debt for 1945–2000 from the Federal Reserve Board’s Flow of Funds Accounts as the sum of corporate bonds and bank loans (1999, Table L.4, lines 5 and 6). We join these totals with


those for the book value of outstanding corporate bonds from Hickman (1952) for 1885–1944, splicing his series for railroad bonds (1885–1899) with his series for all corporate bonds, which begins in 1900. Commercial and industrial bank loans for 1939–1944 are from the Federal Reserve Board’s All-Bank Statistics and are joined with all non–real estate, noncollateral loans for 1896–1938. We then join this result with total loans from the U.S. Bureau of the Census’s (1975) Historical Statistics of the United States (series X582). The figures from All-Bank Statistics and Historical Statistics are for dates closest to June 30, and so we average them across years to be consistent with the calendar-year basis of the Flow of Funds.

We convert the book valuations of debt into market values using the annual average of monthly yields on AAA-rated corporate bonds from Moody’s Investment Service for 1919–2000 and Hickman’s ‘‘high grade’’ bond yields, which line up with Moody’s precisely, for 1900–1918. Yields on ‘‘high-grade industrial bonds’’ from Friedman and Schwartz 1982, Table 2.8, are used for 1885–1899.

To determine the market value, we let r_t be the bond interest rate and then compute

\[ r^{*}_{t} = \frac{1}{\sum_{i=1885}^{t}(1-d)^{t-i}} \sum_{i=1885}^{t}(1-d)^{t-i}\, r_{i}. \]

Therefore r*_t is a weighted average of past interest rates. We then choose a d of 10 percent to approximate the growth of new debt plus retirements of old debt. Finally, we multiply the book value of outstanding debt by the ratio r*_t / r_t to obtain its market value.

9. We obtain summary data on the diffusion of electricity and power equipment in factories from the U.S. Bureau of the Census (1940), Table 1, p. 275.

10. Data on the spread of electricity use by consumers are approximations derived from Historical Statistics (series S108 and S120). Statistics on computer ownership are from Gates (1999), p. 118, with the 2003 projection from Forrester Research, Inc.

11. By setting 1975 as the starting date for IT, we adopt the advent of the microprocessor as the key event rather than the earlier mainframe computer that ‘‘arrived’’ in 1952 with the tabulation of results for that year’s U.S. presidential election. Greenwood and Jovanovic (1999) and Hobijn and Jovanovic (2001) make the case for the microprocessor more strongly.


12. Listing years are those for which firms enter our extended CRSP database. Incorporation dates are from Moody’s Industrial Manual (1920, 1928, 1955, 1980), Standard and Poor’s Stock Market Encyclopedia (1981, 1988, 2000), and various issues of S&P’s Stock Reports.

13. We applied the Hodrick-Prescott filter to all three series before plotting them. The data set that we used to compute waiting times is described further in Jovanovic and Rousseau 2001b.
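
For readers who want to reproduce the smoothing, the Hodrick-Prescott filter is available in standard statistical libraries; the snippet below uses the statsmodels implementation, with a smoothing parameter chosen by us for annual data rather than taken from the authors.

```python
# Illustrative HP smoothing of an annual series, as mentioned in this note.
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def hp_trend(series: pd.Series, lamb: float = 100.0) -> pd.Series:
    """Return the trend component; lambda = 100 is a common choice for annual data."""
    cycle, trend = hpfilter(series, lamb=lamb)
    return trend
```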

14. Data on the number of patents issued are from the U.S. Patent and Trademark Office for 1963–2000 and from Historical Statistics (U.S. Bureau of the Census 1975, 957–958) for earlier years.

15. Lerner determines the number of changes in patent policy in a given year by examining patent office documents and legal monographs that involved patent policy. His sample consists of the sixty countries with the highest total GDP in 1997. He counts patent fee changes as policy reforms only when they rise by more than 100 percent or fall by more than 50 percent in an attempt to eliminate changes in fees with little real effect that were brought about by periods of moderate to high inflation. See Lerner 2001 for complete documentation of this new and informative dataset.

16. We include in figure 1.12 the market values of firms in our extended CRSP database, both acquirers and targets, at the end of the year before the merger. This restricts the merger series to include NYSE-listed firms from 1885, with the additions of AMEX-listed firms from 1962 and Nasdaq firms from 1971. We apply the corrections to the CRSP files described in n. 4 to reflect all merger activity prior to computing the totals.

17. Some have argued that the merger wave of the 1960s was driven by the tax system.

18. The technological basis for mergers is reinforced by sectoral evidence in Gort 1962 that indicates a strong and positive correlation across sectors between merger activity and the ratio of technical personnel to total employees.

References

Agarwal, R., and M. Gort. 1999. ‘‘First mover advantage and the speed of competitive entry: 1887–1986.’’ Working paper, SUNY Buffalo.


All-Bank Statistics, United States. 1959. Washington, DC: Board of Governors of the Federal Reserve System.

The Annalist: A Magazine of Finance, Commerce, and Economics. 1913–1925. Various issues.

Annual Guide to Stocks: Directory of Obsolete Securities. 2000. Jersey City, NJ: Financial Information Inc.

Bradstreet’s. 1885–1925. Various issues.

Caselli, F., and W. J. Coleman. 2001. ‘‘Cross-country technology diffusion: The case of computers.’’ American Economic Review 91(2): 328–335.

The Commercial and Financial Chronicle. 1885–1925. Various issues.

Cowles, A., and Associates. 1939. Common Stock Price Indexes, Cowles Commission for Research in Economics Monograph No. 3. 2d ed. Bloomington, IN: Principia Press.

CRSP Database. 2000. Chicago: University of Chicago Center for Research on Securities Prices.

David, P. 1991. ‘‘Computer and dynamo: The modern productivity paradox in a not-too-distant mirror.’’ In Technology and Productivity: The Challenge for Economic Policy, 315–347. Paris: OECD.

Devine, W. D. 1983. ‘‘From shafts to wires: Historical perspective on electrification.’’ Journal of Economic History 43(2): 347–372.

Flow of Funds Accounts. 1999. Washington, DC: Board of Governors of the Federal Reserve System.

Friedman, M., and A. J. Schwartz. 1982. Monetary Trends in the United States and the United Kingdom. Chicago: University of Chicago Press.

Gates, B. 1999. Business @ the Speed of Thought. New York: Warner Books.

Gordon, R. J. 2000. ‘‘Does the ‘New Economy’ measure up to the great inventions of the past?’’ Journal of Economic Perspectives 14(4): 49–74.

Gort, M. 1962. Diversification and Integration in American Industry. Princeton, NJ: Princeton University Press.

Gort, M. 1969. ‘‘An economic disturbance theory of mergers.’’ Quarterly Journal of Economics 94: 624–642.

Greenwood, J., and B. Jovanovic. 1999. ‘‘The IT revolution and the stock market.’’ American Economic Review 89(2): 116–122.


Griliches, Z. 1957. ‘‘Hybrid corn: An exploration in the economics of technological change.’’ Econometrica 25(4): 501–522.

Hickman, W. B. 1952. ‘‘Trends and cycles in corporate bond financing.’’ Occasional Paper No. 37. New York: National Bureau of Economic Research.

Hobijn, B., and B. Jovanovic. 2001. ‘‘The IT revolution and the stock market: Evidence.’’ American Economic Review 91(5): 1203–1220.

Hoover’s Online: The Business Network. 2000. Austin, TX: Hoover’s, Inc.

Irwin, D. A., and P. J. Klenow. 1994. ‘‘Learning-by-doing spillovers in the semiconductor industry.’’ Journal of Political Economy 102(6): 1200–1227.

Jones, C. M. 2001. ‘‘A century of stock market liquidity and trading costs.’’ Working paper, Columbia University.

Jovanovic, B., and P. L. Rousseau. 2001a. ‘‘Vintage organization capital.’’ NBER Working Paper No. 8166, March.

Jovanovic, B., and P. L. Rousseau. 2001b. ‘‘Why wait? A century of life before IPO.’’ American Economic Review 91(2): 336–341.

Jovanovic, B., and P. L. Rousseau. 2001c. ‘‘Mergers as reallocation.’’ Working paper, University of Chicago and Vanderbilt University.

Jovanovic, B., and P. L. Rousseau. 2002. ‘‘Moore’s law and learning-by-doing.’’ Review of Economic Dynamics 5(2): 346–375.

Kelley, E. M. 1954. The Business Founding Date Directory. Scarsdale, NY: Morgan and Morgan.

Kendrick, J. 1961. Productivity Trends in the United States. Princeton, NJ: Princeton University Press.

Lerner, J. 2001. ‘‘150 years of patent protection.’’ NBER Working Paper No. 7478, January.

Lerner, J., and S. Kortum. 1998. ‘‘Stronger protection or technological revolution: What is behind the recent surge in patenting?’’ Carnegie-Rochester Conference Series on Public Policy 48: 247–304.

Matsusaka, J. 1996. ‘‘Did tough antitrust enforcement cause the diversification of American corporations?’’ Journal of Financial and Quantitative Analysis 31: 283–294.

McGowan, J. 1971. ‘‘International comparisons of merger activity.’’ Journal of Law and Economics 14(1): 233–250.


Meindl, J. D., Q. Chen, and J. Davis. 2001. ‘‘Limits on silicon nanoelectronics for terascale integration.’’ Science (September 14): 2044–2049.

Moody’s Industrial Manual. 1920, 1928, 1955, 1980. New York: Moody’s Investors Service.

Navin, T. R., and M. V. Sears. 1955. ‘‘The rise of a market for industrial securities, 1887–1902.’’ Business History Review 30(2): 105–138.

Nelson, R. L. 1959. Merger Movements in American Industry, 1895–1956. Princeton, NJ: Princeton University Press.

The New York Times. 1897–1928. Various issues.

Predicasts F&S Index of Corporate Change. 1969–1992. Cleveland, OH: Predicasts Inc.

Sichel, D. E. 1997. The Computer Revolution. Washington, DC: Brookings Institution.

Simon, C. J. 1989. ‘‘The effect of the 1933 Securities Act on investor information and the performance of new issues.’’ American Economic Review 79(3): 295–318.

Singh, A. 1975. ‘‘Take-overs, economic natural selection, and the theory of the firm: Evidence from the postwar United Kingdom experience.’’ Economic Journal 85: 497–515.

Stock Market Encyclopedia. 1981, 1988, 2000. New York: Standard and Poor’s Corporation.

Stock Reports. 1981, 1988, 2000. New York: Standard and Poor’s Corporation.

Town, R. 1992. ‘‘Merger waves and the structure of merger and acquisition time series.’’ Journal of Applied Econometrics 7, Issue Suppl.: Special Issue on Nonlinear Dynamics and Econometrics (December): S83–S100.

U.S. Bureau of the Census, Department of Commerce. 1975. Historical Statistics of the United States, Colonial Times to 1970. Washington, DC: Government Printing Office.

U.S. Bureau of the Census. 1940. Census of Manufactures for 1939. Washington, DC: Government Printing Office.


2 The Value of Competitive Innovation

and U.S. Policy toward the Computer

Industry

Timothy F. Bresnahan and Franco Malerba

2.1 Introduction

The United States has maintained a position of international

leadership in the computer industry during the last fifty years

despite considerable change in markets and technologies. The

firms, entry conditions, and firm structures that supported U.S.

success in the IBM era bear little resemblance to those of the

Silicon Valley era. Persistent U.S. international leadership poses

a challenge to economic analysis: Is it just a coincidence that

the United States led in two such different industrial contexts?

Or are the two industrial contexts simply much more similar

than they appear, so there is no change? In this chapter, we

provide an analysis that explains both the changes in markets

and technologies and the persistence of U.S. international

leadership. We take up two related themes about the ongoing

international success of the computer industry in the United

States and its ongoing ability to supply new technologies to

support economic growth: (1) the factors at the base of the

concentration of rent generation in a single country and their

persistency over time, and (2) the institutions and public policy

forces contributing to this concentration in a country. Both


themes cover a history of some fifty years, leading up to the

conflict and policy questions of today.

First, we ask what industry forces have led to the concen-

trated location of rent-generating supply1 in this industry in a

single country, and what forces have selected the United States

for persistent success. The concentration question has a rea-

sonably direct answer arising from the application of industrial

organization methods to international trade. Readily identifi-

able strategic and technical forces lead to an equilibrium in-

dustry structure in which, for many important technologies

in the industry, there is a high level of concentration. This is

especially true of the parts of the industry where invention

and technical progress are sources of private and social rents.

Ongoing technical and market progress over many decades

means that those forces have not waned.

Explaining the persistence of producer rents in one country

is far more difficult. There has been dramatic change in the

economic and technological basis for the rent-generating parts

of the industry. To be sure, the industry has periods within

which a firm or a technology persists in a leading position be-

cause of first-mover advantages related to lock-in and network

effects. Over the longer haul, however, those positions have

often been eroded and replaced. The market and technical

positions that led to early U.S. success have been eclipsed, and

the firms leading the industry in rent generation have changed

several times, not only in name, but also in fundamental orga-

nizational structure, technical competence, and marketing capa-

bility. Persistence in the United States over the long haul is

not explained by the ongoing success of any particular national

champion firm or technology but rather by the replacement of

one by the next. This is closely related to the industry’s ability


to bring forth new technologies that support new applications

of computing, a growth pole for the world.

Second, we also address the related question of national

forces outside the industry that contribute to the location of the

rent generating sectors or to their persistence. We include here

a range of national institutions, such as scientific and engi-

neering development in universities, creation of high-tech labor

forces, and so on, but focus particularly on the role of public

policy. The role of institutions and public policy has been sup-

portive rather than directive or determinative of private-sector

efforts within the industry itself. Critically, institutions and

policies have not been aimed at preserving the rents of the in-

dustry from one period to the next. Instead, they have been

focused on supporting the creation and market selection of

new capabilities. Public policy has avoided the mistake, wide-

spread among the rich countries in connection with this industry,

of protectionist national champion policies. These slow the loss

of position in one era but do not encourage winning of a new

position in the next one. Second, U.S. institutions and policies

accommodate the market forces behind long-run change, thus

linking U.S. producer rents to the best prospects for the future

rather than the past.

This long-standing policy stance of the United States is little

understood, so that when it makes the headlines, as it has in

connection with the Microsoft antitrust case (U.S. v. Microsoft

and State of New York et al. v. Microsoft), it sets off a new

round of debate over whether the United States should become

protectionist of existing producer rents. In fact, the U.S. gov-

ernment and existing national champion Microsoft are in

conflict in an antitrust suit. The government does not seek

to protect existing rents but instead to protect potential


competition based in new technologies that might disturb the

status quo.2 This is a continuation of the long-standing policy

of enabling market choice of new rents rather than protecting

old ones, a policy that has led to ongoing improvements in the

technical and market basis of computing with substantial

social benefits, and incidentally to the continued location of the

producer rents in the United States.

In this chapter we examine the forces leading to concentra-

tion and persistence of supplier rents at two time scales. One is

within particular technological eras and within particular in-

dustry segments, such as the time period in which the most

important computers were mainframe computers. For this time

scale, analysis based on the new trade theory works very well.

Our other time scale is long enough to capture the foundation

of new segments, such as the personal computer segment, and

transitions in the industry, such as the emergence of compe-

titors against IBM based on new technologies. At this longer

time scale, we need an entirely different body of theory to ex-

plain producer rents persistently concentrated in the United

States.

2.2 Short- and Long-Scale History: Persistence across

Distinct Technological Eras

In this section, we examine the forces that have led to the

concentration and persistence of the rent-generating parts of

the computer industry as it has transitioned through a number

of distinct eras: mainframe, minicomputer, PC, supermini and

client-server computing, and the Internet. Within each of these

eras, we illustrate the forces supporting the ongoing creation of

social rents and persistence of the location and success of in-

dustry, involving the improvement of existing technical, mar-


keting, and industrial organization capabilities. Since there are

powerful forces for national persistence within each era, the

persistence evident in the long time-scale history arises in the

forces behind industry location at the founding of each era.

Thus, we provide a short analysis of each of those era-founding

moments and of the related periods of transition between one

era and its replacement.

2.2.1 Persistence of Leadership in Business Data Processing;

Mainframes and IBM’s Leadership

Mainframe computers are systems used for large departmen-

tal or company-wide applications. The demanders are pro-

fessionalized computer specialists in large organizations. They

have close bilateral working relationships with suppliers. In

the industrialized countries, many of the sites doing this kind

of computing have been in operation for decades. A process

of learning by using, plus ever cheaper large computers, has

led to valuable applications and a steadily rising demand curve.

These sites have absorbed—and paid for—dramatic increases

in computer power. While mainframe computing sites number

only in the tens of thousands, their total market demand has

been on the order of billions of dollars over several decades.3

Mainframes are produced by vertically integrated firms. IBM

is the largest producer, active in the development, manufactur-

ing, marketing and distribution of its systems, and producing

most of the components in-house. Market success was related

to major and continuous R&D efforts, to effective marketing,

and to the close integration of technology, marketing and

management. One element of IBM’s strategy was particularly

important. This was the development, in 1964, of the computer

platform, and the related technological concept of compati-

bility standards and modular (interchangeable) components.4


IBM controlled and coordinated system development, even in

the presence of rivalry from the producers of some modular

components, because it could control key interfaces. Other

firms could sell hardware or software add-on products com-

patible with IBM systems, but only if they used interfaces

defined by IBM. Compatibility across products and over sub-

sequent product families allowed the persistence of existing

standards and lock-in of the existing customer base. IBM’s

long-standing dominant position in the mainframe market was

heavily reinforced by positive feedback forces associated with

the investments by other firms, by suppliers, and by customers

in IBM platforms.

Technologies, firms’ capabilities, strategies and organization,

customers’ needs, and market structure were strikingly IBM-

centric. Competitors, customers, and even national govern-

ments defined their computer strategies in relationship to IBM.

For decades, IBM was the manager of both the cumulative and

the disruptive/radical parts of technical change. When an es-

tablished technology aged, IBM was not only its owner but

also the innovator of the new, a process by which some of the

sunk costs of the industry were destroyed by being replaced.5

But other sunk costs—such as the interfaces and compatibility

standards at the heart of IBM’s product lines, IBM’s invest-

ments in customer relationships, and customer’s investments in

technical and marketing relationships with IBM—were pre-

served. As a result, IBM and U.S. rents persisted until the

1990s.6

The concentration of the mainframe segment and the per-

sistent leading position of IBM—and the United States—are

attractive places to use new trade theory arguments.7 There

were very substantial scale economies at the firm level, not only


technical, but also Chandlerian ones surrounding joint invest-

ments in management, marketing, and technology. Further-

more, the nature of compatibility and platforms meant that

there were social scale economies as well. The social scale eco-

nomies, especially, were associated with sunk costs by buyers.

These forces are powerful reasons, as modern theory makes

clear, for concentration and persistence.8 Even as the market

segment grew dramatically in size, the scale economies con-

tinued to be large relative to demand and were appropriated at

the firm level.9 Any equilibrium theory of industry structure

will predict such a result, though predictions about which firm

(or even which kind of firm) earns rents may well depend on

delicate and hard-to-observe strategic opportunities.

Thus, the international allocation of producer rents will in-

herit the structure of the underlying industry equilibrium. The

rents flow to one country, the one containing the rent-earning

firm.

There is no connection between this outcome and any affir-

mative strategic trade or industrial policy. Throughout the

period of IBM’s dominance, the United States opposed the

dominant status of its own ‘‘national champion.’’ The height of

this opposition came in the long antitrust case, U.S. v. IBM,

with a number of arguments, including those contending that

IBM’s vertically integrated structure prevented competition in

component markets. The case ultimately was dropped by the

government.10

Non-U.S. national governments protected their domestic

producers against IBM, with the hope of building an industry

that would earn rents. This met with no success under Euro-

pean ‘‘national champions’’ policies and modest success under

Japanese managed competition policies.11 Japanese firms such


as Hitachi sold IBM-compatible hardware in the unbundled

regime. There was, however, no serious direct challenge to

IBM’s standard-setting position in mainframes either at home

or abroad.

2.2.2 Original Location of the Industry: The Founding of

the Computer Industry

With increasing returns to scale as strong as those in main-

frame computing, the underlying industry equilibrium is inde-

terminate with regards to which among several firms will

dominate. The international allocation of producer rents is

indeterminate. As a matter of pure logic, this raises the possi-

bility of governments engaging in strategic trade policy to steer

the producer rents to their countries. Given the persistence of

leadership positions, the same logic suggests that governments

will (or should, in the more mercantilist variants of the theory)

engage in strategic trade policy activities at the beginning of an

era, when the market allocation is being determined.12

That theoretical logic, however, bears little resemblance to

the forces and events determining the international allocation

of rents in the period leading up to the establishment of IBM’s

position of dominance (roughly from late in World War II to

the mid-1950s). That calls for a very different view of planned

or unplanned outcomes of government action.

To be sure, it was not predetermined that the producer rents

to the early computer business would go to the United States.

Many of the early computer companies were founded by en-

trepreneurs from universities, and during the 1940s and early

1950s universities in the United Kingdom and France as well

as the United States did advanced research and built early com-

puter prototypes. Additionally, European firms such as Sie-

mens, Bull, Olivetti, BTM, Telefunken, and Zuse had computer


projects, some with a heavy commitment to R&D and others

with strong connections to business customers. A similar list

emerged in the United States, drawn both from existing elec-

tronics firms and entrepreneurial startups. Both technical and

market capabilities were built on both sides of the Atlantic.13

There were powerful reasons why the equilibrium would

flow to a U.S. firm. It was a country with a large demand curve

for computers and, for national defense reasons, a steep one.

The various U.S. defense department agencies funding much

computer research, and buying much in the way of early com-

puting, were quite nationalistic. Finally, right after the end of

World War II, Japan was far from technically advanced, and Europe was more oriented to rebuilding existing areas of

strength than to building in a new one.

All these differences do little to help understand the actual

sources of U.S. success, which occurred far more at the level of

the firm than the country. An English IBM, for example, could

easily have emerged and won.14 Explaining our certainty about

that counterfactual involves delving a bit deeper into the rea-

sons for IBM’s success and the limited role of its U.S. location.

In the late 1940s and early 1950s, there was considerable

uncertainty about the technical features of computers, their

highest-value uses, and the appropriate structure for a com-

puter company. A number of different computer companies,

in a number of different countries, made very distinct choices

about technology, market, and structure. IBM emerged from

this early competitive epoch to dominate supply, in the pro-

cess determining the technologies needed for computing, the

marketing capabilities needed to make computers commer-

cially useful, and the management structures that could link

technology and its use. The United States was, in the ensuing

era, the dominant country in the computer business because it

contained the dominant firm, IBM.


Much of what is ex post obvious about the mainframe seg-

ment was ex ante difficult to foresee.15 In the late 1940s the

obvious application of the computer was for rapid calculation

for scientific or military purposes. Forecasts of the future of

the computer as a business data processing machine were far

vaguer. With so much uncertainty, there was considerable op-

portunity for experimentation and error. The firms competing

for market leadership ranged from those with strong elec-

tronics technical capabilities (some of these were startups) to

those with existing market connections to business customers.

By far the most common experiments, however, were based

on the view that the computer would be used for computation,

namely, rapid calculation. These experiments pushed firms

away from the largest and most profitable uses of computers,

business data processing. Some firms with strong connection to

business equipment customers did attempt to adapt to the new

circumstances; for them the challenge was one of mastering a

major change in technical basis, from mechanical or electro-

mechanical to electronic.

IBM, a preexisting business equipment firm, was dominant

in the tab-card business in the United States in the era before

the computer and thus had, already, a strong marketing con-

nection in business data processing. IBM was able to adapt to

new circumstances by building a substantial electronics tech-

nical capability and a capability to manage the connections

between technical progress and customer needs.16 It was this

construction of an integrated technology, marketing, and man-

agement company (the famous Chandlerian three-pronged in-

vestment) that permitted IBM to dominate the industry. In

addition, IBM’s preexisting knowledge as a business equipment

company led it to experiments that were ultimately consistent

with the new emerging demand. Out of literally dozens of


experiments with the appropriate model of the firm, IBM’s

adaptation of its market knowledge, combined with technical

experimentation, ultimately succeeded.

The key role of decision making at the firm level does not

mean that the national-level forces were unimportant, only that

they played subsidiary roles. Indeed, the intense firm-level ex-

perimentation in the United States was supported by national

institutions. Many experiments came out of entrepreneurs in

universities. Experimentation, especially technical experimen-

tation, was supported by a very large number of different gov-

ernment computer technology initiatives. Uncertainty about

future technologies and new demand raises the returns to a

variety of experimental, exploratory approaches.17 Mutually

exclusive approaches to a certain objective have, collectively, a

higher probability of success than does any one.18 In addition,

when the nature of demand and the direction of technical

change are uncertain, there is a breadth effect of pursuing dis-

tinct technological objectives.19 When uncertainty relates to

demand and commercialization as well as to technology, the

range of experimentation is not limited to technical oppor-

tunities but includes organizational forms and modes of buyer-

seller interaction. In general, the less demand and technology

are defined ex ante, the wider is the variety of approaches that

firms within an industry pursue in order to reach a successful

new product, technology, or process.

The U.S. policy was fundamentally consistent with this view

of the value of experimentation and exploration. The govern-

ment-sponsored research initiatives were not particularly to

the advantage of IBM.20 Nor did government initiatives set a

technical direction. Rather, government R&D funding and de-

fense procurement served to support exploratory activities and

the development of a wide variety of firms and technologies.


Far from picking IBM as a leader, the U.S. government sup-

ported variety.

U.S. market institutions then worked to let IBM emerge as

the clear industry leader.21 This selection mechanism was not

present in other countries: European countries used national

champion policies that protected one large national firm in

each country, weakening selection processes. In general, in the

United States the role of successful national institutions and

especially successful national policy was to support a wide

range of initiatives, one of which eventually worked out in

the marketplace. The motivation behind the support was not

one of directing rents toward the United States, but rather of

supporting valuable basic research and, distinctly, mission-

oriented defense procurement.22

By their trial-and-error nature, firm-level experiments and

exploration lead to shakeouts. In general in high-tech indus-

tries, radical innovations and emerging markets are often fol-

lowed by shakeouts that reduce not only the number but also

the variety of firms. The role of a shakeout is to select among

the variety of technologies, organizational forms, and modes

of buyer-seller interaction that were early experiments.23 Of

course, the intensity and rapidity of selection depends upon a

range of factors. Barriers to exit, whether as a matter of gov-

ernment policy or the nature of competition, slow selection.

Competitive environments speed up selection.

U.S. policy at the beginning of the commercial computer era

was consistent with the idea that rapid selection by markets is

likely to do a better job than selection directed by governments

or slowed by them. The result of supporting a wide variety of

initiatives, but permitting market selection rather than strate-

gically directing the industry, was the emergence of a firm with

technologies and structures aligned with commercial market


desires. This was the key to the long process of computerizing

white-collar work, first in the United States and later in all

the rich countries, first in the service sectors then in most of

the economy. This computerization of work led to substantial

technical progress in the using industries, ultimately a signifi-

cant contributor to world economic growth.

2.2.3 The Minicomputer Segment: Concentration with a

Different Cause

Though IBM was dominant in mainframes selling to corpo-

rations, other computer demand segments emerged and grew.

New computer systems and distinct sellers supplied these.

One new kind of system—minicomputers—was for scientific

and engineering demand and other technical computation.

Minicomputer users were factories, laboratories, and design centers, all technically sophisticated customers.24 Programs were written for a single use; the value of compatibility (as opposed to technical power) was correspondingly less. Thus, minicomputer firms competed less on sales forces, marketing, and support and more on technical progress. Sellers tended to send technical staff rather than businesspeople to visit customers and to maintain good communications with customers about the best technical features of the computers. Information about the

technical features buyers wanted and the technical capabilities

of different sellers’ products flowed freely. Minicomputers

shared only the most basic technologies with mainframes.

Multiple minicomputer platforms flourished, with partial

compatibility.25 Initial firms were entrepreneurial start-ups

(primarily technology based) such as DEC, Perkin-Elmer, and

Gould. Most were clustered in the Route 128 region near

Boston. Entry barriers were never high enough to keep out

well-funded and technically capable entrants: Hewlett Packard


entered successfully well after the category was established.

Despite open entry conditions, DEC maintained market share

leadership, relying on continuous technical improvements.

These American minicomputer sellers were international

leaders, especially DEC. Consistent with the multiple-seller

industry structure, some European firms entered and a few

even earned rents for a period. For example, during the

1960s and the 1970s in Germany several firms, such as Nix-

dorf, Konstanz, Triumph Adler, Kienzle, Dietz, and Krantz,

started to produce minicomputer systems. These minicomputer

systems were all proprietary, focused on sector-specific appli-

cations and had specific software. These companies (particu-

larly Nixdorf) experienced success until the 1980s but later

exited.

Why this pattern? The underlying industry structure was one

of monopolistic competition with multiple competing firms,

compatibility standards, and platforms. While it was concen-

trated, barriers to entry were far less than in business comput-

ing segments. Scale economies were driven primarily by R&D,

not particularly by marketing or by network effects. The com-

paratively limited role of user platform-specific investments

meant less opportunity to create a dominant position by estab-

lishing marketwide standards. These modest scale economies

and modest sunk costs led to a monopolistically competitive

structure, and not one that yielded nearly as much producer

rent as did the mainframe segment.
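A simple free-entry sketch (an illustrative textbook-style calculation, not a model from this chapter) shows why R&D-driven scale economies of this modest size produce monopolistic competition rather than dominance. Suppose each platform requires a fixed R&D outlay F per period and has constant marginal cost c, so average cost at output q is

\[
AC(q) \;=\; c + \frac{F}{q}.
\]

If segment demand Q is split roughly evenly among N platforms, entry continues until price just covers AC(Q/N); the smaller F is relative to Q, the larger the number of viable platforms and the thinner the margins. By contrast, the mainframe segment's much larger effective sunk costs, in marketing organizations and in buyers' platform-specific investments, supported far fewer sellers and far larger producer rents.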

Accordingly, the minicomputer segment first went through a

period of some geographical distribution and then later grew

more concentrated in one country. This time, however, the

concentration in one country was not so much in one firm.

Instead, spillovers across multiple producers who continued in

competition characterized the industry.


2.2.4 Forces Favoring the United States in Minicomputers

The preceding market structure analysis leaves open the ques-

tion of why the minicomputer industry, too, ended up specifi-

cally in the United States. The first obvious explanation to consider is

that the existence of a U.S. dominant firm in the immediately

preceding technology, mainframes, was an important cause of

continued U.S. dominance. This turns out to be false, as does a

story of purposeful government rent steering.

The existence of a very different body of demand permitted

emergence of a distinct segment without competition from the

existing dominant computer technologies, mainframes. Since

the minicomputer draws on distinct technologies and serves

very different demands, and since the marketing model for

minicomputers is very different and the typical organization of

a minicomputer firm is quite distinct from a mainframe one, it

is not surprising that there was some segmentation.

It is perhaps more surprising that IBM, the firm, was unable

to dominate this segment even as it effectively dominated (and

unified) all the segments with commercial buyers. Adaptation

of IBM’s capabilities to the distinct conditions appears to have

been quite difficult. The struggles of existing dominant firms

to adapt to radical change are a familiar topic,26 of course, and

the incentives for IBM to adapt to this particular change were

quite low at the founding stage since it was already poised to

dominate a more profitable segment. Despite a series of efforts

to enter, and despite the low barriers to entry, IBM was not

one of the leaders of the minicomputer segment.27

A variety of forces far weaker than continuity by a success-

ful dominant firm located the minicomputer industry primarily

in the United States. The technical computing research spon-

sored by the defense department, mentioned earlier, led to

early minicomputer companies related to university research.


Institutions supporting formation of a technology firm were

particularly strong in the United States. Yet there were a sub-

stantial number of European entrants (not all coming from

national champions). Finally, some of the skilled workforce

and technical knowledge, but only some, was shared with

mainframes. This was a (weak) force for co-location of mini-

computer rents with IBM in the United States.

Ultimately, however, the location of the minicomputer in-

dustry in the United States was the outcome of the same set of

forces of experimentation and exploration28 followed by mar-

ket selection29 as we saw in mainframes. The market selected a

very different set of technologies and organizational forms in

this segment, so the U.S. policy of favoring a wide range of in-

itiatives rather than existing national champions was congru-

ent with underlying market and technical forces. This opened

up the possibility for ongoing variety in the choice of tech-

nologies and the direction of technical progress within the

broad computer industry, as invention in two distinct segments

went forward. That variety would ultimately contribute con-

siderably to the ability of the overall industry to grow.

One should not exaggerate the distinction between gov-

ernment-led and market-led outcomes in the minicomputer

segment, for they are far closer here than in the mainframe

segment. Military demanders wanted much the same from

minicomputers as did other technical demanders, and govern-

ment engineers were among those advancing such technologies

as the UNIX operating system and the ARPAnet (later Internet)

networking environment. The distinction to draw here is be-

tween military procurement that is purposively a part of stra-

tegic trade policy, which does not describe the U.S. stance

accurately, and mission-oriented military procurement that

raises the demand curve for valuable technologies, which does.


2.2.5 Concentration and Persistence in PCs

A third kind of computer system—personal computers (PCs)

—was for ‘‘individual productivity applications.’’ This newer

demand segment opened up in the 1970s. The customers are

again distinct from those of the previous two segments, as are the basic

technologies of hardware and software. Powerful network

effects link customers directly to one another and to vendors.

These network effects have been an important source of con-

centration and persistence; the structure has typically been of a

worldwide dominant platform, sometimes with a strong sec-

ond. Since the early 1980s, there has been persistence of the

IBM PC platform and its descendants in a chain of compati-

bility. Over that same period, the typical customer has been

nontechnical, so that marketing capabilities have played an

important role.
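A minimal adoption condition, in the spirit of the network-effects literature cited in the notes rather than a formula used in this chapter, illustrates why such network effects concentrate the market. Suppose a buyer values a platform at v + \beta N^e - p, where N^e is the installed base the buyer expects, \beta > 0 is the marginal network benefit, and p is the price. The buyer adopts when

\[
N^e \;\geq\; \frac{p - v}{\beta},
\]

so once expectations coordinate on a large installed base for one platform, further adoption becomes self-reinforcing. That tipping logic is one way to read the worldwide dominant-platform structure, and the long persistence of the IBM PC lineage, described here.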

These distinctions from the preexisting mainframe and

minicomputer segments permitted emergence of a new set of

technologies, firms, and markets, only loosely linked to prior

sources of rents at the national level. The PC segment also has

important differences in industrial organization, of which the

most important is vertical disintegration of supply of key plat-

form components, which leads to divided technical leadership

(Bresnahan and Greenstein 1999). The primary advantages to

sellers of divided technical leadership are speed and specializa-

tion, and the PC segment reflects that. Product life cycles are

very short, and the rate of change, upgrading, and improvement

in hardware and software has been high. Complex systems

products could be quickly brought to market because special-

ists innovated rapidly.30 Divided technical leadership supports

this by permitting advances in one part of a platform—say, a

specific piece of platform software, like an operating system—

by a specialized firm while sellers of other key


platform software and hardware advance at their own pace.

An advantage to buyers, but not particularly to sellers, of this

industrial organization is that it is more competitive than ver-

tical integration of key platform components.

In each horizontal layer (component market) of the PC seg-

ment, market structure was highly fragmented at the begin-

ning, often becoming more and more concentrated as time

passed. Some key components had dominant firms: micro-

processor (Intel), operating system (Microsoft), and word

processor (WordPerfect and later Microsoft). Other key com-

ponents were supplied much more competitively (e.g., many

hardware components such as add-in cards). The making of

the computers themselves became highly concentrated shortly

after the introduction of the IBM PC, but new entrants eroded

that position later on.

Again, the American suppliers became the world leaders,

though there were real efforts, both government-sponsored and

private, to move leadership to Europe or Japan.31 Understand-

ing persistence and concentration in the United States is at once

easy and hard.

The easy part is the explanation of the high level of concen-

tration and persistence in platforms. Products with large-scale

economies and much cumulativeness, such as word processors,

operating systems, and microprocessors, show concentration

and persistence of the industry leaders at an intermediate time

scale. Shifts in platform leadership from one firm to another,

however, open up a gap between persistence at the firm level

and at the national level, a topic to which we will return later.

A strong force working at the national level (but not the firm

level) is the presence of close vertical linkages among distinct firms. To some

degree, this is accomplished by the regional co-location of


competitors and complementors, notably in Silicon Valley.

Thus, the concentration of the key rent-generating components

in the United States reflects many of the same forces present in

minicomputing, including a shared skilled-labor pool, a shared

body of technical knowledge, and other externalities across

firms but within region.

While the PC segment in its early stages shared important

technologies with minicomputers (CP/M closely resembled a

minicomputer operating system) and briefly shared a dominant

firm (IBM) with mainframes, both technological and demand

developments were largely separate from those in the other

segments of the industry. Even the regional agglomeration

economies were distinct, illustrated by the shift from Route

128 to Silicon Valley.

The success of Route 128 in one era and Silicon Valley in

another led to a number of European imitations, often with

considerable government support. Of these, there is only one

that can even be called a partial success, the area around

Cambridge (U.K.). However, this area never developed a posi-

tion of world market leadership. Often the European attempts

were top-down and directive, and many involved the still-

surviving national champions.

Another advantage of divided technical leadership is that it

has permitted relocation of supply of some platform compo-

nents to other countries. In Taiwan, a government-supported

‘‘Silicon Valley’’ has flourished, with agglomeration economies,

local positive externalities, and so on. Taiwanese policy has

been as far from ‘‘national champions’’ as imaginable, being

quite tolerant of entry and exit.32 While successful, the Tai-

wanese cluster is not in competition with the U.S. one; it is confined

to hardware and components now in the later stages of the

product life cycle.33


But now let us turn to the more difficult part of explaining

the persistent U.S. position, that is, once again, understanding

why a persistent and concentrated structure was located in a

particular country. For the PC segment, this problem is ex-

acerbated by lack of continuity at the firm level even within the

segment. We examine three periods of rapid and disruptive

change, the initial founding of the segment (mentioned earlier),

a platform shift, and a change in platform leadership.

The rent-generating parts of the PC industry have always

been American, but there are three very distinct times in which

leadership of the industry has emerged or shifted. Each time

the forces that tended to locate the rents that then persisted in

the United States have been distinct. They have never involved

direct government rent-steering, though a number of distinct

mechanisms for encouraging innovation have been in play.

2.2.5.1 Original Founding of Hobbyist PC Segment

At the beginning of the PC segment, there was experimentation

and exploration with several prototypes by a large variety of

hobbyists, and later on with systems and software developed

around two de facto standards: CP/M and Apple II. Here

again a variety of new specialized microcomputer firms such

as Apple, Commodore, Digital Research, and Tandy explored

new developments in microcomputers. This experimentation

and exploration was worldwide, but the most successful firms

emerged in the western regions of the United States.

There were some very limited elements of continuity from

the previous successes at a national level. PC software and

hardware took important ideas from minicomputer products,

for example. Yet this flowed through a loose network of tech-

nically sophisticated people rather than as a continuation of

the commercial success of the preexisting computer industry.


Adaptation to the new market segment by existing computer

firms was not an important source of supply.34 Other firms,

such as microprocessor manufacturers Intel and Motorola, did

‘‘adapt,’’ though their adaptation consisted largely, at this

stage, of selling existing product lines to new customers.

The most important U.S. national institutions and policies

supporting the emergence at this time were entirely non-

directive: the existence of a large body of technical expertise in

universities and the generally supportive environment for new

firm formation in the United States. The location of the initial

PC hobbyist industry—not one associated with a large volume

of rents—in the United States was largely because technology

entrepreneurship of almost any kind was easy there. Persis-

tence in the short run occurred because the network effects

surrounding early standards and associated sunk costs were

strong.

Experimentation in Europe was rather more limited in this

era and in the era of the IBM PC. Most entrants were estab-

lished electronics firms, including the long-protected national

champion computer firms. (An exception occurs in the U.K.,

where there were some entrepreneurial efforts.) Japanese efforts

in the PC era notably involved an attempt to use the country’s

cultural and linguistic uniqueness to start a local cycle of net-

work effects, an effort ultimately defeated by worldwide scale

economies. In neither case were there effective mechanisms for protection: in contrast to the mainframe era, PC buyers

were small, scattered, and unlikely to respond to government

jawboning.

2.2.5.2 Creation of the IBM PC

Up to this point, we have discussed market segment foundings, periods of rapid techni-

cal change during which the location of rents in a particular


country is still open to determination, not fixed by first-mover

advantages. We now turn to a series of transitions, similar

time periods during which the rents in an existing segment

shifted from one firm to another, often from one type of firm

to another. The first of these is the creation of the IBM PC.

After a brief period, it became clear that the highest value

uses of the PC were not for hobbyists but instead for such

business applications as word processing and spreadsheets. The

marketing model of the early PC industry was not optimized

to that purpose, and discontinuous technical progress meant

an opportunity to replace the existing technical standards.

IBM, the existing dominant firm in commercial computing in

the United States, saw the nascent personal computer market in

two very different ways, one linked to its existing base of cus-

tomers and flow of rents and the other as completely separate.

After a debate inside the company, in the early 1980s the firm

entered the PC business, taking advantage of its strong capa-

bilities as a marketer of computers but in a way that was com-

pletely separate from its existing franchise.35 Leadership of the

PC segment quickly passed to IBM, though Apple Computer,

second in the pre-IBM era, continued to be second in im-

portance. There was a break in compatibility, as the IBM PC

would not immediately work with complementary hardware or

software from the previous standard.

Breaks in compatibility are rare and difficult in commercial

computing.36 They involve moving a body of customers and

complementors away from the familiar standard to a new one.

IBM had a very powerful brand name and reputation, and this

was part of the way the firm found sufficient disruptive force

to move the market. There was also a technical opportunity,

as PC computing moved discontinuously from an 8-bit to a

16-bit foundation, and an associated market opportunity, as


the market expanded to a new body of demanders who wanted somewhat different features in a computer (e.g., ease of use was more important for businesspeople). To compete

with the many other initiatives to make a new 16-bit PC plat-

form, some compatible with CP/M, IBM chose to change its

view of what a computer company should be. Rather than be-

ing vertically integrated, as it had been in mainframes, IBM

chose to have other firms supply key platform components—

notably, to have Intel supply the microprocessor and Micro-

soft the operating system.37 This offered IBM the opportunity

to enter quickly (the specialized structure offering superior

speed) and therefore take advantage of a contested market

opportunity.

Thus, although the creation of the IBM PC involves conti-

nuity in the sense that a dominant firm from an earlier era of

the industry was the leader, it involves fundamental change in

other senses. First, IBM was not the original innovator of the

PC segment; that called for entrepreneurship from outside the

existing computer industry. IBM returned later to participate in

technical improvements and commercialization and adapted

itself to the structures of the new segment. Second, the conti-

nuity was not supported by policy but selected by markets. At

the national level, the standard setting role for the PC segment

would likely have stayed in the United States even without

IBM’s participation, as many of the other firms putting for-

ward new PC architectures were American. Third, the move

involved very considerable adaptation of existing capabilities,

notably a dramatic shift in structure by IBM.

2.2.5.3 Shift of Control to Wintel

While divided technical

leadership permitted IBM to enter quickly and then dominate

the PC segment, it left IBM with close complementors well


positioned to wrest control of the PC segment’s standards. The

story of how first Intel encouraged direct entry against IBM,

turning ‘‘clones’’ into ‘‘industry standard PCs,’’ and then

Microsoft gained control of the direction of the platform, is

now well known.38 For our purposes, the important lessons

are threefold.

First, the value of having multiple distinct views of the future

of the PC among which consumers could choose—in this case,

at a minimum, IBM’s, Intel’s, and Microsoft’s views—shows

the value of strong market selection in ensuring ongoing

growth of producer and consumer rents. The background to

that selection was the wide range of experimentation and ex-

ploration in the United States and IBM’s adaptation of the

divided technical leadership model together with its own mar-

keting capabilities.

Second, IBM lost control of the PC platform not to a new

and superior form of PC but to a compatible one, with control

shifting to complementors and previous partners. The divided

technical leadership permitted this form of competitive im-

provement and enhancement to the platform, with the resulting

considerable improvement in products and prices to the benefit

of users, without the need for as radical and difficult a step

as the earlier replacement of CP/M with the IBM PC. Indeed,

not long after the shift of control of the platform to Intel and

Microsoft, applications vendors Lotus and WordPerfect would

undertake platform-steering efforts of their own, threatening

the newly established platform leadership positions. Those

efforts ended badly for Lotus and WordPerfect, as they them-

selves were victims of competition that originated from a seller

of complements, Microsoft. Divided technical leadership de-

clined as one firm controlled many key software layers in the

platform.


Third, this was all without meaningful government direc-

tion, although the institutional and policy stance of the United

States permitted the change. U.S. institutions throughout were

supportive of new firm foundation and of market selection.

Absent strong competition policies, IBM would have been

easily able to take advantage of its position to block competi-

tion and to maintain control of standard setting in the PC.

2.2.5.4 Lessons of the PC Shifts

The persistence of the U.S.

national leadership through the series of changes in leadership

associated with the PC business turns on a remarkable variety

within that country in firm and regional capabilities. The ele-

ments of maintaining national leadership arise, not because

of continuity, but because, at times of change, many of the

interesting experiments with regard to new leadership were

American. Thus, even though existing firm rents and/or exist-

ing technology rents were abandoned, this rigorous domestic

competition continued to leave the rents of the industry in one

country. This series of switches, from entrepreneurial start-ups

(CP/M and Apple) to national champion (IBM) to adolescent

technology specialists (Microsoft and Intel), illustrates the wis-

dom of a national policy that is completely neutral toward

the form of successful supply. The critical features of national

policy here were supporting experimentation and exploration

over a wide range, which created a strong incentive for exist-

ing dominant firm adaptation, and supporting an environ-

ment in which market selection of the future winners cannot

be blocked by the past ones. Finally, the division of technical

leadership among multiple complementary producers of key

components, possible in the United States because of the large

number of experiments with distinct firm capabilities and spe-

cializations, served the segment well in providing competition


and the considerable speed advantages of divided technical

leadership. Availability of many different firms to participate in

distinct leadership roles drew not on any particularly successful

efforts at national coordination (market forces were suffi-

cient for coordination when needed) but on a national policy

of broad support for invention, experimentation, and entre-

preneurship. The fruits of those experiments, many of which

had gone through long periods of earning small rents, were

later adapted to the changing circumstances of the computer

business.

2.2.6 Entry, after a Long Delay, into IBM’s Mainframe

Markets

IBM’s dominance of the mainframe segment never ended.

Mainframe customers, however, began in the late 1980s to

have real competitive alternatives to IBM.

Entry that ultimately threatened IBM took a long time to

develop. As discussed earlier, entry and competition from

similar mainframe firms was not at all effective. An important

limit on the scope of IBM’s market was set by the invention

of the superminicomputer, a machine based on minicomputer

technology but running software suitable for commercial (not

only technical) uses. In the late 1970s and early 1980s, for-

merly technical minicomputer firms, notably DEC, were able

to adapt to a more commercial customer base. More broadly,

a new vertically disintegrated supply was able to grow up,

with entrepreneurial firms such as Oracle selling software for

commercial computing but running on smaller and cheaper

machines than mainframes. This new vertically disintegrated

supply was, once again, overwhelmingly American, drawn both

from start-up firms taking advantage of the entry opportunities

afforded by vertical disintegration and existing firms adapting


to the new market conditions. Notably, the successful adaptors

did not include IBM, the closest established firm.39 These

events led to a limiting of IBM’s market scope but by no means

the end of IBM dominance, as the firm continued through the

1980s to be one of the world’s most profitable enterprises.

It was at the end of the 1980s when a real challenge to IBM’s

position occurred. The immediate cause of this was not the

invention of a better mainframe computer than an IBM one.

Instead, networking technologies advanced to the point where

users in large commercial sites could consider using a net-

work of smaller computers instead of a single, large mainframe.

The idea was that technologies previously used for technical

computing—minicomputers and workstations—would provide

the power previously available from mainframe systems. Users

would access the networked system through the now familiar

PC. Instead of mainframe and terminal, systems used ‘‘server’’

and ‘‘client’’ computers. While a variant of this particular

technical idea had been under development inside IBM for

some years, and indeed had been a major motivation for IBM’s

advancement of the PC platform, superior technical and mar-

ket versions arose outside IBM. Particularly because these new

firms had no strong reason to preserve IBM rents, they had

incentives to take up technical and market solutions that re-

placed rather than enhanced IBM’s position at many sites.

Users did not migrate instantly, because of the considerable

switching costs associated with longstanding lock-in, but what

had been a strong market position for IBM was considerably

weakened, because the firm had to compete with close and effective competitors for what had long been its most solidly

committed sites.
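The pace of that migration can be summarized with a simple switching condition (an illustrative inequality, not one the chapter itself uses). A locked-in site moves from the incumbent mainframe platform to a client-server alternative only when the discounted gain in net value exceeds its switching cost s:

\[
\sum_{t=0}^{T} \delta^{\,t} \left[ \left( v^{\mathrm{new}}_t - p^{\mathrm{new}}_t \right) - \left( v^{\mathrm{old}}_t - p^{\mathrm{old}}_t \right) \right] \;>\; s .
\]

A large s explains why users did not migrate instantly, while steady improvement in the networked alternative raised the left-hand side over time and eventually weakened the incumbent's hold on even its most committed sites.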

This episode contains an important cautionary tale about

national champions. Over the course of the 1980s, IBM


anticipated the value of client-server computing in considerable

detail, and sought to put itself in position to offer a complete

solution to commercial sites running from client through mid-

dleware to server. As the dominant firm selling large, complex,

networked applications, and as the dominant PC firm, IBM

could offer a compelling story that it was well poised to be the supplier of the new platform. Buyers, however, were free to choose, and many chose otherwise. Use of market selection, rather

than of efforts to preserve and maintain the existing producer

rents, was the key to opening up substantial value for con-

sumers of computers. As buyers made those choices and moved

away from traditional computer vendors to new ones, the

fraction of total investment represented by information tech-

nology capital (now including a great deal of data networking)

grew dramatically, as did the contribution of IT applications to

world economic growth.

The new firms were, once again, largely American. The na-

tional institutions supporting this competitive replacement and

enhancement were, once again, not directive. In this era as well

as in others, it was simple to start a new U.S. company to take

advantage of this new opportunity. U.S. policy was not focused

on preserving the existing IBM rents. If anything, policy sup-

ported the entrants’ initiatives. It was at this juncture that

some of the advantages of the almost forgotten IBM antitrust

suit finally came to have a real payoff, as firms long in

the business of complementing IBM became participants in the

platforms and important competitors once the ‘‘competitive

crash’’ occurred. More generally, the entrants were a mixture

of firms, some long-standing complementors to IBM adapting

capabilities to participate in the new platform, some from out-

side the mainframe segment, similarly adapting capabilities,

and others start-ups. The important point here about adapta-

tion is that established firms other than the existing dominant

firm are potential adaptors of capabilities to a new use.


2.2.7 Convergence of the Internet with the PC

By the mid-1990s, the PC sector had a single, strong domi-

nant firm steering its platform, Microsoft. The main structural

force that had permitted competition in this segment despite

powerful network effects—divided technical leadership—had

declined steadily over time. In the mid-1990s, developments

on the Internet brought a new threat to Microsoft’s position.

Convergence of the Internet with the PC led to an opportunity

to reestablish divided technical leadership. The addition of a

browser layer to the PC industry was the key marketing force

at work here, for the browser was a surprisingly popular new

application.40 The nature of the underlying competitive op-

portunity represented by the browser was a platform shift

away from the PC, or at least the centrality of the PC, for

individual productivity applications. Those might come to be

more network oriented, adding Web browsing, e-mail, elec-

tronic purchasing, instant messaging, and so on to the familiar

applications running on a single PC. This was another time at

which there was discontinuous technical change and an asso-

ciated market change opportunity.

While such a transition offered consumers the potential ben-

efits of choice between existing technologies and vendors and

new ones, such choice was not in the interest of the incumbent

dominant firm. Microsoft saw the changes on the Internet, es-

pecially the wide distribution and use of a browser outside its

own control, as a potential threat to its position and its market

power. In deciding to make responding to the threat from the

Internet a priority, Bill Gates, Microsoft’s CEO, drew the

analogy between the wide acceptance of the Netscape browser

and the arrival of the IBM PC a generation earlier (Gates

1995). Each was, in his view, a significant enough event that it

could be the opportunity to shift control of rents from one firm


to another, or an opportunity to lower the rents earned by all

firms as an era of stable positions ended, replaced by a period

of rapid and disruptive change. Rather than finding itself in

a position of uncontested platform leadership and operating

system monopoly, Microsoft could find itself facing effective

competition in the operating system business and potential

replacement of that platform by a newer, technically superior

one.41

Based on its PC experience, Microsoft decided that divided

technical leadership would expose its position to more competition. It thought that external control of such Internet-centric

technologies as the browser and Java would lower barriers to

entry into PC operating systems and would threaten its domi-

nant position. It therefore acted to prevent widespread distri-

bution of those innovative technologies under the control of

other firms. Caught off guard by the sudden success of the

Internet, and far behind in standards-setting races, Microsoft

found itself unable to win by advancing its own versions of

browser and Java technologies and giving them away for free,

despite its considerable ‘‘strong second’’ skills in incremental

technical progress and technology marketing. Having failed at

competition, Microsoft turned to an impressively wide-ranging

arsenal of anticompetitive tactics, exploiting the clout of its exist-

ing monopoly position.42 But for these anticompetitive acts,

divided technical leadership would have reemerged in the PC

business. More likely, we would now think of the part of

computing serving individual end users as drawing on both the

PC and the Internet; that segment would now have divided

technical leadership.

The U.S. government challenged Microsoft’s behavior in an

antitrust case, arguing that demanders should get to choose

among continuation of the status quo, increased competition


going forward, or even a replacement of the existing platform

with a new one. For our purposes here, the important question

is not the exact nature of Microsoft’s violations of the law but

the purposes of the intervention. The government saw that the

shift of personal computing from a stand-alone PC basis to a

networked applications basis offered entrants an opportunity

to present consumers with new choices about their mode of

computing. Rather than necessarily staying with Windows, or

a more networked descendant of Windows, consumers might

have chosen a distinct operating system or even something ‘‘far

cheaper than a Windows PC’’ (Gates 1995). Denying them that

choice meant denying the industry the opportunity to move

forward to a new supply model if that were what the market

was to have selected.

The antitrust suit is at an intermediate stage. The courts, in-

cluding an appeals court, have upheld the main charges against

Microsoft.43 An effective remedy, a divestiture to reestablish

divided technical leadership and lower entry barriers into

Microsoft’s monopoly markets, was overturned by the appeals

court on procedural grounds. The question of ultimate remedy

has been left, at this stage, to a new court. The market, too,

is at an intermediate stage. A challenge to Microsoft’s leader-

ship arose in the late 1990s, was cast aside by anticompetitive

means, and still has not been presented to users of computers

for their choice of continuity, partial continuity, or change.

The U.S. policy stance stayed consistent with that of the

previous several decades in this lawsuit. In particular, the gov-

ernment appeared as the agent of choice between the new and

the old. By acting in favor of a strong market selection mecha-

nism, the government would, in this instance as in the past,

enable change when the market preferred it but not force either

change or stasis on the market.


2.2.7.1 The Founding of the Internet Sector

Is the founding of the Internet one of those examples of the use

of defense procurement as an instrument of strategic trade

policy? Many observers point to the common location of most

Internet-related vendors in the United States in the late 1990s

and the original location of the Internet as a U.S. defense-

department sponsored network (then called ARPAnet) as an

example of government investment that ultimately led to sig-

nificant national advantage.

In fact, the Internet grew up as a technical computing

network, largely linking minicomputers used by scientists and

engineers in government and universities and, to some de-

gree, similar people in firms. In that role, it came to be highly

internationalized.

The important steps toward giving the Internet its modern

role did not originate in the United States. The World Wide

Web was promulgated by a Brit living in Switzerland. He drew

on his own inventive powers and on technologies and con-

nections that were global. The creation of the Web was only

the beginning of a new commercial end-user-oriented comput-

ing network. The next critical step, the browser, was taken by

entrepreneurs in American universities. They were reimporting

a technology that had by then only limited U.S. elements.

The crucial elements of U.S. policy in creating a commercial Internet sector were supportive and enabling, not directive or ''strategic.''

2.3 Lessons for Positive Economics

We have touched on what we think are the broad positive

and policy issues as we have examined each of these periods,


whether foundings or transitions, in which the international allocation of producer rents and the computer industry's capacity to serve worldwide economic growth were determined. We

pull together these lessons here.

2.3.1 Rejection of the Broad Theory of U.S. Persistence

There is an oversimplified, broad theory that at first seems to

explain U.S. persistence. It has three elements. (1) The United

States, an early mover, has the largest domestic market, and

the Department of Defense was a very important (price insen-

sitive and nationalistic) demander in the industry’s formative

years. (2) Given first-mover advantages, the commercial win-

ners were those with the greatest initial advantage. Thus, (3),

the experience of the United States in computing illustrates

the value of wise strategic trade theory. We hope that, by this

juncture, it is obvious why we think that this oversimplified,

broad theory is highly inaccurate.

First, let us be clear that part of this theory is right. Over

shorter time scales within segments, the tendency has been for

computing first-mover advantages to preserve firms’ and na-

tions’ positions. One problem with this theory arises when it

attempts to explain the longer time scale. Another arises with

the positive political economy argument that the broad theory

explains actual U.S. policy formation.

For the longer time scale the broad theory is very unsatisfy-

ing. The foundings of new segments described in the previous

section are important discontinuities. Each new segment used

a new technology to address a new demand and new types

of users, typically with a new commercialization mechanism.

Each new segment created specific types of user-producer rela-

tionships, and firms had different capabilities, organization,


and strategies. The later periods of transition in the mainframe

and PC segments were ones in which old segments came to be

served by new firms, technologies, and organizational models,

ones that involve change, not continuity, in the source of rents.

To understand the persistence of U.S. dominance, we need to

understand these periods of radical change, founding of new

segments, and major transitions in segment leadership. To

understand the role of policy, we need to understand not sim-

ple stories of attempting to steer known rents to the United

States, but a complex story of supporting private enterprise to

get ready to reap unknown rents or to meet current national

needs having nothing to do with the commercial or trade in-

terests of the United States. Most important, policy was firmly

focused on enabling the rents of the future rather than on protecting the rents of the past, even to the point of active hostility toward national champions.

2.3.2 Concentration and Persistence: The ‘‘New Trade

Theory’’ by Way of Modern Industrial Organization

We found that, for intermediate-scale time periods and within

particular segments, the concentration and persistence of the

producer rents in one country were largely as explained in

the simple theory. Social increasing returns to scale occur in

the higher-value computing segments and are associated with

cumulative investments by sellers and considerable irrever-

sibilities (sunk costs) by buyers. Those are powerful reasons

explaining large producer rents, concentrated structure, and

persistence at a national level.

These same forces are also powerful explanations for the

success of the industry in enabling the creation of worldwide

consumer rents. Social increasing returns to scale obtained in

the mainframe segment, and in the improved networked seg-


ment that has been replacing it, have led to tremendous con-

tributions to world productive capabilities. Social increasing

returns to scale around a series of PC standards have also

led to higher and higher levels of contribution to consumer

surplus, though the blocking of market selection of a new

structure in the late 1990s has slowed that process. Over the

appropriate time scale, and with the appropriate limits on

scope, the welfare as well as positive implications of social in-

creasing returns to scale theory play out.

Thus, with the limitations that the results apply only on

short time scales and within segments, our analysis confirms

the importance of the forces that have led to an embrace of the

broad theory of U.S. persistence and success. We differ with the

broad theory, however, because we do not stop there. We go

on to examine the longer time scale and the analysis across

segments. This wider scale—the one that is appropriate to

understanding the phenomenon of long-term U.S. success and

to analyzing the industry’s contribution to growth—contains

many elements that contradict the overbroad theory. We have

emphasized three outcomes that lead us toward a more com-

plete story:

• The scope and nature of increasing returns and sunk costs changed several times as the technological basis of the industry changed.

• Market structure and the type of firm changed.

• User relations and the definition of effective commercialization changed.

These differences, and the way that they played out in the

periods of rapid change and disruption that have characterized

the industry over a longer time scale, lead us to a positive

analysis that has three more elements in it.


2.3.3 Growth and Change

To understand the growth and change of the computer in-

dustry as a successful creator of opportunities for economic

growth, and to understand its persistence in the United States

over a longer time scale, we need to understand two kinds of

periods of disruptive change and discontinuity.44

The first of these is foundings. For the computer industry, we

have identified three major periods of founding: those of the

industry overall (corresponding to the mainframe segment), the

minicomputer segment, and the PC segment.

The second kind of period is transitions. We have identified several periods of transition, actual or potential, including

the breakdown of barriers to entry into IBM mainframes, the

transition from CP/M to the IBM PC, and the potential cre-

ation of an Internet-based replacement or major enhancement

to the PC.

Looking at these periods of radical growth and change

leads us to emphasize the unpredictability, ex ante, of the spe-

cific technical, marketing, or organizational structures that will

come to be clear leaders in the industry ex post. Accordingly,

each founding saw the creation of a supply side that met user

needs only after a wide variety of explorations and experi-

ments came forward, with the winning one selected by a market

process. The forces leading to success in a particular country ex

ante are then related to the number and variety of experiments

based on technical and market capabilities.

For transitions, adaptation led to a further source of ex-

ploration and experimentation. Existing firms can adapt exist-

ing technologies or marketing capabilities to the new needs

of a segment after discontinuous change. Our examination of

adaptation by the existing dominant firm in a segment has

revealed that adaptation is by no means always successful,


often made difficult by the fundamental changes in technology,

structure/strategy, or commercialization/marketing capabilities

that characterize periods of dramatic change.

Successful adaptation by outsiders to the segment from

within the same country is a source of continuity at the na-

tional level even where there is change at the firm level. This is

a point about adaptation that the literature has not always

considered, focusing instead on the existing dominant firm.

The computer industry has several important sources of out-

siders ready to offer new experiments in times of radical

change. The first comes from outside the segment but within

the industry. We saw examples of entrants of this sort based

on technical capabilities (minicomputers become supermini-

computers or servers) or marketing capabilities (IBM enters the

PC). Clearly, existence of firm capabilities or technologies in

a nearby segment lowers the costs of certain experiments.

Second, complementors to an existing dominant firm can be

experimenters who become the dominant firm in the next era

of the segment. We saw this in the importance of divided tech-

nical leadership in the PC segment, in the competitive crash,

and in the PC/Internet convergence. Either kind of adapta-

tion, the next-segment kind or the complementor kind, may

be undertaken by existing firms or by entrepreneurial entrants

that take the opportunity to adapt.

In sum, the importance of all these points is to belie a com-

mon view: increasing returns-to-scale industries need, at a

national level, technology and market investments that are co-

ordinated to a single goal. Within the intermediate time period

and within the segment, powerful market forces will tend to

achieve that coordination. At the longer time scale, however, it

is the breadth and variety of experiments and capabilities fol-

lowed by market selection, not any coordination on a single


goal, that explains the persistence at a national level. This oc-

curs because of the powerful force of uncertainty, a force that

comes to the foreground in times of discontinuous change.

2.4 Lessons for Policy

These views of the positive economics of (1) international suc-

cess in the computer industry and (2) success in meeting a

changing set of user needs over time lead us to a specific view

of the public policy issues.

Just as there was a false, overbroad positive theory of U.S.

success, there is a false simplicity about certain policy pre-

scriptions. The existence of scale economies, such as the social

increasing returns to scale so important in many computer

segments, does not imply the wisdom of a policy that protects

national champions. Nor does it imply the wisdom of any

other policy of picking winners, even ones that sometimes seem

wise, like assigning to governments the duty of coordinating

disparate national efforts around certain common goals or

standards. A strong worldwide market selection mechanism means that individual governments cannot make local protectionist mechanisms work. Instead, rigorous domestic competition is

the key to selection in world markets.45

However, this does not mean that the proper role of gov-

ernment policy or other national institutions is completely

passive. It simply means that it has to be enabling rather than

directive. National institutions and policies that encourage ex-

perimentation and exploration in a wide range of technologies

have been effective by not pushing the industry toward any

particular strategic trade policy goal. Instead, they permit

entrepreneurship by new companies and adaptation by exist-

ing ones hoping to be major players in the new field. Finally,


national institutions can ensure that strong market selection

mechanisms bring demanders as well as suppliers to bear on

the choice of organizational structure, technology, and mode

of commercialization. Such policies may be unsatisfactory to

governments eager to be able to claim credit for causing in-

dustrial growth and development. But they arise from the fun-

damental limits of what policy can knowingly direct, and what

it should leave to markets, in circumstances of uncertainty.

While not always perfect, U.S. and to some extent Japanese

and Taiwanese national policies and institutions have respected

these market realities. That long-standing respect for the mar-

ketplace continues into the present in the formation of U.S.

policy.

Notes

The authors, who are at Stanford University, United States, and CESPRI, Bocconi University, Italy, thank SIEPR and the Italian CNR for support.

1. We use ''rent'' here in the economic sense of meaning a high return to an asset, factor of production, or capability. Engineers who might work in the computer industry earn far more in the United States: that is a rent to U.S. human capital. Similar rents have been earned by U.S. firms.

2. One of the authors, Bresnahan, worked on the Microsoft antitrust case while at the Department of Justice.

3. The boundaries of the mainframe segment are not clear. Commercial minicomputers eventually became much like mainframes, for example. We treat the boundary competition between mainframes and other kinds of computers as unimportant for the period 1955–1989. The much more powerful competitive forces unleashed against mainframes in the ''competitive crash'' of the 1990s we treat elsewhere in the chapter.

4. On this, see Bresnahan and Greenstein 1999.

5. This is of course a key point in the strategic analysis of dominant firms in technology-intensive industries, the ability of the incumbent


firm to see through the ''Arrow effect'' and innovate to maintain its position.

6. There are some exceptions, notably the successful production of plug-compatible computers and other components by competitor firms, notably Japanese ones. Yet control of the compatibility standard associated with modularity (the key to producer rents) stayed with IBM.

7. See Krugman 1992 and Helpman 1998.

8. The theory of social scale economies and collectively sunk costs has been carefully worked out by, for example, Farrell and Saloner 1985 and Katz and Shapiro 1986.

9. See Bresnahan and Greenstein (1999), particularly on why compatibility forces meant that the scale economies continued to matter even as the market grew. If this had not been true, the segment would likely have had a monopolistic competition structure with many successful selling firms, each with products suitable to a class of customers. This monopolistic competition structure is more emphasized in the NITT literature, but what really matters for application is not the special case assumed in the theory but that the strategic opportunities available to firms are one important input into the international industry structure and the allocation of rents.

10. The case did, however, lead IBM to unbundle mainframe computer hardware from software such as the operating system in an attempt to head off prosecution. This led to modest increases in competition in the short run and contributed to substantial increases down the road.

11. See also Bresnahan and Malerba 1998. Briefly, European countries erected barriers to exit for single national champion firms. Japanese policy restricted attention to a modest number of existing, successful electronics firms with government support, but insisted on competition among them and on success in exporting as conditions for ongoing support. Ultimately, the Japanese achieved a near miss, with a plausible effort to leapfrog IBM.

12. For analysis of strategic trade policy, see Dixit and Kyle 1985 and Krugman 1993.

13. In Japan, experimentation at this early stage was limited, since there was not much advanced technical capability. See Bresnahan and Malerba 1998 for a detailed discussion of the European and Japanese cases.


14. European companies, possibly anticipating protected domestic markets, followed two strategies. If they were electronics firms, they tended to produce computers optimized for scientific calculation. If they were business equipment firms, they tended to make small investments in electronic technology. There is an interesting counterfactual question of whether a united European market would have led to these same supply choices. Given that most U.S. firms (other than IBM) similarly followed their original trajectories, there is reason to doubt it, however.

15. See Rosenberg 1996 on the role of uncertainty of this type in high technologies generally and Bresnahan and Malerba 1998 for a far more detailed treatment of the issues covered here.

16. This adaptation involved considerable innovation within the company, including elements of separating the new from the old. See Usselman 1993. Other business data processing companies, including European ones, were far less successful at shifting to electronic computing.

17. See Metcalfe and Gibbons 1987 and Nelson 1995; see also Cohen and Malerba 1995 for the similar case of complementary learning.

18. See Evenson and Kislev 1976 and Nelson 1982.

19. See Cohen and Klepper 1992.

20. Indeed, IBM was quite hostile to the role of the government, delaying until late any research collaboration with government agencies or government-sponsored research. See the chapter titled ''Government-Sponsored Competition'' in Pugh 1995.

21. The United States relied on market mechanisms for selection, a goal supported to the small extent necessary by policy. Automatic continuity of tab-card-era dominant firm IBM as the commercial data processing dominant firm was opposed by the government in a (moderately effectual) antitrust suit.

22. For example, the purpose of government-sponsored ENIAC was to be able to numerically integrate so that, for example, artillery shells might land on the enemy's tank. This was exactly the technical direction not taken by IBM.

23. See Klepper 1996 and Metcalfe 1997.

24. As we will see, minicomputer technologies were later used to serve other bodies of demand.


25. For example, there is a mixture of proprietary operating systems (such as those on the DEC Vax family) and open but not completely identical ones (such as UNIX).

26. See Henderson 1993 and Henderson and Clark 1990 for analytical treatments.

27. After a series of failed entry attempts, IBM had a successful minicomputer line only in the late 1980s, and that was after the invention of the ''commercial minicomputer,'' namely, minicomputer technology used by demanders more like mainframe users.

28. For analytical sources, see nn. 17–19.

29. Further analysis in work cited at n. 23.

30. IBM chose a nonintegrated structure for the IBM PC in order to obtain this speed.

31. See Bresnahan and Malerba 1998 for more detailed analyses of the American, European, and Japanese cases.

32. Aw, Chen, and Roberts 2001 and Saxenian 2000. These papers argue that the pro-market-selection policies of Taiwan have moved it into a hardware rent-generating position in the industry just as the rents in the United States have gone to software.

33. See Grossman and Helpman 1991a, b for relevant theory.

34. The important exception is IBM, to which we shall turn in a moment. A number of existing firms attempted the adaptation, only to fail, including such impressive (on their own ground) vendors as DEC and AT&T.

35. Famously, IBM sent the PC organization to a separate geographical location (Boca Raton, Florida) in order to prevent influence on it from elsewhere in the company.

36. See Bresnahan and Greenstein 1999 for more analysis and more detail on this break.

37. Though they were not recruited at the beginning to play a platform-component role in the PC, such widely distributed applications software vendors as Lotus (spreadsheets) and WordPerfect (word processors) came to have a role in the technical leadership of the PC platform.

38. See Ferguson and Morris 1993 and Bresnahan and Greenstein 1999.


39. See Bresnahan and Greenstein (1999) for an analysis of the dilemma facing IBM.

40. See Gates 1995 for the observation that the key change was the widespread and popular use of the Internet—driven by the Netscape browser.

41. See Gates 1995 for discussion. Numerous other Microsoft planning documents show this reaction as well, but this one has the CEO arguing in detail for a radical change in the strategic direction of the company.

42. A number of sources describe these anticompetitive acts in detail. See Bresnahan 2001, Jackson 1999, and CADC 2001 for three approaches.

43. See CADC 2001. The main charge, that of maintaining the Windows monopoly, was upheld. Several of the specific acts found illegal might also have been illegal for a second reason, and the appeals court failed to find them illegal for two reasons.

44. A small literature is beginning to take up the analysis of industries that undergo change and renewal, and for which our intermediate-run vs. long-run distinction is material. See Jovanovic and MacDonald 1994 and Klepper and Simons 2000.

45. This argument closely follows that of Porter 1998.

References

Aw, B. Y., X. Chen, and M. J. Roberts. 2001. ''Firm-level evidence on productivity differentials and turnover in Taiwanese manufacturing.'' Journal of Development Economics 66: 51–86.

Bresnahan, T. 2001. ''The economics of the Microsoft case.'' Mimeo., Stanford University, available online at ⟨http://www.stanford.edu/~tbres⟩.

Bresnahan, T., and S. Greenstein. 1999. ''Technological competition and the structure of the computer industry.'' Journal of Industrial Economics 47(1): 1–40.

Bresnahan, T., and F. Malerba. 1998. ''Industrial dynamics and the evolution of firms' and nations' competitive capabilities in the world computer industry.'' In The Sources of Industrial Leadership, ed. D. Mowery and R. Nelson. Cambridge: Cambridge University Press.


CADC. 2001. Order, affirming in part, reversing in part, and remanding in part, in US vs. Microsoft, 00-5212.

Cohen, W., and S. Klepper. 1992. ''The anatomy of industry R&D intensity distributions.'' American Economic Review 82: 773–788.

Cohen, W. M., and F. Malerba. 1995. ''Diversity, innovative activities and technological change.'' Mimeo., Carnegie Mellon University and Bocconi University.

Dixit, A., and A. Kyle. 1985. ''The use of protection and subsidies for entry promotion and deterrence.'' The American Economic Review 75: 139–152.

Evenson, R., and Y. Kislev. 1976. ''A stochastic model of applied research.'' Journal of Political Economy 84: 256–281.

Farrell, J., and G. Saloner. 1985. ''Standardization, compatibility, and innovation.'' The Rand Journal of Economics 16: 70–84.

Ferguson, C. H., and C. R. Morris. 1993. Computer Wars: How the West Can Win in a Post-IBM World. New York: Times Books/Random House.

Gates, B. 1995. ''The Internet tidal wave.'' Microsoft Internal Memorandum, May. Available as GX 20 in U.S. v. Microsoft.

Grossman, G., and E. Helpman. 1991a. ''Endogenous product cycles.'' The Economic Journal 101: 1216–1230.

Grossman, G., and E. Helpman. 1991b. Innovation and Growth in the Global Economy. Cambridge, MA: The MIT Press.

Helpman, E. 1998. ''The structure of foreign trade.'' Mimeo., Harvard University, August. Based on the Bernard-Harms Prize Lecture.

Helpman, E., and P. R. Krugman. 1989. Trade Policy and Market Structure. Cambridge, MA: The MIT Press.

Henderson, R. 1993. ''Underinvestment and incompetence as responses to radical innovation: Evidence from the photolithographic industry.'' RAND Journal of Economics 24(2): 248–270.

Henderson, R. M., and K. B. Clark. 1990. ''Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms.'' Administrative Science Quarterly 35: 9–30.

Jackson, T. P. 1999. Findings of Fact, U.S. District Court for the District of Columbia, U.S. v. Microsoft, Civil Action No. 98-1232 (TPJ).


Jovanovic, B., and G. MacDonald. 1994. ''The life cycle of a competitive industry.'' Journal of Political Economy 102: 322–347.

Katz, M., and C. Shapiro. 1986. ''Technology adoption in the presence of network externalities.'' The Journal of Political Economy 94: 822–842.

Klepper, S. 1996. ''Entry, exit, growth and innovation over the product life cycle.'' American Economic Review 86: 562–583.

Klepper, S., and K. Simons. 2000. ''The making of an oligopoly: Firm survival and technological change in the evolution of the U.S. tire industry.'' Journal of Political Economy 108: 728–760.

Krugman, P. R. 1992. ''Technology and international competition: A historical perspective.'' In Linking Trade and Technology Policies, ed. H. M. Caldwell and G. E. Moore. Washington DC: National Academic Press.

Krugman, P. R. 1993. ''The current case for industrial policy.'' In Protectionism and World Welfare, ed. D. Salvatore. Cambridge: Cambridge University Press.

Metcalfe, S. 1997. Evolutionary Economics and Creative Destruction. London: Routledge.

Metcalfe, J. S., and M. Gibbons. 1987. ''Technological variety and the process of competition.'' Économie Appliquée 39: 493–520.

Nelson, R. 1982. ''The role of knowledge in R&D efficiency.'' Quarterly Journal of Economics 97: 453–470.

Nelson, R. R. 1995. ''Recent evolutionary theorizing about economic change.'' Journal of Economic Literature 33: 48–90.

Porter, M. 1998. Competitive Advantage. New York: Simon and Schuster.

Pugh, E. W. 1995. Building IBM: Shaping an Industry and Its Technology. Cambridge, MA: The MIT Press.

Rosenberg, N. 1996. ''Uncertainty and technological change.'' In The Mosaic of Economic Growth, ed. R. Landau, T. Taylor, and G. Wright. Stanford: Stanford University Press.

Saxenian, A. 2000. ''Taiwan's Hsinchu region: Imitator and partner for Silicon Valley.'' Stanford SIEPR Discussion Paper #00-044.

Usselman, S. 1993. ''IBM and its imitators: Organizational capabilities and the emergence of the international computer industry.'' Business and Economic History 22(2) (Winter): 1–35.


3 Technology Dissemination and Economic Growth: Some Lessons for the New Economy

Danny Quah

3.1 Introduction

Pick up a newspaper today, and you have to realize how words

and concepts that didn’t even exist a decade ago—Internet

browsers, desktop operating systems, Open Source Software,

WAP delivery, the three billion letters of the human genome,

political organization and mobilization by Internet chat rooms

—now appear regularly in front-page headlines. These head-

lines describe news items—not science fiction trends, not ar-

cane academic technologies, not obscure scientific experiments.

Someone out there with a handle on the social zeitgeist

has determined that these items—part of the new economy—

impact readers’ lives. Evidently, they are right, for these ideas

subsequently insinuate their way into hundreds of thousands

of nonspecialist but informed discussions. When did popular

culture evolve to where relative merits of different Internet

browsers can be quietly debated at dinner (sometimes not so

quietly), or where personal affinity for different desktop oper-

ating systems can constitute a basis for liking or disliking

someone (Stephenson 1999)?

When you live in that world, it is puzzling when you meet

people intent on proving to you that none of those things you


think you see and experience is real. These people, many of

them academic economists, seem to come from an alternate,

orthogonal universe. They say the new economy is nothing

compared to the truly great inventions of the past (surely a

strawman hypothesis if ever one was needed). These skeptics

show you charts and figures, bristling with numerical calcu-

lations, arguing that the changes you figured to be deep and

fundamental apply, in reality, only to the minuscule group of

people working in companies that manufacture computers.

Are academic economists undermining their own credibility

and doing their profession a disservice, when they argue a case

so ridiculously opposite to what others think is plain and ob-

vious? Or, are they providing a needed reality check as ram-

pant hyperbole takes over all else?

Either way, a tension has built up between two groups of

observers on the new economy. In this chapter, I describe how

such a situation might have come about, and I suggest some

possible ways to understand and resolve that tension.

3.1.1 Technologies and Consumers

Anyone who visits urban centers in the Far East and Southeast

Asia notices immediately the extreme, in-your-face nature to

modern technologies here. Advanced technological products

are sold, incongruously, in grubby marketplaces. Sophisticated

software and hardware change hands in crowded stores that

seem better suited to trading fresh homegrown agricultural

produce.

To be clear, it’s not that the nature of the underlying tech-

nologies differs between here and the rest of the world. It’s that

modern Asia uses modern technology more visibly, forging a

sharper, more direct link between that technology and ordi-

nary consumers. Internet cafes were invented in Thailand and


proliferated widely in Asia early on. Next-generation wireless

mobile applications in Japan have been among the most inno-

vative worldwide and are globally admired and imitated. Ur-

ban center road pricing and seaport management in Singapore

have attained timesliced precision that is orders of magnitude better than anywhere else in the world. In many East

Asian states, the Internet is a critical source of information,

shortcircuiting barriers in a way that nothing else can. Hong

Kong has cash card transaction rates unmatched elsewhere.

In city squares throughout the Far East, up-to-the-second,

streaming information screams out in high-tech high definition

at throngs of ordinary shoppers. Digital entertainment imaging

and animation here are unparalleled: East Asia continues to

make the best toys in the world, high-tech or otherwise.

This technology/final consumer linkage is, of course, not

unique in the world. Nokia Corporation in Helsinki has gotten

to be the world’s leading mobile telecommunications company

by focusing on exactly this, delivering leading-edge technology

directly (and literally) into the hands of hundreds of millions of

consumers worldwide.

But, if not unique, this linkage is not particularly common-

place either. Take that example of Finnish wireless banking,

mobile telecommunications, and information dissemination

applications. In the eyes of some, when compared to daily life

in Helsinki, consumer usage of technology in Silicon Valley is

akin to that of a relatively backward Third World country.

Perhaps so too, when compared to Hong Kong and other parts

of Asia.

3.1.2 Accumulating Capital under Joseph Stalin

In 1994, Paul Krugman (1994) suggested that because Sin-

gapore appeared to have developed primarily by heavily


accumulating physical capital, its high economic growth rate

could not be sustainable—the same way that Joseph Stalin’s

program for economic growth, embodied in exhorting Soviet

steel production to match that of the United States, was ulti-

mately bound to fail.

In this interpretation, Krugman used the economists’ predic-

tion that ongoing physical capital accumulation—other things

being equal—would eventually run into diminishing returns.

Putting into operation big machines, steel factories, bridges

and other physical infrastructure, and heavy machinery can

contribute to growth only temporarily—and then only in a

relatively minor way.

But if not physical capital, then what drives economic per-

formance? Many economists now agree that technical progress

and its close relative, technology dissemination, constitute the

ultimate source of sustained economic growth. That is the

position I take in this chapter.

But if that view is held almost uniformly, its connection to

the new economy is not as obviously uncontroversial. Econ-

omists such as Robert Gordon (2000) have been delightedly

skeptical on the contribution of the new economy to economic

performance. To caricature those views, the new economy has

been a scam, foisted on an unsuspecting public and naive,

trend-chasing policymakers by the new economy’s slick sales

and public relations machine.

3.1.3 Shopping the Internet

At the end of 2000, I got to have breakfast with a successful

multimillionaire Internet entrepreneur in London. I asked him

if he thought, as some seemed to, that Internet developments

amounted to a new industrial revolution. He replied, ‘‘We’re

just talking about selling more groceries through a big out-of-

town shopping center—how revolutionary is that?’’


My entrepreneur acquaintance—for the record, not an Inter-

net grocer—has a self-aware, tongue-in-cheek manner about

him. His statement is pithy to an extreme on the new economy.

It displays the same focus on the technology/consumer link-

age I described earlier. The statement is, in my view, spot on,

mostly, but it is a little too flippant on what is new in the New

Economy.

This chapter attempts to show why the technology/con-

sumer linkage is critical in the new economy—against a back-

ground of what economists know about economic growth

and technology, and about the importance of technology’s dis-

semination over time and across economies. It is here where

the new economy is truly new (well, almost) and where it

diverges most sharply from conventional mechanisms relating

technology and economic growth.

3.2 Technology in Economic Growth: Knowledge and

Economic Performance

From early on, economists studying growth found that capital

accumulation accounted for only 13 percent of the improve-

ment in economic welfare experienced over the first part of

the twentieth century (Solow 1957). The rest of economic

progress—almost 90 percent of it—had to be attributed to

technology, or total factor productivity (TFP). Recent empirical

analyses, notably Feyrer (2001), document how yet other key

features of patterns of cross-country development similarly

hinge importantly on TFP.
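The arithmetic behind that decomposition is the standard Solow residual; what follows is a minimal sketch, assuming constant returns, Hicks-neutral technology, and a capital share measured from factor payments (it is not a reproduction of Solow's own calculations).

% Growth accounting in per worker terms: with Y = A F(K, N) under
% constant returns and capital share \alpha,
\[
\frac{\dot y}{y} \;=\; \frac{\dot A}{A} \;+\; \alpha\,\frac{\dot k}{k},
\qquad y \equiv Y/N, \quad k \equiv K/N,
\]
% so TFP (technology) growth is measured as the residual
\[
\frac{\dot A}{A} \;=\; \frac{\dot y}{y} \;-\; \alpha\,\frac{\dot k}{k}.
\]
% The 13 percent figure is the share of \dot y / y accounted for by the
% capital-deepening term \alpha \dot k / k; the residual accounts for
% the rest.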

Those early conclusions followed from the so-called neo-

classical growth model (see, e.g., Solow 1956 and 1957 or the

technical appendix for this chapter). But the key policy impli-

cation that many took from this work was exactly opposite

to what the research showed—at least as I am interpreting it.


In the 1960s and 1970s, researchers and policymakers read

Solow’s work on the neoclassical growth model to mean that

physical capital accumulation was what mattered most for

economic growth. The reason, perhaps, is that, on the theoret-

ical side, neoclassical growth analysis focused on the economic

incentives surrounding decisions to save and invest in physical

capital; empirical analysis showing instead technology or TFP

accounting for a much greater effect on economic performance

and growth was downplayed.

(Some authors still take TFP to be no more than a residual,

whereupon many possibilities remain open for its interpreta-

tion and explanation—it might be political barriers, monopoly

inefficiency, X-efficiency, political economy inefficiency, moral

hazard, social capital, and so on. In this chapter, I adopt prin-

cipally the discipline of the neoclassical growth model, and I

identify TFP with only technology and possibly human capi-

tal, including the latter under technology more generally. The

technical appendix for this chapter makes this more precise.)

Thus, the development community devoted energy to putting

in place physical infrastructure for growth, while academic

economists sought to recalibrate models and redefine variables

to reduce the measured contribution due to technology. As an

example of these efforts, consider human capital—education

and training—which improves labor quality and thus increases

the effective quantity of labor. Accounting explicitly for human

capital might then reduce the importance of technology in ex-

plaining economic growth.

By the time Paul Krugman (1994) articulated his justly fa-

mous critique of Singaporean development policy, the weight

of opinion had swung full circle back to an emphasis on

technology—thanks to forceful arguments developed mean-

while in Lucas (1988) and Romer (1986, 1990, 1992). Econo-


mies could not hope to sustain high growth through savings

and capital accumulation alone. Thus, by the mid-1990s, con-

ventional wisdom was that a high TFP contribution to eco-

nomic growth indicated a successful economy, not one with

mismeasured capital stock and labor input. The way to in-

crease TFP growth was research and development (R&D)—

raising the science and knowledge base of the economy. Econ-

omists’ focus had shifted from the incentive to accumulate

physical capital to incentives for knowledge accumulation and

technical progress.

A simple formalization will help clarify the issues here as

well as others. Suppose that total output Y satisfies a produc-

tion function:

Y = F(K, N, \tilde{A}),    (1)

with K denoting the capital stock, N the quantity of labor, and \tilde{A} a first, preliminary index of technology.

To deal with potential mismeasurement in technology and to

highlight the role of human capital, suppose that \tilde{A} has two

components, h human capital per worker, and A technology

proper. Because human capital is embodied in workers, h is

specific to an economy—assuming for the discussion here that

workers can be identified as belonging to particular economies.

By contrast, A is disembodied and global. An alternative char-

acterization might be that A describes codifiable knowledge,

while h describes tacit knowledge.

Denoting quantities in different economies using subscripts,

one assumes that

\tilde{A}_j = (h_j, A)    (2)

applied to (1) gives either

Y_j = F(K_j, N_j \cdot h_j, N_j \cdot A)    (3)


or

Y_j = F(K_j, N_j \cdot h_j \cdot A).    (4)

The technical appendix shows that in one important class of

models (section 3.7.3) standard assumptions surrounding (3)

and (4) imply equilibria where levels of per capita incomes or

labor productivity, Y/N, can be influenced by decisions on

human capital. Growth rates in labor productivity, however,

remain equal to the growth rate of technology A and thus

invariant to decisions and policies on human capital.
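A minimal numerical sketch can make this levels-versus-growth-rates result concrete. The snippet below assumes a Cobb-Douglas special case of (4) with a fixed savings rate, exogenous technical progress, and a fixed level of human capital h; all parameter values are illustrative, not calibrated to any economy.

# A minimal sketch, assuming Y = K^alpha * (N*h*A)^(1-alpha) with a
# fixed savings rate s and a fixed human-capital level h.
def simulate(h, alpha=0.3, s=0.2, delta=0.05, g_A=0.02, n=0.01, T=400):
    K, N, A = 1.0, 1.0, 1.0
    path = []
    for _ in range(T):
        Y = K**alpha * (N * h * A)**(1 - alpha)
        path.append(Y / N)                  # labor productivity Y/N
        K = s * Y + (1 - delta) * K         # physical capital accumulation
        A *= 1 + g_A                        # exogenous technical progress
        N *= 1 + n                          # population growth
    return path

low, high = simulate(h=1.0), simulate(h=2.0)
# Level effect: doubling h roughly doubles long-run Y/N ...
print("Y/N ratio (h=2 vs h=1) after 400 periods:", round(high[-1] / low[-1], 2))
# ... but the long-run growth rate converges to g_A in both cases.
for p in (low, high):
    print("long-run growth rate of Y/N:", round(p[-1] / p[-2] - 1, 4))

Running the sketch shows labor productivity roughly doubling in level when h doubles, while its growth rate settles at the assumed growth rate of A in both cases.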

In a different class of models (section 3.7.4), growth rates

are influenced by human capital accumulation decisions. A

key feature of such models is that growth arises from interac-

tion between demand- and supply-side characteristics, not just

production-side developments.

The technical appendix clarifies the structural features dis-

tinguishing these two classes of models. Notably, however, the

models in sections 3.7.3–3.7.4 take human capital to be used

only in producing goods and services. Then, advances in hu-

man capital can increase labor productivity, even taking the

state of technology as given. Such models should be distin-

guished from those in, say, Romer (1990) where human capi-

tal is an input into R&D and thus technical progress, which

thereby evolves endogenously. Human capital can therefore

play dual but conceptually distinct roles in economic growth.

Working out the relative contributions to growth of tech-

nology and human capital, although not always distinct,

matters. In the decomposition (2), technology A is the accu-

mulation of a kind of knowledge resembling a global public

good. Human capital h, however, is different. One part of

knowledge that matters for growth is codifiable; the other,

tacit.


3.3 Dissemination and Catch-up? A Persistent and Growing

Divide

While A has always been viewed as an important engine of

economic growth—and the evidence and discussion of section

3.2 reconfirm this—recognizing the peculiar nature of the

incentives for A’s creation and dissemination raises a number

of subtle issues.

A first natural inclination is to view knowledge—ideas,

blueprints, designs, recipes—simply as a global public good.

Two observations argue for this.

First, knowledge is nonrival or infinitely expansible (David

1993; Romer 1990): However costly it might be to create the

first instance of a blueprint or an idea, subsequent copies have

marginal cost zero. The owner of an idea never loses posses-

sion of it, even after giving away the idea to others.

This observation differs from ideas being intangible: Hair-

cuts are intangible, but obviously not infinitely expansible.

Second, knowledge disrespects physical geography and other

barriers, both natural and artificial. Knowledge is aspatial;

ideas and recipes can be transported arbitrary distances with-

out degradation. (As before, the intangibility of haircuts but

their extreme location specificity makes clear why intangibility

alone cannot be the defining characteristic for knowledge.) The

acceptability of different ideas might of course differ across

locations, depending on the users of those ideas—but that

varies not strictly with geographical or national barriers, nor

monotonically in physical distance.

An extreme view following from the two observations—

first, that codifiable A accounts for most of economic growth

and second, that codifiable A is nonrival and has global

reach—is that the world should be roughly egalitarian, with all


economies having approximately the same income levels. Or, if

not, then at least income gaps between countries should be

gradually narrowing.

But the opposite is happening. While the whole world is

getting richer, the gap between poorest and richest is grow-

ing. Average per capita income (real, purchasing power parity

adjusted) has grown at a rate of 2.25 percent per year since

1960. At the same time, however, the income ratio between the

world’s 90th-percentile and 10th-percentile economies grew

from 12.3 in the first half of the 1960s to 20.5 in the sec-

ond half of the 1980s (Quah 1997, 2001a). Moreover, distinct

income clusters—one at the high end of the income range,

another at the low end—appear to be emerging. The cross-

economy income distribution has dynamics that are difficult to

reconcile with a naive view of knowledge dissemination.

If, to explain these observations, we allow the possibility

that A, the driver of growth, might differ across countries, then

technology dissemination—how Aj in economy j helps improve

Aj 0 in economy j0—becomes paramount for economic growth.

Dissemination mechanisms have been studied (e.g., Barro

and Sala-i-Martin 1997; Cameron, Proudman, and Redding

1998; Coe and Helpman 1995; Eaton and Kortum 1999;

Grossman and Helpman 1991), typically assuming that knowl-

edge and technology are embodied in intermediate inputs and

that property rights permit monopoly operation by the owners

of items of knowledge. However, in all these, that A is nonrival

and aspatial is never explicitly considered. But it is those pecu-

liar properties—nonrivalry and aspatiality—that allow great-

est parallel between developments in the new economy and

what economists might know about technology dissemination.

Parente and Prescott (2000) have posed questions that come

closest to the ones stated earlier. They too focus on A and its

apparent inability to disseminate globally. They conclude that


it is vested interests within a potentially A-receiving country

that represent significant barriers to A’s dissemination. By

contrast, Quah (2001a) suggested that those obstacles emerge

from an equilibrium interaction between A-transmitting and

A-receiving economies. In section 3.5 I consider the possibility

that it is high aversion to change and newness and low exper-

tise among potential users of A that prevent A’s dissemination.

This possibility had also been considered previously in Quah

(2001b, c).

3.4 The New Economy: Puzzles and Paradoxes

If we understand the new economy to be no more than what

has emerged from the proliferation of information and com-

munications technology (ICT), then the new economy ought to

contain no great surprises. ICT is just the most recent manifes-

tation of an ongoing sequence of technical progress. It should

then also contribute to economic performance the same way

technical progress has always done.

3.4.1 Why Might the New Economy Be New?

Two observations suggest potential differences. First, for many,

ICT is a general purpose technology (GPT), bearing the power

to influence profoundly all sectors of an economy simulta-

neously (Helpman 1998). Unlike technical advances in, say,

pencil sharpeners, ICT’s productivity improvements can ripple

strongly through the entire economy, affecting everything from

mergers and acquisitions in corporate finance, to factory floor

rewiring of inventory management mechanisms.

Second, ICT products themselves behave like knowledge

(Quah 2001c), in the sense described in section 3.3. Whether

or not one considers, say, a Britney Spears MP3 file down-

loadable off the Internet as a piece of scientific knowledge—


and I suspect most people would not—the fact remains, such

an item has all the relevant economic properties of knowl-

edge: infinite expansibility and disrespect of geography. Thus,

models of the spread of knowledge, like those described

earlier, can shed useful light on the forces driving the creation

and dissemination of ICT products. This view suggests some-

thing markedly new in the new economy—a change in the

nature of goods and services to become themselves more like

knowledge.

This transformation importantly distinguishes modern from

earlier technical progress: The economy is now more knowledge

based, not just from knowledge being used more intensively

in production, but from consumers’ having increasingly direct

contact with goods and services that behave like knowledge.

3.4.2 Puzzles and Paradoxes?

I now describe some puzzles relating technology, economic

growth, and the new economy. I suggest that interpreting the

new economy in the terms I have just described helps resolve

some, although not all, of these puzzles.

To overview, paradoxes in the knowledge-driven, technology-

laden economy are of three basic kinds:

1. What used to be just the Solow productivity paradox

(Solow 1987)—‘‘you see computers everywhere except in the

productivity numbers’’—extends more generally to science and

technology. Put simply, a skeptic of the benefits of computers

must, on the basis of productivity evidence, be similarly skep-

tical of science and technology’s impact on economic growth.

2. It is not just that science and technology or ICT seem unre-

lated to economic performance; the correlation is sometimes

negative. When output growth has increased, human capital

deployment in science and technology appears to have fallen.


3. Although it is by most measures the world’s leading tech-

nology economy, the United States imports more ICT than it

exports. And its TFP dynamics haven’t changed as much as

have TFP dynamics in other economies.

3.4.3 Solow Productivity Paradoxes

Figure 3.1 contrasts rapidly expanding information technology

(IT) investment with insignificant labor productivity improve-

ment in the United States between the mid-1960s and the early

1990s (Kraemer and Dedrick 2001). In 1973, annual growth

in IT spending rose to 17 percent from an average of -0.2 percent over the preceding eight years. It then averaged 15.7

percent for the twenty-two years afterward. Productivity

growth averaged 2.3 percent for the first period, and then

an anemic 0.9 percent subsequently. Thus, a potentially key

addition to the technological base of the U.S. economy appears,

Figure 3.1  IT investment has exploded but productivity growth has languished


in reality, to have contributed not at all to U.S. productivity

growth.

Figure 3.2 shows, however, that the puzzle is more profound

than the Solow paradox alone. From 1950 through 1988, the

fraction of the U.S. labor force employed as scientists and

engineers in R&D increased fourfold, from 0.1 percent to 0.4

percent (Jones 1995). The increase in this series is much

smoother: As much increase occurred after 1972 as before.

Yet, as we saw earlier in figure 3.1, labor productivity growth

fell sharply. (For completeness, figure 3.2 also graphs TFP

growth, which relates much the same story as labor produc-

tivity growth.) The smooth secular rise in science and technol-

ogy inputs engendered nothing remotely similar in incomes or

productivity.

I conclude that whatever mechanism relates technology

inputs—scientists and engineers; information technology—

Figure 3.2  Scientists and engineers in R&D have grown fourfold while productivity growth has failed to show anything remotely comparable


with measured productivity improvements, it is little under-

stood. That mechanism is no more transparent for prosaic and

uncontroversial inputs such as scientists and R&D engineers

than it is for ICT.

The puzzle only deepens when we turn to more recent evidence on the U.S. economy. Over 1995–1999, growth in nonfarm busi-

ness sector productivity rose to an annual rate of 2.9 percent,

more than double its average over the previous two decades

(U.S. Department of Commerce 1999). Was this the long-

awaited resolution of the Solow productivity paradox? If so,

yet a different paradox emerges. Over this time, human capital

indicators for science and technology in the United States

declined almost uniformly. Figures from the National Science

Foundation (http://caspar.nsf.gov/) show that while between

1987 and 1997 the total number of bachelor’s degrees in-

creased by 18 percent, that for computer science fell by 36

percent, for mathematics and statistics by 23 percent, for engi-

neering 16 percent, and for physical sciences, 1 percent. Bur-

relli (2001) reports that U.S. science and engineering graduate

enrollment fell in every single year since 1993, turning around

only in 1999. Just as U.S. productivity growth was starting to

increase, measurable science and engineering inputs for gen-

erating new technology were doing exactly the opposite.

These observations suggest, in my view, a number of com-

plications in the stylization that science and engineering con-

stitute direct inputs into technical progress, in turn driving

economic growth. If there is a productivity paradox for ICT

and the new economy, then a yet larger one holds for science

and technology more broadly.

3.4.4 International Puzzles

Most studies have thus far focused on the United States, but

cross-country evidence raises yet further puzzles. Is the United


States the world’s leading new economy? In 1997 the share of

ICT in total business employment was the same, 3.9 percent

(OECD 2000), for both the United States and the European

Union (EU). However, comparing the two blocs, the United

States is clearly well ahead on both value added and R&D ex-

penditure. In the United States, the share of ICT value added

in the business sector was 8.7 percent, while the share of ICT

R&D expenditure was 38.0 percent. The EU, by contrast, had

ICT value added of only 6.4 percent, and R&D expenditure in

ICT 23.6 percent.

That the EU numbers are averages across nation states,

however, disguises wide diversity across different economies.

Thus, a number of EU member states as well as other OECD

economies show up ahead of the United States in new

economy/ICT indicators (OECD 2000, Tables 1–3, pp. 32–

34). Compared to the United States, ICT share in total business em-

ployment is higher in Sweden (6.3%), Finland (5.6%), the

United Kingdom (4.8%), and Ireland (4.6%). Similarly, Korea

(10.7%), Sweden (9.3%), the United Kingdom (8.4%), and

Finland (8.3%) each have ICT shares of value added that ex-

ceed that of the United States. The share of ICT R&D expen-

diture is 51 percent in Finland and 48 percent in Ireland.

Moreover, in 1998 the United States imported US$ 35.9 billion

more ICT than it exported (OECD 2000, Table 4, p. 35). By

contrast, Japan (US$ 54.3 billion), Korea (US$ 13.6 billion),

Ireland (US$ 5.8 billion), Finland (US$ 3.6 billion), and Swe-

den (US$ 2.8 billion) all showed ICT trade surpluses.1

Finally, if the new economy and ICT are supposed to have

affected TFP’s dynamics in the U.S. economy, they appear to

have done so less than in economies like Finland, Ireland, and

Sweden. Vanhoudt and Onorante (2001) document that for the

United States the contribution of TFP to economic growth has


remained approximately constant at 71–72 percent throughout

both the 1970s and the 1990s. By contrast, Finland saw an

increase in TFP contribution to its growth performance from

60 percent to 85 percent; Ireland, from 63 percent to, in essence,

100 percent; and Sweden, from 51 percent to 72 percent.

No single piece of empirical evidence here is overwhelming

by itself, but the range of them suggests to me a couple of sur-

prising possibilities. First, it is economies like Finland, Ireland,

Sweden, Korea, and Japan that, in different dimensions, are

more New Economy than the United States—the first three

of these, most consistently so. Second, to the extent that the

United States has been a successful new economy and has

powered ahead on the technology supply side, it is its ICT

consumption, the demand side, that has grown even more.

3.4.5 What Does the New Economy Have to Be?

This discussion comes full circle to my introduction, where I

argued that the consumption or demand side of the new econ-

omy deserves greater attention than it has thus far attracted.

By contrast, productivity-focused new economy analyses are

numerous and varied, and include the influential and provoca-

tive study of Gordon (2000). In that work, the author identi-

fies the new economy as the acceleration in the rate of price

declines of computers and related technologies since 1995. He

compares new economy developments to what he calls ‘‘five

great inventions’’ from the past, identified as product clusters

surrounding (1) electricity; (2) the internal combustion engine;

(3) chemical technologies (notably molecule-rearranging tech-

nologies, incorporating developments in petroleum, plastics,

and pharmaceuticals); (4) pre–World War II entertainment,

communications, and information (including the telegraph,

telephone, and television); and (5) running water, indoor


plumbing, and urban sanitation infrastructure. In Gordon’s

analysis, these clusters of technological developments drove the

immense productivity improvements of the second industrial

revolution, 1860–1910. In Gordon’s definition, the new econ-

omy pales by comparison.

There is no question that Gordon’s list of great inventions

includes critically important technical developments. But com-

paring mere price reductions—if that is all the new economy

is—in inventions already extant (computers, telecommunica-

tions) to the items in the list hardly seems a balanced beginning

to assess their relative importance. Moreover, the past always

looks good—the further back the past, the better. The further-

back past has been around longer than the recent past, and so

has had greater opportunity to influence the world around us.

As an extreme, consider that at the end of 1999 a group of

leading thinkers were asked what they considered the critical

inventions of the millennium. Freeman Dyson, the renowned

theoretical physicist, extended the choice to cover two millen-

nia, and nominated dried grass:

The most important invention of the last two thousand years was hay. In the classical world of Greece and Rome and in all earlier times, there was no hay. Civilization could exist only in warm climates where horses could stay alive through the winter by grazing. Without grass in winter you could not have horses, and without horses you could not have urban civilization. Some time during the so-called dark ages, some unknown genius invented hay, forests were turned into meadows, hay was reaped and stored, and civilization moved north over the Alps. So hay gave birth to Vienna and Paris and London and Berlin, and later to Moscow and New York. (1999)

Very prosaic, minor changes can have profound effects, if they

stay around long enough.

Gordon’s list focuses on how the supply side of the economy

has changed. Even (4) from his list is of interest, in his analysis,


because it made the world smaller (‘‘in a sense more profound

than the Internet'' (Gordon 2000)) and really should include the

postal system and public libraries leading, in turn, to literacy

and reading.

In the analysis I develop here, by contrast, the new economy

is not only or even primarily a change in cost conditions on the

supply side, then affecting the rest of the economy that uses

that technology. Instead, it is the change in the nature of goods

and services to become increasingly like knowledge. To draw

out again the underlying theme, this is not just to say those

goods and services are science and technology intensive, but

instead that their physical properties in consumption are the

same as those of knowledge.

Such goods and services are becoming more important in

two respects: first, as a fraction of total consumption; and sec-

ond, in their increasingly direct contact with a growing number

of consumers. To be concrete then, I include in this new econ-

omy definition:

1. information and communications technology, including the

Internet;

2. intellectual assets;

3. electronic libraries and databases;

4. biotechnology (i.e., carbon-based libraries and databases).

The common, distinctive features of these categories are, as

earlier indicated: They represent goods and services with the

same properties as knowledge; they are increasingly impor-

tant in value added, and they represent goods and services with

which a growing number of final consumers are coming into

direct contact. Quah (2001c) has called such goods knowledge-

products. (This is partly to distinguish the issues here from

those typically studied in, say, the ‘‘economics of information.’’


The economic impact of a word-processing package, process-

controller software, gene sequence libraries, database usage, or

indeed the Open Source Software movement can be fruitfully

considered without necessarily bringing in ideas such as moral

hazard, adverse selection, or contract theory—the usual ‘‘eco-

nomics of information’’ concerns.)

Categories (1)–(4) in my definition are, of course, not mutu-

ally exclusive. Intellectual assets (2) include both patentable

ideas and computer software, with the latter obviously in-

cluded in ICT (1) as well. But by intellectual assets, I refer also

to software in its most general form, namely, not just computer

software, but also video and other digital entertainment and

recorded music. Finally, I prefer the term ‘‘intellectual assets’’

because it does not presume a social institution—such as pa-

tents and copyrights—to shape patterns of use, the way that,

say, the term ‘‘intellectual property’’ does.

Viewing the new economy as changes only on the supply or

productivity side can give only part of the picture. This sim-

plification is sometimes useful. Here it misleads. It generates

an unhealthy obsession with attempting to measure the new

economy’s productivity impacts. But even were that focus jus-

tified, shifting attention to the demand or consumption side

helps raise other important and subtle new issues.

3.5 Knowledge in Consumption and Economic Growth

When the new economy is identified with its potential supply-

side impact, the critical links are threefold. First, the new

economy emphasizes knowledge, and knowledge raises pro-

ductivity. Second, improved information allows tighter control

of distribution channels, and with better-informed plans, in-

ventory holdings can be reduced. Third, delivery lags have


shortened so that productive factor inputs—capital and labor

—can be reallocated faster and with less frictional wastage.

In the stylization from section 3.2 and running through most

of the discussion of sections 3.3 and 3.4, knowledge and the

new economy are represented by A in the production function

Y = F(K, N, A)    (5)

(now ignoring the distinction between A and \tilde{A} from section

3.2). In the conventional analysis, controversy surrounds the

quantitative dimension to this relation: Just how much does the

new economy affect A, and what is the multiplier on A for Y?

What I have tried to argue above is that the new economy is

most usefully viewed as moving A from the production func-

tion (5) to be an argument in agents’ preferences. The new

economy is a set of structural changes in the economy that

have ended up inserting into utility functions objects that have

the characteristics of A. Succinctly, if U represents a utility

function, and C the consumption of other, standard commod-

ities, then the new economy is

U = U(C, A).    (6)

Quah (2001c) has studied a model where learning to use new

A is costly in time, and therefore A affects consumers’ budget

constraint. The indirect utility function is then a reduced-form

representation with exactly the features of (6).
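One minimal way to formalize that time-cost argument is sketched below; the value function V, the service-flow function g, and the learning-time variable ℓ are illustrative assumptions for this sketch, not Quah's exact model.

% Learning to use the available technology A takes time \ell out of a
% unit endowment, so A enters the budget constraint through forgone
% labor income w(1-\ell); g(\ell, A) is the service flow the consumer
% extracts from knowledge-products.
\[
V(w, A) \;=\; \max_{\ell \in [0,1]} \; u\bigl(\, w\,(1-\ell),\; g(\ell, A) \bigr).
\]
% The maximized value V is an indirect utility in which A appears as a
% direct argument: relabeling C = w(1-\ell) gives a reduced form with
% exactly the shape of (6), U = U(C, A).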

That A disrespects geography and is infinitely expansible has

profound implications for the behavior of consumers as well as

producers. For one, transportation costs and end-user location

can no longer satisfactorily explain what we see in patterns of

economic geography (Fujita, Krugman, and Venables 1999;

Quah 2000, 2001b). For another, demand-side characteristics

assume increased importance in determining market outcomes

(Quah 2001c).


To see this second point, consider two possibilities. First,

suppose societies have established institutions—intellectual

property rights (IPRs) like patents, say—that prevent driving

the market price of knowledge products to zero marginal cost.

Social institutions do this by making copying illegal for all but

the IPR holder. The IPR holder then operates as a monopolist,

delivering a quantity and charging a price determined entirely

by the demand curve. Cost considerations determine profits,

but not price or quantity—it is demand alone that determines

market outcomes.
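A one-line way to see this first case, treating the demand curve p(q) as given and the marginal cost of copies as zero (a textbook monopoly sketch, not specific to any particular knowledge-product):

% With zero marginal cost, the IPR holder maximizes revenue alone:
\[
\max_{q}\; p(q)\,q
\;\;\Longrightarrow\;\;
p(q^{*}) + p'(q^{*})\,q^{*} = 0,
\]
% so output is set where demand is unit-elastic.  Both q* and the price
% p(q*) are pinned down by the demand curve; the fixed cost of creating
% the product affects profits but not price or quantity.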

Second, suppose the opposite, namely, that IPR institutions

do not exist. Knowledge-products then are not protected by

IPRs, but have incentive mechanisms for their creation and

dissemination separated—as might happen, say, under systems

of patronage or procurement (David 1993). Then infinite ex-

pansibility of the knowledge-product results in the supply side

supplying as much as the demand side will bear, in a way

divorced from the structure of costs in creation. Again, then,

the ultimate determinant of market outcomes is the demand

side.

These observations suggest the seemingly paradoxical con-

clusion that the most serious obstacle impeding progress in the

new economy might be consumer-side reluctance to participate

in it. The advanced technologies around us might well turn out

to be unproductive, not because of any defect inherent to them,

but instead simply because we users have chosen not to use

those technologies to best effect.

Statistical evidence in Jalava and Pohjola (2002) suggests

two conclusions that bear on this hypothesis. First, in the

United States in the 1990s, ICT use provided benefits exceed-

ing those from ICT production. Second, in Finland the contri-


bution of ICT use to output growth has more than doubled in

the 1990s.

Evidence of a different nature also sheds light on this

demand-side hypothesis. Quah (2001c) describes a historical

example where demand-side considerations mattered critically

for technical progress. China at the end of the Sung dynasty

in the fourteenth century was neither chockful of dot-com en-

trepreneurs nor brimming with Internet infrastructure. How-

ever, it did stand on the brink of an industrial revolution, four

centuries before the Industrial Revolution of late-eighteenth-

century Western Europe.2

China produced more iron per capita in the fourteenth cen-

tury than did Europe in the early eighteenth. Blast furnace and

pig/wrought iron technologies were more advanced in China

in 200 BCE than European ones in the 1500s. In China, iron’s

price relative to grain fell, within a century, to a third of its

level at the end of the first millennium—a technological im-

provement not achieved in the West until the eighteenth cen-

tury. Paper, gunpowder, water-powered spinning machines,

block printing, and durable porcelain moveable type were all

available in China between four hundred to one thousand

years earlier than in Europe. China’s invention of the com-

pass in 960 and ship construction using watertight buoyancy

chambers made the Chinese the world’s most technologically

formidable sailors, by as much as five centuries ahead of those

in the West.

China’s lead over Europe along this wide range of technical

fronts has long suggested to some that China should have seen

an industrial revolution four hundred years before Europe.

Detractors from this view do, of course, have a point: Perhaps

China wasn’t ahead in every single dimension of technological


prowess. But fretting over specific details on, for instance,

whether the Chinese used gunpowder mostly for fireworks

rather than warfare, or whether their understanding of tech-

nology was more bluesky science rather than engineering

oriented (or indeed vice versa), seems niggardly—academic

even—in light of the impressively broad array of demonstrated

technical competencies in China. Yet, despite this, the sub-

sequent five centuries saw dismal Chinese economic decline,

rather than sweeping economic progress. Why?

One reasonable conjecture, it seems to me, is that China’s

failure to exploit its technical base was a failure of demand. In

fourteenth-century China, technological knowledge was tightly

controlled. Scholars and bureaucrats kept technical secrets to

themselves; it was said that the Emperor ‘‘owned’’ time itself.

The bureaucrats believed that disseminating knowledge about

technology subverted the power structure and undermined

their position. That might well have been so. But, as a result,

no large customer base for technology developed, and techno-

logical development languished after its early and promising

start.

Eighteenth-century European entrepreneurs, in contrast, were

eager to use high-technology products such as the spinning

jenny and the steam engine. Strong demand encouraged yet

further technical progress. In 1781, to encourage sharper engi-

neering effort, Matthew Boulton wrote James Watt that ‘‘the

people in London, Manchester, and Birmingham are steam-

mill mad’’ (Pool 1997, 126).

Great excitement across broad swathes of society fired the

economic imagination and drove technology into immediate

application, as described in equation (6). Europe took the lead;

China languished.


I do not know if these demand-side considerations explain

the paradoxes in section 3.4. But they suggest to me that per-

haps we might have been looking in the wrong place all along

for evidence on the new economy.

3.6 Conclusion

Because the new economy is so intertwined with ICT, we are

primed to think of new economy developments as nothing

more than technology-driven, productivity-improving changes

on the supply side. We then want New Economy developments

to do what all technical progress has historically done. And

we emerge disappointed when we find productivity has not

skyrocketed, inflation has not forever disappeared, business

downturns have not permanently vanished, and financial mar-

kets have not remained stratospheric.

This chapter has argued that the most profound changes in

the new economy are not productivity or supply-side improve-

ments, but instead consumption or demand side changes. The

chapter has summarized the case for the importance of tech-

nical progress in economic growth, has argued why the new

economy differs and described how it is truly new, and has

drawn lessons from economic history to highlight potential

pitfalls and dangers as the new economy continues to evolve.

The technical appendix studies the role of human capital in

economic growth, clarifying when human capital affects in-

come levels but not growth rates and when it does affect

growth rates. It emphasizes the distinction between human

capital used for improving technology and human capital

used in producing goods and services. Both matter and each

separately can influence economic growth. The key finding is


that endogenous growth results from the interaction of demand

and supply features, contrasting sharply with economic growth

emerging solely from production-side characteristics.

Policy implications from this analysis are twofold. The

first involves measurement; the second, longer-term con-

cerns. We might be looking in the wrong place—supply-side

developments—for evidence on the impact of the new econ-

omy. Demand-side changes—the behavior of consumers—

might be where we need to document more carefully the

new economy. This is not to suggest a naive Keynesian-type

conclusion that only the demand-side is important. Both sup-

ply and demand matter—in growth as in all other economic

outcomes.

This altered emphasis in the ultimate source of economic

growth leads in turn to the second, longer-term implication. If

the profound changes are to be on the part of consumers, and

those changes take a while to filter through to steady-state

equilibrium growth, perhaps we should simply stay the course,

have faith in the new economy, and not obsess about measur-

ing productivity changes in the short term. Skilled, discerning

consumers and increased levels of broad-based education—for

encouraging improved uses of technology, for raising labor

productivity, for pushing back the frontiers of science and

technology—are what will drive economic growth, one way or

another.

3.7 Technical Appendix

This appendix studies the role of human capital in growth. It

considers two classes of models: First, where human capital

choices influence levels but not growth rates; second, where


human capital choices influence steady-state growth rates. (To

isolate the direct role of human capital, this appendix does not

consider the case where technology is influenced by inputs of

human capital (e.g., Romer 1990).)

In general, it is not the details on the mechanism for accu-

mulating human capital that matter for distinguishing the two

different effects. Instead, it is the a priori assumption on how

human capital enters the production function. Recall production function (1),
$$Y = F(K, N, \tilde A),$$
and assume that $\tilde A$ comprises two components $(h, A)$, where $h$ is per worker human capital and $A$ is technology proper.

In the first class of models—where human capital affects income levels but not growth rates—the total stock of human capital is a separate capital input, paralleling physical capital:
$$Y = F(K, N, \tilde A) = F(K, H, NA), \quad \text{with } H = hN. \qquad \text{(PF0)}$$
The second class of models has human capital attached explicitly to workers (e.g., Lucas 1988; Rebelo 1991; Uzawa 1965):
$$Y = F(K, N, \tilde A) = F(K, hNA) = F(K, HA). \qquad \text{(PF1)}$$
Human capital then augments labor the same way as does technology, and—as demonstrated in what follows—affects growth rates in steady state.3

Section 3.7.3 treats the first class of models, while section

3.7.4 the second.4 Assume throughout that F, whether in (PF0)

or (PF1), is constant returns to scale or homogeneous degree 1

(HD1).

The core of the material below is sufficiently well known

that it appears in a number of textbooks (e.g., Barro and Sala-

i-Martin 1995). However, the organization and emphases dif-

fer. Most important, this appendix explicitly includes in the


analysis technical change, population growth, and different

depreciation rates on human and physical capital. This is more

than just bookkeeping, as without them one is unable to ex-

amine the interaction between, say, technical change and hu-

man capital accumulation. Thus, section 3.7.4 demonstrates

that with ongoing technical progress, when human capital

contributes to growth its reduced-form relationship with in-

come and physical capital shows a diminishing significance—

even though were human capital absent, growth would fall.

Put differently, even when human capital matters, an empirical

researcher will discover no stable cointegrating relationship of

it with physical capital and income.

Next, under the same conditions, one observes that, unlike

physical capital, human capital must become progressively

costlier to accumulate. As technology advances, incrementing

the typical worker’s stock of human capital will, in equilib-

rium, demand ever greater resources. Thus the analysis in sec-

tion 3.7.4 captures the intuition that technologically advanced

economies require substantial, costly training, even if measured

human capital shows no large corresponding increases result-

ing from that training.

Turning from substantive to expositional considerations,

one finds that the analysis—including all the additional pos-

sibilities just mentioned and using general functional forms—

is conceptually easier than when applying just, say, Cobb-

Douglas functions.5 Without being any more complicated, the

development in section 3.7.4 includes as convenient special

cases a number of well-known models of growth with human

capital.

Although all the material that follows is technically more

difficult than that in the text, sections 3.7.1–3.7.3 remain rela-

tively less formal and rigorous. Section 3.7.4, on the other


hand, requires greater precision in the statements, and so uses a

much more formal (definition/theorem/proof) presentation.

3.7.1 General Setup

As far as possible, I use the following notational convention:

Uppercase letters denote economy-wide quantities, and lower-

case, their per capita or per worker versions. The Roman

alphabet denotes observable economic time series, and Greek,

parameters or coefficients. The more complicated the symbol

(tildes, underscores), the less easily what it denotes can be found in

national income accounts. Necessarily, however, there will be

some exceptions: The state of technology, A, cannot be directly

measured, but the symbol is so much used in the literature,

calling it something else would only confuse.

Assume
$$\dot N/N = n \ge 0, \quad N(0) > 0, \quad \text{and} \qquad (7)$$
$$\dot A/A = \xi \ge 0, \quad A(0) > 0, \qquad (8)$$
namely, the labor force and technology evolve at constant proportional growth rates. Endogenous population and technology models alter (7) and (8), respectively, setting out mechanisms and incentives for determining $\dot N/N$ and $\dot A/A$. This technical appendix focuses on human capital, however, and so we will retain (7) and (8).

Let the labor force equal the population, and define per worker output and capital as
$$y \stackrel{\mathrm{def}}{=} Y/N \quad \text{and} \quad k \stackrel{\mathrm{def}}{=} K/N,$$
and their technology-adjusted versions as
$$\tilde y \stackrel{\mathrm{def}}{=} Y/NA \quad \text{and} \quad \tilde k \stackrel{\mathrm{def}}{=} K/NA. \qquad (9)$$
In this formulation, $y$ is simultaneously also per capita income as well as average labor productivity. Following the same convention, define $H$ to denote total human capital, $H \stackrel{\mathrm{def}}{=} h \cdot N$, and the technology-adjusted version
$$\tilde h = H/NA = h/A. \qquad (10)$$
(This last definition will turn out to be useful only in section 3.7.3.) Aggregate physical and human capital depreciate at instantaneous flow rates $\delta_K$ and $\delta_H$, respectively.

To fix ideas, section 3.7.2 establishes the Solow neoclassical

growth model in our notation. Section 3.7.3 extends this to

where human capital affects levels but not growth rates. To

clarify the connection to the Solow model, the discussion here

follows Mankiw, Romer, and Weil (1992) in assuming ad hoc

accumulation in physical and human capital. This is not crucial

though: An optimizing Cass-Koopmans analysis obtains the

same results. What matters is assuming the production func-

tion (PF0) rather than (PF1).

Section 3.7.4 turns to an optimizing framework, and shows

how switching between production functions (PF0) and (PF1)

allows human capital to affect growth rates.

3.7.2 Neoclassical Growth

Following Solow (1956), let physical capital $K$ evolve as
$$\dot K = \tau_K Y - \delta_K K, \quad K(0) > 0, \quad \tau_K \in (0, 1), \quad \text{and} \quad \delta_K > 0, \qquad (11)$$
with $\dot K$ denoting $K$'s time derivative, and $\tau_K$ the savings or investment rate. It will be useful to define the deepening constant
$$\zeta_K \stackrel{\mathrm{def}}{=} (n + \xi) + \delta_K > 0.$$
In this first model take $h$ to be constant. Specialize production function (1) to the constant returns to scale function
$$Y = F(K, NA). \qquad (12)$$
A balanced-growth steady state (BGSS) is a collection of time paths
$$\{ y(t), k(t) : t \}$$
such that $\dot y/y$ and $k/y$ are constant in time. An equilibrium is a collection of time paths
$$\{ y(t), k(t) : t \in [0, \infty) \}$$
satisfying equations (11)–(12). A BGSS equilibrium is a BGSS satisfying equations (11)–(12).

To understand the properties of equilibrium, divide (12) throughout by $NA$ to obtain
$$\tilde y = F(\tilde k, 1) \stackrel{\mathrm{def}}{=} f(\tilde k).$$
Using (7)–(9) in equation (11) then gives
$$\dot{\tilde k}/\tilde k = \tau_K \cdot f(\tilde k)\tilde k^{-1} - \zeta_K, \quad \tilde k(0) > 0. \qquad (13)$$
Under standard economic assumptions on $f = F(\,\cdot\,, 1)$ the differential equation (13) implies that $\tilde k$ converges from any initial point $\tilde k(0)$ to the unique solution of
$$f(\tilde k)\tilde k^{-1} = \zeta_K \cdot \tau_K^{-1}.$$
Thus in equilibrium at BGSS, capital per worker
$$k = K/N = \tilde k A$$
grows at the constant rate $\dot A/A = \xi$. Output per worker
$$y = Y/N = \frac{F(K, NA)}{N} = f(\tilde k)A$$
converges similarly to a unique time path that grows in BGSS at the same constant, exogenously given rate $\xi$.

Summarizing, in this model with $h$ constant, in BGSS the growth rate of per capita income equals that for technology.
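A short numerical sketch can make the convergence behind (13) concrete. The fragment below is an illustration only, not part of the chapter's argument: it assumes a Cobb-Douglas technology $f(\tilde k) = \tilde k^\alpha$ (a specific functional form the appendix otherwise avoids) and arbitrary parameter values, integrates (13) with a simple Euler step, and checks the limit against the BGSS condition $f(\tilde k)\tilde k^{-1} = \zeta_K \tau_K^{-1}$.

```python
# Illustration only: Cobb-Douglas f(k) = k**alpha with made-up parameter values,
# used to trace how equation (13) converges to its balanced-growth steady state.
alpha   = 0.33               # capital share (assumed)
tau_K   = 0.25               # savings/investment rate
delta_K = 0.05               # depreciation of physical capital
n, xi   = 0.01, 0.02         # population and technology growth
zeta_K  = (n + xi) + delta_K # the "deepening constant"

f = lambda k: k ** alpha

# Euler-integrate k'(t) = tau_K * f(k) - zeta_K * k from an arbitrary start.
k, dt = 1.0, 0.01
for _ in range(200_000):
    k += dt * (tau_K * f(k) - zeta_K * k)

# BGSS condition f(k)/k = zeta_K/tau_K has the closed form below in this case.
k_star = (tau_K / zeta_K) ** (1.0 / (1.0 - alpha))
print(f"simulated k-tilde: {k:.4f}   closed-form BGSS value: {k_star:.4f}")
```

Along the balanced path so computed, per worker capital and output grow at $\xi$, exactly as stated above.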

3.7.3 Two Models of Growth with Human Capital: Levels

but Not Growth Rates

This section studies two different models for human capital in economic growth. In the first, human capital per worker $h$ increases without bound; in the second, $h$ remains finite in steady state. Both models, however, predict that choices on human capital influence only the level of output per worker. Steady-state growth rates will remain fixed at that for technology, $\dot A/A = \xi$, as in the previous model.

First (following Mankiw, Romer, and Weil 1992), suppose production function (1) now takes the form of equation (PF0),
$$Y = F(K, H, NA),$$
with constant returns to scale in all three arguments.

Parallel with physical capital accumulation (11), let $H$ evolve as
$$\dot H = \tau_H Y - \delta_H H, \quad H(0) > 0, \quad 0 < \tau_K + \tau_H < 1, \quad \text{and} \quad \delta_H > 0, \qquad (14)$$
with $\tau_H$ the rate of investment in human capital. Human capital increases from resources spent on it—schooling, for example—and depreciates at a constant proportional rate. Investment on human capital is a constant fraction of income. Equation (14) allows $h = H/N$ to increase without bound. Indeed, in the equilibrium that follows, $h$ will diverge to infinity.

A BGSS is a collection of time paths
$$\{ y(t), k(t), h(t) : t \}$$
such that $\dot y/y$, $k/y$, and $h/y$ are constant in time. An equilibrium is a collection of time paths
$$\{ y(t), k(t), h(t) : t \in [0, \infty) \}$$
satisfying equations (PF0), (11), and (14). A BGSS equilibrium is a BGSS satisfying equations (PF0), (11), and (14).

To see the properties of equilibrium, rewrite (PF0) in technology-adjusted per capita form:
$$\tilde y = F(\tilde k, \tilde h, 1) \stackrel{\mathrm{def}}{=} f(\tilde k, \tilde h).$$
As with the definition of $\zeta_K$, let
$$\zeta_H \stackrel{\mathrm{def}}{=} (n + \xi) + \delta_H > 0.$$
Then just as one obtained (13) for the neoclassical growth model, one has
$$\dot{\tilde k}/\tilde k = \tau_K \cdot f(\tilde k, \tilde h)\tilde k^{-1} - \zeta_K \quad \text{and} \qquad (15)$$
$$\dot{\tilde h}/\tilde h = \tau_H \cdot f(\tilde k, \tilde h)\tilde h^{-1} - \zeta_H. \qquad (16)$$
The pair of equations (15)–(16) implies a steady state in $(\tilde k, \tilde h)$ satisfying
$$f(\tilde k, \tilde h)\tilde k^{-1} = \zeta_K \cdot \tau_K^{-1} \quad \text{and} \quad f(\tilde k, \tilde h)\tilde h^{-1} = \zeta_H \cdot \tau_H^{-1}. \qquad (17)$$
Because $F$ is HD1, function $f$ will not be. Equation (17) then has a full-rank Jacobian and thus determines a unique pair $(\tilde k, \tilde h)$. From (15)–(16), the vector $(\tilde k, \tilde h)$ globally converges to the unique solution of (17). (Note that were $f$ HD1, then the Jacobian of (17) would be singular. Then, if a solution existed, equation (17) would determine not $(\tilde k, \tilde h)$ separately, but only their ratio.)

A useful interpretation of this result derives from recognizing that the left sides of equations (17) are the average products of physical and human capital, respectively, holding fixed technology-augmented labor $NA$. When $F$ is HD1, those average products decline to zero even when the other capital input rises proportionally. Although no explicit optimization informs the accumulation decision, the hypothesized savings functions (15) and (16) imply slowing accumulation as average products decline. Therefore, $\tilde k$ and $\tilde h$ do not grow indefinitely but instead converge to unique, finite values.

From the dynamics of $(\tilde k, \tilde h)$, per capita income $y = Y/N$ converges too to a unique steady-state path that grows at rate $\dot A/A = \xi$. This is exactly as in the neoclassical growth model in section 3.7.2. The level of the steady-state path in $y$ varies: For instance, it increases in steady-state $\tilde h$, which could be caused by, among other possibilities, a higher investment rate $\tau_H$ on human capital. However, to repeat, the growth rate of per capita income remains entirely unaffected, equaling $\xi$ always.
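To see how (17) pins down a unique pair $(\tilde k, \tilde h)$, consider the familiar Cobb-Douglas special case $f(\tilde k, \tilde h) = \tilde k^\alpha \tilde h^\beta$ with $\alpha + \beta < 1$, as in Mankiw, Romer, and Weil (1992). The short Python check below is only a sketch under assumed parameter values (my own, not values from the text); it solves the two conditions in (17) exactly by taking logs.

```python
import numpy as np

# Assumed parameter values for illustration only
alpha, beta      = 0.33, 0.25     # exponents on k-tilde and h-tilde, alpha + beta < 1
tau_K, tau_H     = 0.20, 0.10     # investment rates in physical and human capital
delta_K, delta_H = 0.05, 0.03
n, xi            = 0.01, 0.02
zeta_K, zeta_H   = n + xi + delta_K, n + xi + delta_H

# Equation (17) with f(k, h) = k**alpha * h**beta becomes linear in logs:
#   (alpha - 1) ln k + beta ln h       = ln(zeta_K / tau_K)
#   alpha ln k       + (beta - 1) ln h = ln(zeta_H / tau_H)
A = np.array([[alpha - 1.0, beta], [alpha, beta - 1.0]])
b = np.log([zeta_K / tau_K, zeta_H / tau_H])
log_k, log_h = np.linalg.solve(A, b)

# The system is full rank exactly because alpha + beta < 1 (f is not HD1),
# so the steady state is unique; per capita income then grows at xi in BGSS.
print(f"steady-state k-tilde = {np.exp(log_k):.3f}, h-tilde = {np.exp(log_h):.3f}")
```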

The second model—following Jones (1998, chap. 3) or Romer (2001, sec. 3.8)—again leaves unaffected the key growth predictions of the neoclassical model. Suppose as before that $h$ increases through investment, or through education in particular. However, while education can raise a worker's human capital with no diminishing returns, the amount of time that a worker can devote to education is bounded. Then even if all the worker's lifetime were spent on education, her human capital can, at most, reach some finite upper limit. Specifications that embody this implication include many typically used in labor economics. For instance,
$$h(s) = h_0 e^{\psi s}, \quad s \in [0, 1], \quad h_0, \psi > 0,$$
with $s$ denoting the fraction of time spent in schooling, implies a constant proportional effect for education
$$h'(s)/h(s) = \psi$$
(usually taken to equal 0.10 (e.g., Jones 1998, chap. 3)). But then even as $s$ increases to its upper limit of 1, per worker human capital $h$ approaches only at most $h_0 e^{\psi} < \infty$.

Use production function (PF1),
$$Y = F(K, NhA),$$
assumed to satisfy constant returns to scale, so that
$$\tilde y = F(\tilde k, h).$$
Denote the solution to a worker's optimization problem on education choice by the constant $s$, so that the corresponding human capital level is
$$h = h_0 e^{\psi s} \in [h_0, h_0 e^{\psi}].$$
Then, using (PF1), (7), and (8), the physical capital accumulation equation (11) becomes
$$\dot{\tilde k}/\tilde k = \tau_K \cdot F(\tilde k, h)\tilde k^{-1} - \zeta_K. \qquad (18)$$
But the behavior of $\tilde k$ from (18) is exactly the same as that from (13), up to a shift factor in levels, induced by $h$. Thus, again, $\tilde k$ converges from any initial point $\tilde k(0)$ to the unique solution of
$$F(\tilde k, h)\tilde k^{-1} = \zeta_K \cdot \tau_K^{-1}.$$
Under standard assumptions on $F$, the steady-state level of $\tilde k$ is increasing in $h$, and thus in $s$. However, the steady growth rate of capital per worker $k = K/N$ is simply $\dot A/A = \xi$, independent of $s$. Output per worker $y = Y/N$ inherits the same properties of global convergence and invariant steady-state growth rate. Thus, while levels of output per worker increase with education, growth rates are unchanged.
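The level-but-not-growth property is easy to verify numerically. The sketch below assumes a Cobb-Douglas $F$, so that $F(\tilde k, h)\tilde k^{-1} = \tilde k^{\alpha - 1} h^{1-\alpha}$, together with made-up parameter values; it shows the steady-state $\tilde k$ rising with the schooling choice $s$ while the growth rate of $k = \tilde k A$ stays at $\xi$ regardless.

```python
from math import exp

# Illustration only: Cobb-Douglas F and assumed parameter values.
alpha, tau_K, delta_K = 0.33, 0.25, 0.05
n, xi   = 0.01, 0.02
zeta_K  = n + xi + delta_K
h0, psi = 1.0, 0.10            # h(s) = h0 * exp(psi * s), psi roughly 0.10 as in the text

def k_steady(s):
    """Unique solution of F(k, h)/k = zeta_K/tau_K when F(k, h) = k**alpha * h**(1-alpha)."""
    h = h0 * exp(psi * s)
    return h * (tau_K / zeta_K) ** (1.0 / (1.0 - alpha))

for s in (0.0, 0.5, 1.0):
    print(f"s = {s:.1f}: steady-state k-tilde = {k_steady(s):.3f}; growth of k stays at xi = {xi}")
```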

3.7.4 Growth with Human Capital

The models thus far have used arbitrary accumulation pro-

cesses (11) and (14) and either production function (PF0) or

production function (PF1) with bounded per worker human

capital. In all cases per capita income growth occurred only

from technical progress $\dot A/A = \xi$. This section adopts produc-

tion function (PF1) and allows per worker human capital to

grow without bound. For completeness, the discussion also

takes an optimizing approach to accumulating physical and

human capital, in place of the arbitrary (11) and (14). It is easy

to see, however, that replacing (PF1) with (PF0) would restore

the growth results of the previous section.

The analysis in this section includes, in a consistent nota-

tion, special cases such as the one-sector model in Barro and

Sala-i-Martin (1995, sec. 5.1) and the two-sector model in


Rebelo (1991)—and therefore the Lucas model (Lucas 1988)

as well.

A social planner for the economy will solve a welfare opti-

mization program that can then be decentralized with markets.

Let C denote aggregate consumption so that, as earlier,

$c = C/N$ and $\tilde c = c/A$

respectively define per capita and technology-intensive, per

capita consumption.

Everyone in the economy is identical and infinitely lived. The representative agent discounts the future at constant rate $\rho > 0$ and has instantaneous utility $U(c)$, where $U' > 0$, $U'' < 0$, and
$$U'(c) \to \infty \text{ as } c \to 0.$$
Social welfare is
$$\int_0^\infty e^{-\rho t} N(t) U(c(t))\, dt = N(0) \cdot \int_0^\infty e^{-(\rho - n)t} U(c(t))\, dt.$$
Define
$$R(c) = -\frac{c\, U''(c)}{U'(c)} > 0.$$
If $U$ has the CRRA form
$$U(c) = \frac{c^{1-\theta} - 1}{1 - \theta}, \quad \theta > 0,$$
then $R(c) = \theta$, constant. However, to clarify the role that utility function $U$ plays in the growth analysis, I will write $R$ in general and assume it constant when necessary, rather than introduce a new parameter $\theta$.

Assume the production functions (PF0) and (PF1) are everywhere continuously differentiable. Denote partial derivatives with respect to their $j$th argument by $F_j$. As mnemonic, write $F_K = F_1$ and $F_H = F_2$, noting that in general $F_H \ne \partial F/\partial H$. For instance, in (PF1), $\partial F/\partial H$ equals $F_2 A = F_H A$. Since $F$ is HD1, each $F_j$ is HD0. The technology-adjusted per capita versions of (PF0) and (PF1) are, respectively,
$$\tilde y = F(\tilde k, \tilde h, 1) \stackrel{\mathrm{def}}{=} f(\tilde k, \tilde h) \quad \text{and}$$
$$\tilde y = F(\tilde k, h) \stackrel{\mathrm{def}}{=} f(\tilde k, h).$$
The function $f$ corresponding to (PF0) is decreasing returns to scale. That for (PF1) has $h$ rather than $\tilde h$ as argument, and retains the HD1 property—it is the same function as $F$, but I will write $f$ to treat (PF0) and (PF1) simultaneously. I will also carry along the mnemonic $f_K$ and $f_H$ for partial derivatives in the first and second arguments; again, $f_K = \partial f/\partial \tilde k \ne \partial f/\partial K$. Second partial derivatives will, analogously, be denoted $f_{KK}$ and so on. For now, assume only that all first partial derivatives are nonnegative; they might or might not satisfy Inada-type conditions. Because further assumptions on $f$ vary with the model, I will restrict $f$ as necessary below rather than here.

Denote by IK aggregate investment devoted to changing

physical capital, and by IH that for changing human capital.

Here, IH excludes learning-by-doing but includes formal

schooling and training—activities that draw resources away

from consumption and physical capital investment. Assume

that IK, subject to being nonnegative, can be costlessly trans-

formed with consumption C, so both are measured in the same

numeraire units. By contrast, private agents can trade IH only

at price q, not necessarily unity. The aggregate economy might,

of course, face additional constraints on IH—the two, IK and

IH, might never be directly tradeable—but this q interpretation

allows a consistent treatment of a range of different models.

The usual per capita and technology-adjusted versions are
$$i_K \stackrel{\mathrm{def}}{=} I_K/N, \quad \tilde i_K \stackrel{\mathrm{def}}{=} i_K/A; \qquad i_H \stackrel{\mathrm{def}}{=} I_H/N, \quad \tilde i_H \stackrel{\mathrm{def}}{=} i_H/A.$$


The national income identity is
$$Y = C + I_K + I_H \cdot q,$$
with technology-adjusted per capita version
$$\tilde y = \tilde c + \tilde i_K + \tilde i_H \cdot q.$$
Since $\tilde y = f(\tilde k, h)$, when $q$ is positive this equation describes the tension between consumption and accumulating physical capital on the one hand and accumulating human capital on the other. Models where $H$ increases through, say, learning by doing significantly depart from such a tension.

Physical capital accumulation follows:
$$\dot K = I_K - \delta_K K \;\Rightarrow\; \dot{\tilde k} = \tilde i_K - \zeta_K \tilde k. \qquad (19)$$
How $H$ depends on $I_H$ will vary, depending on what is being studied in a particular model, and won't necessarily be exactly as the relation above between $\dot K$ and $I_K$.

Definition 3.7.1 A balanced-growth steady state (BGSS) is a collection of time paths
$$\{ y(t), c(t), k(t), h(t), q(t) : t \}$$
such that $\dot y/y$, $\dot h/h$, $c/y$, $k/y$, and $q$ are invariant in time.

The definition implies $\dot c/c = \dot k/k = \dot y/y$. However, the relation between $h$ and $y$ is left unspecified: this will matter below. Write $g \stackrel{\mathrm{def}}{=} \dot y/y$ for the growth rate of per capita income or, equivalently, worker productivity in BGSS.

Without pretending to replace an equilibrium analysis, we can already conjecture at the formal results to come. If $F$ is either (PF0) or (PF1) with $h$ bounded, then BGSS has
$$\dot y/y = \dot c/c = \dot k/k = \xi = \dot A/A.$$
When $F$ is (PF0), then we also have in BGSS $\dot h/h = \xi$ so that $h/y$ is invariant. Growth comes only from technical progress: No other outcome is possible with $f$ displaying decreasing returns to scale.6

If, however, $F$ is (PF1) then BGSS potentially has
$$\dot y/y = \dot c/c = \dot k/k = \dot h/h + \xi,$$
so that the economy's growth rate $g$ exceeds both $\dot h/h$ and $\dot A/A$. We, of course, need a model still to determine $g$ in equilibrium, but regardless of $g$'s value, with $\xi = \dot A/A > 0$, the above already implies that in BGSS:

1. The ratios of human capital to income and to physical capital, $h/y$ and $h/k$ (or equivalently $H/Y$ and $H/K$), converge to zero;

2. Human capital must become increasingly costly to produce from $I_H$.

Thus, even with human capital mattering critically for growth, it will trend with neither income nor capital: In this model, the failure to find a stable cointegrating relationship between human capital and income is evidence for rather than against the importance of human capital in growth.

To understand the second implication, suppose it failed and instead a counterpart to equation (19) held:
$$\dot{\tilde h} = \tilde i_H - (n + \xi + \delta_H)\tilde h \;\Leftrightarrow\; \dot h = i_H - (n + \delta_H)h,$$
or
$$\tilde i_H = (\dot{\tilde h}/\tilde h + [n + \xi + \delta_H])\tilde h.$$
Since $f$ is HD1, BGSS has
$$\tilde y = f(\tilde k, h) \;\Rightarrow\; \dot h/h = \dot{\tilde y}/\tilde y = \dot{\tilde k}/\tilde k = g - \xi,$$
so that
$$\dot{\tilde h}/\tilde h = \dot h/h - \xi = g - 2\xi.$$
But then in BGSS the right side of the national income identity
$$\tilde y = \tilde c + (\dot{\tilde k}/\tilde k + \zeta_K)\tilde k + (\dot h/h + [n + \xi + \delta_H])\tilde h \cdot q$$
cannot grow at $g - \xi$, the growth rate of the left side.

Instead, what is needed is something like
$$\dot h = i_H/A - (n + \delta_H)h. \qquad (20)$$
In words, the contribution of $i_H$ to $\dot h$ becomes progressively more difficult as $A$ rises.7

From the discussion emerges

Proposition 3.7.2 If production $F$ is (PF0) then BGSS has
$$\dot h/h = \dot y/y = \xi.$$
If, however, production $F$ is (PF1) then BGSS has
$$\dot h/h = \dot y/y - \xi.$$

This specification specializes to several well-known cases. With (PF0), setting $q = 1$ and $\dot H = I_H - \delta_H H$, and requiring
$$\tilde c + \tilde i_K + \tilde i_H \le f(\tilde k, \tilde h)$$
recovers an optimizing version of the model in Mankiw, Romer, and Weil (1992). Specifying (PF1) and bounding $h$ gives the model in Jones (1998, chap. 3) and Romer (2001, sec. 3.8).

Using (PF1) and fixing $q = 1$ gives the one-sector growth model in Barro and Sala-i-Martin (1995, sec. 5.1). Freeing up $q$ but requiring that for some HD1 (sub)production functions $\Phi$, $\Gamma$ and allocation shares $s_K, s_H \in [0, 1]$:
$$F(K, HA) = \Phi(s_K K, s_H HA) + q \cdot \Gamma([1 - s_K]K, [1 - s_H]HA)$$
$$C + I_K \le \Phi(s_K K, s_H HA)$$
$$I_H \le \Gamma([1 - s_K]K, [1 - s_H]HA)$$
gives the model in Rebelo (1991). As before, call the partial derivatives $\Phi_K$, $\Phi_H$, and so on. Then, restricting further $\Gamma_K = 0$ gives the Lucas model. Since this case bears specific interest, the discussion below will take care to account for it with $s_K = 1$ at the corner optimum.

Hereafter, consider the following:

Definition 3.7.3 Assume production is given by (PF1) and human capital accumulation by (20). Suppose the economy solves the social welfare optimization program:
$$\sup_{\{\tilde c,\, \tilde i_K,\, \tilde i_H,\, q,\, s_K,\, s_H\}} \int_0^\infty U(\tilde c(t)A(t))\, e^{-(\rho - n)t}\, dt$$
$$\text{s.t.} \quad \tilde c,\ \tilde i_K,\ \tilde i_H,\ q \ge 0 \quad \text{and} \quad 0 \le s_K,\ s_H \le 1 \qquad (21)$$
$$\dot{\tilde k} = \tilde i_K - \zeta_K \tilde k \qquad (22)$$
$$\dot h = \tilde i_H - (n + \delta_H)h \qquad (23)$$
$$\tilde c + \tilde i_K + q\tilde i_H \le f(\tilde k, h) = \tilde y \qquad (24)$$
$$f(\tilde k, h) = \Phi(s_K \tilde k, s_H h) + q \cdot \Gamma([1 - s_K]\tilde k, [1 - s_H]h) \qquad (25)$$
and either
$$\tilde i_H \le \Gamma([1 - s_K]\tilde k, [1 - s_H]h) \qquad (26a)$$
or
$$q = 1. \qquad (26b)$$
A BGSS equilibrium is a BGSS together with a pair $(s_K, s_H)$ invariant in time solving (21)–(26).

When (26a) holds, (24) and (25) imply
$$\tilde c + \tilde i_K \le \Phi(s_K \tilde k, s_H h),$$
namely, the technology for producing $I_H$ differs from that for producing $C + I_K$. Call $C + I_K$ goods, so that $\Phi$ and $\Gamma$ describe goods production and human capital production, respectively.

To analyze equilibrium, define for nonnegative Lagrange multipliers
$$(\mu_K, \mu_H, \mu_C, \mu_Y, \mu_I, \mu_q)$$
the Hamiltonian:
$$\mathcal{H} = e^{-(\rho - n)t}\big[U(\tilde c A) + (\tilde i_K - \zeta_K \tilde k)\mu_K + (\tilde i_H - (n + \delta_H)h)\mu_H - (\tilde c + \tilde i_K + q\tilde i_H - f(\tilde k, h))\mu_C$$
$$\quad - (f(\tilde k, h) - \Phi(s_K \tilde k, s_H h) - q \cdot \Gamma([1 - s_K]\tilde k, [1 - s_H]h))\mu_Y - (\tilde i_H - \Gamma([1 - s_K]\tilde k, [1 - s_H]h))\mu_I - (1 - q)\mu_q\big].$$

The first-order conditions at an optimum are as follows:
$$\frac{\partial \mathcal{H}}{\partial \tilde c} = 0 \;\Rightarrow\; A U' - \mu_C = 0 \qquad (27)$$
$$\frac{\partial \mathcal{H}}{\partial \tilde i_K} = 0 \;\Rightarrow\; \mu_K - \mu_C = 0 \qquad (28)$$
$$\frac{\partial \mathcal{H}}{\partial \tilde i_H} = 0 \;\Rightarrow\; \mu_H - (q \cdot \mu_C + \mu_I) = 0 \qquad (29)$$
$$\frac{\partial \mathcal{H}}{\partial s_K} \lessgtr 0 \;\Rightarrow\; \Phi_K \cdot \mu_Y - (q \cdot \mu_Y + \mu_I)\Gamma_K \lessgtr 0 \qquad (30)$$
$$\frac{\partial \mathcal{H}}{\partial s_H} \lessgtr 0 \;\Rightarrow\; \Phi_H \cdot \mu_Y - (q \cdot \mu_Y + \mu_I)\Gamma_H \lessgtr 0 \qquad (31)$$
$$\frac{\partial \mathcal{H}}{\partial q} = 0 \;\Rightarrow\; -\tilde i_H \cdot \mu_C + \Gamma \cdot \mu_Y + \mu_q = 0 \qquad (32)$$
and
$$\frac{\partial \mathcal{H}}{\partial \tilde k} = -\frac{d}{dt}\left[e^{-(\rho - n)t}\mu_K(t)\right]$$
$$\Rightarrow\; f_K \cdot \mu_C + (1 - s_K)\Gamma_K \cdot \mu_I - \zeta_K \cdot \mu_K - (f_K - s_K \cdot \Phi_K - q[1 - s_K]\Gamma_K)\mu_Y = [(\rho - n) - \dot\mu_K/\mu_K]\mu_K \qquad (33)$$


with, finally,
$$\frac{\partial \mathcal{H}}{\partial h} = -\frac{d}{dt}\left[e^{-(\rho - n)t}\mu_H(t)\right]$$
$$\Rightarrow\; f_H \cdot \mu_C + (1 - s_H)\Gamma_H \cdot \mu_I - (n + \delta_H) \cdot \mu_H - (f_H - s_H \cdot \Phi_H - q[1 - s_H]\Gamma_H)\mu_Y = [(\rho - n) - \dot\mu_H/\mu_H]\mu_H. \qquad (34)$$

Conditions (30) and (31) work in the obvious way if it is optimal to set $s_K$ or $s_H$ to their boundary values at either 0 or 1. For instance, in the Lucas case, $\Gamma_K = 0$ so that share $s_K$ is optimally set to 1, whereupon (30) becomes the inequality $\Phi_K \cdot \mu_Y > (q \cdot \mu_Y + \mu_I)\Gamma_K$. Related, when $q$ is not restricted to 1, equation (32) fails and so provides no additional restriction in the solution. Finally, conditions (27)–(29) have been stated as equalities rather than more generally because all equilibria of interest below will have $\tilde c$, $\tilde i_K$, and $\tilde i_H$ positive.

In these first-order conditions, the price $q$ only ever appears together with the Lagrange multiplier $\mu_I$. When $q$ is not restricted to 1 (as in (26b)), the pair $(q, \mu_I)$ are then determined only jointly, not individually. This implies that the level of measured output $y$ in (24)–(25) is indeterminate as well, although its growth rate might be uniquely tied down. One sees this after corollary 3.7.6 later. The economics is straightforward: When (26a) is activated, the economy physically cannot instantaneously transform resources between goods and human capital. A range of possible prices $q$ can then be consistent with the observed outcomes in goods and human capital production. Put another way, agents' decisions are optimally at a corner solution. Then, up to limits, the Lagrange multiplier $\mu_I$ on (26a) moves appropriately to compensate for alternative settings of $q$. As the market price $q$ varies, again up to limits, optimal decisions remain unaltered, with $\mu_I$ transparently adjusting to maintain equilibrium. Being only a shadow value, $\mu_I$ is invisible to GDP accounting, whereas $q$ appears explicitly. Setting $q$ to zero recovers what Barro and Sala-i-Martin (1995, chap. 5) call ‘‘narrow output''; setting $q$ to its maximum value within the feasible range recovers ‘‘broad output.''

Identical Technologies for Human Capital and Goods

When $\tilde i_H$ is freely interchangeable with $\tilde c$ and $\tilde i_K$, set $\mu_I = \mu_Y = 0$ and $\mu_q > 0$. Then conditions (30)–(31) are irrelevant and $q = 1$, so that first-order conditions (29), (32), (33), and (34) become, respectively,
$$\mu_H - \mu_C = 0$$
$$-\tilde i_H \cdot \mu_C + \mu_q = 0$$
$$f_K \cdot \mu_C - \zeta_K \cdot \mu_K = [(\rho - n) - \dot\mu_K/\mu_K]\mu_K$$
$$f_H \cdot \mu_C - (n + \delta_H) \cdot \mu_H = [(\rho - n) - \dot\mu_H/\mu_H]\mu_H.$$
Calling $\mu$ the common value $\mu_C = \mu_K = \mu_H$ and log-differentiating (27) with respect to time, the collection of first-order conditions collapses to
$$\dot\mu/\mu = \rho + \delta_K + \xi - f_K = \rho + \delta_H - f_H \qquad (35)$$
$$\dot{\tilde c}/\tilde c = [(1 - R(\tilde c A))\xi - \dot\mu/\mu]\, R(\tilde c A)^{-1}. \qquad (36)$$
From the HD0 property of $f_K$ and $f_H$, equation (35) implies
$$f_H(1, h/\tilde k) - f_K(1, h/\tilde k) = (\delta_H - \delta_K) - \xi \qquad (37)$$
so that $h/\tilde k$ is constant in time,8 depending only on $\delta_H$, $\delta_K$, $\xi$, and $f$.

Significantly, (37) holds everywhere in equilibrium, not

only in BGSS. Thus, the model does not in general admit an

equilibrium—BGSS or otherwise—with arbitrary initial con-

ditions in K and H. At arbitrary initial levels of physical and


human capital the implied marginal products need not line up

as required in (37). In this model, physical and human capital

can change only gradually and so cannot be instantaneously

adjusted to meet marginal productivity conditions. But when

(37) does hold at a particular value of h=~kk, then equation (35)

gives _mm=m, which in turn determines _~cc~cc=~cc through (36).

That this gives the growth rate of the economy overall is

shown in the following proposition, which also summarizes the

discussion thus far and provides further details:

Proposition 3.7.4 Assume in definition 3.7.3 that $\tilde i_H$ is freely interchangeable with $\tilde c$ and $\tilde i_K$. Suppose that $R(\tilde c A)$ is constant and $f$ satisfies
$$\forall \text{ fixed } h:\quad f_K(\tilde k, h) \to 0 \text{ as } \tilde k \to \infty; \quad f_K(\tilde k, h) \to \infty \text{ as } \tilde k \to 0; \quad f_{KK} < 0;$$
$$\forall \text{ fixed } \tilde k:\quad f_H(\tilde k, h) > (\delta_H - \delta_K) - \xi \text{ uniformly in } h \text{ on a neighborhood of } 0; \quad \text{and } f_{HH} \le 0.$$
Then, for any given initial value $\tilde k^* > 0$, BGSS equilibrium exists and is unique, with the ratio $h/\tilde k$ taking a value $h^*$ constant in time and independent of $\tilde k^*$. The BGSS growth rate is
$$g = [f_K(1, h^*) - (\rho + \delta_K)]R^{-1} = [(f_H(1, h^*) + \xi) - (\rho + \delta_H)]R^{-1}, \qquad \text{(G1)}$$
bounded from above by the average product of $K$ in producing goods $(C + I_K)$ net of per capita depreciation. If $\xi > 0$ then the ratios of human capital to income and to physical capital converge to 0.

Proof By the assumptions on $f$, the left side of equation (37), $f_H - f_K$, exceeds its right side at $h/\tilde k = 0$ and strictly declines monotonically without bound. Thus (37) admits a unique positive finite solution $h^*$ in $h/\tilde k$. Using $h^*$ in (35) and plugging the result into (36) gives the growth rate $\dot{\tilde c}/\tilde c$, varying with $h/\tilde k$ but not $\tilde k$ itself. The definition of BGSS then gives
$$\dot{\tilde y}/\tilde y = \dot{\tilde k}/\tilde k = \dot{\tilde c}/\tilde c = [(1 - R)\xi - \dot\mu/\mu]R^{-1} = [(1 - R)\xi - (\rho + \delta_K + \xi - f_K(1, h^*))]R^{-1}.$$
Moreover, $h/\tilde k$ constant implies also $\dot h/h = \dot{\tilde k}/\tilde k = \dot{\tilde c}/\tilde c$. Then
$$g = \dot y/y = \dot{\tilde y}/\tilde y + \xi = \dot{\tilde c}/\tilde c + \xi = [(1 - R)\xi - \dot\mu/\mu]R^{-1} + \xi = [\xi - \dot\mu/\mu]R^{-1}$$
$$= [f_K(1, h^*) - (\rho + \delta_K)]R^{-1} = [(f_H(1, h^*) + \xi) - (\rho + \delta_H)]R^{-1}, \quad \text{from (35)},$$
verifying (G1). Since $\dot{\tilde k}/\tilde k = g - \xi$ in BGSS, one also has $\tilde k(t) = \tilde k^* e^{(g - \xi)t}$. To see this establishes an equilibrium, note that equations (22)–(24) imply
$$\tilde y = \tilde c + (\dot{\tilde k}/\tilde k + \zeta_K)\tilde k + (\dot h/h + (n + \delta_H))h,$$
so that $(\tilde k, h^*)$ then determine the other endogenous variables:
$$\tilde i_H = (\dot h/h + [n + \delta_H])\, h^* \cdot \tilde k$$
$$\tilde i_K = (\dot{\tilde k}/\tilde k + \zeta_K)\tilde k$$
$$\tilde c = f(1, h^*)\tilde k - \tilde i_H - \tilde i_K$$
$$\tilde y = f(1, h^*)\tilde k$$
$$\mu = \mu_K = \mu_H = \mu_C = A U'(\tilde c A)$$
$$\mu_q = \tilde i_H \cdot \mu \quad \text{and} \quad q = 1.$$
Define $\chi \stackrel{\mathrm{def}}{=} \tilde c/\tilde k$. In BGSS equilibrium $\dot\chi/\chi = \dot{\tilde c}/\tilde c - \dot{\tilde k}/\tilde k = 0$ so that, from (22), (23), (24), and $\dot{\tilde k}/\tilde k = g - \xi$, one has
$$\chi = f(1, h^*) - \zeta_K - (\dot h/h + [n + \delta_H])h^* - (g - \xi)$$
$$= \{f(1, h^*) - (\dot h/h + [n + \delta_H])h^*\} - (n + \delta_K) - g.$$
Since $\mu < \infty$ so that (27) gives $\tilde c > 0$, the expression on the right must be positive. The term in braces is the average product of $K$ in producing $C + I_K$. Net of per capita depreciation, that is, taking away $n + \delta_K$, this average product must therefore exceed growth rate $g$. Finally, for $\xi > 0$,
$$\dot h/h = \dot{\tilde y}/\tilde y = \dot y/y - \xi < \dot y/y = \dot k/k \;\Rightarrow\; h/y,\ h/k \to 0 \text{ as } t \to \infty. \quad \text{Q.E.D.}$$

The hypotheses on f as stated in proposition 3.7.4 might ap-

pear unusual, but are implied by the usual strict concavity and

Inada conditions. The statement gives an explicit lower bound

on fH that might well be negative, whereupon the condition

is redundant. I have chosen to give the hypotheses as above

to allow for situations in the literature that violate standard

assumptions but cause no difficulties otherwise. A prominent

example would be where the technology for accumulating H

is linear (e.g., Lucas 1988).

BGSS equilibrium growth rate (G1) has interesting features that should be emphasized:

Proposition 3.7.5 Under the hypotheses of proposition 3.7.4 the steady-state growth rate $g$ exceeds technology's growth rate $\xi$ precisely when
$$f_K(1, h^*) > R\xi + \rho + \delta_K \;\Leftrightarrow\; f_H(1, h^*) > (R - 1)\xi + \rho + \delta_H.$$

Proof Immediate from (G1). Q.E.D.

The economy's growth rate (G1) exceeds that of technology when the equilibrium steady-state capital ratio $h^*$ implies marginal products $f_K$ and $f_H$ sufficiently high. The threshold for these marginal products depends, notably, on both the production side $(\xi, \delta_K, \delta_H)$ and the consumer side $(\rho, R)$. Moreover, when the threshold is exceeded, the equilibrium growth rate itself depends, again, on both production features $(f, \xi, \delta_K, \delta_H)$ and consumer characteristics $(\rho, R)$. This contrasts with equilibrium growth rates in sections 3.7.2 and 3.7.3 that vary only with technology, namely, just with $\xi$. In the longer term, it might be this—rather than convergence or divergence, scale effects, stochastic trends, or a range of others—that turns out to be the single most distinctive characterization of endogenous growth. Emphasize this—it will appear again later—as follows:

Corollary 3.7.6 (Endogenous Growth Meta) Growth varies with not only supply-side properties but demand-side features as well.

Finally, also worth observing is that here population growth $n$ has no influence on the per capita income growth rate $g$. This finding, however, is quite special and easily overturned, despite the relatively general specification of the previous model.
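A numerical illustration may help fix ideas for (G1) and proposition 3.7.5. The sketch below assumes $f(\tilde k, h) = B \tilde k^a h^{1-a}$ with a scale constant $B$ and parameter values of my own choosing, solves equation (37) for $h^*$ by bisection, and then evaluates the growth rate (G1) and the threshold of proposition 3.7.5. It is an illustration of the formulas, not a calibration.

```python
from math import isclose

# Assumed parameter values, for illustration only
a, B    = 0.4, 0.3     # f(k, h) = B * k**a * h**(1-a): exponent and scale
rho, R  = 0.04, 2.0    # discount rate and constant relative risk aversion
delta_K, delta_H, xi = 0.05, 0.03, 0.02

fK = lambda eta: B * a * eta ** (1.0 - a)        # f_K(1, eta)
fH = lambda eta: B * (1.0 - a) * eta ** (-a)     # f_H(1, eta)

# Equation (37): fH(eta) - fK(eta) = (delta_H - delta_K) - xi.  The left side
# declines monotonically in eta, so bisection finds the unique root h* = eta*.
target, lo, hi = (delta_H - delta_K) - xi, 1e-9, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if fH(mid) - fK(mid) > target else (lo, mid)
eta_star = 0.5 * (lo + hi)

g  = (fK(eta_star) - (rho + delta_K)) / R                  # (G1), first form
g2 = ((fH(eta_star) + xi) - (rho + delta_H)) / R           # (G1), second form
assert isclose(g, g2, rel_tol=1e-6)

# Proposition 3.7.5: g exceeds xi exactly when f_K(1, h*) > R*xi + rho + delta_K.
print(f"h* = {eta_star:.3f}, g = {g:.4f}, g > xi? {fK(eta_star) > R * xi + rho + delta_K}")
```

With these assumed numbers the balanced growth rate comes out at a little over 4 percent a year, above the 2 percent contributed by technology alone, and the threshold condition in proposition 3.7.5 is indeed satisfied.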

Different Technologies for Human Capital and Goods

The setup here makes it straightforward to extend the discussion

to where human capital investment differs in essential ways

from consumption and physical capital investment. This is the

case considered in Lucas (1988), Rebelo (1991), and Uzawa

(1965).

Numerous special cases are possible. To keep things manageable, I rule out $s_K = 0$ and $s_H = 1$, namely, where no $K$ is used in $\Phi$ for producing goods and no $H$ is used in $\Gamma$ for generating human capital.9 Taken together, those possibilities represent the extreme version of what Barro and Sala-i-Martin (1995) call empirically irrelevant ‘‘reversed factor intensities.'' Ruling out $s_K = 0$ and $s_H = 1$ simply formalizes two properties: first, some physical capital is always necessary in goods production, and, second, it is not possible to produce new human capital without some human capital to begin with. Indeed, human capital is most of what goes into producing yet more human capital. A leading case of interest, which implies the exclusion, is Lucas's, which assumes $\Gamma_K = 0$ and $\Phi_H > 0$ everywhere, so that in equilibrium $s_K = 1$ and $s_H \in (0, 1)$.

Next, $s_H = 0$ can also be excluded. That boundary value would imply that human capital is not used in producing goods. But then it cannot be optimal to continue to produce any human capital at all in equilibrium, for human capital is neither consumed nor used in producing anything except itself. Thus, in the analysis to follow, the first-order condition (31) is strengthened to an equality.

Suppose that (26a) constrains $\tilde i_H$ to $\Gamma$ while $q$ is unrestricted so that $\mu_q = 0$. Then (32) gives $\mu_C = \mu_Y$. Equation (25) implies
$$f_K = s_K \cdot \Phi_K + q \cdot (1 - s_K)\Gamma_K$$
$$f_H = s_H \cdot \Phi_H + q \cdot (1 - s_H)\Gamma_H.$$
From these and (29), the FOC (33) becomes
$$s_K \Phi_K \cdot \mu_C + (1 - s_K)\Gamma_K \cdot \mu_H - \zeta_K \cdot \mu_K = [(\rho - n) - \dot\mu_K/\mu_K]\mu_K.$$
If $s_K = 1$ then the left side becomes just $\Phi_K \cdot \mu_C - \zeta_K \cdot \mu_K$. If, conversely, $s_K \in (0, 1)$, then (30) holds with equality so that together with (29) it gives $\Phi_K \cdot \mu_C = \Gamma_K \cdot \mu_H$, so that again the left side is $\Phi_K \cdot \mu_C - \zeta_K \cdot \mu_K$. Thus, ruling out $s_K = 0$, using $\mu_C = \mu_K$ from (28), gives for the previous:
$$\Phi_K - \zeta_K = (\rho - n) - \dot\mu_C/\mu_C. \qquad (33')$$
Again, by the partial derivatives of (25), the FOC (34) becomes
$$s_H \Phi_H \cdot \mu_C + (1 - s_H)\Gamma_H \cdot \mu_H - (n + \delta_H) \cdot \mu_H = [(\rho - n) - \dot\mu_H/\mu_H]\mu_H,$$
so that ruling out $s_H = 1$, analogous reasoning to that above gives
$$\Gamma_H - (n + \delta_H) = (\rho - n) - \dot\mu_H/\mu_H. \qquad (34')$$
(Counterparts to (33′)–(34′) are easily obtained if the exclusion restrictions $s_K \ne 0$ and $s_H \ne 1$ are reversed.)

Define
$$\mu \stackrel{\mathrm{def}}{=} \mu_H/\mu_C, \quad \chi \stackrel{\mathrm{def}}{=} \tilde c/\tilde k, \quad \eta \stackrel{\mathrm{def}}{=} h/\tilde k.$$
Now collect three dynamic equations for the just-defined $\mu$, $\chi$, and $\eta$. First, combining (33′) and (34′) gives:
$$\dot\mu/\mu = \dot\mu_H/\mu_H - \dot\mu_C/\mu_C = \delta_H - \delta_K - \xi + \Phi_K - \Gamma_H, \qquad (38)$$
where, because $\Phi_K$ and $\Gamma_H$ are each HD0, $s_K \ne 0$, and $s_H \ne 1$, one can evaluate $\Phi_K$ and $\Gamma_H$ in (38) at
$$\left(1, \frac{s_H}{s_K}\eta\right) \quad \text{and} \quad \left(\frac{1 - s_K}{1 - s_H}\eta^{-1}, 1\right),$$
respectively. The reason for taking $\Phi_K$ and $\Gamma_H$ at these points will become clear below.

will become clear below.

Second, as earlier, log-differentiate (27) with respect to time

to get

_~cc~cc=~cc ¼ ½ð1� Rð~ccAÞÞx� _mmC=mC�Rð~ccAÞ�1:

Combining this with _mmC=mC from ð33 0Þ and recognizing_~kk~kk=~kk ¼ ~iiK=~kk� zK ¼ FðsK; sH � hÞ � c� zK

¼ sKF 1;sHsKh

� �� c� zK

(where I have used FHD1) gives

144 Danny Quah

Page 158: Technology and the New Economy

_cc=c ¼ _~cc~cc=~cc� _~kk~kk=~kk

¼ ðnþ dKÞ � ðrþ dKÞRð~ccAÞ�1 þ c

þ Rð~ccAÞ�1FK � sK �F ð39Þ

with both FK andF evaluated at ð1; sH � s�1K hÞ.The term sK �F will play a key role in subsequent discus-

sion. SinceFð1; sH � s�1K hÞ is the output-physical capital ratio inthe Cþ IK sector (or physical capital’s average product in pro-ducing goods), the product sK �F is the ratio of goods pro-

duced to the economy-wide quantity of physical capital, not

just the quantity used in goods production. Call this the goods-

physical capital ratio. Its counterpart

ð1� sHÞ � G1� sK1� sH

h�1; 1

� �;

or the ratio of the flow of new human capital to the economy-

wide stock of human capital, will be similarly useful in the

analysis below.

Return now to the third of the dynamic equations. Using $\Gamma$ HD1, we have
$$\dot h/h = \Gamma([1 - s_K]/\eta, 1 - s_H) - (n + \delta_H) = (1 - s_H)\Gamma\!\left(\frac{1 - s_K}{1 - s_H}\eta^{-1}, 1\right) - (n + \delta_H)$$
so that
$$\dot\eta/\eta = \dot h/h - \dot{\tilde k}/\tilde k = \delta_K - \delta_H + \xi + \chi + (1 - s_H) \cdot \Gamma - s_K \cdot \Phi. \qquad (40)$$
Equation (40) combines together $\chi$, $\Gamma$, and $\Phi$ without using prices. This causes no problems, however, as by this point these terms are all simply numbers—they are ratios of the appropriate quantities.


Provided $R$ is constant, the three equations (38)–(40), together with (30) and (31) rewritten (using $\mu_C = \mu_Y$ and equation (29)) as the pair
$$\text{either} \quad \mu = \Phi_K \cdot \Gamma_K^{-1} \quad \text{or} \quad s_K = 1 \qquad (41)$$
$$\mu = \Phi_H \cdot \Gamma_H^{-1}, \qquad (42)$$
all $\Phi, \Phi_K, \Phi_H$ evaluated at $(1, s_H s_K^{-1} \cdot \eta)$ and all $\Gamma, \Gamma_K, \Gamma_H$ evaluated at $([1 - s_K][1 - s_H]^{-1} \cdot \eta^{-1}, 1)$, give five conditions that jointly determine $(\mu, \chi, \eta, s_K, s_H)$. The reason is now apparent for the evaluation point given right after equation (38).

Growth behavior here parallels proposition 3.7.4. However, the more involved nonlinear equations (38)–(41) make less transparent existence and uniqueness of the equilibrium, in contrast to the single equation (37) needed above. Special cases assuming explicit functional forms for $(\Phi, \Gamma)$—for example, the Cobb-Douglas pair model in Barro and Sala-i-Martin (1995, sec. 5.2) and Rebelo (1991) or the Cobb-Douglas linear model in Lucas (1988)—can be studied from the algebra of (38)–(41) directly.10 The proposition that follows therefore hypothesizes a unique solution to these equations, leaving unspecified the more primitive assumptions on $(\Phi, \Gamma)$ that would transform the hypothesis into a conclusion. Nevertheless, some work remains to confirm that this solution is a BGSS equilibrium.

Proposition 3.7.7 Assume in definition 3.7.3 that $R(\tilde c A)$ is constant and that human capital accumulates through a production function $\Gamma$ different from $\Phi$ (that for producing goods). Assume $(\Phi, \Gamma)$ implies that equations (41) and (42) together with the zeroes of equations (38)–(40) have a unique solution $(\mu^*, \chi^*, \eta^*, s_K^*, s_H^*)$, where $s_K \ne 0$ and $s_H \ne 1$. Then, for any given initial value $\tilde k^* > 0$, BGSS equilibrium exists and—except in $(y, q)$—is unique. It is characterized by a $(\mu^*, \chi^*, \eta^*, s_K^*, s_H^*)$ constant in time and independent of $\tilde k^*$, with the equilibrium nonuniqueness given as
$$q \in [0, \mu^*] \quad \text{and} \quad \tilde y = \Phi + q \cdot \Gamma \in [\Phi, \Phi + \mu^* \cdot \Gamma].$$
The BGSS equilibrium growth rate is
$$g = [\Phi_K - (\rho + \delta_K)]R^{-1} = [(\Gamma_H + \xi) - (\rho + \delta_H)]R^{-1}, \qquad \text{(G2)}$$
bounded from above by the goods-physical capital ratio net of per capita depreciation. If $\xi > 0$ then the ratios of human capital to income and to physical capital converge to zero.

Proof By the hypotheses, (26a) is satisfied with equality and $q$ is determined endogenously in equilibrium, so that (26b) no longer holds. In BGSS equilibrium, HD1 in production function (PF1), proposition 3.7.2, and (42) give $\mu$ constant and therefore $\dot\mu = \dot\chi = \dot\eta = 0$. Therefore, BGSS equilibrium has (38)–(40) become
$$s_K \cdot \Phi - \chi - (1 - s_H) \cdot \Gamma = \xi + \delta_K - \delta_H \qquad (43)$$
$$s_K \cdot \Phi - \chi - R^{-1}\Phi_K = (n + \delta_K) - (\rho + \delta_K)R^{-1} \qquad (44)$$
$$\Phi_K - \Gamma_H = \xi + \delta_K - \delta_H. \qquad (45)$$
By hypothesis, these together with (41)–(42) admit a solution
$$(\mu^*, \chi^*, \eta^*, s_K^*, s_H^*).$$
This allows us to evaluate:
$$\dot\mu_C/\mu_C = (\rho - n) - (\Phi_K - \zeta_K) = (\rho - n) - (\Gamma_H - (n + \delta_H))$$
$$\dot{\tilde c}/\tilde c = [(1 - R)\xi - \dot\mu_C/\mu_C]R^{-1}.$$
By BGSS definition 3.7.1,
$$\dot{\tilde y}/\tilde y = \dot{\tilde k}/\tilde k = \dot{\tilde c}/\tilde c,$$
so that
$$g = \dot y/y = \dot{\tilde c}/\tilde c + \xi = [\Phi_K - (\rho + \delta_K)]R^{-1} = [(\Gamma_H + \xi) - (\rho + \delta_H)]R^{-1},$$
verifying (G2). In BGSS, either proposition 3.7.2 or $\eta^*$ constancy gives $\dot h/h = \dot{\tilde k}/\tilde k = g - \xi$. From any initial $\tilde k^*$ we then have $\tilde k(t) = \tilde k^* e^{(g - \xi)t}$. To see this establishes an equilibrium, calculate
$$\tilde i_H = (\dot h/h + [n + \delta_H])\eta^* \cdot \tilde k$$
$$\tilde i_K = (\dot{\tilde k}/\tilde k + \zeta_K)\tilde k$$
$$\tilde c = \chi^* \tilde k = \Phi(s_K, s_H \eta^*)\tilde k - \tilde i_K$$
$$\mu_Y = \mu_K = \mu_C = A U'(\tilde c A)$$
$$\mu_H = \mu^* \cdot \mu_C.$$
The solution $(\mu^*, \chi^*, \eta^*, s_K^*, s_H^*)$ and an initial $\tilde k^*$ uniquely determine the endogenous variables above. However, not so for $(\mu_I, q, \tilde y)$ individually. Instead, from (24), (25), and (29), we have
$$\mu_I = \mu_H - q \cdot \mu_C = (\mu^* - q) \cdot \mu_C$$
$$\tilde y = \Phi(s_K^*, s_H^* \eta^*)\tilde k + q \cdot \Gamma([1 - s_K^*]/\eta^*, 1 - s_H^*)\eta^* \tilde k$$
so that any constant $q \in [0, \mu^*]$ implies a $\mu_I$ such that
$$0 \le \mu_I \le \mu^* \mu_C = \mu_H,$$
and a $y = A\tilde y$ that together with the above constitutes a BGSS equilibrium. Next, (39) gives
$$\chi = s_K \Phi - R^{-1}\Phi_K - [(n + \delta_K) - (\rho + \delta_K)R^{-1}]$$
$$= s_K \Phi - (n + \delta_K) - [\Phi_K - (\rho + \delta_K)]R^{-1}$$
$$= [s_K \Phi - (n + \delta_K)] - g.$$
The term in brackets is the goods-physical capital ratio net of per capita depreciation. Since $\mu < \infty$ so that (27) gives $\tilde c > 0$, the expression on the right must be positive: The growth rate $g$ is bounded from above by the net of per capita depreciation goods-physical capital ratio. Finally, for completeness, reproduce the previous argument that for $\xi > 0$,
$$\dot h/h = \dot{\tilde y}/\tilde y = \dot y/y - \xi < \dot y/y = \dot k/k \;\Rightarrow\; h/y,\ h/k \to 0 \text{ as } t \to \infty. \quad \text{Q.E.D.}$$

Is there intuition for the indeterminacy in $(q, \tilde y)$? Recall from (24)–(25) in definition 3.7.3 that $q$ is a relative price. It serves two functions: First, $q$ accounts for what is immediately added to national income by human capital accumulation. Second, $q$ is a market signal to allocate resources between producing goods and producing human capital. When technologies $\Phi$ and $\Gamma$ differ and restriction (26a) holds, the equilibrium production decision is a corner solution: goods and human capital cannot be transformed into each other—not just costlessly, but at all. The relative price that decentralizes this allocation decision is determined only up to an appropriate range. All prices within that range imply the same observed outcome in quantities; the slack is taken up by some shadow value, in this case, the Lagrange multiplier $\mu_I$. But then using $q$ in national income accounts leads similarly to a range of possible values for GDP. When $q$ is set to zero, GDP fails to include human capital accumulation and is then what Barro and Sala-i-Martin (1995, chap. 5) call ‘‘narrow output.'' Conversely, at the maximum feasible equilibrium value for $q$, namely $\mu^* = \Phi_H \cdot \Gamma_H^{-1}$ (corresponding to equation (5.16) in Barro and Sala-i-Martin (1995)), GDP evaluates to what Barro and Sala-i-Martin (1995, chap. 5) call ‘‘broad output.'' The analysis above, however, suggests that any level of GDP between narrow and broad output is equally meaningful. All of them grow at the same rate in BGSS equilibrium; all of them imply an identical value to the program (21)–(26).


As earlier, the BGSS equilibrium growth rate has interesting

features:

Proposition 3.7.8 Under the hypotheses of proposition 3.7.7

the steady-state growth rate g exceeds technology’s growth rate

$\xi$ precisely when
$$\Phi_K(s_K^*, s_H^* \cdot \eta^*) > R\xi + \rho + \delta_K \;\Leftrightarrow\; \Gamma_H([1 - s_K^*]/\eta^*, 1 - s_H^*) > (R - 1)\xi + \rho + \delta_H.$$

Proof Immediate from (G2). Q.E.D.

The equilibrium growth rate (G2) resembles (G1) in the earlier discussion. For the economy's growth rate to exceed that of technology, the marginal productivity of physical capital in goods production or, equivalently, the marginal productivity of human capital in generating new human capital must be sufficiently high. The critical threshold depends on both production $(\xi, \delta_K, \delta_H)$ and consumption $(\rho, R)$ characteristics. When the threshold is exceeded, again, the equilibrium growth rate depends on both production features $(\Phi, \Gamma, \xi, \delta_K, \delta_H)$ and consumer characteristics $(\rho, R)$.

Proposition 3.7.7, as already discussed, hypothesizes that $(\Phi, \Gamma)$ implies a unique solution to equations (38)–(42). A reasonable conjecture is that standard Inada-type conditions would deliver this. However, those curvature conditions would unnecessarily rule out, among others, the leading case with $\Gamma$ linear (Lucas 1988), and where the equilibrium can be studied explicitly. To see this, note that, in my notation, that model has
$$\Phi(s_K \cdot \tilde k, s_H \cdot h) = (s_K \tilde k)^\alpha (s_H h)^{1 - \alpha}, \quad \alpha \in (0, 1)$$
$$\Gamma([1 - s_K] \cdot \tilde k, [1 - s_H] \cdot h) = \gamma \cdot [1 - s_H] \cdot h, \quad \gamma > \max\{0, -[\xi + \delta_K - \delta_H]\}.$$
Then the ratios and marginal products in proposition 3.7.7 are
$$\Phi = \left(\frac{s_H}{s_K}\eta\right)^{1 - \alpha}$$
$$\Phi_K = \alpha \cdot \left(\frac{s_H}{s_K}\eta\right)^{1 - \alpha}$$
$$\Phi_H = (1 - \alpha) \cdot \left(\frac{s_H}{s_K}\eta\right)^{-\alpha}$$
$$\Gamma = \Gamma_H = \gamma \quad \text{and} \quad \Gamma_K = 0.$$
By the last of these, $s_K^* = 1$ in equation (41). Using this in (45) determines $s_H^* \cdot \eta^*$, since $\gamma > -[\xi + \delta_K - \delta_H]$ by hypothesis. In turn, equation (44) then gives $\chi^*$, and equation (43), $s_H^*$ and $\eta^*$ separately. Finally, (42) gives $\mu^*$. The BGSS equilibrium growth rate is
$$g = \dot h/h + \xi = \gamma \cdot (1 - s_H^*) - (n + \delta_H) + \xi.$$
This depends on consumer characteristics through $s_H^*$ being determined in (43)–(45).
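For readers who want to trace the algebra numerically, the sketch below works through exactly these steps for the Lucas case, under parameter values chosen purely for illustration (not a calibration): $s_K^* = 1$ from (41), then (45) for $s_H^* \eta^*$, (44) for $\chi^*$, (43) for $s_H^*$ and $\eta^*$ separately, and finally the growth rate, cross-checked against (G2).

```python
# Illustrative parameter values (assumptions for this sketch only)
alpha   = 0.4            # Cobb-Douglas exponent in the goods technology Phi
gamma   = 0.11           # linear productivity of the human capital technology Gamma
rho, R  = 0.04, 2.0
delta_K, delta_H = 0.05, 0.03
n, xi   = 0.01, 0.02

s_K = 1.0                                      # from (41), since Gamma_K = 0
# (45): alpha * (s_H * eta)**(1 - alpha) = gamma + xi + delta_K - delta_H
z = ((gamma + xi + delta_K - delta_H) / alpha) ** (1.0 / (1.0 - alpha))   # z = s_H * eta
Phi, Phi_K = z ** (1.0 - alpha), alpha * z ** (1.0 - alpha)
# (44): chi = Phi - Phi_K/R - (n + delta_K) + (rho + delta_K)/R, with s_K = 1
chi = Phi - Phi_K / R - (n + delta_K) + (rho + delta_K) / R
# (43): gamma * (1 - s_H) = Phi - chi - (xi + delta_K - delta_H)
one_minus_sH = (Phi - chi - (xi + delta_K - delta_H)) / gamma
s_H = 1.0 - one_minus_sH
eta = z / s_H
mu  = (1.0 - alpha) * z ** (-alpha) / gamma    # (42): mu = Phi_H / Gamma_H

g    = gamma * one_minus_sH - (n + delta_H) + xi   # growth rate from the text
g_G2 = (Phi_K - (rho + delta_K)) / R               # cross-check against (G2)
print(f"s_H* = {s_H:.3f}, eta* = {eta:.3f}, chi* = {chi:.3f}, mu* = {mu:.3f}")
print(f"g = {g:.4f}  (G2 gives {g_G2:.4f}),  technology growth xi = {xi}")
```

With these assumed numbers the two expressions for the growth rate agree, and $g$ exceeds $\xi$ because the marginal product $\Gamma_H = \gamma$ clears the threshold in proposition 3.7.8.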

Notes

I thank the Economic and Social Research Council (awardR022250126) and the Andrew Mellon Foundation for supporting thisresearch. Nazish Afraz provided research assistance. Discussions withPartha Dasgupta and comments from the editors and an anonymousreferee have helped me better understand some of the issues here. Thispaper was delivered in a public lecture as part of the University ofHong Kong’s 90th Anniversary Celebrations, 2001.

1. I have not been able to get more disaggregated statistics on thekinds of ICT products that are aggregated in the statistics. Perhapsintra-industry trade and product differentiation might be insightful forthinking about these numbers. If so, however, it also suggests that anaggregate, macro emphasis on ICT and productivity is misleading forassessing economic performance.

2. The analysis in Quah (2001c) had been originally motivated by myreading of Jones (1988) and Mokyr (1990). Since those, Landes


(1998) has further reignited controversy over the historical facts; see,for example, Pomeranz (2000). What matter for my discussion are notprecise details on how much exactly China might have been ahead ofEurope, when—within a five-century span of time—catch-up fromone to the other occurred, or if the reversal was sudden or gradual.No one disputes that fourteenth-century China was technologicallyadvanced nor that afterward China lost significant technologies that ithad earlier had. It is these that I draw on for this discussion.

3. To emphasize, in (PF0) the aggregate human capital stock $H$ appears as factor input, additional to and separate from labor $N$. Such a production function is used, for example, in Mankiw, Romer, and Weil (1992), where it takes the specific form $K^\alpha H^\beta (NA)^{1-\alpha-\beta}$, with $\alpha, \beta > 0$ and $\alpha + \beta < 1$.

4. A third class of models—for example, Jones (1998, chap. 3) orRomer (2001, sec. 3.8)—specifies production function (PF1) as in thesecond class of growth models, but then bounds the amount of humancapital per worker that can be accumulated. The results then are thesame as in levels-but-not-growth models, so this appendix incorpo-rates them in section 3.7.3.

5. Using general functional forms—assuming, say, no more than constant returns to scale—clears up any lingering doubts about a possible knife-edge nature to the conclusions. And it prevents the usual explosive cascade of exponents in $\alpha$'s and $(1-\alpha)$'s in the exposition, where descriptions such as ‘‘the net marginal product of physical capital'' then become ambiguously aliased into a whole range of other possible interpretations. As just one example, equation (5.13) in Barro and Sala-i-Martin (1995, 180) uses $n$ to mean two logically different things—one a Lagrange multiplier, the other an allocation share. Later on, just before equation (5.18) the authors use a ‘‘significant amount of algebra'' (omitted) to obtain a critical result. Of course, their accurate and powerful economic intuition gets them to the correct answer in any case. My exposition, conversely, never uses any significant amount of algebraic manipulation.

6. This overstates somewhat. Even with $F$ given by (PF0), BGSS with $\dot y/y > \xi$ might be possible if $h/y$ grows without bound. However, for consumption to remain bounded from below given the national income identity, $h$ accumulation must then become progressively less resource-demanding. This seems implausible.

7. Alternatively, the definition of BGSS in definition 3.7.1 can be modified appropriately to require an invariant $q$.


8. When $\delta_H - \delta_K = \xi = 0$ and $f(\tilde k, h) = \tilde k^\alpha h^{1-\alpha}$, then (37) gives $h/\tilde k = (1-\alpha)\alpha^{-1}$. This special case is, however, neither more insightful nor easier to obtain than the general case considered in this chapter. More important, it is strictly misleading in hiding the dependence of equilibrium $h/\tilde k$ on model parameters.

9. This exclusion will be used in (33′) and (34′). Given the current setup, an interested reader can easily see the implications of relaxing the restriction.

10. As an exercise, the interested reader is encouraged to plug in spe-cific functional forms and confirm that the resulting solutions verifyequilibria previously obtained in the literature. See also the discussionat the end of this section.

References

Barro, Robert J., and Xavier Sala-i-Martin. 1995. Economic Growth. New York: McGraw-Hill.

Barro, Robert J., and Xavier Sala-i-Martin. 1997. ‘‘Technology diffu-sion, convergence, and growth.’’ Journal of Economic Growth 2(1):1–25, March.

Burrelli, Joan S. 2001. ‘‘Graduate enrollment in science and engineer-ing increases for the first time since 1993.’’ National Science Founda-tion Division of Science Resources Studies, Data Brief, 11 January.

Cameron, Gavin, James Proudman, and Stephen Redding. 1998.‘‘Productivity convergence and international openness.’’ In Opennessand Growth, ed. James Proudman and Stephen Redding, 221–260.London: Bank of England.

Coe, David T., and Elhanan Helpman. 1995. ‘‘International R&Dspillovers.’’ European Economic Review 39(5): 859–887, May.

David, Paul A. 1993. ‘‘Intellectual property institutions and thepanda’s thumb: Patents, copyrights, and trade secrets in economictheory and history.’’ In Global Dimensions of Intellectual PropertyRights in Science and Technology, ed. M. B. Wallerstein, M. E.Mogee, and R. A. Schoen, 19–61. Washington DC: National Acad-emy Press.

Dyson, Freeman. 1999. ‘‘What is the most important invention in the past two thousand years?'' The Third Culture. Available on-line at <http://www.edge.org/documents/Invention.html>.


Eaton, Jonathan, and Samuel Kortum. 1999. ‘‘International technol-ogy diffusion: Theory and measurement.’’ International EconomicReview 40(3): 537–570, August.

Feyrer, James. 2001. ‘‘Convergence by parts.’’ Working paper, Dart-mouth College, Hanover, December.

Fujita, Masahisa, Paul Krugman, and Anthony Venables. 1999. TheSpatial Economy: Cities, Regions, and International Trade. Cam-bridge, MA: The MIT Press.

Gordon, Robert J. 2000. ‘‘Does the ‘New Economy’ measure up tothe great inventions of the past?’’ Journal of Economic Perspectives14(4): 49–74, Fall.

Grossman, Gene M., and Elhanan Helpman. 1991. Innovation andGrowth in the Global Economy. Cambridge, MA: The MIT Press.

Helpman, Elhanan, ed. 1998. General Purpose Technologies andEconomic Growth. Cambridge, MA: The MIT Press.

Jalava, Jukka, and Matti Pohjola. 2002. ‘‘Economic growth in theNew Economy: Evidence from advanced economies.’’ InformationEconomics and Policy 14(2), June.

Jones, Charles I. 1995. ‘‘Time series tests of endogenous growthmodels.’’ Quarterly Journal of Economics 110: 495–525, May.

Jones, Charles I. 1998. Introduction to Economic Growth. NewYork: W. W. Norton.

Jones, Eric L. 1988. Growth Recurring: Economic Change in WorldHistory. Oxford: Oxford University Press.

Kraemer, Kenneth L., and Jason Dedrick. 2001. ‘‘Information tech-nology and productivity: Results and policy implications of cross-country studies.’’ In Information Technology, Productivity, andEconomic Growth, ed. Matti Pohjola, UNU/WIDER and Sitra, 257–279. Oxford: Oxford University Press.

Krugman, Paul. 1994. ‘‘The myth of Asia's miracle.'' Foreign Affairs 73(6): 62–78, November/December.

Landes, David S. 1998. The Wealth and Poverty of Nations. London:Little, Brown and Company.

Lucas, Robert E. 1988. ‘‘On the mechanics of economic develop-ment.’’ Journal of Monetary Economics 22(1): 3–42, July.

Mankiw, N. Gregory, David Romer, and David N. Weil. 1992. ‘‘Acontribution to the empirics of economic growth.’’ Quarterly Journalof Economics 107(2): 407–437, May.


Mokyr, Joel. 1990. The Lever of Riches: Technological Creativity andEconomic Progress. Oxford: Oxford University Press.

OECD. 2000. Measuring the ICT Sector. Paris: OECD.

Parente, Stephen L., and Edward C. Prescott. 2000. Barriers toRiches. Walras-Pareto Lectures. Cambridge, MA: The MIT Press.

Pomeranz, Kenneth. 2000. The Great Divergence: China, Europe, andthe Making of the Modern World Economy. Princeton: PrincetonUniversity Press.

Pool, Robert. 1997. Beyond Engineering: How Society Shapes Tech-nology. New York: Oxford University Press.

Quah, Danny. 1997. ‘‘Empirics for growth and distribution: Polariza-tion, stratification, and convergence clubs.’’ Journal of EconomicGrowth 2(1): 27–59, March.

Quah, Danny. 2000. ‘‘Internet cluster emergence.’’ European Eco-nomic Review 44(4–6): 1032–1044, May.

Quah, Danny. 2001a. ‘‘Cross-country growth comparison: Theory to empirics.'' In Advances in Macroeconomic Theory, Vol. 133 of Proceedings of the Twelfth World Congress of the International Economic Association, Buenos Aires, ed. Jacques Dreze, 332–351. London: Palgrave.

Quah, Danny. 2001b. ‘‘Demand-driven knowledge clusters in aweightless economy.’’ Working paper, Economics Department, LSE,April.

Quah, Danny. 2001c. ‘‘The weightless economy in economic devel-opment.’’ In Information Technology, Productivity, and EconomicGrowth, ed. Matti Pohjola, UNU/WIDER and Sitra, 72–96. Oxford:Oxford University Press.

Rebelo, Sergio. 1991. ‘‘Long-run policy analysis and long-rungrowth.’’ Journal of Political Economy 99(3): 500–521, June.

Romer, David. 2001. Advanced Macroeconomics, 2d ed. New York:McGraw-Hill.

Romer, Paul M. 1986. ‘‘Increasing returns and long-run growth.’’Journal of Political Economy 94(5): 1002–1037, October.

Romer, Paul M. 1990. ‘‘Endogenous technological change.’’ Journalof Political Economy 98(5, part 2): S71–S102, October.

Romer, Paul M. 1992. ‘‘Two strategies for economic development:Using ideas and producing ideas.’’ Proceedings of the World BankAnnual Conference on Development Economics (March): 63–91.


Solow, Robert M. 1956. ‘‘A contribution to the theory of economicgrowth.’’ Quarterly Journal of Economics 70(1): 65–94, February.

Solow, Robert M. 1957. ‘‘Technical change and the aggregate pro-duction function.’’ Review of Economics and Statistics 39(3): 312–320, August.

Solow, Robert M. 1987. ‘‘We’d better watch out.’’ New York TimesBook Review Section, 12 July, p. 36.

Stephenson, Neal. 1999. In the Beginning Was the Command Line.New York: Avon Books.

U.S. Department of Commerce. 1999. The Emerging Digital Econ-omy II. U.S. Department of Commerce.

Uzawa, Hirofumi. 1965. ‘‘Optimal technical change in an aggregativemodel of economic growth.’’ International Economic Review 6: 18–31, January.

Vanhoudt, Patrick, and Luca Onorante. 2001. ‘‘Measuring economicgrowth and the new economy.’’ EIB Papers 6(1): 63–83.


4 Technological Advancement and

Long-Term Economic Growth in Asia

Jeffrey D. Sachs and John W. McArthur

4.1 Introduction

We are living in an age of remarkable technological change

that is forcing us to think very hard about the linkages between

technology and economic development. The harder we think

about it, the more we realize that technological innovation is

almost certainly the key driver of long-term economic growth.

We further realize that the innovation process must be sup-

ported by a complex set of social institutions. Although mar-

kets have a great deal to do with innovation, innovation is not

purely a market-driven phenomenon. Innovating economies

require an interconnected set of market and nonmarket insti-

tutions to make the innovation process work effectively, and

for this reason, governments need an innovation strategy if

they wish to foster highly innovative economic systems.

This need for an innovation strategy is as real in Asia as it

is anywhere else in the world. In Asia, however, the necessity

is perhaps more immediate than in most other developing re-

gions, since many Asian economies now stand at a threshold

of development requiring a new approach to technology and

growth. Over the next twenty-five years, many Asian econo-

mies will undergo a transition from being top-flight adopters of

technologies from the United States, Europe, and Japan, to be-

coming technology innovators.

This chapter outlines in broad terms the rationale for a focus

on systems of innovation, with particular emphasis on the

challenges facing East Asian economies. Following this intro-

duction, section 4.2 briefly outlines the modern theory of eco-

nomic growth, focusing on the main lessons regarding the role

of technology in economic development. We relate the theory

to the most notorious modern example of an economy without

technological advance, the Soviet Union, as well as to Latin

America, a region that has also generally paid insufficient heed

to the importance of technological advance. Section 4.3 dis-

cusses the distinct processes of innovation and diffusion, and

describes Asia’s place in the current global technological di-

vide. Section 4.4 then emphasizes several key traits of the

innovation process and section 4.5 describes the notable suc-

cesses of the U.S. innovation system in this light. Section 4.6

highlights some lessons for Asia as the region’s economies

progress toward innovation-based growth in the years ahead,

and section 4.7 concludes.

4.2 Economic Growth Theory and the Role of Technology

Economic theory offers a series of textbook approaches to

understanding economic change. One of the first was initiated

in 1776 by Adam Smith (Smith 1981), who emphasized the

role of the division of labor in promoting rising output per

person. He stressed that increasing specialization, mediated

mainly by market forces, would lead to rising efficiency in

production, and therefore to rising living standards. Smith

focused on the role of market institutions, efficiency in trans-

actions, and effective property rights in promoting high levels

of economic well-being. Understandably, Smith’s model of the

division of labor did not draw primary attention to innovation

since he was living at the time when the Industrial Revolution

was just gaining force. The full import of sustained innovations

across many economic sectors could still not be seen.

Much of modern growth theory was developed in the middle

part of the twentieth century, when a series of pathbreaking

papers—including those by Roy Harrod (1939), Evsey Domar

(1946), and particularly Robert Solow (1956) and his followers

—led economists to stress savings, investment, and capital ac-

cumulation as key drivers of gross national product levels and

growth. The practical implication was that, based on these and

a few other key theoretical foundations, development econo-

mists around the world directed their policy advice toward

ways to raise the savings rate in an economy and ways to

channel savings into productive investments. Much less atten-

tion was paid to the part of economic growth that is founded

upon technological change.

There is a certain irony to the focus on capital accumulation,

since Solow’s pathbreaking 1956 neoclassical model, the one

that won him a Nobel Prize in 1987, actually had a contrary

message, as Solow himself indicated. The Solow approach re-

mains the first economic growth model that students learn,

usually presented with a focus on the rise in capital per person

as the prime force in raising living standards over time. Yet

Solow showed that when the saving rate rises in an economy,

this leads to a temporary increase in the rate of capital accu-

mulation and a permanent increase in the level of output per

capita, but not to a rise in the long-run rate of growth of out-

put per capita. The long-term economic growth rate in Solow’s

model is actually independent of the rate of saving and capital

accumulation. Indeed, in order to produce a sustained rate of

growth in his model, Solow had to go beyond mere capital

accumulation. He had to introduce an exogenous rate of

improvement in labor productivity, presumably the result of

technological advancement. But in his famous model, Solow

did not try to explain the source of that technological ad-

vancement; he merely assumed it.
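
A minimal sketch of this result, in standard textbook notation rather than Solow's own exposition, may help fix the point. Take output to be Y = K^{\alpha}(AL)^{1-\alpha}, with labor-augmenting productivity A growing at an exogenous rate g, the labor force growing at rate n, depreciation \delta, saving rate s, and capital share \alpha (all of these symbols are ours, not the chapter's). Capital per effective worker, k = K/(AL), then evolves as

\[
\dot{k} = s\,k^{\alpha} - (n+g+\delta)\,k,
\qquad
k^{*} = \Big(\frac{s}{n+g+\delta}\Big)^{\frac{1}{1-\alpha}},
\qquad
\frac{Y}{L} = A(t)\,(k^{*})^{\alpha} \ \text{ in the steady state.}
\]

A higher saving rate s raises the level term (k^{*})^{\alpha}, and hence raises output per worker permanently, but the long-run growth rate of Y/L remains g, the assumed rate of technological advancement.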

A year after his 1956 theoretical piece, Solow made a basic

and tremendously important calculation that is still instructive

for scholars today (Solow 1957). He examined U.S. economic

data from 1909 to 1949 and asked what they tell us about the

sources of U.S. economic growth over that period of time.

Ingeniously, he used his theoretical framework to extract the

part of economic growth that was due to more capital accu-

mulated per person from the part that was due to the advance

of technology. These were the first such national growth ac-

counting calculations in the modern study of economics.

What did Solow find? He found that technological change

accounted for seven-eighths of the growth of the U.S. economy

and that increases in capital stock—the equipment, machinery,

and residential stock relative to the population—accounted for

only one-eighth of the growth of income per person in the

United States. His empirical assessment supported the theoret-

ical suggestion of his model that technological advancement

has been the key long-term driver of economic development.
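
The accounting behind this split is worth recording explicitly. With a capital share \alpha, growth in output per worker decomposes, to a first approximation, as

\[
\Delta \ln (Y/L) \;=\; \Delta \ln A \;+\; \alpha\,\Delta \ln (K/L),
\]

where the productivity term \Delta \ln A is measured as a residual, the part of observed growth left over after subtracting the contribution of capital deepening. Applied to the 1909–1949 U.S. data, that residual amounted to roughly seven-eighths of the growth of output per person, with capital deepening supplying the remaining one-eighth.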

Those two articles in 1956 and 1957 had an extremely

important message: Understanding long-term economic growth

requires understanding technological innovation. But the eco-

nomics profession is somewhat odd. The technically challenging

part of the Solow growth models lies in solving a differential

equation for how fast the capital stock grows rather than in

interpreting the mysterious process of technological change.

And so, for the many years following Solow’s initial contribu-

tions, economists studied the role of savings and investment as

the central feature of economic growth, rather than focusing

on the sources of long-term technological change. This began

to change only in the 1980s.

4.2.1 What Happens When There Is No Technological

Advancement?

Joseph Stalin provided the most compelling example of trying

to use a high saving rate as the key to economic development

when he imposed forced saving, in a very brutal manner, to

promote industrialization in the Soviet Union. Yet the Soviet

economy had very little technological change in the civilian

sector for decades and, as a result, came about as close as pos-

sible to a case of a high saving rate combined with stagnant

technology. It is probably fair to say that it proved a key result

of the Solow model nicely, albeit in a planned-economy con-

text: Capital accumulation without technological advancement

eventually leads to the end of economic growth.

In the beginning of forced industrialization in the 1930s,

the Soviet economy grew quite rapidly as the marginal pro-

ductivity of new capital investments in industry was high. The

Soviet planners in the 1930s and afterward allocated industrial

investments according to the industrial division of labor that

they copied from the United States and Germany at the time.

They calculated how many steel mills and coalmines and so

forth were needed to build an automobile sector or an airplane

industry and then built up those industries in fixed proportions

over time. The division of labor was rigidly set. Capital accu-

mulation increased the scale of production without affecting

dramatically the division of labor. New innovations were diffi-

cult or impossible to introduce into the rigid planning struc-

ture, other than in the military sector.

The Soviet planners contributed to a national tragedy, but

an instructive historical episode for the world, by pursuing the

capital accumulation process with little civilian technological

change for half a century. They proved that by accumulating

capital in the absence of technological change, the marginal

productivity of capital is driven down to essentially zero. By

the 1970s and 1980s, the Soviet Union was producing more

steel in the aggregate than the United States, for example, even

though its income level was less than a third of the U.S. level.

But by that time the ability to turn the vast quantities of steel

into higher output per capita had almost disappeared. As a

result, the Soviet Union became a giant steel graveyard, with

rusting steel everywhere.

Although not characterized by a high savings rate, some

South American economies, most notably Argentina, provide

another example of what can happen when a region does not

progress technologically. Thirty years ago, much of South

America was at an admirable level of income per capita by

global standards. Most of the region has stagnated economi-

cally since then. There are many different explanations as to

why. The standard ones involve things like bad macroeco-

nomic management, unstable governments, and high inflation.

However, many of these explanations are more symptoms than

fundamental causes. At the root of the problem, it appears, is

the low emphasis on long-term technological advancement and

innovation.

In the 1960s and 1970s, many economies in South Amer-

ica probably became quite comfortable, and perhaps even

complacent, with the wealth provided by natural resource ex-

ploitation. Hence they failed to make the transition to techno-

logical innovation as the basis for development. Even today,

high-income and sophisticated economies like Argentina show

very little technological innovation. Argentina produces many

world-class scientists, but too many of these end up working in

Boston or Palo Alto rather than in Buenos Aires. This is in part

because there has been no national strategy to promote tech-

nological advancement through domestic innovation.

In sum, the failure of traditional development economics in

many countries where capital accumulation was the core focus

highlights the need for long-term technological advancement to

sustain economic growth. An economy, even one with an

extremely high national savings rate like China’s, will not avoid

stagnation unless it continually advances its technological

capacity. To do so systematically,

one needs to understand the process of developing and apply-

ing new ideas in production.

4.3 Innovation and Diffusion: Asia Today in Relation to the

World’s Technological ‘‘Core’’

Fortunately, since the early 1980s growth theory and devel-

opment theory have increasingly analyzed the process of tech-

nological innovation as a central feature of growth rather

than as something that was simply ‘‘brought in’’ from the

outside. Major contributions were made by Lucas (1988),

Romer (1990), Grossman and Helpman (1991), and Aghion

and Howitt (1992), among many others. Today, the goal is

to understand the transition from technological change as an

‘‘exogenous’’ feature of an economy to technological change as

an ‘‘endogenous’’ feature. Broadly, the aim is to understand

how a society produces technological advance.

Theoretical models stress that there are two basic modes of

advancing technology. One is innovation (developing one’s

own new technologies) and the other is adoption (introducing

technologies that have been devised elsewhere). Of course, all

economies pursue both modes to some extent, and there is no

doubt that every economy produces only a modest fraction

of the technologies that it uses. Adoption of technology from

abroad is sufficient to raise living standards substantially, and

even to achieve long-term growth based on the continuing

technological innovations achieved abroad. But technology

adoption has its limitations as well.

Economic theory demonstrates that if one economy is a

technological innovator while another economy is a technol-

ogy adopter, the innovator will maintain a lead in income per

capita relative to the adopter. The income gap between the

two economies persists over time even though the technology

adopter ends up incorporating all of the technological advances

made by the innovator. It does so, but only with a lag, and the

persisting lag in technology translates into a persisting gap in

income levels in favor of the innovator. The relative income

ratio, or degree of ‘‘catch-up’’ between the innovator and the

adopter, depends on the relative rates of innovation and diffu-

sion of technology (where diffusion signifies the rate at which

innovations are absorbed by the adopting economy). The les-

sons from this kind of model of innovation and adoption

are twofold. First, a follower economy that adopts technology

from abroad but that does not innovate itself will always

lag behind the innovator. Second, even technological adoption

requires specialized institutions that facilitate the diffusion of

new technologies.
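
A stylized sketch, not taken from any particular paper cited here, illustrates why the gap persists. Suppose the innovator's technology A_L grows at rate g while the adopter's technology A_F closes a constant fraction \theta of the remaining gap per unit of time:

\[
\dot{A}_F = \theta\,(A_L - A_F),
\qquad\text{so that on a balanced growth path}\qquad
\frac{A_F}{A_L} = \frac{\theta}{\theta + g}.
\]

Faster diffusion (a higher \theta) narrows the proportional gap, but so long as the follower only adopts and does not innovate, the ratio stays strictly below one, which is precisely the catch-up logic described above.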

This pattern of enduring income gaps between technological

innovators and adopters is not just a theoretical construct. In

background research for the most recent Global Competi-

tiveness Report (McArthur and Sachs 2002), we have found

strong empirical evidence suggesting the limits to technological

diffusion as a source of growth and the need for economies to

progress beyond adoption to innovation if they want to con-

tinue to close the gap with the highest-income countries. This

evidence is of great importance to many East Asian economies

today, given their current stage of economic development. Our

colleague Andrew Warner (2000, 2002) has also shown em-

pirically that countries differ markedly in their capacities to

innovate and to adopt technologies. Some countries, including

many in Asia, are effective adopters of technology while dis-

playing little innovation to this point.

Indeed, it is fair to say that East Asia has been the most suc-

cessful region in the developing world in adopting technologies

from the innovating economies. This is in part because East

Asia developed ingenious institutions for quickly adopting

technological advances from abroad. For example, the elec-

tronics and semiconductor production throughout Southeast

Asia and coastal China is based on technology that came from

the United States and Japan originally thirty years ago. The

East Asian developing countries created special economic

zones, export processing zones, science parks, and other insti-

tutional arrangements to entice foreign investors in the elec-

tronics sector who were looking for low-cost places to produce

their products. Thanks to the success of these specialized insti-

tutions, East Asia became one of the key global centers for new

electronics industries during the past three decades. Thus, even

though the technology was originally developed in Palo Alto

and environs, it diffused very quickly to East Asia. The diffu-

sion was so fast that it allowed a substantial narrowing of the

income gap of East Asia with the United States. But, as the

formal growth models suggest, rapid technological diffusion by

itself did not, and will not, fully close the income gap. Full

catching up will require that East Asia become a major inno-

vator in its own right.

Much of Asia, with roughly two-thirds of the world’s popu-

lation, is currently in the middle of an historic transition from

being a technological adopter to becoming a center of innova-

tion as well. Japan made that transition many decades ago. To

understand where the rest of Asia needs to go technologically,

it is instructive to consider which parts of the world are cur-

rently technological innovators, as opposed to technological

adaptors. In doing so, one quickly finds one of the most strik-

ing facts of the world economy today: The places that are true

technological innovators—in that they are creating new pro-

cesses or new products, commercializing them, and bringing

them to market—form a small part of the world’s population.

If we look at the amount of patenting as one indicator of in-

novation (with patents providing a rough measurement of the

rate of commercialization of ideas), it turns out that the top ten

patenting countries in the world, with less than 13 percent of

the world’s population and 69 percent of the world’s gross

national product (GNP), account for 94 percent of all patents

taken out in the United States.1 The top twenty patenting

countries in the world, with less than 15 percent of the world’s

population and 77 percent of its GNP, account for 99 percent

of all current patenting in the United States.

These figures illustrate the astoundingly high concentra-

tion of technological activity in the world today. In no sense is

innovation a globally dispersed process with all regions con-

tributing to the advancement of knowledge in roughly propor-

tionate terms, or even in terms proportionate to income levels.

Instead, the global divide in technology is even starker than the

divide in income. Only a few parts of the world are high inno-

vation countries. Another bloc of the world, with roughly 2

billion people, including the 1.3 billion in China, consists of

effective adopters of technology from abroad. A third category

of countries, with perhaps as much as half the world’s popula-

tion, is neither innovating nor particularly successful at adopt-

ing technologies developed abroad. This largest group doesn’t

attract foreign investors in high-tech fields; and it can’t make

effective use of technologies developed abroad because it lacks

something—the engineers, the scientists, the local market size,

or the ecological characteristics—required to use the new tech-

nologies effectively.

The three-tiered global divide in technological capacity—

those that are innovating at a high rate, those that are adopting

at a high rate, and those that are largely excluded from the

process of technological advancement—is also the major driver

of the world’s widening gaps in income over long periods of

time. The countries that are falling farther and farther behind

the world’s leaders in income are the technologically excluded

countries. The countries in the middle that are technological

adopters—like so much of East Asia over the past forty years,

other than Japan—often grow even faster than the leaders for

a period because once they create good systems for diffusion of

technology, they can enjoy a period of rapid but incomplete

catching up.

Consider the U.S. patent data in more detail. In 2000, the

U.S. Patent and Trademark Office granted 85,072 patents to

inventors in the United States. Japanese inventors were awarded

31,296 patents, the second-highest number among all countries.

Germany ranked third with 10,234 patents. If one puts that in

terms of patenting per million population, which gives a useful

measure of the intensity of innovative activity in the economy,

the United States had 309 patents per million population, Japan

247 patents per million population, and Germany 124 patents

per million population.
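
These per capita figures are simple ratios of patent counts to population. The short calculation below reproduces them to within rounding; the population numbers (in millions) are approximate year-2000 values that we supply for illustration, since they are not given in the chapter.

    # Patents per million population, using the 2000 patent counts cited above.
    # Population figures (millions) are rough year-2000 values assumed here
    # for illustration; they are not taken from the chapter.
    patents_2000 = {"United States": 85072, "Japan": 31296, "Germany": 10234}
    population_millions = {"United States": 275, "Japan": 127, "Germany": 82}

    for country, patents in patents_2000.items():
        rate = patents / population_millions[country]
        print(f"{country}: {rate:.0f} patents per million population")

Run as written, this prints roughly 309, 246, and 125 patents per million for the United States, Japan, and Germany, matching the figures in the text up to rounding of the assumed populations.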

As shown in figure 4.1, there are two Asian economies other

than Japan that are notable for having made the transition

from adoption to innovation during the last twenty-five years:

Taiwan and Korea. (The other developing country to do so

over the same period was Israel, which last year registered 783

patents, or 135 per million people.) These are the two coun-

tries that exhibited a dramatic rise in the rate of scientific and

patenting activities and today both stand out as being among

the world leaders in innovative activity. Korean inventors, for

example, received 3,314 patents last year in the United States,

a rate of 70 patents per million population—not as high as

in the United States, Germany, or Japan, but very respectable

Figure 4.1 Patents per capita in 2000: Asia compared to other selected economies. Source: U.S. Patent and Trademark Office 2001.

in global terms. Taiwanese inventors received 4,667 patents in

the United States in the year 2000, or 210 patents per million,

which ranks third in the world on a per capita basis. Further

behind stand Hong Kong and Singapore, somewhere in the

middle between innovators and non-innovators. Last year

Hong Kong inventors had 179 patents in the United States, or

26 per million people. Singapore had 218, or 54 per million

people. Probably no economies absorb technology faster and

better than Hong Kong and Singapore. But these economies

are not yet great engines of scientific advance.

What about China? China had 119 patents in the United

States in the year 2000, so that is 0.1 patent per million, or 1

patent for every 10 million in the population. While China is

the fastest-growing economy in the world and its coastal zones

have been enormously successful in bringing in technologies

and producing increasingly sophisticated exports, China is not

yet really an innovating economy. While there are astound-

ingly fine scientists around the country, it remains difficult in

the Chinese system to transfer the basic science developed in

the Chinese Academy of Sciences into commercializable prod-

ucts that are marketed in the world economy.

In Southeast Asia, Indonesia received 6 patents last year for

its 224 million people, or less than 3 per 100 million popu-

lation. Malaysia had 42 patents taken out in the United

States, or 1.8 patents per million. Thailand had 15 patents,

again less than 3 per every 10 million population. The Philip-

pines had 2 patents, or less than 3 per 100 million population.

These patenting data provide one measure of Southeast Asia’s

current status in terms of endogenous growth. Basically, en-

dogenous growth there is nonexistent; no commercializable

science-based technological advance is taking place in this re-

gion today.

Referring to the South American context for a moment, the

U.S. patent data highlight the weakness of the region regarding

technological innovation. In the year 2000, Argentina had 54

patents or only 1.5 patents per million population, which was

slightly more than Chile at 1.0 per million population and

Brazil at 0.6 per million. In other words, even the most devel-

oped economies in South America are currently in a techno-

logical position similar to much of Southeast Asia. Notably,

however, in 1960 Argentina was roughly five times richer

than Southeast Asian economies in terms of per capita GNP.

Despite its relative wealth, Argentina failed to make a tran-

sition to technological innovation, as did other countries in

South America. The lesson must not be lost for the economies

of East Asia.

4.4 Characteristics of the Innovation Process

A high rate of innovation requires a mix of market and non-

market institutions, with the mix reflecting the nature of the

innovation process. There are several basic characteristics of

this process that we would highlight.

First, innovation is science based. This implies a great deal of

importance for higher education as a fundamental feature of a

national innovation strategy. Critically, higher education does

not take place anywhere in the world without a major invest-

ment by government.

Second, innovation is an increasing returns to scale process,

which means that ten scientists isolated on ten separate desert

islands will produce much less scientific and technological

progress than the ten scientists stuck together on one island.

That is why scientists like to congregate in islands or valleys

like Silicon Valley or Route 128. This is also why we have

universities—because it is helpful for scientists to talk to each

other so that they can develop good ideas with the help of

the person next door. Creating an innovation system requires

creating scale.

Third, innovation depends on market-based incentives, and

most importantly on the scope of the market itself (just as

Adam Smith emphasized in regard to the division of labor).

Paul Romer and others have put great stress on the importance

of the scope of the market in promoting innovation. Develop-

ing a new idea requires a significant onetime investment of

research and development (R&D), and this ‘‘fixed cost’’ of in-

novation must be recouped through subsequent sales. If the

potential market for the innovation is large, it is obviously

easier to recoup the one-time R&D expenses. A small market,

on the other hand, will not justify the high onetime costs of

R&D. That is one reason why it is vital to be an open econ-

omy. When an economy is export oriented, it has the whole

world as a potential market. A closed economy, on the other

hand, will not only fail to get new ideas from outside, but will

also not generate incentives for innovation based on a limited

domestic market.
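
The underlying logic can be reduced to a back-of-the-envelope condition; the symbols are illustrative and are not used in the chapter. If developing an innovation requires a onetime outlay F and each unit subsequently sold earns a margin m, the project is worth undertaking only when expected sales x satisfy

\[
m\,x \;\geq\; F, \qquad\text{that is,}\qquad x \;\geq\; F/m,
\]

so a larger accessible market, including export markets in an open economy, makes the fixed cost of R&D easier to recoup.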

Fourth, and vitally, there is a fundamentally mixed public

and private good nature to the innovation process. A central

characteristic of knowledge is what economists call ‘‘nonrival-

ness,’’ which means that if one person discovers a new idea

(such as a new scientific discovery) and shares it with others,

the idea isn’t lost to the first person. Ideas are not like a barrel

of oil or a ton of steel, where use of the commodity by one

person means that less is available for others. With ideas,

everybody can partake of the advancement of knowledge

without depriving others of the knowledge. This nonrivalness

has a critical implication. Society benefits through the wide-

spread diffusion of ideas. To this end knowledge-based econo-

mies aim at the free and broad distribution of basic scientific

knowledge, new mathematical theorems, and the like.

There is of course a major problem with the free dissem-

ination of knowledge: Discoverers may lack a financial incen-

tive to make their discoveries in the first place if their ideas will

be freely available throughout the society. For this reason, sci-

entists are encouraged by social status, fame, and prizes, as

well as by direct market incentives. They are also encouraged

by the temporary monopoly privileges granted by a patent to

a new invention. But patents are imperfect instruments for

giving incentives to make new discoveries. Patents offer finan-

cial benefits to the inventor for a temporary period (now gen-

erally 20 years from the date of filing) but limit the ability of

others in the society to make use of the knowledge.

In the face of these tensions, innovative societies have found

the following pragmatic compromises. Basic scientific discov-

eries, in general, are not patentable. They are to be freely

available for use throughout society. Patents are limited to

specific new technologies. Also, patents are given for a limited

period of time, so that eventually the knowledge can be freely

used throughout society. The costs of permanent monopoly

rights in slowing the diffusion of new ideas would be too

great. Meanwhile, governments support basic scientific discov-

ery through direct subsidization of primary research in uni-

versities, government research laboratories, and even private

companies that qualify for government grants.

Fifth, special financing mechanisms beyond the banking

sector help to accommodate knowledge creation in the private

sector. A lot of knowledge is intangible and noncollateral-

izable. Banks often won’t lend to people with good ideas be-

cause the banks require collateral to guarantee loans. With

new ideas there is frequently no collateral available. This is

what makes venture capital a distinctive industry. Venture

capital is not lending against collateral, but against someone’s

hope that the technology is going to work commercially. That

is not what bankers do for a business, nor is it what one would

want banks to do because banking has other risky features that

require tight regulation. Thus, since banks do not and should

not lend mainly for noncollateralized ideas, the innovation

process requires somebody else who will: venture capitalists.

Sixth, innovation generates destruction of older technologies

and business sectors in a process Joseph Schumpeter ([1942]

1984) famously termed ‘‘creative destruction.’’ New advances

are not painless to those using and producing older technol-

ogies. Thus, economic death of old sectors is part and parcel of

the advance of new sectors. One of the reasons that the Soviets

could never develop a new industry is that they never let an

old one die. There really was lifetime employment protection

(other than for the millions sentenced to the gulag). Although

people could lose their jobs (and indeed sometimes their lives)

for political reasons, they did not lose their job for economic

reasons. With no sectors ever declining, no new sectors could

ever grow.

Seventh, the innovation process is characterized by specific

forms of organization that develop, test, and prove ideas. In-

novation first requires networks to bring different kinds of

knowledge together. It also requires a great deal of risk taking

and decentralization within larger enterprises to allow entre-

preneurs within the firm to be entrepreneurial. It furthermore

requires a great deal of learning. The most advanced inno-

vation systems are composed of enterprises investing heavily

in their workers’ knowledge, which is not a traditional activity

in many economies.

Eighth, many technologies exhibit characteristics of site

specificity, which means if you want to solve problems in agri-

culture, health, energy use, and so forth, local ecological char-

acteristics are so important that the relevant problems need

to be solved at home. Not all technologies can be adopted

from abroad, which is another reason why the technological

adopters stay behind the technological leaders: Much of what

the technological leaders are producing is not necessarily rele-

vant to the adopter’s needs if the local ecological settings are

quite different. If U.S. inventors develop new processes for

raising wheat productivity, that may have little direct benefit

for cassava growers in Africa. Local needs require local inno-

vations in many sectors.

4.5 The U.S. Economy as an Innovation System

These eight characteristics of the innovation process lead to

several practical implications for the design and operation of

national systems of innovation. We illustrate this basic idea by

looking at how the United States has achieved such high and

sustained rates of innovation. Part of the story of course is

that the U.S. economy is large, integrated, and efficient. A large

scope of the market provides a large incentive for innovation.

Yet the story is more complicated. Specific institutions, both

market and nonmarket based, are integral to U.S. success.

First, the United States invests intensely in basic science

through the federal budget. Many believe that the United

States is a free market economy in the technology realm, but

this is not true. The U.S. government budget for science is now

roughly $US 90 billion a year, or almost 1 percent of GNP.

Biomedical research alone is supported at a rate of around $25

billion per year. One needs to understand that U.S. industrial

policy is quite consciously focused on science-based technolog-

ical growth, even though many observers believe that the

United States has no industrial policy. In the late 1980s, when

the U.S. government was worried about Japanese competition,

it financed major investment in the semiconductor sector to

advance its technology. More recently, the government has

invested heavily in the human genome project and nanotech-

nology, among other leading sectors.

Second, the United States has demonstrated and championed

the agglomeration economies that have been achieved most

prominently in Silicon Valley, the research triangle of North

Carolina, and Route 128 in the Boston area, but also in dozens

of other locations around the United States.2

Third, the United States has a rather effective patent system,

even though it is a system under stress at this moment. When

an inventor files a patent, he or she has to disclose in detail

what the new invention entails, in return for the patent’s

monopoly rights. That is extremely important in making the

knowledge publicly available. The system is also effective at

processing a huge number of patents, now more than 150,000

per year. The judicial system has considerable expertise in

protecting intellectual property after the patent is granted. Still,

the system is under considerable stress regarding the appropri-

ate scope of patenting, the definition of the boundaries of new

patents, and the sheer volume of new patent applications to

process.

Fourth, the United States also has a very effective interface

between government, universities and industries, and these

connections have been honed experimentally over the last

twenty-five years. As one important part of the process, the

Bayh-Dole Act of 1980 enabled universities to receive patents

on new inventions that were developed with government

grants, thereby giving new incentives to academic centers to

support applied R&D activities, and to collaborate with the

private sector in R&D. That gave a tremendous boost, most

notably in biotechnology, to university-business collaboration

in the innovation process.

Fifth, the United States has a highly advanced regulatory

environment in many areas. In agro-biotechnology, for in-

stance, the Food and Drug Administration (FDA), the U.S.

Department of Agriculture, and the Environmental Protection

Agency (EPA) have all set high regulatory standards con-

tributing to food product safety. These high standards have

given consumers a large amount of confidence in technological

change. The United States has not yet had the kind of back-

lash to innovation in agro-biotechnology that has occurred in

Europe, so its innovation has not been stifled as it has been

in Europe. The solid and credible regulatory structure has

helped fuel the innovation process in these areas. Regulation

can thereby promote technology, even though some free mar-

ket economies resist it.

Sixth, the United States has an extremely strong network

of venture capital financing that is closely interwoven with

the key regional nodes of technological innovation. The infra-

structure and tax systems both support venture capital, based

on an understanding that normal banking will not create the

needed financing for technology start-ups.

Seventh, the United States has a flexible labor market, which

means that a lot of people lose their jobs so that a lot more can

get new ones. It is an economy utterly typified by creative de-

struction. Net job creation is ferociously successful, something

Europe hasn’t yet caught on to.

Eighth, the administrative environment is tremendously

conducive to new business start-ups. To start a business, one

basically needs only to write a small check to the state govern-

ment to register the new company. This fosters an incredibly

dynamic process of natural selection of small businesses. Mil-

lions of new ventures and ideas are tried each year. Only a

small fraction of these survive, but that small fraction may go

on to do wonderful things.

Ninth, and finally, the United States now has a stupendously

effective higher education system, with extremely high parti-

cipation rates. The country’s gross tertiary enrollment rate is

estimated to be 81 percent (World Bank 2001), which means

that overall postsecondary enrollment is equal to four-fifths of

the university age population. This is an imprecise measure of

university enrollment, since it includes students of all ages at

major research universities, smaller liberal arts colleges, spe-

cialized vocational training centers, and community colleges,

but it does indicate the huge number of Americans attending

college in one form or another. And even with the imprecision

of the measure, it is vastly higher than the same figure in most

other parts of the world.

4.6 Some Lessons for Asia’s Transition from Technology

Borrower to Core Innovator

Altogether, these factors make the U.S. system extraordinarily

dynamic technologically. They also help to shed some light on

Asia’s current challenges in moving from technological bor-

rower to technological innovator. Of these challenges, the fol-

lowing stand out.

First, and most critically, higher education is probably going

to be the region’s most strategic investment for the next

generation. Tertiary enrollment rates in Asia are still rather

low, as shown in figure 4.2. In China the tertiary enrollment

rate (according to World Bank data) was just 6 percent in the

mid-1990s. In Indonesia it was roughly 11 percent, and in

Malaysia it was just under 12 percent. Hong Kong was con-

siderably higher at 26 percent, as was Singapore at 39 percent.

All of these rates have no doubt increased in the past few years,

but they still lag far behind the enrollment rates in higher edu-

cation seen in the technologically innovative economies.

A second challenge is to increase government spending on

science. This does not imply indiscriminate investment in, for

example, theoretical physics, but it does imply investment in

areas that are relevant for an economy and its society. Korea,

Figure 4.2 Tertiary enrollment rates in Asia compared to other selected economies. Source: World Bank 2001; World Bank and UNESCO Task Force on Higher Education and Society 2000.

Taiwan, and Israel are examples of countries that, thirty years

ago, consciously decided to invest substantial government rev-

enues in building world-class laboratories in order to support

research at universities and to facilitate R&D in the private

sector. After a generation of investment, they have seen enor-

mous returns. Today, they are continuing down this path of

science-based growth, with all three currently ranking among the

top fifteen in the world in terms of total R&D spending as a

percentage of gross national product, and all allocating

roughly two percent or more of their national incomes to re-

search (World Bank 2001). These spending ratios are some-

what ahead of Singapore, which spends in the neighborhood

of 1.1 percent of GNP on R&D, and China, which spends

roughly 0.7 percent of GNP. All of these figures are signi-

ficantly better than those for Indonesia, Malaysia, and the

Philippines, which each spend less than one quarter of one

percent of GNP on R&D.

A third challenge, and related to the first two, is to foster

university-business relations for new startups and technolog-

ical innovation in key areas. In survey results calculated for

the latest Global Competitiveness Report 2001–2002 (GCR)

(World Economic Forum 2002), Singapore, Taiwan, and Korea

are the only Asian countries to score among the top twenty

on a question that asks executives to rate the level of local

university-business collaboration. Japan scores 26th, China

28th, India 38th, Malaysia 42nd, Indonesia 45th, and the

Philippines 55th. This dimension represents a key development

area for most Asian economies.

Fourth, an effective intellectual property rights system is

needed. At the core of this issue rests the need for the rule of

law and an effective, independent judiciary to protect intel-

lectual property rights. Many Asian countries do not have

judicial systems that are independent from political pressures

or from the parties in a dispute, let alone intellectual property

rights regimes. Again citing the latest GCR results, on a com-

posite measure of institutional strength in ‘‘contracts and law,’’

most Asian economies fare poorly. Singapore scores among the

world’s top ten countries, but Malaysia, for example, scores

42nd while China ranks 51st and the Philippines ranks 56th, two

spots ahead of Indonesia. More specifically, on a survey ques-

tion that asks about the protection of intellectual property,

Singapore, Japan, Taiwan, and Hong Kong rate between 15th

and 25th, while Thailand and Malaysia rank in the mid-forties

and India, China, the Philippines, and Indonesia rank no better

than 58th. Legal institutions are by no means easy to de-

velop but they mark a crucial challenge in the long-term devel-

opment of most Asian economies and thus need to be on this

list.

Fifth, economies in the region need to improve the adminis-

trative conditions for business startups. As figure 4.3 shows,

some Asian economies are performing well in this respect, but

even Japan needs to do more in this area. Japan is remarkably

technologically innovative but it is not nearly as good at

bringing innovations to market. One of the reasons is the diffi-

culty of starting a business in Japan today. In a GCR survey

question that asks executives to rank the overall ease of start-

ing a business locally, Hong Kong ranks first in the world,

Singapore ranks 6th, Thailand places 17th, China 23rd, Japan

32nd, and Korea 49th. Another reason, one that still poses

a key challenge in much of Asia, is that the venture capital

market is thin. In a GCR survey question on the availability of

venture finance for innovative but risky ideas, Taiwan, Singa-

pore and Hong Kong rank 13th, 14th, and 16th, respectively,

but Japan ranks 31st, China scores 49th, the Philippines ranks

50th, and Thailand places 51st. Private finance mechanisms for

innovation need to be a key priority in these economies.

A sixth challenge lies in the structure of business enterprises

in Asia. Innovative firms require special conditions of internal

organization, including a high degree of delegation of authority

within enterprises, productivity-based compensation, and in-

ternal learning mechanisms within the firm. Figure 4.4 shows

the GCR results for a question regarding the typical amount of

Figure 4.3 Administrative burden for start-ups: ‘‘Starting a new business in your country is generally: (1 = extremely difficult and time consuming, 7 = easy).’’ Source: World Economic Forum 2002.

firms’ internal investment in staff training. Notably, Singapore

and Japan rate well at a global scale but much of Asia still lags

far behind. This and related evidence suggest that many of

the organizational forms and corporate practices in Asia are

not particularly advantageous for high rates of organizational

learning and innovation.

In practical terms, the exact transition pathway for an econ-

omy hoping to move from a successful diffusion system to a

successful innovation system is not fully known, but together

the six points mentioned help to highlight key areas on which

Figure 4.4 Firm investments in staff training: ‘‘In your country, companies’ general approach to human resources is to invest (1 = little in training and development, 7 = heavily to attract, train, and retain staff).’’ Source: World Economic Forum 2002.

many Asian economies must focus. Undoubtedly this list is not

exhaustive, and there is much room for economies to innovate

in creating systems of innovation. But, at a minimum, policy

priorities need to mix market and nonmarket forces to develop

sound innovation-oriented education, research, finance, regu-

latory, and business structures.

4.7 Conclusion

A central finding of economics over the past fifty years has

been that technological advancement is critical to long-term

economic growth. More recent research distinguishes between

the crucial role of technological diffusion in the catch-up

phase of economic development and that of innovation once econo-

mies reach a fairly high level of development. Asia’s great

challenge in this regard is to move from adoption to innova-

tion as the engine of technological advancement. Yet the social

systems that best foster technological innovation do not come

into existence without an explicit effort to create them.

Creating a successful innovation system is a challenge that

requires focus, attention, and institutional creativity. There

is no doubt that Asia has everything that it needs to become

a central site of science-based innovation in the twenty-first-

century world economy. This chapter has highlighted some of

the issues it must face in achieving this aim. As the region pro-

gresses, we predict that one of the twenty-first century’s biggest

transitions will occur when both China and India begin to

make dramatic contributions to global science and technology

and thereby dramatic contributions to the welfare of the world.

When this happens, the structure of the world economy will

change in new and promising ways.

Notes

This chapter was originally presented as a speech by Professor Jeffrey D. Sachs on May 25, 2001, as part of the Technology and the Economy Lecture Series at Hong Kong University.

1. According to the United States Patent and Trademark Office’s 2001 data. The U.S. Patent and Trademark Office records the country origin of a patent according to the country of residence of the first-named inventor. Note that the data refer to ‘‘utility patents,’’ that is, patents for new inventions.

2. Our colleague Michael E. Porter has provided ongoing leadership in advancing the mapping and understanding of U.S. business clusters, as discussed, for example, in his article ‘‘Clusters and the New Economics of Competition.’’ See Porter 1998.

References

Aghion, Philippe, and Peter Howitt. 1992. ‘‘A model of growth through creative destruction.’’ Econometrica 60 (March): 323–351.

Domar, Evsey D. 1946. ‘‘Capital expansion, rate of growth, and employment.’’ Econometrica 14 (April): 137–147.

Grossman, Gene M., and Elhanan Helpman. 1991. Innovation and Growth in the Global Economy. Cambridge, MA: The MIT Press.

Harrod, Roy F. 1939. ‘‘An essay in dynamic theory.’’ Economic Journal 49 (June): 14–33.

Lucas, Robert E. Jr. 1988. ‘‘On the mechanics of economic development.’’ Journal of Monetary Economics 22 (July): 3–42.

McArthur, John W., and Jeffrey D. Sachs. 2002. ‘‘The growth competitiveness index: Measuring technological advancement and the stages of development.’’ In The Global Competitiveness Report 2001–2002, ed. Michael E. Porter, Jeffrey D. Sachs, et al. New York: Oxford University Press.

Porter, Michael E. 1998. ‘‘Clusters and the new economics of competition.’’ Harvard Business Review (November–December): 77–90.

Romer, Paul M. 1990. ‘‘Endogenous technological change.’’ Journal of Political Economy 98 (October): S71–S102.

Schumpeter, Joseph A. [1942] 1984. The Theory of Economic Development. Cambridge: Harvard University Press.

Solow, Robert. 1956. ‘‘A contribution to the theory of economic growth.’’ Quarterly Journal of Economics 70 (February): 65–94.

Solow, Robert. 1957. ‘‘Technical change and the aggregate production function.’’ Review of Economics and Statistics 39 (August): 312–320.

Smith, Adam. [1776] 1981. An Inquiry into the Nature and Causes of the Wealth of Nations. Indianapolis: Liberty Press.

U.S. Patent and Trademark Office. 2001. ‘‘Patent counts by country/state and year: Utility patents, January 1, 1963–December 31, 2000.’’ Available on-line at <http://www.uspto.gov/>.

Warner, Andrew M. 2000. ‘‘Economic creativity.’’ In The Global Competitiveness Report 2000, ed. Michael E. Porter, Jeffrey D. Sachs, et al. New York: Oxford University Press.

Warner, Andrew M. 2002. ‘‘Economic creativity: An update.’’ In The Global Competitiveness Report 2001–2002, ed. Michael E. Porter, Jeffrey D. Sachs, et al. New York: Oxford University Press.

World Bank. 2001. World Development Indicators 2001 CD-ROM. Washington, DC: The World Bank.

World Bank and UNESCO Task Force on Higher Education and Society. 2000. Higher Education in Developing Countries: Peril and Promise. Washington, DC: The World Bank.

World Economic Forum. 2002. The Global Competitiveness Report 2001–2002, ed. Michael E. Porter, Jeffrey D. Sachs, et al. New York: Oxford University Press.

5 Monetary Policy in the Information

Economy

Michael Woodford

Improvements in information-processing technology and in

communications are likely to transform many aspects of eco-

nomic life, but no sector of the economy is likely to be more

profoundly affected than the financial sector. Financial markets

are rapidly becoming better connected with one another, the

costs of trading in them are falling, and market participants

now have access to more information more quickly about de-

velopments in the markets and in the economy more broadly.

As a result, opportunities for arbitrage are exploited and elim-

inated more rapidly. The financial system can be expected to

become more efficient, in the sense that the dispersion of valu-

ations of claims to future payments across different individuals

and institutions is minimized. For familiar reasons, this should

be generally beneficial to the allocation of resources in the

economy.

Some, however, fear that the job of central banks will be

complicated by improvements in the efficiency of financial

markets, or even that the ability of central banks to influence

the markets may be eliminated altogether. This suggests a pos-

sible conflict between the aim of increasing microeconomic

efficiency—the efficiency with which resources are correctly

allocated among competing uses at a point in time—and that

of preserving macroeconomic stability, through prudent central

bank regulation of the overall volume of nominal expenditure.

Here I consider two possible grounds for such concern. I first

consider the consequences of increased information on the part

of market participants about monetary policy actions and

decisions. According to the view that the effectiveness of mon-

etary policy is enhanced by, or even entirely dependent upon,

the ability of central banks to surprise the markets, there might

be reason to fear that monetary policy will be less effective in

the information economy. I then consider the consequences of

financial innovations tending to reduce private-sector demand

for the monetary base. These include the development of tech-

niques that allow financial institutions to more efficiently man-

age their customers’ balances in accounts subject to reserve

requirements and their own balances in clearing accounts at

the central bank, so that a given volume of payments in the

economy can be executed with a smaller quantity of central

bank balances. And somewhat more speculatively, some argue

that ‘‘electronic money’’ of various sorts may soon provide

alternative means of payment that can substitute for those

currently supplied by central banks. It may be feared that such

developments can soon eliminate what leverage central banks

currently have over the private economy, so that again mone-

tary policy will become ineffective.

I argue that there is little ground for concern on either count.

The effectiveness of monetary policy is in fact dependent nei-

ther upon the ability of central banks to fool the markets about

what they do, nor upon the manipulation of significant market

distortions, and central banks should continue to have an im-

portant role as guarantors of price stability in a world where

markets are nearly frictionless and the public is well informed.

Indeed, I argue that monetary policy can be even more effective

in the information economy, by allowing central banks to use

signals of future policy intentions as an additional instrument

of policy, and by tightening the linkages between the interest

rates most directly affected by central bank actions and other

market rates.

However, improvements in the efficiency of the financial

system may have important consequences, both for the specific

operating procedures that can most effectively achieve banks’

short-run targets, and for the type of decision procedures for

determining the operating targets that will best serve their sta-

bilization objectives. In both respects, the U.S. Federal Reserve

might well consider adopting some of the recent innovations

pioneered by other central banks. These include the use of

standing facilities as a principal device through which over-

night interest rates are controlled, as is currently the case in

countries like Canada and New Zealand; and the apparatus

of explicit inflation targets, forecast-targeting decision proce-

dures, and published Inflation Reports as a means of com-

municating with the public about the nature of central-bank

policy commitments, as currently practiced in countries like the

United Kingdom, Sweden, and New Zealand.

5.1 Improved Information about Central Bank Actions

One possible ground for concern about the effectiveness of

monetary policy in the information economy derives from the

belief that the effectiveness of policy actions is enhanced by, or

even entirely dependent upon, the ability of central banks to

surprise the markets. Views of this kind underlay the prefer-

ence, commonplace among central bankers until quite recently,

for a considerable degree of secrecy about their operating tar-

gets and actions, to say nothing of their reasoning processes


and their intentions regarding future policy. Improved effi-

ciency of communication among market participants, and

greater ability to process large quantities of information,

should make it increasingly unlikely that central bank actions

can remain secret for long. Wider and more rapid dissem-

ination of analyses of economic data, of statements by central

bank officials, and of observable patterns in policy actions are

likely to improve markets’ ability to forecast central banks’

behavior as well, whether banks like this or not. In practice,

these improvements in information dissemination have coin-

cided with increased political demands for accountability from

public institutions of all sorts in many of the more advanced

economies, and this has led to widespread demands for greater

openness in central bank decision making.

As a result of these developments, the ability of central

banks to surprise the markets, other than by acting in a purely

erratic manner (that obviously cannot serve their stabilization

goals), is likely to be reduced. Should we expect this to reduce

the ability of central banks to achieve their stabilization goals?

Should central banks seek to delay these developments to the

extent that they are able?

I argue that such concerns are misplaced. There is little

ground to believe that secrecy is a crucial element in effective

monetary policy. To the contrary, more effective signaling of

policy actions and policy targets and, above all, improvement

of the ability of the private sector to anticipate future central

bank actions should increase the effectiveness of monetary

policy, and for reasons that are likely to become even more

important in the information economy.

5.1.1 The Effectiveness of Anticipated Policy

One common argument for the greater effectiveness of policy

actions that are not anticipated in advance asserts that central


banks can have a larger effect on market prices through trades

of modest size if these trades are not signaled in advance. This

is the usual justification given for the fact that official in-

terventions in foreign exchange markets are almost invariably

secret, in some cases not being confirmed even after the inter-

ventions have taken place. But a similar argument might be

made for maximizing the impact of central banks’ open market

operations upon domestic interest rates, especially by those

who feel that the small size of central-bank balance sheets rel-

ative to the volume of trade in money markets makes it im-

plausible that central banks should be able to have much effect

upon market prices. The idea, essentially, is that unanticipated

trading by the central bank should move market rates by more,

owing to the imperfect liquidity of the markets. Instead, if

traders are widely able to anticipate the central bank’s trades in

advance, a larger number of counterparties should be available

to trade with the bank, so that a smaller change in the market

price will be required in order for the market to absorb a given

change in the supply of a particular instrument.

But such an analysis assumes that the central bank better

achieves its objectives by being able to move market yields

more, even if it does so by exploiting temporary illiquidity of

the markets. But the temporarily greater movement in market

prices that is so obtained occurs only because these prices are

temporarily less well coupled to decisions being made outside

the financial markets. Hence it is not at all obvious that any

actual increase in the effect of the central bank’s action upon

the economy—upon the things that are actually relevant to the

bank’s stabilization goals—can be purchased in this way.

The simple model presented in the appendix may help illus-

trate this point. In this model, the economy consists of a group

of households that choose a quantity to consume and then

allocate their remaining wealth between money and bonds.


When the central bank conducts an open market operation,

exchanging money for bonds, it is assumed that only a fraction

$\gamma$ of the households are able to participate in the bond market

(and so to adjust their bond holdings relative to what they had

previously chosen). I assume that the rate of participation in

the end-of-period bond market could be increased by the cen-

tral bank by signaling in advance its intention to conduct an

open market operation, that will in general make it optimal for

a household to adjust its bond portfolio. The question posed is

whether ‘‘catching the markets off guard’’ in order to keep the

participation rate $\gamma$ small can enhance the effectiveness of the

open market operation.

It is shown that the equilibrium bond yield $i$ is determined by an equilibrium condition of the form¹

$$d(i) = \frac{\Delta M}{\gamma},$$

where $\Delta M$ is the per capita increase in the money supply through open market bond purchases, and the function $d(i)$ indicates the desired increase in bond holding by each household that participates in the end-of-period trading, as a function of the bond yield determined in that trading. The smaller is $\gamma$, the larger the portfolio shift that each participating household must be induced to accept, and so the larger the change in the equilibrium bond yield $i$ for a given size of open market operation $\Delta M$. This validates the idea that surprise can increase the central bank's ability to move the markets.

But this increase in the magnitude of the interest-rate effect goes hand in hand with a reduction in the fraction of households whose expenditure decisions are affected by the interest-rate change. The consumption demands of the fraction $1-\gamma$ of households not participating in the end-of-period bond market are independent of $i$, even if they are assumed to make their consumption-saving decision only after the open market operation. (They may observe the effect of the central bank's action upon bond yields, but this does not matter to them, because a change in their consumption plans cannot change their bond holdings.) If one computes aggregate consumption expenditure $C$, aggregating the consumption demands of the $\gamma$ households who participate in the bond trading and the $1-\gamma$ who do not, then the partial derivative $\partial C/\partial \Delta M$ is a positive quantity that is independent of $\gamma$. Thus, up to a linear approximation, reducing participation in the end-of-period bond trading does not increase the effects of open market purchases by the central bank upon aggregate demand, even though it increases the size of the effect on market interest rates.
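To see the logic in numbers, here is a minimal Python sketch of a linearized version of this argument. The linear schedules and all parameter values (a, b, the size of the operation) are illustrative assumptions of mine, not the appendix model itself; the point is only that the yield effect scales with 1/γ while the aggregate spending effect does not.

# Minimal numerical sketch (illustrative assumptions, not the appendix model):
# each participating household's desired portfolio shift is linear in the
# yield change (slope a), and its spending response is linear in the same
# yield change (slope b). Nonparticipants' spending does not respond.

def open_market_effects(dM, gamma, a=2.0, b=1.0):
    """Magnitudes of the yield change and the aggregate spending change when
    a per capita open market operation of size dM must be absorbed by the
    fraction gamma of households present in the bond market."""
    di = dM / (gamma * a)   # each participant absorbs dM/gamma, so |di| = (dM/gamma)/a
    dC = gamma * b * di     # only the gamma participants adjust their spending
    return di, dC           # note dC = (b/a)*dM, independent of gamma

for gamma in (1.0, 0.5, 0.1):
    di, dC = open_market_effects(dM=0.01, gamma=gamma)
    print(f"gamma={gamma:.1f}: |yield change|={di:.4f}, |aggregate spending change|={dC:.4f}")

Running the loop shows the yield change growing as γ shrinks while the aggregate spending change stays fixed at (b/a)·ΔM, which is just the point made in the text.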

It is sometimes argued that the ability of a central bank (or

other authority, such as the Treasury) to move a market price

through its interventions is important for reasons unrelated to

the direct effect of that price movement on the economy; it is

said, for example, that such interventions are important mainly

in order to ‘‘send a signal’’ to the markets, and presumably the

signal is clear only insofar as a nontrivial price movement can

be caused.2 But while it is certainly true that effective signaling

of government policy intentions is of great value, it would be

odd to lament improvements in the timeliness of private-sector

information about government policy actions on that ground.

Better private-sector information about central bank actions

and deliberations should make it easier, not harder, for central

banks to signal their intentions, as long as they are clear about

what those intentions are.

Another possible argument for the desirability of surprising

the markets derives from the well-known explanation for cen-

tral bank ‘‘ambiguity’’ proposed by Cukierman and Meltzer

(1986).3 These authors assume, as in the ‘‘new classical’’


literature of the 1970s, that deviations of output from potential

are proportional to the unexpected component of the current

money supply. They also assume that policymakers wish to in-

crease output relative to potential, and to an extent that varies

over time as a result of real disturbances. Rational expectations

preclude the possibility of an equilibrium in which money

growth is higher than expected (and hence in which output is

higher than potential) on average. However, it is possible for

the private sector to be surprised in this way at some times, as

long as it also happens sufficiently often that money growth is

less than expected. This bit of leverage can be used to achieve

stabilization aims if it can be arranged for the positive surprises

to occur at times when there is an unusually strong desire for

output greater than potential (for example, because the degree

of inefficiency of the ‘‘natural rate’’ is especially great), and the

negative surprises at times when this is less crucial. This is

possible, in principle, if the central bank has information about

the disturbances that increase the desirability of high output

that is not shared with the private sector. This argument pro-

vides a reason why it may be desirable for the central bank to

conceal information that it has about current economic con-

ditions that are relevant to its policy choices. It even provides a

reason why a central bank may prefer to conceal the actions

that it has taken (for example, what its operating target has

been), insofar as there is serial correlation in the disturbances

about which the central bank has information not available to

the public, so that revealing the bank’s past assessment of these

disturbances would give away some of its current informa-

tional advantage as well.

However, the validity of this argument for secrecy about

central bank actions and central bank assessments of current

conditions depends upon the simultaneous validity of several


strong assumptions. In particular, it depends upon a theory of

aggregate supply according to which surprise variations in

monetary policy have an effect that is undercut if policy can be

anticipated.4 While this hypothesis is familiar from the litera-

ture of the 1970s, it has not held up well under further scru-

tiny. Despite the favorable early result of Barro (1977), the

empirical support for the hypothesis that ‘‘only unanticipated

money matters’’ was challenged in the early 1980s (notably, by

Barro and Hercowitz 1980, and Boschen and Grossman 1982),

and the hypothesis has largely been dismissed since then.

Nor is it true that this particular model of the real effects of

nominal disturbances is uniquely consistent with the hypoth-

eses of rational expectations or optimizing behavior by wage

and price setters. For example, a popular simple hypothesis in recent work has been a model of optimal price setting with random intervals between price changes, originally proposed by Calvo (1983).⁵ This model leads to an aggregate supply relation of the form

$$\pi_t = \kappa (y_t - y_t^n) + \beta E_t \pi_{t+1}, \qquad (1)$$

where $\pi_t$ is the rate of inflation between dates $t-1$ and $t$, $y_t$ is the log of real GDP, $y_t^n$ is the log of the "natural rate" of output (equilibrium output with flexible wages and prices, here a function of purely exogenous real factors), $E_t \pi_{t+1}$ is the expectation of future inflation conditional upon period $t$ public information, and the coefficients $\kappa > 0$ and $0 < \beta < 1$ are constants. As with the familiar new classical specification implicit in the analysis of Cukierman and Meltzer, which we may write using similar notation as

$$\pi_t = \kappa (y_t - y_t^n) + E_{t-1} \pi_t, \qquad (2)$$

this is a short-run "Phillips curve" relation between inflation and output that is shifted both by exogenous variations in the natural rate of output and by endogenous variations in expected inflation.

However, the fact that current expectations of future inflation matter for (1), rather than past expectations of current inflation as in (2), makes a crucial difference for present purposes. Equation (2) implies that in any rational expectations equilibrium,

$$E_{t-1}(y_t - y_t^n) = 0,$$

so that output variations due to monetary policy (as opposed to real disturbances reflected in $y_t^n$) must be purely unforecastable a period in advance. Equation (1) has no such implication. Instead, this relation implies that both inflation and output at any date $t$ depend solely upon (i) current and expected future nominal GDP, relative to the period $t-1$ price level, and (ii) the current and expected future natural rate of output, both conditional upon public information at date $t$.

The way in which output and inflation depend upon these

quantities is completely independent of the extent to which any

of the information available at date t may have been antici-

pated at earlier dates. Thus signaling in advance the way that

monetary policy seeks to affect the path of nominal expendi-

ture does not eliminate the effects upon real activity of such

policy—it does not weaken them at all!
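For completeness, the algebra behind this contrast can be written out in a few lines (this is simply the manipulation implied by the text, set in LaTeX):

% Take expectations conditional on period t-1 information on both sides of (2):
E_{t-1}\pi_t = \kappa\, E_{t-1}(y_t - y_t^n) + E_{t-1}\pi_t
\quad\Longrightarrow\quad \kappa\, E_{t-1}(y_t - y_t^n) = 0
\quad\Longrightarrow\quad E_{t-1}(y_t - y_t^n) = 0 \quad (\text{since } \kappa > 0).

% By contrast, (1) can be rearranged as
y_t - y_t^n = \kappa^{-1}\bigl(\pi_t - \beta\, E_t \pi_{t+1}\bigr),

% which involves only date-t inflation and date-t expectations of future inflation,
% so nothing forces the date t-1 forecast of the output gap to equal zero.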

Of course, the empirical adequacy of the simple New

Keynesian Phillips curve (1) has also been subject to a fair

amount of criticism. However, it is not as grossly at variance

with empirical evidence as is the new classical specification.6

Furthermore, most of the empirical criticism focuses upon the

absence of any role for lagged wage and/or price inflation as a

determinant of current inflation in this specification. But if one

modifies the aggregate supply relation (1) to allow for infla-


tion inertia—along the lines of the well-known specification of

Fuhrer and Moore (1995), the ‘‘hybrid model’’ proposed by

Gali and Gertler (1999), or the inflation indexation model

proposed by Christiano, Eichenbaum, and Evans (2001)—the

essential argument is unchanged. In these specifications, it is

current inflation relative to recent past inflation that determines

current output relative to potential; but inflation acceleration

should have the same effects whether anticipated in the past

or not.

Some may feel that a greater impact of unanticipated mone-

tary policy is indicated by comparisons between the reactions

of markets (e.g., stock and bond markets) to changes in

interest-rate operating targets that are viewed as having sur-

prised many market participants and reactions to those that

were widely predicted in advance. For example, the early study

of Cook and Hahn (1989) found greater effects upon Treasury

yields of U.S. Federal Reserve changes in the federal funds rate

operating target during the 1970s at times when these repre-

sented a change in direction relative to the most recent move,

rather than continuation of a series of target changes in the

same direction; these might plausibly have been regarded as the

more unexpected actions. More recent studies such as Bomfim

(2000) and Kuttner (2001) have documented larger effects

upon financial markets of unanticipated target changes using

data from the Fed funds futures market to infer market ex-

pectations of future Federal Reserve interest-rate decisions.

But these quite plausible findings in no way indicate that the

Fed’s interest-rate decisions affect financial markets only inso-

far as they are unanticipated. Such results only indicate that

when a change in the Fed’s operating target is widely antici-

pated in advance, market prices will already reflect this infor-

mation before the day of the actual decision. The actual change


in the Fed’s target, and the associated change at around the

same time in the federal funds rate itself, makes relatively little

difference insofar as Treasury yields and stock prices depend

upon market expectations of the average level of overnight

rates over a horizon extending substantially into the future,

rather than upon the current overnight rate alone. Informa-

tion that implies a future change in the level of the funds

rate should affect these market prices immediately, even if the

change is not expected to occur for weeks; while these prices

should be little affected by the fact that a change has already

occurred, as opposed to being expected to occur (with com-

plete confidence) in the following week. Thus rather than indi-

cating that the Fed’s interest-rate decisions matter only when

they are not anticipated, these findings provide evidence that

anticipations of future policy matter—and that market expec-

tations are more sophisticated than a mere extrapolation of

the current federal funds rate.

Furthermore, even if one were to grant the empirical rele-

vance of the new classical aggregate supply relation, the

Cukierman-Meltzer defense of central bank ambiguity also

depends upon the existence of a substantial information ad-

vantage on the part of the central bank about the times at

which high output relative to potential is particularly valuable.

This might seem obvious, insofar as it might seem that the

state in question relates to the aims of the government, about

which the government bureaucracy should always have greater

insight. But if one seeks to design institutions that improve the

general welfare, one should have no interest in increasing the

ability of government institutions to pursue idiosyncratic ob-

jectives that do not reflect the interests of the public. Thus the

only relevant grounds for variation in the desired level of

output relative to potential should be ones that relate to the


economic efficiency of the natural rate of output (which may

indeed vary over time, due for example to time variation in

market power in goods and/or labor markets). Yet government

entities have no inherent advantage at assessing such states. In

the past, it may have been the case that central banks could

produce better estimates of such states than most private insti-

tutions, thanks to their large staffs of trained economists and

privileged access to government statistical offices. However,

in coming decades, it seems likely that the dissemination of

accurate and timely information about economic conditions

to market participants should increase. If the central bank’s

informational advantage with regard to the current severity

of market distortions is eroded, there will be no justification

(even according to the Cukierman-Meltzer model) for seeking

to preserve an informational advantage with regard to the

bank’s intentions and actions.

Thus there seems little ground to fear that erosion of central

banks’ informational advantage over market participants, to

the extent that one exists, should weaken banks’ ability to

achieve their legitimate stabilization objectives. Indeed, there is

considerable reason to believe that monetary policy should be

even more effective under circumstances of improved private-

sector information. This is because successful monetary policy

is not so much a matter of effective control of overnight inter-

est rates, or even of effective control of changes in the CPI, as of affecting in a desired way the evolution of market

expectations regarding these variables. If the beliefs of market

participants are diffuse and poorly informed, this is difficult,

and monetary policy will necessarily be a fairly blunt instru-

ment of stabilization policy; but in the information economy,

there should be considerable scope for the effective use of the

traditional instruments of monetary policy.


It should be rather clear that the current level of overnight

interest rates as such is of negligible importance for economic

decision making; if a change in the overnight rate were thought

to imply only a change in the cost of overnight borrowing for

that one night, then even a large change (say, a full percentage

point increase) would make little difference to anyone’s spend-

ing decisions. The effectiveness of changes in central bank tar-

gets for overnight rates in affecting spending decisions (and

hence ultimately pricing and employment decisions) is wholly

dependent upon the impact of such actions upon other financial-

market prices, such as longer-term interest rates, equity prices

and exchange rates. These are plausibly linked, through arbi-

trage relations, to the short-term interest rates most directly

affected by central bank actions; but it is the expected future

path of short-term rates over coming months and even years

that should matter for the determination of these other asset

prices, rather than the current level of short-term rates by itself.

The reason for this is probably fairly obvious in the case of

longer-term interest rates; the expectations theory of the term

structure implies that these should be determined by expected

future short rates. It might seem, however, that familiar interest-

rate parity relations should imply a connection between ex-

change rates and short-term interest rates. It should be noted,

however, that interest-rate parity implies a connection between

the interest-rate differential and the rate of depreciation of the

exchange rate, not its absolute level, whereas it is the level that

should matter for spending and pricing decisions. One may

write this relation in the form

$$e_t = E_t e_{t+1} - (i_t - E_t \pi_{t+1}) + (i_t^* - E_t \pi_{t+1}^*) + c_t, \qquad (3)$$

where $e_t$ is the real exchange rate, $i_t$ and $i_t^*$ the domestic and foreign short-term nominal interest rates, $\pi_t$ and $\pi_t^*$ the domestic and foreign inflation rates, and $c_t$ a "risk premium" here treated as exogenous. If the real exchange rate fluctuates over the long run around a constant level $\bar e$, it follows that one can "solve forward" (3) to obtain

$$e_t = \bar e - \sum_{j=0}^{\infty} E_t\bigl(i_{t+j} - \pi_{t+j+1} - \bar r\bigr) + \sum_{j=0}^{\infty} E_t\bigl(i_{t+j}^* - \pi_{t+j+1}^* + c_{t+j} - \bar r\bigr), \qquad (4)$$

where $\bar r$ is the long-run average value of the term $r_t^* \equiv i_t^* - E_t \pi_{t+1}^* + c_t$. Note that in this solution, a change in current

expectations regarding the short-term interest rate at any fu-

ture date should move the exchange rate as much as a change

of the same size in the current short-term rate. Of course, what

this means is that the most effective way of moving the ex-

change rate, without violent movements in short-term interest

rates, will be to change expectations regarding the level of in-

terest rates over a substantial period of time.
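The "solve forward" step behind (4) can be spelled out as follows; this is just the standard forward iteration under the stated long-run assumption, with the bookkeeping involving $\bar r$ made explicit:

% Iterating (3) forward T periods and using the law of iterated expectations:
e_t = E_t e_{t+T}
      - \sum_{j=0}^{T-1} E_t\bigl(i_{t+j} - \pi_{t+j+1}\bigr)
      + \sum_{j=0}^{T-1} E_t\bigl(i^*_{t+j} - \pi^*_{t+j+1} + c_{t+j}\bigr).

% Letting T grow without bound, E_t e_{t+T} tends to \bar e by the long-run assumption;
% adding and subtracting \bar r inside each summand (the \bar r terms cancel across the
% two sums, which have the same number of terms) leaves each summand with zero long-run
% mean, so both infinite sums converge and (4) follows.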

Similarly, it is correct to argue that intertemporal optimiza-

tion ought to imply a connection between even quite short-

term interest rates and the timing of expenditure decisions of

all sorts. However, the Euler equations associated with such

optimization problems relate short term interest rates not to

the level of expenditure at that point in time, but rather to

the expected rate of change of expenditure. For example, (a

log-linear approximation to) the consumption Euler equation

implied by a standard representative household model is of the form

$$c_t = E_t c_{t+1} - \sigma (i_t - E_t \pi_{t+1} - r_t), \qquad (5)$$

where $c_t$ is the log of real consumption expenditure, $r_t$ represents exogenous variation in the rate of time preference, and $\sigma > 0$ is the intertemporal elasticity of substitution. Many standard business cycle models furthermore imply that long-run expectations

$$\bar c_t \equiv \lim_{T \to \infty} E_t [c_T - g(T - t)],$$

where $g$ is the constant long-run growth rate of consumption, should be independent of monetary policy (being determined solely by population growth and technical progress, here treated as exogenous). If so, one can again "solve forward" (5) to obtain

$$c_t = \bar c_t - \sigma \sum_{j=0}^{\infty} E_t\bigl(i_{t+j} - \pi_{t+j+1} - r_{t+j} - \sigma^{-1} g\bigr). \qquad (6)$$

Once more, one finds that current expenditure should depend

mainly upon the expected future path of short rates, rather

than upon the current level of these rates.7 Woodford (2003,

chap. 4) similarly shows that optimizing investment demand

(in a neoclassical model with convex adjustment costs, but

allowing for sticky product prices) is a function of a distributed

lead of expected future short rates, with nearly constant

weights on expected short rates at all horizons.
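A small numerical illustration of (6) may be useful: holding expected inflation, the preference terms $r_{t+j}$, and the long-run expectation $\bar c_t$ fixed, the change in current (log) consumption implied by a change in the expected path of short rates is simply $-\sigma$ times the sum of the changes along the path. The Python sketch below uses purely illustrative assumptions (σ = 1, a forty-period horizon) to compare a rate increase expected to last one period with the same increase expected to persist for eight periods.

# Back-of-the-envelope use of (6), with illustrative parameter values.
sigma = 1.0      # intertemporal elasticity of substitution (assumed)
horizon = 40     # number of periods over which the expected path is specified

def consumption_response(rate_path_change, sigma=sigma):
    """Change in current log consumption implied by (6) for a given change in
    the expected path of per-period short rates, other terms held fixed."""
    return -sigma * sum(rate_path_change)

one_period = [0.01] + [0.0] * (horizon - 1)            # 1 point rise, this period only
eight_periods = [0.01] * 8 + [0.0] * (horizon - 8)     # same rise, expected for 8 periods

print("one-period rise:   ", consumption_response(one_period))
print("eight-period path: ", consumption_response(eight_periods))

The persistent, well-anticipated path moves current spending eight times as much, even though the current overnight rate changes by exactly the same amount in both cases.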

Thus the ability of central banks to influence expenditure,

and hence pricing, decisions is critically dependent upon their

ability to influence market expectations regarding the future

path of overnight interest rates, and not merely their current

level. Better information on the part of market participants

about central bank actions and intentions should increase the

degree to which central bank policy decisions can actually

affect these expectations, and so increase the effectiveness of

monetary stabilization policy. Insofar as the significance of

current developments for future policy are clear to the private

sector, markets can to a large extent ‘‘do the central bank’s

work for it,’’ in that the actual changes in overnight rates


required to achieve the desired changes in incentives can be

much more modest when expected future rates move as well.

There is evidence that this is already happening, as a result

both of greater sophistication on the part of financial markets

and greater transparency on the part of central banks, the two

developing in a sort of symbiosis with one another. Blinder et

al. (2001, 8) argue that in the period from early 1996 through

the middle of 1999, one could observe the U.S. bond mar-

ket moving in response to macroeconomic developments that

helped to stabilize the economy, despite relatively little change

in the level of the federal funds rate, and suggest that this

reflected an improvement in the bond market’s ability to fore-

cast Fed actions before they occur. Statistical evidence of in-

creased forecastability of Fed policy by the markets is provided

by Lange, Sack, and Whitesell (2001), who show that the abil-

ity of Treasury bill yields to predict changes in the federal

funds rate some months in advance has increased since the late

1980s.

The behavior of the funds rate itself provides evidence of a

greater ability of market participants to anticipate the Fed’s

future behavior. It is frequently observed now that announce-

ments of changes in the Fed’s operating target for the funds

rate (made through public statements immediately following

the Federal Open Market Committee meeting that decides

upon the change, under the procedures followed since Feb-

ruary 1994) have an immediate effect upon the funds rate,

even though the Trading Desk at the New York Fed does not

conduct open market operations to alter the supply of Fed

balances until the next day at the soonest (Meulendyke 1998;

Taylor 2001). This is sometimes called an ‘‘announcement

effect.’’ Taylor (2001) interprets this as a consequence of inter-

temporal substitution (at least within a reserve maintenance


period) in the demand for reserves, given the forecastability of

a change in the funds rate once the Fed does have a chance to

adjust the supply of Fed balances in a way consistent with the

new target. Under this interpretation, it is critical that the Fed’s

announced policy targets are taken by the markets to represent

credible signals of its future behavior; given that they are, the

desired effect upon interest rates can largely occur even before

any actual trades by the Fed.

Demiralp and Jorda (2001b) provide evidence of this effect

by regressing the deviation between the actual and target fed-

eral funds rate on the previous two days’ deviations, and upon

the day’s change in the target (if any occurs). The regression

coefficient on the target change (indicating adjustment of the

funds rate in the desired direction on the day of the target

change) is substantially less than one, and is smaller since 1994

(on the order of 0.4) than in the period 1984–1994 (nearly

0.6). This suggests that the ability of the markets to anticipate

the consequences of FOMC decisions for movements in the

funds rate has improved since the Fed’s introduction of explicit

announcements of its target rate, though it was non-negligible

even before this. Of course, this sort of evidence indicates

forecastability of Fed actions only over very short horizons (a

day or two in advance), and forecastability over such a short

time does not in itself help much to influence spending and

pricing decisions. Still, the ‘‘announcement effect’’ provides a

simple illustration of the principle that anticipation of policy

actions in advance is more likely to strengthen the intended

effects of policy, rather than undercutting them as the previous

view would have it. In the information economy, it should

be easier for the announcements that central banks choose

to make regarding their policy intentions to be quickly dis-

seminated among and digested by market participants. And to


the extent that this is true, it should provide central banks with

a powerful tool through which to better achieve their stabili-

zation goals.
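To make the form of the Demiralp-Jorda regression described above concrete, the following Python sketch runs a regression of that form on simulated data. The data-generating process, the sign convention (the deviation is defined here as target minus actual, so that a coefficient of one on the day's target change would mean no same-day movement of the funds rate and zero would mean full same-day adjustment), and all parameter values are my illustrative assumptions, not their data or estimates.

import numpy as np

# Simulate daily deviations of the funds rate from target and occasional
# target changes, then regress the deviation on its own two lags and on the
# day's target change, as in the specification described in the text.
rng = np.random.default_rng(0)
T = 2000
dtgt = rng.choice([0.0, 0.25, -0.25], size=T, p=[0.9, 0.05, 0.05])
dev = np.zeros(T)
for t in range(2, T):
    # assumed process: persistent deviations plus partial (60%) same-day
    # adjustment of the funds rate toward a newly announced target
    dev[t] = 0.3*dev[t-1] + 0.1*dev[t-2] + 0.4*dtgt[t] + 0.02*rng.standard_normal()

y = dev[2:]
X = np.column_stack([np.ones(T-2), dev[1:-1], dev[:-2], dtgt[2:]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["const", "dev(-1)", "dev(-2)", "target change"], coef.round(3))))

With this simulated process the estimated coefficient on the target change comes out near 0.4, the sort of value reported for the post-1994 period, because the simulated funds rate is assumed to move 60 percent of the way to the new target on the announcement day.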

5.1.2 Consequences for the Conduct of Policy

I have argued that improved private-sector information about

policy actions and intentions will not eliminate the ability of

central banks to influence spending and pricing decisions.

However, this does not mean that there are no consequences

for the effective conduct of monetary policy of increased mar-

ket sophistication about such matters. There are several lessons

to be drawn, which are relevant to the situations of the leading

central banks even now but which should be of even greater

importance as information processing improves.

One is that transparency is valuable for the effective conduct

of monetary policy. It follows from my previous analysis that

being able to count upon the private sector’s correct under-

standing of the central bank’s current decisions and future

intentions increases the precision with which a central bank

can, in principle, act to stabilize both prices and economic

activity. I have argued that in the information economy,

improved private-sector information is inevitable; but central

banks can obviously facilitate this as well, through striving to

better explain their decisions to the public. The more sophisti-

cated markets become, the more scope there will be for com-

munication about even subtle aspects of the bank’s decisions

and reasoning, and it will be desirable for central banks to take

advantage of this opportunity.

In fact, this view has become increasingly widespread among

central bankers over the past decade.8 In the United States,

the Fed’s degree of openness about its funds rate operating

targets has notably increased under Alan Greenspan’s tenure


as chairman.9 In some other countries, especially inflation-

targeting countries, the increase in transparency has been even

more dramatic. Central banks such as the Bank of England, the

Reserve Bank of New Zealand, and the Swedish Riksbank

are publicly committed not only to explicit medium-run policy

targets, but even to fairly specific decision procedures for

assessing the consistency of current policy with those targets,

and to the regular publication of inflation reports that explain

the bank’s decisions in this light.

The issue of what exactly central banks should communicate

to the public is too large a question to be addressed in detail

here; Blinder et al. (2001) provide an excellent discussion of

many of the issues. I note, however, that from the perspective

suggested here, what is important is not so much that the cen-

tral bank’s deliberations themselves be public, as that the bank

give clear signals about what the public should expect it to do

in the future. The public needs to have as clear as possible an

understanding of the rule that the central bank follows in

deciding what it does. Inevitably, the best way to communicate

about this will be by offering the public an explanation of the

decisions that have already been made; the bank itself would

probably not be able to describe how it might act in all con-

ceivable circumstances, most of which will never arise. But it is

important to remember that the goal of transparency should be

to make the central bank’s behavior more systematic, and to

make its systematic character more evident to the public—not

the exposure of ‘‘secrets of the temple’’ as a goal in itself.

For example, discussions of transparency in central bank-

ing often stress such matters as the publication of minutes

of deliberations by the policy committee, in as prompt and as

unedited a form as possible. Yet it is not clear that providing

the public with full details of the differences of opinion that


may be expressed before the committee’s eventual decision is

reached really favors public understanding of the systematic

character of policy. Instead, this can easily direct attention to

apparent conflicts within the committee, and to uncertainty in

the reasoning of individual committee members, which may

reinforce skepticism about whether there is any ‘‘policy rule’’

to be discerned. Furthermore, the incentive provided to indi-

vidual committee members to speak for themselves rather than

for the institution may make it harder for the members to sub-

ordinate their individual votes to any systematic commitments

of the institution, thus making policy less rule based in fact,

and not merely in perception.

More to the point would be an increase in the kind of

communication provided by the Inflation Reports or Monetary

Policy Reports. These reports do not pretend to give a blow-

by-blow account of the deliberations by which the central bank

reached the position that it has determined to announce; but

they do explain the analysis that justifies the position that has

been reached. This analysis provides information about the

bank’s systematic approach to policy by illustrating its appli-

cation to the concrete circumstances that have arisen since the

last report; and it provides information about how conditions

are likely to develop in the future through explicit discussion of

the bank’s own projections. Because the analysis is made pub-

lic, it can be expected to shape future deliberations; the bank

knows that it should be expected to explain why views ex-

pressed in the past are not later being followed. Thus a com-

mitment to transparency of this sort helps to make policy more

fully rule based, as well as increasing the public’s understand-

ing of the rule.

Another lesson is that central banks must lead the markets.

Our statement above that it is not desirable for banks to surprise


the markets might easily be misinterpreted to mean that central

banks ought to try to do exactly what the markets expect, in-

sofar as that can be determined. Indeed, the temptation to

‘‘follow the markets’’ becomes all the harder to avoid, in a

world where information about market expectations is easily

available, to central bankers as well as to the market partic-

ipants themselves. But this would be a mistake, as Blinder

(1998, chap. 3, sec. 3) emphasizes. If the central bank delivers

whatever the markets expect, then there is no objective anchor

for these expectations: arbitrary changes in expectations may

be self-fulfilling, because the central bank validates them.10

This would be destabilizing, for both nominal and real vari-

ables. To avoid this, central banks must take a stand as to the

desired path of interest rates, and communicate it to the mar-

kets (as well as acting accordingly). While the judgments upon

which such decisions are based will be fallible, failing to give a

signal at all would be worse. A central bank should seek to

minimize the extent to which the markets are surprised, but it

should do this by conforming to a systematic rule of behavior

and explaining it clearly, not by asking what others expect it

to do.

This points up the fact that policy should be rule based. If

the bank does not follow a systematic rule, then no amount of

effort at transparency will allow the public to understand and

anticipate its policy. The question of the specific character of a

desirable policy rule is also much too large a topic for the cur-

rent occasion. However, a few remarks may be appropriate

about what is meant by rule-based policy.

I do not mean that a bank should commit itself to an explicit

state-contingent plan for the entire foreseeable future, specify-

ing what it would do under every circumstance that might

possibly arise. That would obviously be impractical, even


under complete unanimity about the correct model of the

economy and the objectives of policy, simply because of the

vast number of possible futures. But it is not necessary, in

order to obtain the benefits of commitment to a systematic

policy. It suffices that a central bank commit itself to a sys-

tematic way of determining an appropriate response to future

developments, without having to list all of the implications of

the rule for possible future developments.11

Nor is it necessary to imagine that commitment to a system-

atic rule means that once a rule is adopted it must be followed

forever, regardless of subsequent improvements in understand-

ing of the effects of monetary policy on the economy, including

experience with the consequences of implementing the rule. If

the private sector is forward looking, and it is possible for the

central bank to make the private sector aware of its policy

commitments, then there are important advantages of commit-

ment to a policy other than discretionary optimization—

namely, simply doing what seems best at each point in time,

with no commitment regarding what may be done later. This

is because there are advantages to having the private sector

be able to anticipate delayed responses to a disturbance, that

may not be optimal ex post if one reoptimizes taking the

private sector’s past reaction as given. But one can create the

desired anticipations of subsequent behavior—and justify them

—without committing to follow a fixed rule in the future no

matter what may happen in the meantime.

It suffices that the private sector have no ground to forecast

that the bank’s behavior will be systematically different from

the rule that it pretends to follow. This will be the case if the

bank is committed to choosing a rule of conduct that is justifi-

able on certain principles, given its model of the economy.12

The bank can then properly be expected to continue to follow


its current rule, as long as its understanding of the economy

does not change; and as long as there is no predictable direc-

tion in which its future model of the economy should be dif-

ferent from its current one, private-sector expectations should

not be different from those in the case of an indefinite commit-

ment to the current rule. Yet changing to a better rule will re-

main possible in the case of improved knowledge (which is

inevitable); and insofar as the change is justified both in terms

of established principles and in terms of a change in the bank’s

model of the economy that can itself be defended, this need not

impair the credibility of the bank’s professed commitments.

It follows that rule-based policymaking will necessarily mean

a decision process in which an explicit model of the economy

(albeit one augmented by judgmental elements) plays a central

role, both in the deliberations of the policy committee and

in explanation of those deliberations to the public. This too

has been a prominent feature of recent innovations in the con-

duct of monetary policy by the inflation-targeting central banks, such

as the Bank of England, the Reserve Bank of New Zealand,

and the Swedish Riksbank. While there is undoubtedly much

room for improvement both in current models and current ap-

proaches to the use of models in policy deliberations, one can

only expect the importance of models to policy deliberations to

increase in the information economy.

5.2 Erosion of Demand for the Monetary Base

Another frequently expressed concern about the effectiveness

of monetary policy in the information economy has to do with

the potential for erosion of private-sector demand for mone-

tary liabilities of the central bank. The alarm has been raised

in particular in a widely discussed recent essay by Benjamin


Friedman (1999). Friedman begins by proposing that it is

something of a puzzle that central banks are able to control the

pace of spending in large economies by controlling the supply

of ‘‘base money’’ when this monetary base is itself so small in

value relative to the size of those economies. The scale of the

transactions in securities markets through which central banks

such as the U.S. Federal Reserve adjust the supply of base

money is even more minuscule when compared to the overall

volume of trade in those markets.13

He then argues that this disparity of scale has grown more

extreme in the past quarter century as a result of institutional

changes that have eroded the role of base money in trans-

actions, and that advances in information technology are likely

to carry those trends still farther in the next few decades.14 In

the absence of aggressive regulatory intervention to head off

such developments, the central bank of the future will be ‘‘an

army with only a signal corps’’—able to indicate to the private

sector how it believes that monetary conditions should de-

velop, but not able to do anything about it if the private sector

has opinions of its own. Mervyn King (1999) similarly pro-

poses that central banks are likely to have much less influence

in the twenty-first century than they did in the previous one,

as the development of ‘‘electronic money’’ eliminates their

monopoly position as suppliers of means of payment.

The information technology (IT) revolution clearly has the

potential to fundamentally transform the means of payment in

the coming century. But does this really threaten to eliminate

the role of central banks as guarantors of price stability?

Should new payments systems be regulated with a view to

protecting central banks’ monopoly position for as long as

possible, sacrificing possible improvements in the efficiency of

the financial system in the interest of macroeconomic stability?


I argue that these concerns as well are misplaced. Even if the

more radical hopes of the enthusiasts of ‘‘electronic money’’

(e-money) are realized, there is little reason to fear that cen-

tral banks would not still retain the ability to control the level

of overnight interest rates, and by so doing to regulate spend-

ing and pricing decisions in the economy in essentially the same

way as at present. It is possible that the precise means used to

implement a central bank’s operating target for the overnight

rate will need to change in order to remain effective in a future

‘‘cashless’’ economy, but the way in which these operating tar-

gets themselves are chosen in order to stabilize inflation and

output may remain quite similar to current practice.

5.2.1 Will Money Disappear, and Does It Matter?

There are a variety of reasons why improvements in informa-

tion technology might be expected to reduce the demand for

base money. Probably the most discussed of these—and the

one of greatest potential significance for traditional measures

of the monetary base—is the prospect that ‘‘smart cards’’ of

various sorts might replace currency (notes and coins) as a

means of payment in small, everyday transactions. In this case,

the demand for currency issued by central banks might dis-

appear. While experiments thus far have not made clear the

degree of public acceptance of such a technology, many in

the technology sector express confidence that smart cards

will largely displace the use of currency within only a few

years.15 Others are more skeptical. Goodhart (2000), for ex-

ample, argues that the popularity of currency will never wane

—at least in the black market transactions that arguably ac-

count for a large fraction of aggregate currency demand—

owing to its distinctive advantages in allowing for unrecorded

transactions. And improvements in information technology


can conceivably make currency more attractive. For example,

in the United States the spread of ATM machines has increased

the size of the cash inventories that banks choose to hold,

increasing currency demand relative to GDP.16

More to the point, in my view, is the observation that even

a complete displacement of currency by ‘‘electronic cash’’

(e-cash) of one kind or another would in no way interfere with

central bank control of overnight interest rates. It is true that

such a development could, in principle, result in a drastic re-

duction in the size of countries’ monetary bases, since currency

is by far the largest component of conventional measures of

base money in most countries.17 But neither the size nor even

the stability of the overall demand for base money is of rele-

vance to the implementation of monetary policy, unless central

banks adopt monetary base targeting as a policy rule—a pro-

posal found in the academic literature,18 but seldom attempted

in practice.

What matters for the effectiveness of monetary policy is cen-

tral bank control of overnight interest rates,19 and these are

determined in the interbank market for the overnight central

bank balances that banks (or sometimes other financial insti-

tutions) hold in order to satisfy reserve requirements and to

clear payments. The demand for currency affects this market

only to the extent that banks obtain additional currency from

the central bank in exchange for central bank balances, as a

result of which fluctuations in currency demand affect the sup-

ply of central bank balances, to the extent that they are not

accommodated by offsetting open market operations by the

central bank. In practice, central bank operating procedures

almost always involve an attempt to insulate the market for

central bank balances from these disturbances by automatically

accommodating fluctuations in currency demand,20 and this


is one of the primary reasons that central banks conduct open mar-

ket operations (though such operations are unrelated to any

change in policy targets). Reduced use of currency, or even its

total elimination, would only simplify the central bank’s prob-

lem, by eliminating this important source of disturbances to the

supply of central bank balances under current arrangements.

However, improvements in information technology may also

reduce the demand for central bank balances. In standard

textbook accounts, this demand is due to banks’ need to hold

reserves in a certain proportion to transactions balances, owing

to regulatory reserve requirements. However, faster informa-

tion processing can allow banks to economize on required

reserves, by shifting customers’ balances more rapidly between

reservable and nonreservable categories of accounts.21 Indeed,

since the introduction of ‘‘sweep accounts’’ in the United States

in 1994, required reserves have fallen substantially.22 At the

same time, increased bank holdings of vault cash, as discussed

above, have reduced the need for Fed balances as a way of

satisfying banks’ reserve requirements. Due to these two devel-

opments, the demand for Fed balances to satisfy reserve

requirements has become quite small—only a bit more than $6

billion at present (see table 5.1). As a consequence, some have

argued that reserve requirements are already virtually irrele-

vant in the United States as a source of Fed control over the

economy. Furthermore, the increased availability of oppor-

tunities for substitution away from deposits subject to reserve

requirements predictably leads to further pressure for the

reduction or even elimination of such regulations; as a result,

recent years have seen a worldwide trend toward lower reserve

requirements.23

But such developments need not pose any threat to central

bank control of overnight interest rates. A number of coun-


tries, such as the United Kingdom, Sweden, Canada, Australia,

and New Zealand among others, have completely eliminated

reserve requirements. Yet these countries’ central banks con-

tinue to implement monetary policy through operating targets

for an overnight interest rate, and continue to have consider-

able success at achieving their operating targets. Indeed, as

we show below, some of these central banks achieve tighter

control of overnight interest rates than does the U.S. Federal

Reserve.

The elimination of required reserves in these countries does

not mean the disappearance of a market for overnight central

bank balances. Instead, central bank balances are still used to

clear interbank payments. Indeed, even in the United States,

balances held to satisfy reserve requirements account for less

than half of total Fed balances (as shown in table 5.1),24 and

Furfine (2000) argues that variations in the demand for clearing

balances account for the most notable high-frequency patterns in the level and volatility of the funds rate in the United States.

Table 5.1
Reserves held to satisfy legal reserve requirements, and total balances of depository institutions held with U.S. Federal Reserve Banks (averages for the two-week period ending August 8, 2001, in billions of dollars)

Required Reserves
  Applied Vault Cash                        32.3
  Fed Balances to Satisfy Res. Req.          6.5
  Total Required Reserves                   38.8

Fed Balances
  Required Clearing Balances                 7.1
  Adjustment to Compensate for Float         0.4
  Fed Balances to Satisfy Res. Req.          6.5
  Excess Reserves                            1.1
  Total Fed Balances                        15.1

Sources: Federal Reserve Statistical Release H.3, 8/9/01, and Statistical Release H.4.1, 8/2/01 and 8/9/01.

In the countries without reserve requirements, this demand

for clearing purposes has simply become the sole source of

demand for central bank balances. Given the existence of a

demand for clearing balances (and indeed a somewhat interest-

elastic demand, as discussed in section 5.2.2), a central bank

can still control the overnight rate through its control of the net

supply of central bank balances.

Nonetheless, the disappearance of a demand for required

reserves may have consequences for the way that a central

bank can most effectively control overnight interest rates. In an

economy with an efficient interbank market, the aggregate de-

mand for clearing balances will be quite small relative to the

total volume of payments in the economy; for example, in the

United States, banks that actively participate in the payments

system typically send and receive payments each day about

thirty times the size of their average overnight clearing bal-

ances, and the ratio is as high as two hundred for the most

active banks (Furfine 2000). Exactly for this reason, random

variation in daily payments flows can easily lead to fluctuations

in the net supply of and demand for overnight balances that

are large relative to the average level of such balances.25 This

instability is illustrated by figure 5.3, showing the daily varia-

tion in aggregate overnight balances at the Reserve Bank of

Australia, over several periods during which the target over-

night rate does not change, and over which the actual over-

night rate is also relatively stable (as shown in figure 5.2).

A consequence of this volatility is that quantity targeting—

say, adoption of a target for aggregate overnight clearing bal-

ances while allowing overnight interest rates to attain what-

ever level should clear the market, as under the nonborrowed


reserves targeting procedure followed in the United States in

the period 1979–1982—will not be a reliable approach to sta-

bilization of the aggregate volume of spending, if practicable at

all. And even in the case of an operating target for the over-

night interest rate, the target is not likely to be most reliably

attained through daily open market operations to adjust the

aggregate supply of central bank balances, the method cur-

rently used by the Fed. The overnight rate at which the inter-

bank market clears is likely to be highly volatile, if the central

bank conducts an open market operation only once, early in

the day, and there are no standing facilities of the kind that

limit variation of the overnight rate under the ‘‘channel’’ sys-

tems discussed later. In the United States at present, errors in

judging the size of the open market operation required on a

given day can be corrected only the next day without this

resulting in daily fluctuations in the funds rate that are too

great, owing to the intertemporal substitution in the demand

for Fed balances stressed by Taylor (2001). But the scope for

intertemporal substitution results largely from the fact that

U.S. reserve requirements apply only to average reserves over a

two-week period; and indeed, funds rate volatility is observed

to be higher on the last day of a reserve maintenance period

(Spindt and Hoffmeister 1988; Hamilton 1996; Furfine 2000).

There is no similar reason for intertemporal substitution in the

demand for clearing balances, as penalties for overnight over-

drafts are imposed on a daily basis.26 Hence the volatility of

the overnight interest rate, at least at the daily frequency, could

easily be higher under such an operating procedure, in the

complete absence of (or irrelevance of) reserve requirements.27

Many central banks in countries that no longer have reserve

requirements nonetheless achieve tight control of overnight

interest rates, through the use of a ‘‘channel’’ system of the


kind described in section 5.2.2. In a system of this kind, the

overnight interest rate is kept near the central bank’s target

rate through the provision of standing facilities by the cen-

tral bank, with interest rates determined by the target rate.

Such a system is likely to be more effective in an economy

without reserve requirements, and one may well see a migra-

tion of other countries, such as the United States, toward such

a system as existing trends further erode the role of legal re-

serve requirements.

Improvements in information technology may well reduce

the demand for central bank balances for clearing purposes as

well. As the model presented later shows, the demand for non-

zero overnight clearing balances results from uncertainty about

banks’ end-of-day positions in their clearing accounts that has

not yet been resolved at the time of trading in the interbank

market. But such uncertainty is entirely a function of imperfect

communication; were banks to have better information sooner

about their payment flows, and were the interbank market

more efficient at allowing trading after the information about

these flows has been fully revealed, aggregate demand for

overnight clearing balances would be smaller and less interest

elastic. In principle, sufficiently accurate monitoring of pay-

ments flows should allow each bank to operate with zero

overnight central bank balances.

Yet once again I would argue that future improvements in

the efficiency of the financial system pose no real threat to cen-

tral bank control of overnight rates. The model presented later

implies that the effects upon the demand for clearing balances

of reduced uncertainty about banks’ end-of-day positions can

be offset by reducing the opportunity cost of overnight bal-

ances as well, by increasing the rate of interest paid by the

central bank on such balances. In order for the interbank mar-


ket to remain active, it is necessary that the interest paid on

overnight balances at the central bank not be made as high as

the target for the market overnight rate. But as the interbank

market becomes ever more frictionless (the hypothesis under

consideration), the size of the spread required for this purpose

becomes smaller. There should always be a range of spreads

that are small enough to make the demand for clearing bal-

ances interest elastic, while nonetheless large enough to imply

that banks with excess balances will prefer to lend these in the

interbank market, unless the overnight rate in the interbank

market is near the deposit rate, and thus well below the target

rate. (This latter behavior is exactly what is involved in an

interest-elastic demand for overnight balances.) Thus once

again some modification of current operating procedures may

be required, but without any fundamental change in the way

that central banks can affect overnight rates.

Finally, some, such as Mervyn King (2000), foresee a future

in which electronic means of payment come to substitute for

current systems in which payments are cleared through central

banks.28 This prospect is highly speculative at present; most

current proposals for variants of e-money still depend upon the

final settlement of transactions through the central bank, even

if payments are made using electronic signals rather than old-

fashioned instruments such as paper checks. And Charles

Freedman (2000), for one, argues that the special role of cen-

tral banks in providing for final settlement is unlikely ever to

be replaced, owing to the unimpeachable solvency of these

institutions, as government entities that can create money at

will. Yet the idea is conceivable at least in principle, since the

question of finality of settlement is ultimately a question of the

quality of one’s information about the accounts of the parties

with whom one transacts—and while the development of


central banking has undoubtedly been a useful way of econo-

mizing on limited information-processing capacities, it is not

clear that advances in information technology could not make

other methods viable.

One way in which the development of alternative, electronic

payments systems might be expected to constrain central bank

control of interest rates is by limiting the ability of a central

bank to raise overnight interest rates when this might be

needed to restrain spending and hence upward pressure on

prices. Here the argument would be that high interest rates

might have to be avoided in order not to raise too much the

opportunity cost of using central bank money, giving private

parties an incentive to switch to an alternative payments sys-

tem. But such a concern depends upon the assumption, stan-

dard in textbook treatments of monetary economics, that the

rate of interest on money must be zero, so that ‘‘tightening’’

policy always means raising the opportunity cost of using cen-

tral bank money. Under such an account, effective monetary

policy depends upon the existence of central bank monopoly

power in the supply of payments services, so that the price of

its product can be raised at will through sufficient rationing of

supply.

Yet raising interest rates in no way requires an increase in

the opportunity cost of central bank clearing balances, for one

can easily pay interest on these balances, and the interest rate

paid on overnight balances can be raised in tandem with the

increase in the target overnight rate. This is exactly what is

done under the ‘‘channel’’ systems described later. Of course,

there is a ‘‘technological’’ reason why it is difficult to pay an

interest rate other than zero on currency.29 But this would not

be necessary in order to preserve the central bank’s control of

overnight interest rates. As noted earlier, the replacement of


currency by other means of payment would pose no problem

for monetary control at all. (Highly interest-elastic currency

demand would complicate the implementation of monetary

policy, as large open market operations might be needed to

accommodate the variations in currency demand. But this

would not undermine or even destabilize the demand for cen-

tral bank balances.) In order to prevent a competitive threat to

the central bank–managed clearing system, it should suffice

that the opportunity cost of holding overnight clearing bal-

ances be kept low. The evident network externalities associated

with the choice of a payments system, together with the natural

advantages of central banks in performing this function

stressed by Freedman (2000), should then make it likely that

many payments would continue to be settled using central

bank accounts.

My conclusion is that while advances in information tech-

nology may well require changes in the way in which monetary

policy is implemented in countries like the United States, the

ability of central banks to control inflation will not be under-

mined by advances in information technology. And in the case

of countries like Canada, Australia, or New Zealand, the

method of interest-rate control that is currently used—the

‘‘channel’’ system described later—should continue to be quite

effective, even in the face of the most radical of the develop-

ments currently envisioned. I turn now to a further consider-

ation of the functioning of such a system.

5.2.2 Interest-Rate Control Using Standing Facilities

The basic mechanism through which the overnight interest rate

in the interbank market is determined under a ‘‘channel’’ sys-

tem can be explained using figure 5.1.30 The model sketched

here is intended to describe determination of the overnight


interest rate in a system such as that of Canada, Australia, or

New Zealand, where there are no reserve requirements.31 Under

such a system, the central bank chooses a target overnight in-

terest rate (indicated by $i^*$ in the figure), which is periodically

adjusted in response to changing economic conditions.32

In addition to supplying a certain aggregate quantity of

clearing balances (which can be adjusted through open market

operations), the central bank offers a lending facility, through

which it stands ready to supply an arbitrary amount of addi-

tional overnight balances at a fixed interest rate. The lending

rate is indicated by the level $i^l$ in figure 5.1. In Canada, Aus-

tralia, and New Zealand, this lending rate is generally set ex-

actly twenty-five basis points higher than the target rate.33

Thus there is intended to be a small penalty associated with the

use of this lending facility rather than acquiring funds through

the interbank market. But funds are freely available at this

facility (upon presentation of suitable collateral), without the

sort of rationing or implicit penalties associated with discount

window borrowing in the United States.34

Finally, depository institutions that settle payments through

the central bank also have the right to maintain excess clearing

balances overnight with the central bank at a deposit rate. This

rate is indicated by $i^d$ in figure 5.1. The deposit rate is positive

but slightly lower than the target overnight rate, again so as

to penalize banks slightly for not using the interbank market.

Typically, the target rate is the exact center of the band whose

upper and lower bounds are set by the lending rate and the

deposit rate; thus in the countries just mentioned, the deposit

rate is generally set exactly twenty-five basis points below the

target rate.35 The lending rate on the one hand and the deposit

rate on the other then define a channel within which overnight

interest rates should be contained.36 Because these are both


standing facilities, no bank has any reason to pay another bank

a higher rate for overnight cash than the rate at which it could

borrow from the central bank; similarly, no bank has any rea-

son to lend overnight cash at a rate lower than the rate at

which it can deposit with the central bank. Furthermore, the

spread between the lending rate and the deposit rate gives banks

an incentive to trade with one another (with banks that find

themselves with excess clearing balances lending them to those

that find themselves short) rather than depositing excess funds

with the central bank when long and borrowing from the

lending facility when short. The result is that the central bank

can control overnight interest rates within a fairly tight range

regardless of what the aggregate supply of clearing balances

may be; frequent quantity adjustments accordingly become less

important.
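
To make the incentives concrete (an illustrative calculation of my own; the particular target level is hypothetical, though the twenty-five-basis-point spreads match those just described): if the target rate is $i^* = 4.75$ percent, then $i^l = 5.00$ percent and $i^d = 4.50$ percent, so that

$$i^d = 4.50\% \;\le\; i \;\le\; i^l = 5.00\%.$$

A bank that ends the day short can always borrow at 5.00 percent from the lending facility, and a bank that ends the day long can always earn 4.50 percent at the deposit facility; any interbank trade at a rate strictly between these two figures therefore leaves both parties better off than resort to the standing facilities, which is why the market rate stays inside the channel.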

Figure 5.1 Supply and demand for clearing balances under a ‘‘channel’’ system


Overnight rate determination under such a system can be

explained fairly simply. The two standing facilities result in an

effective supply curve for clearing balances of the form indi-

cated by schedule S in figure 5.1. The vertical segment is

located at $S$, the net supply of clearing balances apart from any

obtained through the lending facility. This is affected by net

government payments and variations in the currency demands

of banks, in addition to the open market operations of the

central bank. Under a channel system, the central bank’s target

supply of clearing balances may vary from day to day, but it is

adjusted for technical reasons (for example, the expectation of

large payments on a particular day) rather than as a way of

implementing or signaling changes in the target overnight rate

(as in the U.S.). The horizontal segment to the right at the

lending rate indicates the perfectly elastic supply of additional

overnight balances from the lending facility. The horizontal

segment to the left at the deposit rate indicates that the pay-

ment of interest on deposits puts a floor on how low the equi-

librium overnight rate can fall, no matter how low the demand

for clearing balances may be. The equilibrium overnight rate is

then determined by the intersection of this schedule with a de-

mand schedule for clearing balances, such as the curve D1 in

the figure.37

A simple model of the determinants of the demand for

clearing balances can be derived as follows.38 To simplify, we

shall treat the interbank market as a perfectly competitive

market, held at a certain point in time, that occurs after the

central bank’s last open-market operation of the day, but be-

fore the banks are able to determine their end-of-day clearing

balances with certainty. The existence of residual uncertainty

at the time of trading in the interbank market is crucial;39 it

means that even after banks trade in the interbank market,


they will expect to be short of funds at the end of the day with

a certain probability, and also to have excess balances with a

certain probability.40 Trading in the interbank market then

occurs to the point where the risks of these two types are just

balanced for each bank.

Let the random variable $z^i$ denote the net payments to bank $i$ during a given day; that is, these represent the net additions to its clearing account at the central bank by the end of the day. At the time of trading in the interbank market, the value of $z^i$ is not yet known with certainty, although a good bit of the uncertainty will have been resolved. Let $\varepsilon^i \equiv z^i - E(z^i)$ represent the eventual end-of-day surprise; here and in what follows $E(\cdot)$ denotes an expectation conditional upon information at the time of trading in the interbank market. Suppose furthermore that the random variable $\varepsilon^i/\sigma^i$ has a distribution with cumulative distribution function (cdf) $F$ for each bank; here $\sigma^i > 0$ is a parameter (possibly different from day to day, for reasons of the sort discussed by Furfine 2000) that indexes the degree of uncertainty of bank $i$. Because of this uncertainty, a bank that trades in the interbank market to the point where its expected end-of-day balance (at the time of trading) is $s^i$ will have an actual end-of-day balance equal to $s^i + \varepsilon^i$. It is convenient to use $s^i$ as the bank's choice variable in modeling its trading in the interbank market.

A risk-neutral bank should then choose $s^i$ in order to maximize expected returns $E(R)$, where its net return $R$ on its overnight balances at the central bank is equal to

$$R(s^i + \varepsilon^i) = i^d \max(s^i + \varepsilon^i, 0) + i^l \min(s^i + \varepsilon^i, 0) - i(s^i + \varepsilon^i), \qquad (7)$$

if $i$ is the rate at which overnight funds can be lent or borrowed in the interbank market. Note that the bank's net lending in the interbank market is equal to its beginning-of-day balances plus $E(z^i) - s^i$; this differs by a constant (that is, a quantity that is independent of the bank's trading decision) from the quantity $-s^i$ that enters expression (7). If the cdf $F$ is continuous, the first-order condition for optimal choice of $s^i$ is then given by

$$(i^d - i)\bigl(1 - F(-s^i/\sigma^i)\bigr) + (i^l - i)\,F(-s^i/\sigma^i) = 0,$$

implying desired overnight balances of

$$s^i = -\sigma^i F^{-1}\!\left(\frac{i - i^d}{i^l - i^d}\right). \qquad (8)$$

Aggregating over banks i, we obtain the demand schedule

plotted in figure 5.1. As one would expect, the demand sched-

ule is decreasing in i. In the figure, desired balances are shown

as becoming quite large as $i$ approaches $i^d$; this reflects assign-

ment of a small but positive probability to the possibility of

very large negative payments late in the day, which risk banks

will wish to insure against if the opportunity cost of holding

funds overnight with the central bank is low enough.

The market-clearing overnight rate $i$ is then the rate that results in an aggregate demand such that

$$\sum_i s^i = S + u. \qquad (9)$$

Here the net supply of clearing balances expected at the time of

trading in the interbank market41 is equal to the central bank’s

target supply of clearing balances S, plus a random term u. The

latter term represents variation in the aggregate supply of

clearing balances (e.g., due to currency demand by banks or

government payments) that has not been correctly anticipated

by the central bank at the time of its last open-market opera-

tion (and so offset), but that has been revealed by the time of


trading in the interbank market.42 The quantity $S + u$ represents the location on the horizontal axis of the vertical segment of the effective supply schedule in figure 5.1. (The figure depicts equilibrium in the case that $u = 0$.)

Substitution of (8) into (9) yields the solution

$$i = i^d + F\!\left(-\frac{S + u}{\sum_i \sigma^i}\right)(i^l - i^d). \qquad (10)$$

As noted earlier, the market overnight rate is necessarily within the channel: $i^d \le i \le i^l$. Its exact position within the channel will be a decreasing function of the supply of central-bank balances $S + u$. It is important to note that the interest rates associated with the two standing facilities play a crucial role in determining the equilibrium overnight rate, even if the market rate remains always in the interior of the channel (as is typical in practice, and as is predicted by the model if the support of $\varepsilon^i/\sigma^i$ is sufficiently wide relative to the support of $u$). This is

because these rates matter not only for the determination of

the location of the horizontal segments of the effective supply

schedule S, but also for the location of the demand schedule

D. Alternatively, the locations of the standing facilities matter

because individual banks do resort to them with positive

probability, even though it is not intended that the overnight

rate should ever be driven to either boundary of the channel.

The model predicts an equilibrium overnight rate at the target rate (the midpoint of the channel),

$$i = i^* = \frac{i^d + i^l}{2},$$

when $u = 0$ (variations in the supply of clearing balances are successfully forecasted and offset by the central bank) and the target supply of clearing balances is equal to

$$S = -F^{-1}(1/2)\sum_i \sigma^i. \qquad (11)$$

As long as the central bank is sufficiently accurate in estimating

the required supply of clearing balances (11) and in eliminating

the variations represented by the term u, the equilibrium fluc-

tuations in the overnight rate around this value should be small

(and it should be near the target rate on average).

In the case of a symmetric distribution for $\varepsilon^i$ (or any distribution such that zero is the median as well as the mean), (11) implies that the required target supply of clearing balances should be zero. In practice, it seems that a small positive level of aggregate clearing balances is typically desired when the

overnight rate remains in the center of the channel,43 indicating

some asymmetry in the perceived risks.44 Thus a small positive

target level of clearing balances is appropriate; but the model

explains why this can be quite small.
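
The logic of (8), (10), and (11) can be checked with a minimal numerical sketch. The code below is my own illustration rather than anything in the chapter: it assumes a standard normal distribution for each $\varepsilon^i/\sigma^i$ and uses arbitrary parameter values (a 6 percent target, ten identical banks), neither of which is taken from the text.

```python
import numpy as np
from scipy.stats import norm

# Illustrative channel: 6.00% target with standing facilities 25 bp either side.
i_target = 0.0600
i_l = i_target + 0.0025            # lending rate (top of the channel)
i_d = i_target - 0.0025            # deposit rate (bottom of the channel)
sigma = np.full(10, 100.0)         # uncertainty parameters sigma_i (arbitrary units)
F = norm.cdf                       # assumed cdf of epsilon_i / sigma_i

def desired_balance(i, sigma_i):
    """Equation (8): a bank's desired expected end-of-day clearing balance."""
    return -sigma_i * norm.ppf((i - i_d) / (i_l - i_d))

def equilibrium_rate(S_plus_u):
    """Equation (10): the overnight rate at which summed balances equal S + u."""
    return i_d + F(-S_plus_u / sigma.sum()) * (i_l - i_d)

# Equation (11): with a symmetric F the prescribed target supply is zero.
S = -norm.ppf(0.5) * sigma.sum()

i_eq = equilibrium_rate(S)                                 # the case u = 0
print(round(i_eq, 6))                                      # 0.06: the target rate
print(round(abs(sum(desired_balance(i_eq, s) for s in sigma)), 6))  # 0.0: market clears

# An unanticipated addition to the supply of balances (u > 0) pushes the rate
# below target, but it can never leave the channel [i_d, i_l].
print(round(equilibrium_rate(S + 500.0), 6))               # about 0.059, above i_d = 0.0575
```

Note that the supply of balances determines only the position of the rate within the channel, not its absolute level.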

The more important prediction of the model, however, is

that the demand for clearing balances should be a function of

the location of the overnight rate relative to the lending rate

and deposit rate, but independent of the absolute level of any

of these interest rates.45 This means that an adjustment of the

level of overnight rates by the central bank need not require

any change in the supply of clearing balances, as long as the

location of the lending and deposit rates relative to the target

overnight rate do not change. Thus under a channel system,

changes in the level of overnight interest rates are brought

about by simply announcing a change in the target rate, which

has the implication of changing the lending and deposit rates at

the central bank’s standing facilities; no quantity adjustments

in the target supply of clearing balances are required.
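
The same point can be seen by rerunning the calculation of equation (10) with the standing-facility rates tied to the target, as they are under the channel systems described here; the numbers below are again hypothetical, and a standard normal $F$ is assumed.

```python
from scipy.stats import norm

def clearing_rate(i_target, S_plus_u, total_sigma, half_width=0.0025):
    """Equation (10), with the deposit and lending rates set 25 bp below and
    above whatever target the central bank announces."""
    i_d = i_target - half_width
    i_l = i_target + half_width
    return i_d + norm.cdf(-S_plus_u / total_sigma) * (i_l - i_d)

# The same (arbitrary) supply of clearing balances is consistent with any level
# of the target rate: a 25 bp cut and then a 50 bp hike move the market rate
# one-for-one, with no change in the supply of balances required.
for target in (0.0475, 0.0450, 0.0500):
    print(round(clearing_rate(target, 50.0, 1000.0), 6))
```

Each equilibrium rate sits the same small distance below its target; only the level of the whole channel has moved.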

Open market operations (or their equivalent) are still used

under such a system.46 But rather than being used either to


signal or to enforce a change in the operating target for over-

night rates, as in the United States, these are a purely technical

response to daily changes in the bank’s forecast of external

disturbances to the supply of clearing balances, and to its fore-

cast of changes in the degree of uncertainty regarding payment

flows. The bank acts each day in order to keep $(S + u)/\sum_i \sigma^i$ as close as possible to its desired value,47 which desired value is independent of both the current operating target $i^*$ and the rate

i at which the interbank market might currently be trading,

unlike the reaction function of the Trading Desk of the New

York Fed described by Taylor (2001).48

The degree to which the system succeeds in practice in Aus-

tralia is shown in figure 5.2, which plots the overnight interest

rate since adoption of the complete system described here in

June 1998.49 The channel established by the RBA’s standing

facilities is plotted as well. One observes that the overnight in-

terest rate not only remains well within the channel at all times,

but that on most days it remains quite close to the target rate

(the center of the channel).

On the dates at which the target rate is adjusted (by 25 or 50

basis points at a time), the overnight rate immediately jumps to

within a few basis points of the new target level. Furthermore,

these changes in the overnight rate do not require adjustments

of the supply of clearing balances. Both the RBA’s target

level50 of clearing balances (ES balances) and actual overnight

balances are plotted in figure 5.3. Here the vertical dotted lines

indicate the dates of the target changes shown in figure 5.2.

While there are notable day-to-day variations in both target

and actual balances, these are not systematically lower when

the bank aims at a higher level of overnight rates. Thus the

ability of the RBA to ‘‘tighten’’ policy is in no way dependent

upon the creation of a greater ‘‘scarcity’’ of central bank


balances. This is a direct consequence of the fact that interest

rates are raised under this system without any attempt to

change the spread between market rates of return and the

interest paid on bank reserves. Instead, the target supply of

clearing balances is frequently adjusted for technical reasons at

times unrelated to policy changes. For example, target balances

were more than doubled during the days spanning the ‘‘Y2K’’

date change, as a result of increased uncertainty about cur-

rency demand, though this was not associated with any change

in the bank’s interest-rate target, and only modest variation in

actual overnight rates.51

Figure 5.2 The overnight rate since the introduction of the RTGS system in Australia


A similar system has proven even more strikingly effective in

New Zealand, where it was also adopted at the time of the

introduction of an RTGS payment system, in March 1999.52

Figure 5.4 provides a similar plot of actual and target rates,

as well as the rates associated with the standing facilities, in

New Zealand under the OCR system. On most days, the actual

overnight rate is equal to the OCR, to the nearest basis point,

so that the dotted line indicating the OCR is not visible in the

figure. Changes in the OCR bring about exactly the same

change in the actual overnight rate, and these occur without

any change in the RBNZ’s ‘‘settlement cash target,’’ which was

held fixed (at $20 million NZ) during this period, except for

Figure 5.3 Total daily ES account balances in Australia. Dotted vertical lines mark the dates of target overnight rate changes.


an increase (to $200 million NZ) for a few weeks around the

‘‘Y2K’’ date change (Hampton 2000).

The accuracy with which the RBNZ achieves its target for

overnight rates (except for occasional deviations that seldom

last more than a day or two) may seem too perfect to be

believed. This indicates that the interbank market in New

Zealand is not an idealized auction market of the kind as-

sumed in our simple model. Instead, the banks participating in

this market maintain a convention of trading with one another

at the OCR, except for infrequent occasions when the tempta-

tion to deviate from this norm is evidently too great.53 The

appeal of such a convention under ordinary circumstances is

Figure 5.4 The overnight rate under the OCR system in New Zealand


fairly obvious. When the target rate is at the center of the

channel, trading at the target rate implies an equal division of

the gains from trade. This may well seem fair to both parties

(especially if each bank is likely to be a lender one day and a

borrower the next), and agreeing to the convention has the

advantage of allowing both to avoid the costs of searching for

alternative trading partners or of waiting for further informa-

tion about that day’s payment flows to be revealed.

If the central bank is reasonably accurate in choosing the size

of its daily open market operation, the Walrasian equilibrium

overnight rate (modeled above) is never very far from the cen-

ter of the channel in any event, and so no one may perceive

much gain from insisting upon more competitive bidding. Oc-

casional breakdowns of the convention occur on days when

the RBNZ is unable to prevent a large value of u from occur-

ring, for example on days of unusually large government pay-

ments; on such days, the degree to which the convention

requires asymmetries in bargaining positions to be neglected is

too great for all banks to conform. Thus even in the presence

of such a convention, our simple model is of some value in

explaining the conduct of policy under a channel system. For

preservation of the convention depends upon the central bank’s

arranging things so that the rate that would represent a Walras-

ian equilibrium, if such an idealized auction were conducted, is

not too far from the center of the channel.

Figure 5.5 similarly plots the overnight rate in Canada since

the adoption of the LVTS (Large-Value Transfer System) pay-

ment system in February 1999.54 Once again one observes

that the channel system has been quite effective, at least since

early in 2000, at keeping the overnight interest rate not only

within the bank’s fifty-basis-point ‘‘operating band’’ but usu-

ally within about one basis point of the target rate. In the early


months of the Canadian system, it is true, the overnight rate

was chronically higher than the target rate, and even above the

upper bound of the operating band (the Bank Rate) at times of

particular liquidity demand.55 This was due to an underesti-

mate of the supply of clearing balances S needed for the market

to clear near the center of the channel. The Bank of Canada

had originally thought that a zero net supply of clearing bal-

ances was appropriate (see, e.g., Clinton 1997), but by late

in 1999 began instead to target a positive supply, initially

$200 million Canadian (but at present only $50 million), as

noted earlier. This, together with some care to adjust the

supply of settlement balances from day to day in response to

Figure 5.5 The overnight rate since introduction of the LVTS system in Canada


variation in the volume of payments, has resulted in much

more successful control of the overnight rate.

All three of these countries now achieve considerably tighter

control of overnight interest rates in their countries than is

achieved, for example, under the current operating procedures

employed in the United States. For purposes of comparison,

figure 5.6 plots the federal funds rate (the corresponding over-

night rate for the U.S.) since the beginning of 1999, together

with the Fed’s operating target for the funds rate. It is evident

that the daily deviations from the target rate are larger in the

United States.56 Nor can this difference easily be attributed to

differences in the size or structure of the respective economies’

Figure 5.6 The U.S. Fed funds rate and the Fed's operating target


banking systems; for in the first half of the 1990s, both Canada

and New Zealand generally had more volatile overnight inter-

est rates than did the United States (Sellon and Weiner 1997,

chart 3).

An especially telling comparison regards the way the differ-

ent systems were able to deal with the strains created by the

increase in uncertainty about currency demand at the time of

the Y2K panic. In the United States, where variations in the

supply of Fed balances is the only tool used to control over-

night rates, the Fed’s large year-end open market operations in

response to increased currency demand may have been per-

ceived as implying a desire to reduce the funds rate; in any

event, the funds rate temporarily traded more than one hundred and fifty

basis points below the Fed’s operating target (Taylor 2001).

Subsequent open market operations to withdraw the added

cash also resulted in a funds rate well above target weeks after

the date change. In New Zealand, large open market oper-

ations were also conducted, and in addition to accommodating

banks’ demand for currency, the RBNZ’s ‘‘settlement cash tar-

get’’ was increased by a factor of ten. But the use of a channel

system—with the width of the channel substantially narrowed,

to only twenty basis points—continued to allow tight control

of the overnight rate, which never deviated at all from the tar-

get rate (to the nearest basis point) during this period (Hamp-

ton 2000). Similarly, in Canada the overnight money market

financing rate never deviated by more than one or two basis

points from the Bank of Canada’s target rate in the days sur-

rounding the change of millennium. In Australia, the cash rate

fell to as much as six or seven basis points below target on

some days in the week before and after the date change, but

the deterioration of interest-rate control was still small and

short-lived.57


Given a channel system for the implementation of monetary

policy, like that currently used in Canada, Australia, and New

Zealand, there is little reason to fear that improvements in

information technology should undermine the effectiveness of

central bank control of overnight interest rates. Neither the

erosion of reserve requirements nor improvements in the abil-

ity of banks to closely manage their clearing balances should

pose particular difficulties for such a system, for these are ex-

actly the developments that led to the introduction of channel

systems in the countries mentioned, and the systems have thus

far worked quite well.

Both the elimination of reserve requirements and increases

in the efficiency with which clearing balances can be tracked

should be expected not only to reduce the quantitative magni-

tude of the net demand for overnight central bank balances,

but to render this demand less interest sensitive. We have

discussed the way in which the presence of effective reserve

requirements (averaged over a maintenance period) makes the

daily demand for central bank balances more interest sensi-

tive, by increasing the intertemporal substitutability of such

demand. The effect of increased ability of banks to accurately

estimate their end-of-day clearing balances can be easily seen

with the help of the model just sketched; reduction of $\sigma^i$ for each of the banks shifts the demand schedule obtained by summing (8) from one like D1 in figure 5.1 to one more like

D2. In either case, the reduction in the interest sensitivity of the

demand for central bank balances increases the risk of volatil-

ity of the overnight rate owing to errors in the central bank’s

estimate of the size of open market operation required on a

given day to fulfill that day’s demand for overnight balances

at the target interest rate, rendering quantity adjustments less

effective as a means of enforcing a bank’s interest rate target.


It is thus not surprising that in all three of the countries dis-

cussed, the channel systems described above were introduced

at the time of the introduction of new, more efficient clearing

systems.58

Under such a system, further improvements in the efficiency

of the payments system, tending to render the demand for over-

night balances even less responsive to interest-rate changes, can

be offset by a further narrowing of the width of the channel.

Note that (8) implies that the slope of the demand schedule in

figure 5.1, evaluated near the target interest rate (midpoint of

the channel), is equal to

$$\frac{dD}{di} = -\frac{\sum_i \sigma^i}{(i^l - i^d)\,f(m)},$$

where $m$ is the median value of $\varepsilon^i/\sigma^i$ and $f(m) \equiv F'(m)$ is the probability density function at that point. Thus interest sensitivity is reduced by reductions in uncertainty about banks' end-of-day positions, as noted, but any such change can be offset by a suitable narrowing of the width of the channel $i^l - i^d$, so that the effect upon the equilibrium overnight rate (in

basis points) of a given size error in the size of the required

open market operation on a particular day (in dollars) would

remain unchanged. Since the main reason for not choosing too

narrow a channel—concern that a sufficient incentive remain

for the reallocation of clearing balances among banks through

the interbank market (Brookes and Hampton 2000)—becomes

less of a concern under the hypothesis of improved forecast-

ability of end-of-day positions, a narrower channel would seem

quite a plausible response.
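
A quick calculation makes the offset explicit; this is my own numerical check, again assuming a standard normal $F$ (so that the median $m = 0$) and hypothetical magnitudes for the aggregate uncertainty parameter and the channel width.

```python
from scipy.stats import norm

def demand_slope(total_sigma, channel_width):
    """The slope dD/di given in the text, evaluated at the channel midpoint,
    for a standard normal F (median m = 0)."""
    return -total_sigma / (channel_width * norm.pdf(0.0))

# Halving banks' uncertainty about end-of-day positions and halving the
# channel width leave the slope unchanged, so a given dollar error in the
# day's open market operation moves the overnight rate by the same number
# of basis points as before.
print(round(demand_slope(1000.0, 0.0050)))   # baseline
print(round(demand_slope(500.0, 0.0025)))    # both halved: identical slope
```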

Nor should a channel system be much affected by the possi-

ble development of novel media for payments. The replacement


of currency by smart cards would only simplify day-to-day

central bank control of the supply of clearing balances, ensur-

ing that the target S would be maintained more reliably. And

the creation of alternative payments networks would probably

not result in complete abandonment of the central bank’s sys-

tem for purposes of final settlement, as long as the costs of

using that system can be kept low. Under a channel system, the

opportunity cost of maintaining clearing balances with the

central bank is equal only to $i - i^d$, or (assuming an equilib-

rium typically near the midpoint of the channel) only half the

width of the channel. This cost is small under current con-

ditions (25 basis points annually, in the countries under dis-

cussion), but might well be made smaller if improvements in

information processing further increase the accuracy of banks’

monitoring of their clearing balances.

The development of alternative payments systems is likely to

lead to increasing pressure from financial institutions for re-

duction in the cost of clearing payments through the central

bank, both through reduction of reserve requirements and

through payment of interest on central bank balances. And the

reduction of such taxes on the use of central bank money can

be defended on public finance grounds even under current

conditions.59 From this point of view as well, the channel sys-

tems of Canada, Australia, and New Zealand may well repre-

sent the future of settlement systems worldwide.

It is worth noting, however, that a consideration of the use-

fulness of a channel system for monetary control leads to a

somewhat different perspective on the payment of interest on

reserves than is often found in discussions of that issue from

the point of view solely of tax policy. For example, it is some-

times proposed that it might be sufficient to pay interest on


required reserves only, rather than on total central-bank bal-

ances, on the ground that a tax that cannot be avoided (or can

be avoided only by reducing the scale of one’s operations) is

an especially onerous one. But if there continues to be zero in-

terest on ‘‘excess reserves,’’ then the interest rate on marginal

central bank balances continues not to be adjusted with

changes in the target level of overnight rates, and it continues

to be the case that changes in the overnight rate must be

brought about through changes in the degree to which the

supply of central bank balances is rationed.

Similarly, it is often supposed that the interest that should be

paid on reserves on efficiency grounds should be a rate that is

tied to market interest rates. This may seem to follow immedi-

ately from the fact that the spread $i - i^d$ is analogous to a tax on holding balances overnight with the central bank; fixing $i^d$ to equal $i$ minus a constant spread would then be a way of

keeping this tax rate constant over time. But raising the deposit

rate automatically with increases in the overnight rate means

that such increases will no longer increase the opportunity cost

of holding overnight balances; this will make the demand for

overnight balances much less interest sensitive, and so make

control of the overnight rate by the central bank more difficult,

if not impossible.60 Tying the deposit rate to the target over-

night rate, as in the channel systems just described, instead

helps to keep the market rate near the target rate. In equilib-

rium, the spread between the market overnight rate and the

deposit rate should thereby be kept from varying much, so that

the goal of a fairly constant effective tax rate is also achieved.

But with this approach to the problem of reducing the cost of

holding overnight balances, the twin goals of microeconomic

efficiency and macroeconomic stability can both be served.


5.3 Interest-Rate Control in the Absence of Monetary

Frictions

I have argued that there is little reason to fear that improve-

ments in information technology should threaten the ability

of central banks to control overnight interest rates, and hence

to pursue their stabilization goals in much the way they do

at present; indeed, increased opportunity to influence market

expectations should make it possible for monetary policy to be

even more effective. There is nothing to fear from increased

efficiency of information transmission in markets, because the

effectiveness of monetary policy depends neither upon fooling

market participants nor upon the manipulation of market dis-

tortions that depend upon monopoly power on the part of the

central bank.

Some will doubtless wonder if this can really be true. They

may feel that such an optimistic view fails to address the puzzle

upon which Friedman (1999) remarks: If banks have no special

powers at their disposal, how can it be that such small trades

by central banks can move rates in such large markets? In the

complete absence of any monopoly power on the part of cen-

tral banks—because their liabilities no longer supply any ser-

vices not also supplied by other equally riskless, equally liquid

financial claims—it might be thought that any remaining abil-

ity of central banks to affect market rates would have to de-

pend upon a capacity to adjust their balance sheets by amounts

that are large relative to the overall size of financial markets.

Of course, one might still propose that central banks should

be able to engage in trades of any size that turned out to be

required, owing to the fact that the government stands behind

the central bank and can use its power of taxation to make up


any trading losses, even huge ones.61 But I argue instead that

massive adjustments of central bank balance sheets would not

be necessary in order to move interest rates, even in a world

where central bank liabilities ceased to supply any services in

addition to their pecuniary yield. Thus the claim that banks

should still be as effective at pursuing their stabilization objec-

tives in a world with informationally efficient financial markets

does not depend upon a supposition that central banks ought

to be willing to trade on a much more ambitious scale than

they do at present.

5.3.1 The Source of Central Bank Control of Short-Term

Interest Rates

In the previous discussion, it was supposed that even in the

future there would continue to be some small demand for cen-

tral bank balances (if only for clearing purposes) at a positive

opportunity cost. But the logic of the method of interest-rate

control sketched above does not really depend upon this. Sup-

pose instead that balances held with the central bank cease

to be any more useful to commercial banks than any other

equally riskless overnight investment. In this case, the demand

for central bank balances would collapse to a vertical line at

zero for all interest rates higher than the settlement cash rate,

as shown in figure 5.7, together with a horizontal line to the

right at the settlement cash rate. That is, banks should still be

willing to hold arbitrary balances at the central bank, as long

as (but only if ) the overnight cash rate is no higher than the

rate paid by the central bank. In this case, it would no longer

be possible to induce the overnight cash market to clear at a

target rate higher than the rate paid on settlement balances.

But the central bank could still control the equilibrium

overnight rate, by choosing a positive settlement cash target, so


that the only possible equilibrium would be at an interest rate

equal to the settlement cash rate, as shown in figure 5.7. Such a

system would differ from current channel systems in that an

overnight lending facility would no longer be necessary, so that

there would no longer be a ‘‘channel.’’62 And the rate paid on

central bank balances would no longer be set at a fixed spread

below the target overnight rate; instead, it would be set at

exactly the target rate. But perfect control of overnight rates

should still be possible through adjustments of the rate paid on

overnight central bank balances,63,64 and changes in the target

overnight rate would not have to involve any change in the

settlement cash target, just as is true under current channel

systems. Indeed, in this limiting case, variations in the supply

of central-bank balances would cease to have any effect at

Figure 5.7 The interbank market when central bank balances are no longer used for clearing purposes


all upon the equilibrium overnight rate. Thus it would be es-

sential to move from a system like that of the United States at

present—in which variations in the supply of Fed balances are

the only tool used to affect the overnight rate, while the interest

rate paid on these balances is never varied at all65—to one in

which instead variations in overnight rates are achieved purely

through variations in the rate paid on Fed balances, and not at

all through supply variations.
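
Equation (10) makes the limiting case precise (this is my own check rather than a step taken in the text): holding the settlement cash target positive while banks' uncertainty about their end-of-day positions shrinks to zero gives

$$\lim_{\sum_i \sigma^i \,\to\, 0} i \;=\; i^d + F(-\infty)\,(i^l - i^d) \;=\; i^d \qquad \text{whenever } S + u > 0,$$

so that the market rate equals the rate paid on central bank balances regardless of the exact (positive) quantity supplied; setting that deposit rate equal to the target is then what pins the overnight rate down.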

How can interest-rate variation be achieved without any

adjustment at all of the supply of central bank balances? Cer-

tainly, if a government decides to peg the price of some com-

modity, it may be able to do so, but only by holding stocks of

the commodity that are sufficiently large relative to the world

market for that commodity, and by standing ready to vary

its holdings of the commodity by large amounts as necessary.

What is different about controlling short-term nominal interest

rates?

The difference is that there is no inherent ‘‘equilibrium’’ level

of interest rates to which the market would tend in the absence

of central bank intervention, and against which the central

bank must therefore exert a significant countervailing force in

order to achieve a given operating target.66 This is because

there is no inherent value (in terms of real goods and services)

for a fiat unit of account such as the ‘‘dollar,’’ except insofar as

a particular exchange value results from the monetary policy

commitments of the central bank.67 Alternative price-level

paths are thus equally consistent with market equilibrium in

the absence of any intervention that would vary the supply of

any real goods or services to the private sector. And associated

with these alternative paths for the general level of prices are

alternative paths for short-term nominal interest rates.


Of course, this analysis might suggest that while central

banks can bring about an arbitrary level of nominal interest

rates (by creating expectations of the appropriate rate of infla-

tion), they should not be able to significantly affect real interest

rates, except through trades that are large relative to the econ-

omy that they seek to affect. It may also suggest that banks

should be able to move nominal rates only by altering inflation

expectations; yet banks generally do not feel that they can

easily alter expectations of inflation over the near term, so that

one might doubt that banks should be able to affect short-term

nominal rates through such a mechanism.

However, once one recognizes that many prices (and wages)

are fairly sticky over short time intervals, the arbitrariness of

the path of nominal prices (in the sense of their underdeter-

mination by real factors alone) implies that the path of real

activity, and the associated path of equilibrium real interest

rates, are equally arbitrary. It is equally possible, from a logical

standpoint, to imagine allowing the central bank to determine,

by arbitrary fiat, the path of aggregate real activity, or the path

of real interest rates, as it is to imagine allowing it to determine

the path of nominal interest rates.68 In practice, it is easiest for

central banks to exert relatively direct control over overnight

nominal interest rates, and so banks generally formulate their

short-run objectives (their operating target) in terms of the

effect that they seek to bring about in this variable rather than

one of the others.

Even recognizing the existence of a very large set of rational

expectations equilibria—equally consistent with optimizing

private-sector behavior and with market clearing, in the ab-

sence of any specification of monetary policy—one might

nonetheless suppose, as Fischer Black (1970) once did, that in a


fully deregulated system the central bank should have no way

of using monetary policy to select among these alternative

equilibria. The path of money prices (and similarly nominal

interest rates, nominal exchange rates, and so on) would then

be determined solely by the self-fulfilling expectations of mar-

ket participants. Why should the central bank play any special

role in determining which of these outcomes should actually

occur, if it does not possess any monopoly power as the unique

supplier of some crucial service?

The answer is that the unit of account in a purely fiat system

is defined in terms of the liabilities of the central bank.69 A

financial contract that promises to deliver a certain number of

U.S. dollars at a specified future date is promising payment in

terms of Federal Reserve notes or clearing balances at the Fed

(which are treated as freely convertible into one another by the

Fed). Even in the technological utopia imagined by the enthu-

siasts of ‘‘electronic money’’—where financial market partic-

ipants are willing to accept as final settlement transfers made

over electronic networks in which the central bank is not

involved—if debts are contracted in units of a national cur-

rency, then clearing balances at the central bank will still de-

fine the thing to which these other claims are accepted as

equivalent.

This explains why the nominal interest yield on clearing

balances at the central bank can determine overnight rates in

the market as a whole. The central bank can obviously define

the nominal yield on overnight deposits in its clearing accounts

as it chooses; it is simply promising to increase the nominal

amount credited to a given account, after all. It can also deter-

mine this independently of its determination of the quantity

of such balances that it supplies. Commercial banks may ex-

change claims to such deposits among themselves on whatever


terms they like. But the market value of a dollar deposit in such

an account cannot be anything other than a dollar—because

this defines the meaning of a ‘‘dollar’’! This places the Fed in a

different situation than any other issuer of dollar-denominated

liabilities.70 Citibank can determine the number of dollars that

one of its jumbo CDs will be worth at maturity, but must then

allow the market to determine the current dollar value of such

a claim; it cannot determine both the quantity that it wishes to

issue of such claims and the interest yield on them. Yet the Fed

can, and does so daily—though as previously noted, at present

it chooses to fix the interest yield on Fed balances at zero, and

only to vary the supply. The Fed’s current position as monop-

oly supplier of an instrument that serves a special function is

necessary in order for variations in the quantity supplied to

affect the equilibrium spread between this interest rate and

other market rates, but not in order to allow separate determi-

nation of the interest rate on central bank balances and the

quantity of them in existence.

Yes, someone may respond, a central bank would still be

able to determine the interest rate on overnight deposits at the

central bank, and thus the interest rate in the interbank market

for such claims, even in a world of completely frictionless

financial markets. But would control of this interest rate nec-

essarily have consequences for other market rates, the ones

that matter for critical intertemporal decisions such as invest-

ment spending? The answer is that it must—and all the more

so in a world in which financial markets have become highly

efficient, so that arbitrage opportunities created by discrepancies

among the yields on different market instruments are immedi-

ately eliminated. Equally riskless short-term claims issued by

the private sector (say, shares in a money-market mutual fund

holding very short-term Treasury bills) would not be able to


promise a different interest rate than the one available on

deposits at the central bank; otherwise, there would be excess

supply or demand for the private-sector instruments. And de-

termination of the overnight interest rate would also have to

imply determination of the equilibrium overnight holding re-

turn on longer-lived securities, up to a correction for risk; and

so, determination of the expected future path of overnight

interest rates would essentially determine longer-term interest

rates.

5.3.2 Could Money Be Privatized?

The special feature of central banks, then, is simply that they

are entities the liabilities of which happen to be used to define

the unit of account in a wide range of contracts that other

people exchange with one another. There is perhaps no deep,

universal reason why this need be so; it is certainly not essen-

tial that there be one such entity per national political unit.

Nonetheless, the provision of a well-managed unit of account

—one in terms of which the equilibrium prices of many goods

and services will be relatively stable—clearly facilitates eco-

nomic life. And given the evident convenience of having a sin-

gle unit of account be used by most of the parties with whom

one wishes to trade, one may well suppose that this function

should properly continue to be taken on by the government.

Nonetheless, it is worth remarking that there is no reason of

principle for prohibiting private entry into this activity—apart

from the usual concerns with the prevention of fraud and finan-

cial panics that require regulation of the activities of financial

intermediaries in general. One might imagine, as Hayek (1986)

did, a future in which private entities manage competing mon-

etary standards in terms of which people might choose to con-

tract. Even in such a world, the Fed would still be able to


control the exchange value of the U.S. dollar against goods and

services by adjusting the nominal interest rate paid on Fed

balances. The exchange value of the U.S. dollar in terms of

private currencies would depend upon the respective monetary

policies of the various issuers, just as is true of the deter-

mination of exchange rates among different national currencies

today.

In such a world, would central banks continue to matter?

This would depend upon how many people still chose to con-

tract in terms of the currencies the values of which they con-

tinued to determine. Under present circumstances, it is quite

costly for most people to attempt to transact in a currency

other than the one issued by their national government, be-

cause of the strong network externalities associated with such a

choice, even though there are often no legal barriers to con-

tracting in another currency. But in a future in which trans-

actions costs of all sorts have been radically reduced, that

might no longer be the case, and if so, the displacement of

national currencies by private payment media might come to

be possible.71 Would this be a disaster for macroeconomic

stability?

It is hard to see why it should be. The choice to transact in

terms of a particular currency, when several competing alter-

natives are available, would presumably be made on the basis

of an expectation that the currency in question would be man-

aged in a way that would make its use convenient. Above all,

this should mean stability of its value, so that fixing a contract

wage or price in these units will not lead to large distortions

over the lifetime of the contract (or so that complicated index-

ation schemes will not need to be added to contracts to offset

the effects of instability in the currency’s value). Thus com-

petition between currencies should increase the chances that


at least some of those available would establish reputations

for maintaining stable values. Of course the relevant sense in

which the value of a currency should remain stable is that the

prices of those goods and services that happen to be priced in

that currency should remain as stable as possible.72 Thus one

might imagine ‘‘currency blocs’’ developing in different sectors

of a national economy between which there would be substan-

tial relative-price variations even in the case of fully flexible

prices, with firms in each sector choosing to transact in a cur-

rency that is managed in a way that serves especially to stabi-

lize the prices of the particular types of goods and services in

their sector.73 The development of a system of separate cur-

rency blocs not corresponding to national boundaries, or to

any political units at all, might then have efficiency advantages.

Thus a future is conceivable in which improvements in the

efficiency of communications and information processing so

change the financial landscape that national central banks

cease to control anything that matters to national economies.

Yet even such a development would not mean that nominal

prices would cease to be determined by anything, and would

be left to the vagaries of self-fulfilling expectations—with the

result that, due to wage and price stickiness, the degree to

which productive resources are properly utilized would be

hostage to these same arbitrary expectations. Such a future

could only occur if the functions of central banks today are

taken over by private issuers of means of payment, who are

able to stabilize the values of the currencies that they issue.

And if in some distant future this important function comes to

be supplied by private organizations, it is likely that they will

build upon the techniques for inflation control being developed

by central banks in our time.


Appendix: Market Participation and the Effectiveness of

Open-Market Operations

The following simple model may help to clarify the point made

in section 5.1 about the illusory benefit that derives from

increasing the central bank’s leverage over market rates by

making the bank’s interventions as much of a surprise as pos-

sible. Let the economy be made up of a group of households

indexed by $j$, each of which chooses consumption $C^j$, end-of-period money balances $M^j$, and end-of-period bond holdings $B^j$, to maximize an objective of the form

$$u(C^j, M^j/P) + \lambda^j\,[M^j + (1+i)B^j], \qquad (A.1)$$

where u is an increasing, concave function of consumption and

real money balances, P is the current period price level, i is

the nominal interest yield on the bonds between the current

period and the next, and $\lambda^j > 0$ is the household's discounted

expected marginal utility of nominal wealth in the following

period. I assume here for simplicity that the expected marginal

utility of wealth $\lambda^j$ is affected only negligibly by a household's

saving and portfolio decisions in the current period, because

the cost of consumption expenditure and the interest foregone

on money balances for a single period are small relative to the

household’s total wealth; I thus treat l j as a given constant

(though of course in a more complete model it depends upon

expectations about equilibrium in subsequent periods, includ-

ing future monetary policy).

Each household chooses these variables subject to a budget

constraint of the form

$$M^j + B^j + PC^j \le W^j = \tilde{W}^j + \bar{B}^j, \qquad (A.2)$$

where W j is the household’s nominal wealth to be allocated

among the three uses. This last can be partitioned into the


household’s bond holdings Bjprior to the end-of-period trad-

ing in which the central bank’s open market operations are

conducted and the other sources of wealth $\tilde{W}^j$. I suppose fi-

nally that only a fraction $\gamma$ of the households participate in this

end-of-period bond trading; the choices of the other house-

holds are subject to the additional constraint that

$$B^j = \bar{B}^j, \qquad (A.3)$$

whether or not this would be optimal in the absence of the

constraint. Because advance notice of the central bank’s inten-

tion to conduct an open market operation will in general make

the previously chosen $\bar{B}^j$ no longer optimal, I suppose that greater publicity would increase the participation rate $\gamma$; but I

do not here explicitly model the participation decision, instead

considering only the consequences of alternative values of $\gamma$.

All households are assumed to choose their consumption and

hence their end-of-period money balances only after the size of

the open market operation has been revealed; P and i are thus

each determined only after revelation of this information.

Assuming an interior solution, the optimal decision of each

household satisfies the first-order condition

$$u_c(C^j, M^j/P) - u_m(C^j, M^j/P) = \lambda^j P. \qquad (A.4)$$

In the case of households that participate in the end-of-period

bond market, there is an additional first-order condition

$$u_m(C^j, M^j/P) = \lambda^j P\, i. \qquad (A.5)$$
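For completeness, the step behind (A.4) and (A.5) can be spelled out. The sketch below introduces a multiplier $\mu^j$ on the budget constraint (A.2); $\mu^j$ is notation added here for exposition only and does not appear elsewhere in the chapter. The household's Lagrangian is

$$\mathcal{L}^j = u(C^j, M^j/P) + \lambda^j\,[M^j + (1+i)B^j] + \mu^j\,[W^j - PC^j - M^j - B^j],$$

with first-order conditions $u_c(C^j, M^j/P) = \mu^j P$ for $C^j$, $P^{-1}u_m(C^j, M^j/P) + \lambda^j = \mu^j$ for $M^j$, and $\lambda^j(1+i) = \mu^j$ for $B^j$ (the last holding only for households free to adjust their bond holdings). Eliminating $\mu^j$ between the first two conditions yields (A.4); combining the first and third gives $u_c = \lambda^j P(1+i)$, and subtracting (A.4) from this yields (A.5).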

Using (A.4) to eliminate $\lambda^j$ in (A.5), one obtains a relation that

can be solved (under the standard assumption that both con-

sumption and real balances are normal goods) for desired real

balances

$$M^j/P = L(C^j, i), \qquad (A.6)$$


where the money demand function L is increasing in real pur-

chases $C^j$ and decreasing in the interest rate $i$. The optimal

decisions of these households are then determined by (A.2),

(A.4), and (A.5) (or equivalently (A.6)). The optimal decisions

of the households who do not participate in the final bond

trading are instead determined by the first two of these rela-

tions and by the constraint (A.3) instead of (A.5).

In the case of the nonparticipating households, these con-

ditions have a solution of the form

$$C^j = c^{np}(\tilde{W}^j/P, \lambda^j P), \qquad (A.7)$$

$$M^j/P = m^{np}(\tilde{W}^j/P, \lambda^j P). \qquad (A.8)$$

Bond holdings are of course given by (A.3). Note that these

households’ decisions are unaffected by the bond yield i deter-

mined in the end-of-period trading. In the case of participating

households, conditions (A.4) and (A.5) can instead be solved to

yield

$$C^j = c^{p}(\lambda^j P, i), \qquad (A.9)$$

$$M^j/P = m^{p}(\lambda^j P, i). \qquad (A.10)$$

In the standard case, both $c^p$ and $m^p$ will be decreasing functions of $i$. The implied demand for bonds is then given by

$$B^j = \tilde{W}^j + \bar{B}^j - P\,d(\lambda^j P, i), \qquad (A.11)$$

where

$$d(\lambda^j P, i) \equiv c^{p}(\lambda^j P, i) + m^{p}(\lambda^j P, i).$$

Now suppose that the central bank increases the money

supply by a quantity DM per capita, through an open market

operation that reduces the supply of bonds by this same

amount. The effect on the interest rate i is then determined by

the requirement that participating households must be induced

to reduce their bond holdings by an aggregate quantity equal


to the size of the open market operation. The interest rate

required for this is determined by aggregating (A.11) over the

set of participating households. In the simple case that they are

all identical, the equilibrium condition is

$$d(\lambda P, i) = \frac{\tilde{W} + \gamma^{-1}\Delta M}{P}, \qquad (A.12)$$

as each participating household must be induced to sell $\gamma^{-1}$

times its per capita share of the bonds purchased by the central

bank. It is obvious that the resulting interest-rate decline is

larger (for a given size of $\Delta M$ and a given price level) the smaller is $\gamma$. This is favored by "catching the markets off guard" when conducting an open market operation.

But this need not mean any larger effect of the open market

operation on aggregate demand. The consumption demands of

the fraction $1 - \gamma$ of households not participating in the end-of-

period bond market are independent of i. While the expendi-

ture of the participating households (at a given price level P) is

stimulated more as a result of the greater decline in interest

rates (this follows from (A.9)), there are also fewer of them.

Thus there need be no greater effect on aggregate demand from

the greater interest-rate decline.

Note that when the interest rate is determined by (A.12), the

implied consumption demand on the part of participating

households is given by

$$c^{p}(\lambda P, i) = c^{np}\bigl((\tilde{W} + \gamma^{-1}\Delta M)/P,\ \lambda P\bigr).$$

This follows from the fact that the consumption of these

households satisfies (A.2) and (A.4) just as in the case of the

nonparticipating households, but with the equilibrium condi-

tion $B^j = \bar{B}^j - \gamma^{-1}\Delta M$ instead of $B^j = \bar{B}^j$. Aggregate real ex-

penditure is then given by


$$C = \gamma\, c^{np}\bigl((\tilde{W} + \gamma^{-1}\Delta M)/P,\ \lambda P\bigr) + (1 - \gamma)\, c^{np}(\tilde{W}/P,\ \lambda P).$$

The partial derivative of $C$ with respect to $\Delta M$, evaluated at $\Delta M = 0$, is equal to

$$\frac{\partial C}{\partial \Delta M} = P^{-1}\, c^{np}_1(\tilde{W}/P, \lambda P) > 0,$$

which is independent of $\gamma$ as stated in the text.
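As a numerical check on this conclusion, the short script below solves the model for one particular (hypothetical) specification, $u(C, m) = \ln C + \beta \ln m$ with $m = M/P$; the log-log utility and all parameter values are illustrative assumptions, not taken from the chapter. It shows that a smaller participation rate $\gamma$ produces a larger decline in $i$ for the same open market purchase, while the implied effect on aggregate expenditure is essentially the same.

# Numerical sketch of the appendix model under the assumed utility
# u(C, m) = ln C + beta*ln m, with m = M/P.  Parameter values are illustrative.
from scipy.optimize import brentq

beta, lam, P, W = 0.2, 1.0, 1.0, 10.0   # preference weight, marginal utility of wealth,
                                        # price level, non-bond wealth (nominal = real since P = 1)

def c_p(i):
    # participants: (A.4)-(A.5) with this utility give C = 1/(lam*P*(1+i))
    return 1.0 / (lam * P * (1.0 + i))

def m_p(i):
    # and real balances M/P = beta/(lam*P*i)
    return beta / (lam * P * i)

def c_np(w):
    # nonparticipants: solve (A.4) with the budget M/P + C = w (since B^j = Bbar^j)
    return brentq(lambda C: 1.0 / C - beta / (w - C) - lam * P, 1e-9, w - 1e-9)

def equilibrium(dM, gamma):
    # (A.12): each participant must absorb gamma^{-1} of the per capita purchase dM
    w_p = (W + dM / gamma) / P
    i = brentq(lambda r: c_p(r) + m_p(r) - w_p, 1e-8, 10.0)   # market-clearing overnight rate
    C = gamma * c_p(i) + (1.0 - gamma) * c_np(W / P)          # aggregate real expenditure
    return i, C

dM = 0.01
i0, C0 = equilibrium(0.0, 1.0)          # no-intervention benchmark (the same for every gamma)
for gamma in (1.0, 0.5, 0.1):
    i1, C1 = equilibrium(dM, gamma)
    print(f"gamma={gamma:4.2f}   change in i={i1 - i0:+.6f}   dC/dM={(C1 - C0) / dM:.4f}")

Because the check uses a small but finite $\Delta M$, the reported expenditure responses agree across $\gamma$ only up to terms of order $\Delta M$, while the interest-rate decline scales roughly with $\gamma^{-1}$, as in the text.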

Notes

Reprinted from Federal Reserve Bank of Kansas City, Economic Policy for the Information Economy, 2001. I am especially grateful to Andy Brookes (RBNZ), Chuck Freedman (Bank of Canada), and Chris Ryan (RBA) for their unstinting efforts to educate me about the implementation of monetary policy at their respective central banks. Of course, none of them should be held responsible for the interpretations offered here. I would also like to thank David Archer, Alan Blinder, Kevin Clinton, Ben Friedman, David Gruen, Bob Hall, Spence Hilton, Mervyn King, Ken Kuttner, Larry Meyer, Hermann Remsperger, Lars Svensson, Bruce White, and Julian Wright for helpful discussions, Gauti Eggertsson and Hong Li for research assistance, and the National Science Foundation for research support through a grant to the National Bureau of Economic Research.

1. See equation (A.12) in the appendix.

2. Blinder et al. (2001) defend secrecy with regard to foreign exchange market interventions on this ground, though they find little ground for secrecy with regard to the conduct or formulation of monetary policy.

3. Allan Meltzer, however, assures me that his own intention was never to present this analysis as a normative proposal, as opposed to a positive account of actual central bank behavior.

4. Yet even many proponents of that model of aggregate supplywould not endorse the conclusion that it therefore makes sense for acentral bank to seek to exploit its informational advantage in orderto achieve output-stabilization goals. Much of the new classical liter-ature of the 1970s instead argued that the conditions under whichsuccessful output stabilization would be possible were so stringent as


to recommend that central banks abandon any attempt to use mone-tary policy for such ends.

5. See Woodford 2003 (chap. 3) for detailed discussion of the micro-economic foundations of the aggregate supply relation (1), and com-parison of it with the new classical specification. Examples of recentanalyses of monetary policy options employing this specificationinclude Goodfriend and King 1997, McCallum and Nelson 1999, andClarida, Gali, and Gertler 1999.

6. See Woodford 2003 (chap. 3) for further discussion. A number ofrecent papers find a substantially better fit between this equation andempirical inflation dynamics when data on real unit labor costs areused to measure the ‘‘output gap,’’ rather than a more conventionaloutput-based measure. See, for example, Sbordone 1998, Gali andGertler 1999, and Gali, Gertler, and Lopez-Salido 2000.

7. This is the foundation offered for the effect of interest rates on aggregate demand in the simple optimizing model of the monetary transmission mechanism used in papers such as Kerr and King 1996, McCallum and Nelson 1999, and Clarida, Gali, and Gertler 1999, and expounded in Woodford 2003 (chap. 4).

8. Examples of recent discussions of the issue by central bankers in-clude Issing 2001 and Jenkins 2001.

9. I mentioned earlier the important shift to an immediate announce-ment of target changes since February 1994. Demiralp and Jorda(2001a) argue that markets have actually had little difficulty correctlyunderstanding the Fed’s target changes since November 1989. Lange,Sack, and Whitesell (2001) detail a series of changes in the Fed’scommunication with the public since 1994 that have further increasedthe degree to which it gives explicit hints about the likelihood of futurechanges in policy.

10. It is crucial here to recognize that there is no unique equilibriumpath for interest rates that markets would tend to in the absence of aninterest-rate policy on the part of the central bank. See further discus-sion in section 5.3.

11. Giannoni and Woodford (2001) discuss how policy rules canbe designed that can be specified without any reference to particulareconomic disturbances, but that nonetheless imply an optimal equi-librium response to additive disturbances of an arbitrary type. Thetargeting rules advocated by Svensson (2001) are examples of rules ofthis kind.


12. A concrete example of such principles and how they can be ap-plied is provided in Giannoni and Woodford 2001.

13. Costa and De Grauwe (2001) instead argue that central banks are currently large players in many national financial markets. But they agree with Friedman that there is a serious threat of loss of monetary control if central bank balance sheets shrink in the future as a result of financial innovation.

14. Henckel, Ize, and Kovanen (1999) review similar developments,though they reach a very different conclusion about the threat posedto the efficacy of monetary policy.

15. Gormez and Capie (2000) report the results of surveys conductedat trade fairs for smart card innovators held in London in 1999 and2000. In the 1999 survey, 35 percent of the exhibitors answered yes tothe question ‘‘Do you think that electronic cash has a potential to re-place central bank money?’’ while another 47 percent replied ‘‘to acertain extent.’’ Of those answering yes, 22 percent predicted that thisshould occur before 2005, another 33 percent before 2010, and allbut 17 percent predicted that it should occur before 2020.

16. See, for example, Bennett and Peristiani 2001.

17. For example, it accounts for more than 84 percent of central bank liabilities in countries such as the United States, Canada, and Japan (Bank for International Settlements 1996, Table 1).

18. See, for example, McCallum (1999, sec. 5).

19. See Woodford 2003 (chaps. 2, 4) for an argument that ‘‘real-balance effects,’’ a potential channel through which variation in mon-etary aggregates may affect spending quite apart from the path ofinterest rates, are quantitatively trivial in practice.

20. This is obviously true of a bank that, like the U.S. Federal Reservesince the late 1980s, uses open market operations to try to achieve anoperating target for the overnight rate; maintaining the Fed funds ratenear the target requires the Fed to prevent variations in the supply ofFed balances that are not justified by any changes in the demand forsuch balances. But it is also true of operating procedures such as thenonborrowed reserves targeting practiced by the Fed between 1979and 1982 (Gilbert 1985). While this was a type of quantity targetingregime that allowed substantial volatility in the funds rate, maintain-ing a target for the supply of nonborrowed reserves also requiredthe Fed to automatically accommodate variations in currency demandthrough open market operations.


21. A somewhat more distant, but not inconceivable prospect is thate-cash could largely replace payment by checks drawn on bank ac-counts, thus reducing the demand for deposits subject to reserverequirements. For a recent discussion of the prospects for e-cash asa substitute for conventional banking, see Claessens, Glaessner, andKlingebiel 2001.

22. Again see Bennett and Peristiani 2001. Reductions in legal reserverequirements in 1990 and 1992 have contributed to the same trendover the past decade.

23. See Borio 1997, Sellon and Weiner 1996, 1997, and Henckel, Ize, and Kovanen 1999.

24. Roughly the same quantity of Fed balances represent ‘‘requiredclearing balances.’’ These are amounts that banks agree to hold onaverage in their accounts at the Fed, in addition to their requiredreserves; the banks are compensated for these balances, in credit thatcan be used to pay for various services for which the Fed charges(Meulendyke 1998, chap. 6). However, the balances classified thisway do not fully measure the demand for clearing balances. Banks’additional balances, classified as ‘‘excess reserves,’’ are also heldlargely to facilitate clearing; these represent balances that the bankschoose to hold ex post, above the ‘‘required balances’’ negotiated withthe Fed in advance of the reserve maintenance period. Furthermore,the balances held to satisfy reserve requirements also facilitate clear-ing, insofar as they must be maintained only on average over a two-week period, and not at the end of each day. Thus in the absence ofreserve requirements, the demand for Fed balances might well benearly as large as it is at present.

25. Fluctuations in the net supply of overnight balances, apart fromthose due to central bank open market operations, occur as a result ofgovernment payments that are not fully offset by open market oper-ations, while fluctuations in the net demand for such balances bybanks result from day-to-day variation in uncertainty about paymentflows and variation in the efficiency with which the interbank marketsucceeds in matching banks with excess clearing balances with thosethat are short.

26. This is emphasized by Furfine, for whom it is crucial in explaininghow patterns in daily interbank payments flows can create corre-sponding patterns in daily variations in the funds rate. However,the system of compensating banks for committing themselves to hold


a certain average level of ‘‘required clearing balances’’ over a two-week maintenance period introduces similar intertemporal subsitutioninto the demand for Fed balances, even in the absence of reserverequirements.

27. The increase in funds rate volatility in 1991 following the reduc-tion in reserve requirements is often interpreted in this way; see, forexample, Clouse and Elmendorf 1997. However, declines in requiredreserve balances since then have to some extent been offset byincreased holdings of required clearing balances, and this is probablythe reason that funds rate volatility has not been notably higher inrecent years.

28. See also the views of electronic-money innovators reported in Gormez and Capie 2000. In the 2000 survey described there, 57 percent of respondents felt that e-money technologies "can . . . eliminate the power of central banks as the sole providers of monetary base in the future (by offering alternative monies issued by other institutions)." And 48 percent of respondents predicted that these technologies would "lead to a 'free banking' era (a system of competing technologies issued by various institutions and without a central bank)." Examples of "digital currency" systems currently being promoted are discussed on the Standard Transactions Web site, <http://www.standardtransactions.com/digitalcurrencies.html>.

29. Goodhart (1986) and McCulloch (1986) nonetheless propose a method for paying interest on currency as well, through a lottery based upon the serial numbers of individual notes.

30. For details of these systems, see, for example, Archer, Brookes, and Reddell 1999, Bank of Canada 1999, Borio 1997, Brookes and Hampton 2000, Campbell 1998, Clinton 1997, Reserve Bank of Australia 1998, Reserve Bank of New Zealand 1999, and Sellon and Weiner 1997.

31. Of course, standing facilities may be provided even in the pres-ence of reserve requirements, as is currently the case at the EuropeanCentral Bank (ECB). The ECB’s standing facilities do not establishnearly so narrow a ‘‘channel’’ as in the case of Canada, Australia, andNew Zealand—except for a period in early 1999 just after the intro-duction of the euro, it has had a width of two hundred basis points,rather than only fifty basis points—and open market operations inresponse to deviations of overnight rates from the target rate play alarger role in the control of overnight rates, as in the United States


(European Central Bank 2001). We also here abstract from the com-plications resulting from the U.S. regulations relating to ‘‘requiredclearing balances,’’ which result in substitutability of clearing balancesacross days within the same two-week reserve maintenance period, asdiscussed earlier.

32. This is called the ‘‘target rate’’ in Canada and Australia, and the‘‘official cash rate’’ (OCR) in New Zealand; in all of these countries,changes in the central bank’s operating target are announced in termsof changes in this rate. The RBNZ prefers not to refer to a ‘‘target’’rate in order to make it clear that the bank does not intend to in-tervene in the interbank market to enforce trading at this rate. InCanada, until this year, the existence of the target rate was notemphasized in the bank’s announcements of policy changes; instead,more emphasis was given to the boundaries of the ‘‘operating band’’or channel, and policy changes were announced in terms of changes inthe ‘‘bank rate’’ (the upper bound of the channel). But the midpoint ofthe ‘‘operating band’’ was understood to represent the bank’s targetrate (Bank of Canada 1999), and the Bank of Canada has recentlyadopted the practice of announcing changes in its target rate (see, forexample, Bank of Canada 2001b), in conformity with the practices ofother central banks.

33. In New Zealand, the lending rate (overnight repo facility rate)was briefly reduced to only ten basis points above the OCR during theperiod spanning the ‘‘Y2K’’ date change, as discussed later.

34. Economists at the RBA believe that there remains some small stigma associated with use of the bank's lending (overnight repo) facility, despite the bank's insistence that "overnight repos are there to be used," as long as the same bank does not need them day after day. Nonetheless, the facility is used with some regularity, and clearly serves a different function than the U.S. discount window. One of the more obvious differences is that in the United States, the Fed consistently chooses a target funds rate that is above the discount rate, making it clear that there is no intention to freely supply funds at the discount rate, while the banks with channel systems always choose a target rate below the rate associated with their overnight lending facilities. Lending at the Fed's discount window is also typically for a longer term than overnight (say, for two weeks), and is thus not intended primarily as a means of dealing with daily overdrafts in clearing accounts.


35. In each of the three countries mentioned as leading examples ofthis kind of system, a ‘‘channel’’ width of 50 basis points is currentlystandard. However, the Reserve Bank of New Zealand briefly nar-rowed its ‘‘channel’’ to a width of only twenty basis points late in1999, in order to reduce the cost to banks of holding larger-than-usual overnight balances in order to deal with possible unusual li-quidity demands resulting from the ‘‘Y2K’’ panic (Hampton 2000). Itis also worth noting that when the Reserve Bank of Australia firstestablished its deposit facility, it paid a rate only ten basis points be-low the target cash rate. This, however, was observed to result insubstantial unwillingness of banks to lend in the interbank market,as a result of which the rate was lowered to twenty-five basis pointsbelow the target rate (Reserve Bank of Australia 1998).

36. It is arguable that the actual lower bound is somewhat above thedeposit rate, because of the convenience and lack of credit risk asso-ciated with the deposit facility, and similarly that the actual upperbound is slightly above the lending rate, because of the collateralrequirements and possible stigma associated with the lending facility.Nonetheless, market rates are observed to stay within the channelestablished by these rates (except for occasional slight breaches of theupper bound during the early months of operation of Canada’s sys-tem—see figure 5.5), and typically near its center.

37. This analysis is similar to a traditional analysis, such as that ofGilbert 1985, of federal funds rate determination under U.S. operatingprocedures. But under U.S. arrangements, there is no horizontal seg-ment to the left (or rather, this occurs only at a zero funds rate), andthe segment extending to the right is steeply sloped, owing to ration-ing at the discount window. In recent years, U.S. banks have indicatedconsiderable reluctance to borrow at the discount window, so that theentire schedule may be treated as essentially vertical. However, a staticanalysis of this kind is only possible for the United States if the modelis taken to refer to averages over a two-week reserve maintenanceperiod, as Gilbert notes. Hence the existence of a trading desk reactionfunction of the kind described by Taylor (2001), in which the desk’sopen market operations each day respond to the previous day’s dis-crepancy between the funds rate and the Fed’s target, should give theeffective supply schedule over a maintenance period an upward slopein the case of the United States.

38. The account given here closely follows Henckel, Ize, and Kovanen 1999 and Guthrie and Wright 2000.


39. In Furfine’s (2000) model of the daily U.S. interbank market, thisresidual uncertainty represents the possibility of ‘‘operational glitches,bookkeeping mistakes, or payments expected from a counterpartythat fail to arrive before the closing of Fedwire.’’

40. In practice, lending in the interbank market is observed to occurat a rate above the central bank’s deposit rate, despite the existenceof a positive net supply of clearing balances, even when there is a‘‘closing period’’ at the end of the day in which trades in the interbankmarket for overnight clearing balances are still possible while no fur-ther payments may be posted. Even though trading is possible at atime at which banks know the day’s payment flows with certainty, itis sufficiently inconvenient for them to wait until the ‘‘closing period’’to arrange their trades that a substantial amount of trading occursearlier, and hence under uncertainty of the kind assumed in the model.The model’s assumption that all trading in the interbank marketoccurs at a single point in time, and that the market is cleared at asingle rate by a Walrasian ‘‘auctioneer,’’ is obviously an abstraction,but one that is intended to provide insight into the basic determinantsof the average overnight rate.

41. This need not equal the actual end-of-day supply, apart fromborrowings from the lending facility, if there remains uncertaintyabout the size of government payments yet to be received by the endof the day.

42. Nontrivial discrepancies frequently exist between the target and actual supplies of clearing balances; see, for example, figure 5.3 in the case of Australia. The procedures used in Canada evidently allow precise targeting of the total supply of clearing balances; furthermore, the Bank of Canada's target level of balances for a given day is always announced by 4:30 p.m. the previous day (Bank of Canada 2001a). Thus for Canada, u = 0 each day.

43. In New Zealand, the ‘‘settlement cash target’’ since adoption ofthe OCR system has generally been fixed at $20 million NZ. At theBank of Canada, the target level of clearing balances was actually zeroduring the early months of the LVTS system. But as is discussed be-low, this did not work well. Since late in 1999, the bank has switchedto targeting a positive level of clearing balances, initially about $200million Canadian, and higher on days when especially high transac-tions volume is expected (Bank of Canada 1999, Addendum II). Thetarget level is now ordinarily $50 million Canadian (Bank of Canada


2001a). In Australia, the target level varies substantially from day today (see figure 5.3), but is currently typically about $750 millionAustralian.

44. This may be because the effective lower bound is actually slightlyabove the deposit rate, and the effective upper bound is slightly abovethe lending rate, as discussed in n. 36. Hence existing channel systemsare not quite as symmetric as they appear.

45. Here I abstract from possible effects upon the σ_i of changes in the volume of spending in the economy as a result of a change in the level of overnight interest rates. These are likely to be small relative to other sources of day-to-day variation in the σ_i, and not to occur immediately in response to a change in the target overnight rate.

46. The Bank of Canada neutralizes the effects of payments to orfrom the government upon the supply of clearing balances through aprocedure of direct transfer of government deposits, but this techniquehas exactly the same effect as an open market operation.

47. For example, given that this desired value is a small positive quantity, the Bank of Canada increases its target S on days when high transactions volume is expected, given that this higher volume of payments increases the uncertainty σ_i for the banks. Similarly, maintaining a constant expected supply of clearing balances S requires that predictable variations in currency demand or government payments be offset through open market operations, and minimization of the variance of u requires the bank to monitor such flows as closely as possible, and sometimes to trade more than once per day. For an illustration of the degree of variation that would occur in the supply of clearing balances in the case of New Zealand, if the RBNZ did not conduct daily "liquidity management operations" to offset these flows, see Figure 6 in Brookes 1999.

48. Of course, a substantial departure of the overnight rate from thetarget rate will suggest misestimation of the required supply of clear-ing balances (9), and this information is not ignored. In some cases,banks that operate a channel system even find a ‘‘second round’’ ofopen-market operations to be necessary, later in a given day, in orderto correct an initial misestimate of the desired S; and this is obviouslyin response to observed pressure on overnight rates in the interbankmarket. But in Australia and New Zealand, these are infrequent—inAustralia, they were necessary only four times in 1999, never in 2000,and twice so far (as of September) in 2001. In Canada, small open


market operations are often conducted at a particular time (11:45a.m.) to ‘‘reinforce the target rate’’ if the market is trading at an ap-preciable distance from the target rate. However, this interventiondoes not amount to an elastic supply of funds at the target rate, andits effect upon the end-of-day supply of clearing balances is alwayscanceled out later in the afternoon, so that the end-of-day supplyequals the quantity announced by 4:30 p.m. the previous day. Thusthe supply curve for end-of-day balances in Canada is completely ver-tical at S, as shown in figure 5.1.

49. The deposit facility existed prior to June 1998, but the lendingfacility was introduced only in preparation for the switch to a real-time gross settlement (RTGS) system for interbank payments, and waslittle used prior to the introduction of that system in late June (ReserveBank of Australia 1998).

50. This is the level aimed at in the bank’s initial daily open marketoperations. As noted earlier, there are a few days on which the banktraded again in a ‘‘second round.’’

51. In New Zealand, the ‘‘settlement cash target’’ was increased by afactor of ten in this period, with no effect at all upon actual overnightrates (Hampton 2000).

52. The regime change was more dramatic in New Zealand at thistime, as the RBNZ had not previously announced a target for over-night interest rates at all, instead formulating its operating target interms of a ‘‘monetary conditions index.’’ See Guthrie and Wright2000 for further discussion of New Zealand policy prior to the intro-duction of the OCR system.

53. Similar conventions appear to exist in Australia and Canada aswell, but, perhaps owing to larger size of these markets, trading is notso thoroughly determined by the norm as is true in New Zealand.

54. See Clinton 1997 and Bank of Canada 1999 for details of thesystem, and the connection between the change in the payment systemand the introduction of standing facilities for implementing monetarypolicy.

55. It is possible for the reported overnight rate—which includestransactions between banks and their customers as well as interbanktransactions—to slightly exceed the Bank Rate when banks chargerates to their customers, who do not have access to the Bank ofCanada’s lending facility, that exceed the banks’ own cost of funds.


56. Since March 2000, the standard deviation of the gap between the overnight rate and the target rate has been only 1.5 basis points for Australia, 1.1 basis points for Canada, and less than 0.4 basis points for New Zealand, but 13.4 basis points for the United States.

57. Special procedures adopted in Australia to deal with the Y2Kpanic are described in Reserve Bank of Australia 2000.

58. Canada has defined its short-run policy objectives in terms of an‘‘operating band’’ for the overnight interest rate since June 1994, butdid not use standing facilities to enforce the bounds of the band priorto the introduction of the LVTS clearing system in February 1999.Before then, intraday interventions in the form of repos and reverserepos were used to prevent the overnight rate from moving outside theband (Sellon and Weiner 1997). The adoption of systems based onstanding facilities in both Australia and New Zealand also coincidedwith the introduction of a real-time gross settlement system for pay-ments (Reserve Bank of Australia 1998; Reserve Bank of New Zea-land 1999). In the case of New Zealand, an explicit operating targetfor the overnight rate (the ‘‘official cash rate’’) was also introducedonly at this time.

59. Chari and Kehoe (1999) review recent literature showing that under an optimal Ramsey taxation scheme the optimal level of this sort of tax is likely to be zero.

60. This may well have been a reason for the greater difficulty expe-rienced in New Zealand at achievement of the RBNZ’s short-runoperating targets prior to the introduction of the OCR system in1999. See Guthrie and Wright 2000 for discussion of New Zealand’sprevious approach to the implementation of monetary policy.

61. This seems to be the position of Goodhart (2000).

62. This presumes a world in which no payments are cleared usingcentral bank balances. Of course, there would be no harm in con-tinuing to offer such a facility as long as the central bank clearingsystem were still used for at least some payments.

63. Grimes (1992) shows that variation of the interest rate paid oncentral bank balances would be effective in an environment in whichcentral bank reserves are no more useful for carrying out transactionsthan other liquid government securities, so that open market pur-chases or sales of such securities are completely ineffective.

64. Hall (1983, 1999) has also proposed this as a method of price-level control in the complete absence of monetary frictions. Hall


speaks of control of the interest yield on a government ‘‘security,’’without any need for a central bank at all. But because of the specialfeatures that this instrument would need to possess, that are not pos-sessed by privately issued securities—it is a claim only to future deliv-ery of more units of the same instrument, and society’s unit of accountis defined in terms of this instrument—it seems best to think of itas still taking the same institutional form that it does today, namely,balances in an account with the central bank. Hall also proposes aspecific kind of rule for adjusting the interest rate on bank reserves inorder to ensure a constant equilibrium price level; but this particularrule is not essential to the general idea. One might equally well simplyadjust the interest paid on reserves according to a ‘‘Taylor rule’’ or aWicksellian price-level feedback rule (Woodford 2003, chap. 2).

65. It is true that required clearing balances are remunerated at a rateequal to the average of the federal funds rate over the reserve mainte-nance period. But this remuneration applies only to the balances thatbanks agree in advance to hold; their additional balances above thislevel are not remunerated, and so at the margin that is relevant to thedecision each day about how to trade in the federal funds market,banks expect zero interest to be paid on their overnight balances.

66. This does not mean that Wicksell's (1936) notion of a "natural" rate of interest determined by real factors is of no relevance to the consideration of the policy options facing a central bank. It is indeed, as argued in Woodford 2003 (chap. 4). But the natural rate of interest is the rate of interest required for an equilibrium with stable prices; the central bank nonetheless can arbitrarily choose the level of interest rates (within limits), because it can choose the degree to which prices shall increase or decrease.

67. The basic point was famously made by Wicksell (1936, 100–101), who compares relative prices to a pendulum that returns alwaysto the same equilibrium position when perturbed, while the moneyprices of goods in general are compared to a cylinder resting on ahorizontal plane, that can remain equally well in any location on theplane.

68. This does not mean, of course, that absolutely any paths for thesevariables can be achieved through monetary policy; the chosen pathsmust be consistent with certain constraints implied by the conditionsfor a rational expectations equilibrium. But this is true even in the caseof the central bank’s choice of a path for the price level. Even in a


world with fully flexible wages and prices, for example, it would notbe possible to bring about a rate of deflation so fast as to imply anegative nominal interest rate.

69. See Hall 1999 and White 2001 for expression of similar views.White emphasizes the role of legal tender statutes in defining themeaning of a national currency unit. But such statutes do not repre-sent a restriction upon the means of payment that can be used withina given geographical region—or at any rate, there need be no suchrestrictions upon private agreements for the point to be valid. Whatmatters is simply what contracts written in terms of a particular unitof account are taken to mean, and the role of law in stabilizingsuch meanings is essentially no different than, say, in the case oftrademarks.

70. Costa and De Grauwe (2001) instead argue that "in a cashless society . . . the central bank cannot 'force the banks to swallow' the reserves it creates" (p. 11), and speak of the central bank being forced to "liquidate . . . assets" in order to redeem the central-bank liabilities that commercial banks are "unwilling to hold" in their portfolios. This neglects the fact that the definition of the U.S. dollar allows the Fed to honor a commitment to pay a certain number of dollars to account-holders the next day by simply crediting them with an account of that size at the Fed—there is no possibility of demanding payment in terms of some other asset valued more highly by the market. Similarly, Costa and De Grauwe argue that "the problem of the central bank in a cashless society is comparable to [that of a] central bank pegging a fixed exchange rate" (n. 15). But the problem of a bank seeking to maintain an exchange-rate peg is that it promises to deliver a foreign currency in exchange for its liabilities, not liabilities of its own that it freely creates. Costa and De Grauwe say that they imagine a world in which "the unit of account remains a national affair . . . and is provided by the state" (p. 1) but seem not to realize that this means defining that unit of account in terms of central bank liabilities.

71. I should emphasize that I am quite skeptical of the likelihood ofsuch an outcome. It seems more likely that there will continue to besubstantial convenience to being able to carry out all of one’s trans-actions in a single currency, and this is likely to mean that an incum-bent monopolist—the national central bank—will be displaced only ifit manages its currency spectacularly badly. But history reminds usthat this is possible.


72. The connection between price stability and the minimization ofeconomic distortions resulting from price or wage stickiness is treatedin detail in Woodford 2003 (chap. 6).

73. The considerations determining the desirable extent of such blocs are essentially the same as those in the literature on "optimal currency areas" in international economics.

References

Archer, David, Andrew Brookes, and Michael Reddell. 1999. "A cash rate system for implementing monetary policy." Reserve Bank of New Zealand Bulletin 62: 51–61.

Bank for International Settlements. 1996. Implications for Central Banks of the Development of Electronic Money. Basel, October.

Bank of Canada. 1999. "The framework for the implementation of monetary policy in the large value transfer system environment." Web document, revised March 31; see also Addendum II, November.

Bank of Canada. 2001a. "Changes to certain Bank of Canada operational procedures relating to the Large Value Transfer System and the use of purchases and sales of bankers' acceptances in managing the Bank of Canada's balance sheet." Web document, March 29.

Bank of Canada. 2001b. "Bank of Canada lowers key policy rate by 1/4 per cent." Press release, May 29.

Barro, Robert J. 1977. "Unanticipated money growth and unemployment in the United States." American Economic Review 67: 101–115.

Barro, Robert J., and Zvi Hercowitz. 1980. "Money stock revisions and unanticipated money growth." Journal of Monetary Economics 6: 257–267.

Bennett, Paul, and Stavros Peristiani. 2001. "Are U.S. reserve requirements still effective?" Unpublished, Federal Reserve Bank of New York, March.

Black, Fischer. 1970. "Banking in a world without money: The effects of uncontrolled banking." Journal of Bank Research 1: 9–20.

Blinder, Alan S. 1998. Central Banking in Theory and Practice. Cambridge, MA: The MIT Press.

Blinder, Alan, Charles Goodhart, Philipp Hildebrand, David Lipton, and Charles Wyplosz. 2001. How Do Central Banks Talk? Geneva Report on the World Economy no. 3, International Center for Monetary and Banking Studies.


Bomfim, Antulio N. 2000. "Pre-announcement effects, news and volatility: Monetary policy and the stock market." Federal Reserve Board, FEDS paper no. 2000-50, November.

Borio, Claudio E. V. 1997. The Implementation of Monetary Policy in Industrial Countries: A Survey. Economic Paper no. 47, Bank for International Settlements.

Boschen, John, and Herschel I. Grossman. 1982. "Tests of equilibrium macroeconomics using contemporaneous monetary data." Journal of Monetary Economics 10: 309–333.

Brookes, Andrew. 1999. "Monetary policy and the Reserve Bank balance sheet." Reserve Bank of New Zealand Bulletin 62(4): 17–33.

Brookes, Andrew, and Tim Hampton. 2000. "The official cash rate one year on." Reserve Bank of New Zealand Bulletin 63(2): 53–62.

Calvo, Guillermo A. 1983. "Staggered prices in a utility-maximizing framework." Journal of Monetary Economics 12: 383–398.

Campbell, Frank. 1998. "Reserve Bank domestic operations under RTGS." Reserve Bank of Australia Bulletin (November): 54–59.

Chari, V. V., and Patrick J. Kehoe. 1999. "Optimal fiscal and monetary policy." In Handbook of Macroeconomics, vol. 1C, ed. J. B. Taylor and M. Woodford, 1671–1745. Amsterdam: North-Holland.

Christiano, Lawrence J., Martin Eichenbaum, and Charles Evans. 2001. "Nominal rigidities and the dynamic effects of a shock to monetary policy." Unpublished, Northwestern University, May.

Claessens, Stijn, Thomas Glaessner, and Daniela Klingebiel. 2001. E-Finance in Emerging Markets: Is Leapfrogging Possible? The World Bank, Financial Sector Discussion Paper no. 7, June.

Clarida, Richard, Jordi Gali, and Mark Gertler. 1999. "The science of monetary policy: A new Keynesian perspective." Journal of Economic Literature 37: 1661–1707.

Clinton, Kevin. 1997. "Implementation of monetary policy in a regime with zero reserve requirements." Bank of Canada working paper no. 97-8, April.

Clouse, James A., and Douglas W. Elmendorf. 1997. "Declining required reserves and the volatility of the federal funds rate." Federal Reserve Board, FEDS paper no. 1997-30, June.


Cook, Timothy, and Thomas Hahn. 1989. "The effect of changes in the federal funds rate target on market interest rates in the 1970s." Journal of Monetary Economics 24: 331–351.

Costa, Claudia, and Paul De Grauwe. 2001. "Monetary policy in a cashless society." CEPR discussion paper no. 2696, February.

Cukierman, Alex, and Allan Meltzer. 1986. "A theory of ambiguity, credibility, and inflation under discretion and asymmetric information." Econometrica 54: 1099–1128.

Demiralp, Selva, and Oscar Jorda. 2001a. "The Pavlovian response of term rates to Fed announcements." Federal Reserve Board, FEDS paper no. 2001-10, January.

Demiralp, Selva, and Oscar Jorda. 2001b. "The announcement effect: Evidence from open market desk data." Unpublished, Federal Reserve Bank of New York, March.

European Central Bank. 2001. The Monetary Policy of the ECB. Frankfurt.

Freedman, Charles. 2000. "Monetary policy implementation: Past, present and future—Will electronic money lead to the eventual demise of central banking?" International Finance 3: 211–227.

Friedman, Benjamin M. 1999. "The future of monetary policy: The central bank as an army with only a signal corps?" International Finance 2: 321–338.

Fuhrer, Jeff, and George Moore. 1995. "Inflation persistence." Quarterly Journal of Economics 110: 127–159.

Furfine, Craig H. 2000. "Interbank payments and the daily federal funds rate." Journal of Monetary Economics 46: 535–553.

Gali, Jordi, and Mark Gertler. 1999. "Inflation dynamics: A structural econometric analysis." Journal of Monetary Economics 44: 195–222.

Gali, Jordi, Mark Gertler, and J. David Lopez-Salido. 2000. "European inflation dynamics." Unpublished, Universitat Pompeu Fabra, October.

Giannoni, Marc P., and Michael Woodford. 2001. "Optimal interest rate rules." Unpublished, Federal Reserve Bank of New York, August.

Gilbert, R. Anton. 1985. "Operating procedures for conducting monetary policy." Federal Reserve Bank of St. Louis Review (February): 13–21.


Goodfriend, Marvin, and Robert G. King. 1997. "The new neoclassical synthesis and the role of monetary policy." NBER Macroeconomics Annual 12: 493–530.

Goodhart, Charles A. E. 1986. "How can non-interest-bearing assets coexist with safe interest-bearing assets?" British Review of Economic Issues 8 (Autumn): 1–12.

Goodhart, Charles A. E. 2000. "Can central banking survive the IT revolution?" International Finance 3: 189–209.

Gormez, Yuksel, and Forrest Capie. 2000. "Surveys on electronic money." Bank of Finland, discussion paper no. 7/2000, June.

Grimes, Arthur. 1992. "Discount policy and bank liquidity: Implications for the Modigliani-Miller and quantity theories." Reserve Bank of New Zealand, discussion paper no. G92/12, October.

Guthrie, Graeme, and Julian Wright. 2000. "Open mouth operations." Journal of Monetary Economics 46: 489–516.

Hall, Robert E. 1983. "Optimal fiduciary monetary systems." Journal of Monetary Economics 12: 33–50.

Hall, Robert E. 1999. "Controlling the price level." NBER working paper no. 6914, January.

Hamilton, James. 1996. "The daily market in federal funds." Journal of Political Economy 104: 26–56.

Hampton, Tim. 2000. "Y2K and banking system liquidity." Reserve Bank of New Zealand Bulletin 63: 52–60.

Hayek, Friedrich A. 1986. "Market standards for money." Economic Affairs 6(4): 8–10.

Henckel, Timo, Alain Ize, and Arto Kovanen. 1999. "Central banking without central bank money." IMF working paper no. 99/92, July.

Issing, Ottmar. 2001. "Monetary policy and financial markets." Remarks at the ECB Watchers' Conference, Frankfurt, Germany, June 18.

Jenkins, Paul. 2001. "Communicating Canadian monetary policy: Towards greater transparency." Remarks at the Ottawa Economics Association, Ottawa, Canada, May 22.

Kerr, William, and Robert G. King. 1996. "Limits on interest rates in the IS model." Federal Reserve Bank of Richmond Economic Quarterly (Spring): 47–76.


King, Mervyn. 1999. "Challenges for monetary policy: New and old." In New Challenges for Monetary Policy, 11–57. Kansas City: Federal Reserve Bank of Kansas City.

Kuttner, Kenneth N. 2001. "Monetary policy surprises and interest rates: Evidence from the Fed funds futures market." Journal of Monetary Economics 47: 523–544.

Lange, Joe, Brian Sack, and William Whitesell. 2001. "Anticipations of monetary policy in financial markets." Federal Reserve Board, FEDS paper no. 2001-24, April.

McCallum, Bennett T. 1999. "Issues in the design of monetary policy rules." In Handbook of Macroeconomics, vol. 1C, ed. J. B. Taylor and M. Woodford, 1483–1530. Amsterdam: North-Holland.

McCallum, Bennett T., and Edward Nelson. 1999. "An optimizing IS-LM specification for monetary policy and business cycle analysis." Journal of Money, Credit and Banking 31: 296–316.

McCulloch, J. Huston. 1986. "Beyond the historical gold standard." In Alternative Monetary Regimes, ed. C. D. Campbell and W. R. Dougan, 73–81. Baltimore: Johns Hopkins University Press.

Meulendyke, Anne-Marie. 1998. U.S. Monetary Policy and Financial Markets. New York: Federal Reserve Bank of New York.

Reserve Bank of Australia. 1998. "Operations in financial markets." Annual Report, Reserve Bank of Australia, 28–43.

Reserve Bank of Australia. 2000. "Operations in financial markets." Annual Report, Reserve Bank of Australia, 5–18.

Reserve Bank of New Zealand. 1999. "Monetary policy implementation: Changes to operating procedures." Reserve Bank of New Zealand Bulletin 62(1): 46–50.

Sbordone, Argia M. 1998. "Prices and unit labor costs: A new test of price stickiness." Stockholm University, IIES seminar paper no. 653, October.

Sellon, Gordon H. Jr., and Stuart E. Weiner. 1996. "Monetary policy without reserve requirements: Analytical issues." Federal Reserve Bank of Kansas City Economic Review 81(4): 5–24.

Sellon, Gordon H. Jr., and Stuart E. Weiner. 1997. "Monetary policy without reserve requirements: Case studies and options for the United States." Federal Reserve Bank of Kansas City Economic Review 82(2): 5–30.


Spindt, Paul A., and Ronald J. Hoffmeister. 1988. "The micro-mechanics of the federal funds market: Implications for day-of-the-week effects in funds rate variability." Journal of Financial and Quantitative Analysis 23: 401–416.

Svensson, Lars E. O. 2001. "What is wrong with Taylor rules? Using judgment in monetary policy through targeting rules." Unpublished, Stockholm University, August.

Taylor, John B. 2001. "Expectations, open market operations, and changes in the federal funds rate." Federal Reserve Bank of St. Louis Review 83(4): 33–47.

White, Bruce. 2001. "Central banking: Back to the future." Reserve Bank of New Zealand, discussion paper DP 2001/05, September.

Wicksell, Knut. [1898] 1936. Interest and Prices. London: Macmillan.

Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton University Press. Forthcoming.


Postscript

Chong-En Bai and Chi-Wa Yuen

The chapters in this volume address a selection of important

topics, but they are by no means exhaustive. In this postscript,

we briefly discuss other topics that we think are relevant to the

IT revolution. This discussion is meant to highlight some re-

lated issues rather than to offer definitive answers.

Implications for Industrial Organization

The first topic is industrial organization in the information age,

which has also been mentioned by Bresnahan and Malerba in

chapter 2. Varian (2001) offers an excellent survey of some of

the issues related to the market structure. These issues include

differentiation of products and prices, search, bundling, pric-

ing of complementary products, switching costs and lock-in,

economies of scale, and network effects.

• With IT, producers have better information about con-

sumers. This enables the producers to offer personalized prod-

ucts and prices to consumers. Such price discrimination helps

the producers better extract consumer value. On the other

hand, it also strengthens the competition among producers.

The welfare implication is not immediately clear.


• IT reduces the search cost of the consumers, but it also helps the producers implement complicated price structures. Again,

the net welfare implication is not obvious.

• It is common for an IT product to have strong com-

plementarity with other IT products. Complementarity makes

bundling more acceptable to the consumers, which helps the

supplier of the bundle deter entry.

• When complementary products are produced by independent firms separately, each producer tends to charge too high a price

because it does not internalize the benefit of lowering its price

for its complementors. A merger would lower the prices, but

some other measures would yield a similar result. For exam-

ple, cross-shareholding among the complementors would also

lower the price and increase welfare (see the numerical sketch after this list).

• When switching costs are high, consumers are locked in by their incumbent supplier. This does not necessarily mean weak

competition. Suppliers will compete fiercely for new customers.

Furthermore, if a supplier cannot discriminate between new

and old customers, then the desire to attract more new cus-

tomers will limit its price.

• Many IT-related businesses have a cost structure with large fixed costs and small marginal costs. Hence, there are strong

economies of scale on the supply side. Although this situation

is referred to as natural monopoly in economics textbooks, it

does not mean that the monopoly price will necessarily prevail.

Before a monopoly producer emerges, the competition to ac-

quire monopoly power will be fierce. Even if a monopoly

emerges, it will face the durable good monopoly problem that charging a high price will deter old customers from upgrading.

Furthermore, the monopolist will face pressure from producers

of complementary products not to charge too high a price.


Finally, when the market grows rapidly, new players can find

room to enter the market. They can do this even if the incum-

bent’s technology is patented because they can invest around it.

• Many IT products also have demand-side economies of scale, or network effects. In this case, multiple equilibria may arise.

A critical mass of consumers has to exist before the product

can escape the low-level equilibrium trap. To reach the critical

mass, the supplier may have to charge low prices to attract the

initial customers. In summary, many of these factors appear

to imply concentrated market structure. However, there are

countervailing forces that encourage competition. With appro-

priate competition policy, as the investigation of Bresnahan

and Malerba illustrates, new products will emerge to take over

the dominant incumbent.
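The numerical sketch promised above illustrates the complementary-products point with the textbook Cournot-complements setup: two producers each price one essential component of a system with linear demand Q = a − b(p1 + p2) and zero marginal costs. The demand parameters below are assumptions chosen purely for illustration and do not come from the chapters in this volume.

# Two complementary components, each needed one-for-one to use the system.
# Linear system demand Q = a - b*(p1 + p2), zero marginal costs (illustrative values).
a, b = 12.0, 1.0

# Independent pricing: firm i maximizes p_i * (a - b*(p_i + p_j)).
# FOC: a - 2*b*p_i - b*p_j = 0; by symmetry p1 = p2 = a/(3b).
p_independent = a / (3.0 * b)
total_independent = 2.0 * p_independent
q_independent = a - b * total_independent

# Joint pricing (merger, or cross-shareholding that internalizes the externality):
# choose the total price to maximize P*(a - b*P), giving P = a/(2b) < 2a/(3b).
total_joint = a / (2.0 * b)
q_joint = a - b * total_joint

for label, tot, q in [("independent", total_independent, q_independent),
                      ("joint", total_joint, q_joint)]:
    profit = tot * q
    consumer_surplus = 0.5 * q * q / b   # area under the linear inverse demand above the price
    print(f"{label:12s} total price={tot:4.1f}  quantity={q:4.1f}  "
          f"profit={profit:5.1f}  consumer surplus={consumer_surplus:5.1f}")

With these illustrative numbers, joint pricing lowers the total price of the system from 8 to 6 and raises both total profit and consumer surplus, which is the sense in which a merger or cross-shareholding among complementors can lower prices and increase welfare.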

IT has also breathed fresh air into the management of trans-

actions. Computer-mediated transactions generate rich infor-

mation that was not available before. Such information enables

more efficient contracting between transacting parties. For ex-

ample, computerized recordkeeping in video rental stores has

made it feasible to implement a revenue-sharing contract be-

tween the video distributor and the rental stores. The expanded

scope of contracting may have significant implications for in-

centive design within and between organizations.

Implications for Financial Markets

The financial market may go through significant changes in

response to the IT revolution. One such change may be in the

efficiency of the securities market. D’Avolio, Gildor, and Shlei-

fer (2001) argue that IT has various effects on four key re-

quirements for a well-functioning securities market. IT has


resulted in many tools that help reduce the transaction costs in

the secondary market of securities. IT has also helped make

information dissemination more efficient so that more investors

can access information. Given the quality of available infor-

mation, these two developments should make the securities

market more efficient.

However, IT may have negative effects on the quality of in-

formation production. Given the two previously mentioned

developments, more investors can participate in the securities

market directly. But they may not have the technical sophisti-

cation to analyze the information received. This gives firms the

opportunity to influence market prices of their securities by

manipulating information disclosure. When firms raise signi-

ficant amounts of capital through the equity market via sea-

soned equity offerings, their costs of capital become lower when

the market prices for their equity shares become higher. This

gives firms strong incentives to manipulate information disclo-

sure to raise their share prices. Therefore, firms have both the

means and the motive to influence their share prices through

information manipulation. D’Avolio, Gildor, and Shleifer (2001)

present compelling evidence that information manipulation has

increased significantly in the last few years.

The final requirement for market efficiency is strong legal

protection of investors’ interests. The impact of IT on legal

protection is ambiguous. On the one hand, the improvement in

information dissemination makes it possible to narrow the gap

in information availability between individual investors and

institutional investors. This can potentially enhance the pro-

tection of legal rights of individual investors. On the other

hand, improvement in technology also makes it easier for cor-

porate insiders and financial intermediaries to trade quickly on


private information without disclosure. Overall, the develop-

ment of IT may result in deterioration of market efficiency.

Another potential change resulting from IT is the mode of

investment. In the IT age, more and more of a firm’s assets are

intangible, which makes them unsuitable as collateral. Finan-

cial instruments requiring collateral then become less feasible.

This may imply a lesser role for debt and leasing.

Implications for International Trade

The IT revolution may have a significant effect on international

trade. Traditionally, international trade in services (e.g., finan-

cial services) was mostly skill intensive and flowed from rich to

poor countries. Poor countries used to export only tradable

goods. With the IT revolution, they have also started to export

services. A prominent example is the export of software

from India to developed economies. The development of

IT has also enabled some back-office functions to be contracted

out from developed English-speaking countries to English-

speaking India. What is the implication of these developments

for the overall pattern of international trade?

Implications for Growth and Development

Solow’s (1956) contribution indicates that advances in tech-

nology are the ultimate source of growth. One natural question

is, How much has IT contributed to growth? Such contribu-

tions may come directly through technical progress and/or

factor accumulation via IT investment or indirectly through

changes in organizational and market structures and business

practices associated with the IT revolution (such as e-commerce)


and are therefore not easy to quantify. As noted by Quah in

chapter 3 (see also Gordon 2000), there is a Solow produc-

tivity paradox—that despite the proliferation of computers

and telecommunications equipment, the aggregate productivity

numbers fail to reveal that a large part of the economy has

benefited from spillovers from the IT sector. More recent and

careful studies have shown, however, that such a paradox is

unfounded—that substantial evidence based on industry- and

firm-level data exists about the acceleration of TFP growth

outside of the IT sector and especially in service industries that

are purchasing IT as well as in such old economy areas as

health care and government. Instead of being reflected only by

higher productivity and lower prices, the benefit from the IT

revolution may also show up in the form of improved con-

venience and expanded product choices for the consumer.

(See, e.g., Baily 2001; Baily and Lawrence 2001; and Litan and

Rivlin 2001.) Although measurement problems abound in

assessing the genuine contribution of IT and the Internet in

general and e-commerce in particular, these recent findings are

by and large positive. They seem to support Jovanovic and

Rousseau’s conclusion in chapter 1.
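
The direct channel mentioned above is usually quantified by growth accounting in the Solow tradition. The decomposition below is a generic, illustrative sketch of how such measurements are made; the notation (output $Y$, IT capital $K_{IT}$, other capital $K_{O}$, labor $L$, and TFP $A$) is ours, and the equation reports no estimates from this volume. With a production function $Y = A\,K_{IT}^{\alpha_{IT}} K_{O}^{\alpha_{O}} L^{1-\alpha_{IT}-\alpha_{O}}$, taking log-differences yields

\[ g_{Y} \;=\; g_{A} \;+\; \alpha_{IT}\, g_{K_{IT}} \;+\; \alpha_{O}\, g_{K_{O}} \;+\; (1-\alpha_{IT}-\alpha_{O})\, g_{L}, \]

where $g_{x}$ denotes the growth rate of $x$ and, under competitive factor pricing, the $\alpha$'s are income shares. The direct contribution of IT is the capital-deepening term $\alpha_{IT}\, g_{K_{IT}}$ plus whatever part of TFP growth $g_{A}$ originates in IT-producing and IT-using sectors, while the indirect effects working through organizational change and e-commerce are buried inside $g_{A}$, which is one reason they are hard to quantify.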

This leads to the question of whether the nature and me-

chanics of economic development have been affected in any

essential way by the IT revolution. If the Industrial Revolution

is viewed as a transition from a stagnant state to a growth state

(see Lucas 2001), can we interpret the IT revolution as another

industrial revolution that elevates the economy from a low-

growth state to a high-growth state? Is growth so stimulated

sustainable?

Being knowledge-driven, IT has increased the importance of

human capital relative to physical capital. What is the implica-

tion of this change for the pattern of economic growth across


countries with different capital and labor endowments? In

particular, will it generate convergence or divergence in the

levels and rates of growth of income across countries? Viewing

knowledge as a disembodied, global public good, Quah sug-

gests that international convergence will be easier to achieve

in this information age as the dissemination of knowledge be-

comes faster and more widespread.
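
One common empirical way to frame this convergence question is a cross-country growth regression. The specification below is a standard, illustrative formulation (the symbols $y_{i,t}$, $X_{i}$, and the coefficients are ours), not a test carried out in this volume:

\[ \frac{1}{T}\,\ln\!\left(\frac{y_{i,T}}{y_{i,0}}\right) \;=\; \alpha \;-\; \beta\,\ln y_{i,0} \;+\; \gamma' X_{i} \;+\; \varepsilon_{i}, \]

where $y_{i,t}$ is per capita income of country $i$ at date $t$ and $X_{i}$ collects conditioning variables such as schooling, investment rates, and, in this context, measures of IT diffusion. A positive $\beta$ indicates conditional convergence. The disembodied-knowledge view suggests that cheaper and faster dissemination should raise the estimated speed of convergence, whereas the embodied-knowledge view discussed next ties any change in $\beta$ to what happens to migration flows.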

Viewing knowledge as embodied, on the other hand, Razin

and Yuen (1997) have shown the crucial role of labor mobility

as a channel to facilitate knowledge spillovers across national

borders to induce income convergence. If we take account

also of this embodied component of knowledge, then the im-

plication of IT for convergence will be directly linked to its

implication for migration. As human capital becomes more

important, there seems to be more incentive for skilled labor to

migrate to rich countries where IT is more developed. On the

other hand, IT has allowed poor countries to export services as

well as tradable goods and therefore may help reduce the need

for migration. The net effect on migration, hence on conver-

gence, is not that obvious.

Implications for Income Distribution

The growing importance of human capital relative to physical

capital would imply growing inequality between labor income

and capital income, on the one hand, and growing inequal-

ity in wage earnings between skilled workers and unskilled

workers, on the other. However, its implication for the distri-

bution of household income or wealth is not immediately clear.

In chapter 1, Jovanovic and Rousseau suggest that we are in

the midst of a third wave of innovation that involves ‘‘inven-

tion in the method of inventing.’’ In such an economy, those


who possess knowledge of new ‘‘methods of inventing’’ are

expected to be much more productive than those who do not.

What does this mean for income distribution? Given the dis-

tribution of human capital, income inequality would increase.

But how will this affect the evolution of human capital distri-

bution over time? If the uneven distribution of human capital

arises from part of the population being constrained from

acquiring human capital (due to, say, capital market imperfec-

tions), then there will be a tendency for the inequality in

human capital distribution to get worse over time—because

the unconstrained will invest more in human capital due to its

increased importance, while the constrained cannot. Other-

wise, the increased importance of human capital may imply a de-

crease in inequality in the human capital distribution. The

IT revolution can also affect human capital investment by

making information flow more efficiently. The improved infor-

mation flow may lead to increased accessibility of knowledge

to a wider segment of the population or, in the words of

Jovanovic and Rousseau, to ‘‘democratization of knowledge.’’

This prediction is also consistent with Quah’s classification of

knowledge as a nonrival and aspatial good.

Implications for Business Cycles

While IT innovations may create structural unemployment

among low-skilled workers, the improved efficiency of infor-

mation flow may help reduce frictional unemployment. In fact,

U.S. experience shows that both the inflation rate and the nat-

ural rate of unemployment have fallen as a result of faster

productivity growth. In other words, there has been an im-

provement in the inflation-unemployment trade-off. It is not

obvious, though, whether these effects will be long-lasting, es-


pecially given the uncertainty about the durability of the accel-

eration of productivity and output growth.
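
One simple way to see why a productivity acceleration can improve this trade-off, at least temporarily, is an expectations-augmented Phillips curve with slowly adjusting real-wage aspirations. This is a textbook-style sketch under our own assumptions, not a relationship estimated in the chapters. Suppose nominal wage inflation is set as $\Delta w_{t} = \pi^{e}_{t} + g^{a}_{t} - \phi\,(u_{t} - \bar{u})$, where $g^{a}_{t}$ is the productivity growth workers have come to expect, and suppose pricing over unit labor cost implies $\pi_{t} = \Delta w_{t} - g_{t}$, with $g_{t}$ actual productivity growth. Then

\[ \pi_{t} \;=\; \pi^{e}_{t} \;-\; \phi\,(u_{t} - \bar{u}) \;-\; (g_{t} - g^{a}_{t}). \]

When actual productivity growth runs ahead of aspirations, inflation is lower at any given unemployment rate, so the measured natural rate appears to fall. If aspirations eventually catch up, the improvement fades, which is one more reason to doubt that the effect lasts unless the productivity acceleration itself persists.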

If one takes the real business cycle (RBC) view that macro-

economic fluctuations are by and large due to technology

shocks, then it is natural to ask whether the IT revolution can

be viewed as a global and possibly persistent productivity

shock. Quah argues, though, that it also contains some element

of a demand shock. And if so, would it generate more or less

volatility in macro aggregates?

One common belief is that better information flow and

improved inventory control will lead to a decline in the inven-

tory cycle. It is interesting to examine whether this decline will

translate into a reduction in output volatility and a lengthening

of the expansion phases of the business cycle. On the other

hand, there is also a concern that, due to the development of

new risk management techniques, IT innovations may increase

financial volatility through the creation of systemic risk. Be-

sides, IT innovations have stimulated an expansion of global

trade, which provides larger trade linkages for the transmission

of shocks and business cycles across countries. Overall, the

effect of IT on macro fluctuations is not clear. By showing how

monetary policy can be made even more effective as a stabiliz-

ing tool in the information economy, however, Woodford

(chapter 5) has given us some relief about macro stability

under the IT revolution.

Evidently, due to our limited knowledge about the subject,

our discussion has only scratched the surface of some IT-

related economic issues. Among other things, we have left out

such important issues as fiscal administration in the face of tax

avoidance/evasion activities via e-trade, taxation and regula-

tion of Internet access, and regulation and control of interna-


tional transactions via the Internet (see, e.g., Goolsbee 2000).

We hope, nonetheless, that the five chapters in the book as well

as our brief discussion of related topics will spark further

interest in research on the subject of technology and the new

economy.

References

Baily, Martin N. 2001. ‘‘Macroeconomic implications of the new economy.’’ Federal Reserve Bank of Kansas City, Jackson Hole Symposium, August.

Baily, Martin N., and Robert Z. Lawrence. 2001. ‘‘Do we have a new e-conomy?’’ American Economic Review 91 (May): 308–312.

D’Avolio, Gene, Efi Gildor, and Andrei Shleifer. 2001. ‘‘Technology, information production, and market efficiency.’’ Federal Reserve Bank of Kansas City, Jackson Hole Symposium, August.

Goolsbee, Austan. 2000. ‘‘The implications of electronic commerce for fiscal policy (and vice versa).’’ Journal of Economic Perspectives 14 (Fall): 13–23.

Gordon, Robert J. 2000. ‘‘Does the ‘new economy’ measure up to the great inventions of the past?’’ Journal of Economic Perspectives 14 (Fall): 49–74.

Litan, Robert E., and Alice M. Rivlin. 2001. ‘‘Projecting the economic impact of the Internet.’’ American Economic Review 91 (May): 313–317.

Lucas, Robert E., Jr. 2001. ‘‘The Industrial Revolution: Past and future.’’ In Lectures on Economic Growth. Cambridge: Harvard University Press.

Razin, Assaf, and Chi-Wa Yuen. 1997. ‘‘Income convergence within an economic union: The role of factor mobility and coordination.’’ Journal of Public Economics 66 (November): 225–245.

Solow, Robert. 1956. ‘‘A contribution to the theory of economic growth.’’ Quarterly Journal of Economics 70 (February): 65–94.

Varian, Hal. 2001. ‘‘High-technology industries and market structure.’’ Federal Reserve Bank of Kansas City, Jackson Hole Symposium, August.

