Research Policy 44 (2015) 34–49

The evolving state-of-the-art in technology transfer research: Revisiting the contingent effectiveness model

Barry Bozeman (a, *), Heather Rimes (b), Jan Youtie (c)

a Center for Organizational Research and Design, Arizona State University, United States
b Department of Public Administration and Policy, University of Georgia, United States
c Enterprise Innovation Institute and School of Public Policy, Georgia Institute of Technology, United States

Article history: Received 2 December 2013; Received in revised form 18 March 2014; Accepted 23 June 2014; Available online 28 July 2014.

Keywords: Technology transfer; Public policy; Research; Theory

Abstract: The purpose of our study is to review and synthesize the rapidly evolving literature on technology transfer effectiveness. Our paper provides a lens into relatively recent work, focusing particularly on empirical studies of US technology transfer conducted within the last 15 years. In doing so, we update and extend the Contingent Effectiveness Model of Technology Transfer developed by Bozeman (2000). Specifically, we include the growing interest in social and public value oriented technology transfer and, thus, the contingent effectiveness model is expanded to consider this literature. We categorize studies according to their approaches to measuring effectiveness, draw conclusions regarding the current state of technology transfer evaluation, and offer recommendations for future studies.

© 2014 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.respol.2014.06.008

1. Introduction

Technology transfer continues to be a popular topic not only among researchers but also among managers and entrepreneurs trolling the academic literature and hoping for usable knowledge. As is the case for so many popular research topics, especially those addressed by researchers from numerous, diverse disciplines, the research findings and theory developments in technology transfer evolve rapidly. Our paper provides a lens into relatively recent work, focusing particularly on the last 15 years.

In 2000, Bozeman published in this journal a comprehensive state-of-the-art review of domestic technology transfer literature. Our study updates and extends this review, with an emphasis, although not an exclusive one, on the US technology transfer policy and program context and research about these policies and programs. In doing so, the paper employs a modestly revised version of the Contingent Effectiveness Model of Technology Transfer used in the earlier paper. The model has by this time been adapted or applied directly in scores of analyses or evaluations of technology transfer.1 Using the same structure facilitates comparison of the pre- and post-2000 technology transfer literature. Further, we categorize studies according to their approaches to measuring effectiveness, draw conclusions regarding the current state of technology transfer evaluation, and offer recommendations for future studies.

1 The Contingent Effectiveness model has been used in application or as a conceptual framework in a wide variety of articles, ranging from industrial ecology to higher education innovations to transfer of vaccines (see for example Ramakrishnan, 2004; Albors et al., 2006; Albors-Garrigos et al., 2009; Mohammed et al., 2010; Kitagawa and Lightowler, 2013; Hendriks, 2012).

The research was supported by the U.S. National Institute of Standards and Technology under a subcontract from VNS Group, Inc. The opinions expressed are the authors' and do not necessarily reflect the opinions of any government agency, or Arizona State University, the University of Georgia, Georgia Tech, or VNS Group, Inc. Corresponding author: B. Bozeman, [email protected], tel. +1 480 686 6336.
Since Bozeman's previous study, the broader technology transfer literature has been expanding rapidly in several major directions. First, there have been many studies of government laboratories and research centers, especially those located in European nations. During the period covered in Bozeman's earlier review, the majority of studies focused on US laboratories and research centers. A second trend is that the vast majority of the post-2000 technology transfer literature focuses on transfer from university settings or from multi-organizational research centers or consortia (many of which are anchored by or housed entirely in universities). A third trend is that non-linear technology transfer mechanisms have been put forth and analyzed to a greater extent. Bradley et al. (2013) developed rich descriptions and sets of literature around these non-linear mechanisms. These authors highlight four such non-linear mechanisms: (1) reciprocal relationships among



university, industry, and government actors (Etzkowitz and Leydesdorff, 2000); (2) "multiversity" approaches in which many sub-units and programs of the university can interact with companies in diverse ways (Kerr, 2001); (3) open innovation (Chesbrough, 2003) in which the university can both acquire and distribute unused intellectual property; and (4) open source approaches (such as the Creative Commons) in which knowledge transfer extends to collaborators through standards creation and tacit knowledge sharing and for which the technology transfer office can serve as a broker.

This current review of technology transfer evaluation studies focuses chiefly on empirical research, including qualitative research, and has a US orientation, albeit not an exclusive one. Our primary data source was articles pertaining to evaluation of technology transfer programs or policies that appeared in scholarly journals concerning technology, policy and management, such as the Journal of Technology Transfer, Research Policy, Organization Science, Technovation, Research Evaluation, The Journal of Higher Education, Evaluation and Program Planning, Regional Studies, Technological Forecasting and Social Change, Minerva, R&D Management, and International Journal of Technology Management. We emphasize post-2000 literature in this paper except in those cases where an allusion to previous literature is necessary for clarifying our understanding of research trajectories. To this end, we have also been guided by several technology transfer literature reviews (e.g. Adomavicius et al., 2008; Agrawal, 2003; Tran and Kocaoglu, 2009; Protogerou et al., 2012; Bradley et al., 2013).

Two criteria for exclusion of articles were applied to this search. First, much of the technology transfer literature continues to focus on international relations and owner technologies, often focused on relationships between nations and sometimes groups of firms. We do not examine the international technology transfer literature, though we do consider cross-national transfers among peer firms. Second, we also do not include the great many single case study papers resident in the gray literature in this review, albeit we acknowledge the importance of such gray literature in the field of assessing the effectiveness of technology transfer. Evaluations that appear in gray literature only are not always publicly accessible, although there are a few exceptions to this rule where gray literature is included because it is accessible and it elucidates findings across programs administered by multiple organizations. Within the parameters of this search, we acknowledge that despite our best efforts to canvass this post-2000 literature, we may well have missed some evaluations.

Finally, and as we see below, the present study includes the growing interest in social and public value oriented technology transfer. Thus, the contingent effectiveness model is expanded to consider this literature.

2. The revised contingent effectiveness model of technology transfer

Fig. 1 shows the original Contingent Effectiveness Model of Technology Transfer. The revised model is nearly identical to the original. Both models identify five categories of technology transfer effectiveness determinants or contingencies, including: (1) characteristics of the transfer agent, (2) characteristics of the transfer media, (3) characteristics of the transfer object, (4) demand environment, and (5) characteristics of the transfer recipient. These dimensions are not entirely exhaustive but are broad enough to include most of the variables examined in studies of government technology transfer activities. The arrows in the model indicate relations among the dimensions (broken lines indicate weaker links). In a nutshell, both models maintain that the impacts of technology transfer can be understood in terms of who is doing the transfer, how they are doing it, what is being transferred and to whom.

The term "contingent" is key in both the original and revised model because of the assumption that technology transfer by definition includes multiple parties and these parties generally have multiple goals and, ergo, multiple effectiveness criteria. Effectiveness is considered in terms of multiple criteria including (1) out-the-door (was anything transferred?), (2) market impact, (3) economic development, (4) political advantage, (5) development of scientific and technical human capital, and (6) opportunity cost considerations. The revised model, shown in Fig. 2, adds an additional effectiveness criterion: public value.
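For readers who wish to code studies or cases against the revised model, the sketch below encodes its five contingency dimensions and seven effectiveness criteria as a simple data structure. The labels follow the text; the class, field names, and example values are our own illustrative choices, not terminology or tooling fixed by the model.

```python
from dataclasses import dataclass, field

# Dimension and criterion labels taken from the revised contingent
# effectiveness model described above.
DIMENSIONS = [
    "transfer agent",
    "transfer media",
    "transfer object",
    "demand environment",
    "transfer recipient",
]

CRITERIA = [
    "out-the-door",
    "market impact",
    "economic development",
    "political advantage",
    "scientific and technical human capital",
    "opportunity cost",
    "public value",  # criterion added in the revised model
]

@dataclass
class TransferCase:
    """One technology transfer case coded against the model (hypothetical)."""
    dimensions: dict = field(default_factory=dict)       # e.g. {"transfer agent": "federal laboratory"}
    criteria_scores: dict = field(default_factory=dict)  # e.g. {"out-the-door": True}

# Illustrative usage with invented values.
case = TransferCase(
    dimensions={"transfer agent": "federal laboratory", "transfer object": "patented device"},
    criteria_scores={"out-the-door": True, "public value": None},  # None = not assessed
)
print(sorted(case.criteria_scores))
```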

2.1. The addition of the public value criterion

The addition of the Public Value criterion arises from the recognition that transfer agents, particularly public sector transfer agents but others as well, are housed within agencies and organizations that are themselves in pursuit of broad public-interest goals. Thus, their endeavors are motivated, influenced, and directed by ever-changing constellations of public values (Jørgensen and Bozeman, 2007). For example, each federal laboratory operates within a federal agency or department which in turn functions under the auspices of a mission to further some aspect of the public interest. As a case in point, part of the mission of the U.S. Department of Agriculture (USDA) is to "promote agriculture production sustainability that better nourishes Americans while also helping feed others throughout the world; and to preserve and conserve our Nation's natural resources" (USDA, 2013). Consequently, one means of judging USDA technology transfer successful is whether the transfer in some way furthers the agency's mission. Indeed, many federal agencies include narratives of technology transfer success stories in their annual reports, and often these success stories center on social impacts of agency technology transfer activities.2 In this way, practitioners demonstrate tacit acceptance of public value as an important criterion for evaluating technology transfer activity in some realms.

Importantly, the Public Value criterion counterbalances some of the emphasis on economic impacts of technology transfer. To this end, it is comparable to notions of responsible innovation, which take into consideration equity and inequality; sustainability, health and safety; and the improvement of quality of life through addressing societal needs or grand challenges. The expectation that invention and innovation will produce economic growth is not new; however, inclusion of the Public Value criterion in the Contingent Effectiveness Model acknowledges the fact that economic impacts are sometimes not the best measure of well-being. For example, if economic impacts are favorable in aggregate but exacerbate inequalities, then such an outcome may not in some circumstances be desired. There are three reasons to give greater attention to public values in thinking about S&T policy. First, public values are more likely to encompass outcomes that are ultimately important to most people. For example, despite its pervasiveness as an instrumental concern, few people care about economic growth for its own sake. Instead, they care about better health, more or better leisure, safety, educational opportunity, or increased likelihood of obtaining a satisfying job. Economic growth is prized because it is seen as enabling these first order values. Second, public science and technology are supported by tax dollars, under tax systems that in most nations include progressive elements and promotion of equity. Thus, a rationale for infusing public values in science, technology and innovation policy is that those values are by definition broader values and, by implication, ones more likely to affect all or most citizens.

2 See the following for examples: http://ttc.nci.nih.gov/about/success.php;http://spinoff.nasa.gov/index.html; http://techtransfer.energy.gov/energy.


Fig. 1. Contingent effectiveness model of technology transfer.

Fig. 2. Revised contingent effectiveness model of technology transfer.



A third reason for systematic inclusion of public values in science, technology and innovation policy is that without direct attention they are easily set aside or ignored. We can say that science, technology and innovation policy values, and indeed all values expressed in major policies, are both dynamic and "stage dependent." That is to say, public policies evolve in stages (Rose, 1993; John, 1998), though not necessarily in fixed sequence. In most instances, these stages include (1) agenda-setting, (2) policy design(s), (3) policy choice, (4) policy implementation and (usually but not always) (5) policy assessment or even systematic evaluation. Particularly in science and technology policy (Burgess et al., 2007; Bozeman and Sarewitz, 2005), values are important at every stage, but they are changeable and not always in predictable ways. Values change as a result of learning, in other cases they fall aside for lack of advocacy, and in still others they fall under the weight of new values injected by other self-interested parties in political processes (Beierle and Konisky, 2000).

Table 1 describes the public value criterion along with other effectiveness criteria developed previously. The table also briefly reviews the advantages and disadvantages of each effectiveness criterion.

3. Technology transfer effectiveness research

This study reviews and discusses current research on technology transfer effectiveness in light of the Revised Contingent Effectiveness model. To enable analysis according to the elements of the model, we develop a table that reports findings and recommendations from the scholarly literature. Since the table is quite large we include it as an appendix (Appendix A) to this paper, but we draw from the table in our discussion below.

3.1. "Out-the-Door" criterion for technology transfer effectiveness

Technology transfer research gives disproportionate attention to what is referred to as the "Out-the-Door" technology transfer effectiveness criterion. This criterion is the one most often used by both scholars and practitioners and, in many cases, the only one used.

Table 1. Technology transfer effectiveness criteria.

"Out-the-Door"
  Key question: Was technology transferred?
  Theory base: Atheoretical or classical organization theory.
  Advantage: Does not hold transfer agent accountable for factors that may be beyond control.
  Disadvantage: Encourages cynicism and focuses on activity rather than outcome.

Market Impact
  Key question: Did the transferred technology have an impact on the firm's sales or profitability?
  Theory base: Microeconomics of the firm.
  Advantage: Focuses on a key feature of technology transfer.
  Disadvantage: Ignores important public sector and nonprofit transfer; must accommodate market failure issues.

Economic Development
  Key question: Did technology transfer efforts lead to regional economic development?
  Theory base: Regional science and public finance theory.
  Advantage: Appropriate to public sponsorship, focuses on results to taxpayer.
  Disadvantage: Evaluation almost always requires unrealistic assumptions.

Political
  Key question: Did the technology agent or recipient benefit politically from participation in technology transfer?
  Theory base: Political exchange theory, bureaucratic politics models.
  Advantage: Realistic.
  Disadvantage: Does not yield to systematic evaluation.

Opportunity Cost
  Key question: What was the impact of technology transfer on alternative uses of the resources?
  Theory base: Political economy, cost–benefit analysis, public choice.
  Advantage: Takes into account foregone opportunities, especially alternative uses for scientific and technical resources.
  Disadvantage: Difficult to measure, entails dealing with the "counterfactual."

Scientific and Technical Human Capital
  Key question: Did technology transfer activity lead to an increment in capacity to perform and use research?
  Theory base: Social capital theory (sociology, political science), human capital theory (economics).
  Advantage: Treats technology transfer and technical activity as an overhead investment.
  Disadvantage: Not easy to equate inputs and outputs.

Public Value
  Key question: Did technology transfer enhance collective good and broad, societally shared values?
  Theory base: Public interest theory, public value theory.
  Advantage: Excellent and easily sanctioned criteria for public policy.
  Disadvantage: Extremely difficult to measure systematically.


For this reason, if no other, it warrants special attention. But, as wesee below, it also has the merit of practical utility and convenienceof measurement.

The primary assumption of the Out-the-Door criterion for technology transfer effectiveness is that the technology transfer agent (e.g. the federal laboratory) has succeeded once the technology has been converted into a transfer mechanism, either formal or informal, and another party has acquired the technology. The organization acquiring the technology may or may not have put it to use. Thus, the organization receiving the intellectual property (IP) may do so reflexively or because there is a directive to do so, with an intent to use the IP or not, or even with an intent to quash the technology so that it is not available for rivals. Neither the motive nor the uses of the IP are considered in the Out-the-Door criterion. As suggested by the label, the goal is getting the IP out the door.

Within this general concept of the Out-the-Door model we can distinguish three sets of significantly different results revealed by three different sets of indicators. In the first place we have the case of the "Pure Out-the-Door" in which there is no indication that anything has occurred with respect to the IP except for its transfer. Second, there is "Out-the-Door with Transfer Agent Impacts." In some cases it is clear that the transferring organization has benefited from the activity even if no one else ever does. Thus, if a federal laboratory obtains licensing revenue, that is a sort of impact. That type of impact might not be related to the primary goals of the US Stevenson-Wydler Act or the US Technology Transfer Act, but it is an impact and one that provides benefit. Third, there is "Out-the-Door with Transfer Partner Impacts." In most cases public policy focuses not on enriching technology transfer partners but rather on broader social and economic impacts. Nonetheless, if partners benefit then certainly that qualifies as an external benefit, though usually a relatively narrow one.

Among the surprisingly few academic studies examining data pertaining to technology transfer success, in either a federal laboratory or a university setting, the vast majority employ Out-the-Door measures (see for example, Thursby et al., 2001; Siegel et al., 2003; Anderson et al., 2007; Park et al., 2010; Heisey and Adelman, 2011). A typical approach is Jaffe and Lerner's (2001). The authors examine patenting results for 23 Department of Energy federally funded research and development centers (FFRDCs, i.e. US public research



institutes) seeking to determine factors related to the volume of patenting, with no analysis of the impacts of the patents. Adams et al. (2003) provide another study focusing on federal laboratories and Cooperative Research and Development Agreements (CRADAs, i.e. arrangements between a US public research institute and a company to engage in collaborative R&D). They employ survey data for two years (1996 and 1998). The sample for the survey is based on federal laboratory CRADA partners. They find that CRADAs stimulate both industrial patents and industrial R&D and do so to a greater extent than other technology transfer mechanisms. Thus, the Adams et al. (2003) study, focusing as it does on impacts internal to the firm, is viewed as Out-the-Door with Transfer Partner

Impacts.

Most published technology transfer studies focus on university

technology transfer and IP activity, perhaps because of the availability of data compiled by the Association of University Technology Managers (AUTM). Thus, for example, Powers (2003) analyzes 108

universities and finds that the number of licenses produced relates to the technology transfer offices' year of origin and to higher levels of R&D funding. Powers also examines revenues from licenses and finds that the sizes of technology transfer offices predict license revenue (and, thus, the study falls in the Out-the-Door with Transfer Agent Impacts category). Caldera and Debande (2010) find that the size of the technology transfer office in 52 Spanish universities is associated with greater R&D income, spinoffs, and licensing activity although not licensing revenue. While license activity and revenue do not necessarily provide evidence of impacts outside the transferring institution (for example, companies could pay for a license to suppress activity) it is likely that license revenue is usually an indication of external impacts. Whether the impacts are in the Economic Development category is a question unanswered here. Moreover, it is even unclear whether spinoffs, which do commonly fall into the Economic Development category, actually lead to broader economic outcomes for a region considering the propensity of these spinoffs to fail or, in the pursuit of proximity to financing and markets, move to another region (Breznitz and Taylor, 2009).
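Out-the-Door studies of this kind typically regress counts of patents or licenses on characteristics of the laboratory or technology transfer office. The sketch below shows one hypothetical way such a count model might be specified; the variable names and synthetic data are ours, not those of Jaffe and Lerner (2001), Powers (2003), or Caldera and Debande (2010).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic cross-section of hypothetical technology transfer offices (TTOs).
rng = np.random.default_rng(0)
n = 108  # same order of magnitude as the Powers (2003) sample, purely illustrative
df = pd.DataFrame({
    "tto_age_years": rng.integers(1, 30, n),        # years since the office was founded
    "research_funding_m": rng.gamma(2.0, 50.0, n),  # annual R&D funding, $M (invented)
    "tto_staff": rng.integers(1, 20, n),            # office size
})

# Generate license counts from a Poisson process so the example is self-contained.
lam = np.exp(0.02 * df.tto_age_years + 0.004 * df.research_funding_m + 0.05 * df.tto_staff)
df["licenses"] = rng.poisson(lam)

# Poisson regression of license counts on TTO characteristics.
X = sm.add_constant(df[["tto_age_years", "research_funding_m", "tto_staff"]])
model = sm.GLM(df["licenses"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```

A count model like this captures "Out-the-Door" activity only; nothing in the specification speaks to downstream impacts, which is precisely the limitation discussed below.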

Despite obvious disadvantages to the Out-the-Door criterion, the model has a certain compelling logic. Depending upon whom one views as the transfer agent, care must be taken to give some account of the agents' domain of control. To put it another way, a technology transfer agent such as an Office of Research and Technology Applications (ORTA) officer (i.e. a technology transfer officer for US public research institutes) typically has a domain of influence but a limited one. For example, the ORTA office may have some capability of strategic choice among technology options, may be able to induce work on selected technologies, and may be able to develop good administrative and business practices such that technology transfer can be facilitated. However, there are many other factors over which the technology transfer agent may have no control, particularly the ability of firms to effectively develop and market technology or the ability of firms to manage products once they have been brought to market.

To be sure, some might argue that the technology transfer agent is at least partly culpable if it transfers technologies to companies who have inadequate capital, manufacturing ability, or market savvy to make a good technology into a good, profitable product. However, since the transfer agent certainly does not control the transfer partner (or in many instances even have much influence on the partner) and since many transfer agents have limited or no background in market forecasting (Piper and Naghshpour, 1996; Franza and Srivastava, 2009), it does not seem reasonable to hold the agent and its technology transfer professionals responsible for the actions or inactions of partnering firms.

The expansion beyond the Pure Out-the-Door category to consider impacts on, respectively, transfer agents and transfer partners suggests that the Out-the-Door model has some reach and viability. Likewise, the obvious fact that technology transfer agents have clearly limited domains of control over the actions of transfer partners means that the criterion has some common sense appeal. Nevertheless, we must consider this: if one uses only Out-the-Door criteria one will likely never have direct knowledge that the technology transfer activities have achieved the goals of having economic and social impacts beyond those accruing to the technology transfer partnership. Conceivably, despite the inferences one might wish to make, it is possible that in many instances simply getting technology out the door achieves little beneficial impact and, absent more intensive analysis, may actually do harm. For example, in one case study (Kingsley and Farmer, 1997) of state government transfer of a transportation technology, it was determined that the technology had been successfully transferred to a firm and for years the transfer was viewed as a major success. Only later was it learned that the technology was in short order sold by the acquiring company to a foreign firm, which used it to develop a strong competitive edge against U.S.-based firms, arguably driving some out of business. For many years (the technology is now being used in the U.S.) the transfer had a significant negative economic effect on U.S. firms. Was the technology transferred? Yes. Was it beneficial? Only if one provides an expanded geography of benefit.

Despite its critical limitations, the Out-the-Door model is, arguably, the most commonly used criterion and the basis for most metrics employed for technology transfer. The Out-the-Door model's popularity seemingly goes hand-in-hand with the desire for objective measures or metrics to evaluate or track technology transfer. To be sure, data derived from pure Out-the-Door, Out-the-Door with Transfer Partner Impacts, and Out-the-Door with Transfer Agent Impacts measures could prove extremely useful. They are certainly good indicators of levels of technology transfer activity, but as stated they do not provide information about downstream impacts and outcomes. While most technology transfer participants well understand that just getting technology or IP out the door certainly does not imply that there will be any beneficial effect from the transfer, they are equally aware of the difficulty of measuring technology transfer by any other means. Moreover, many technology transfer officers feel that their activities, even when quite valuable, may not have early, measurable returns. As the U.S. Government Accountability Office (GAO), the investigative and evaluative arm of the US Congress, noted more than a decade ago:

(E)xperts in research measurement have tried for years todevelop indicators that would provide a measure of results ofR&D. However the very nature of the innovation process makesmeasuring the performance of science-related projects difficult.For example, a wide range of factors determine if and when aparticular R&D project will result in commercial or other bene-fits. It can also take many years for a research project to achieveresults (GAO 1989).

Nevertheless, the demand for accountability and effectiveness measures is unlikely to be deterred by the challenge of developing timely, valid measures. Nor should it be. Federal laboratories and others in the technology transfer chain are not likely to receive a "pass" just because their results typically require more time to gestate and fully develop. Witness the 2011 memorandum from the US Executive Office of the President directed to the heads of R&D-performing executive agencies and departments (US White House Office of the Press Secretary, 2011), which called for improving results of technology transfer and commercialization from the agencies through tracking performance against metrics as well as through streamlining the technology transfer process and partnering with state and local institutions. However, one reaction to the

need to develop metrics for near-term results is that these types of metrics are often developed to measure activity, not impacts.

3.2. Market impact/economic development criterion for technology transfer effectiveness

The "Market Impact/Economic Development" criterion focuses on (1) the commercial success of the transferred technology including (2) impacts on regional and/or national economic growth. Hereafter, the simpler term, Market Impact, will be used to signify either. Generally, market impact pertains to commercial results obtained by a single firm or a few firms. However, much of the technology transfer activity undertaken by government agencies, as well as by universities, is rationalized by broader economic multipliers assumed to flow from technology transfer.

To a large extent the Market Impact criterion is the 'gold standard' for technology transfer effectiveness evaluation. For instance, to a large extent federal policy reflects quite comfortably the idea that economic impact is de facto social impact and that economic growth accruing from science and technology policy investments is inherently good. Not all agree, but the Obama administration, like virtually every Presidential administration before it, is on record articulating that science and technology run the "engine for economic growth" in the US and economic growth is the cardinal value for a great many federal programs. As noted in President Obama's speech on November 23, 2009, announcing the "Educate to Innovate" policy initiative: "Reaffirming and strengthening America's role as the world's engine of scientific discovery and technological innovation is essential to meeting the challenges of this century."3 Moreover, while much of the language of the aforementioned memorandum on technology transfer (US White House Office of the Press Secretary, 2011) is actually quite broad,

so much so that it seems to encompass nearly all the effectiveness criteria presented here, the more specific terminology focuses on economic impacts. Thus, the memo articulates the quite general goal "to foster innovation by increasing the rate of technology transfer and the economic and societal impact from Federal R&D investments" (US White House Office of the Press Secretary, 2011, p. 1), but when attention is turned to measures and metrics those identified as examples are ones chiefly relating to or supporting economic and marketplace impacts:

These goals, metrics, and evaluation methods may vary byagency as appropriate to that agency’s mission and types ofresearch activities, and may include the number and qualityof, among other things, invention disclosures, licenses issuedon existing patents, Cooperative Research and DevelopmentAgreements (CRADAs), industry partnerships, new products,and successful self sustaining spinoff companies created forsuch products (US White House Office of the Press Secretary,2011, p. 1–2).

As mentioned in the discussion relating to the addition of the Public Value criterion, we see that economic effectiveness criteria should perhaps not pre-empt all others. Nevertheless, it is clearly the case that most technology transfer policy is to a large extent rationalized by its economic impacts. The use of science and technology policy and, specifically, technology transfer to spur economic development has sound basis in many public laws and policy documents and strong support from the general public (Seely, 2003).

3 The White House, Office of the Press Secretary, "President Obama Launches 'Educate to Innovate' Campaign for Excellence in Science, Technology, Engineering & Math (STEM) Education," November 23, 2009, downloaded January 24, 2013, from: http://www.whitehouse.gov/the-press-office/president-obama-launches-educate-innovate-campaign-excellence-science-technology-en.


Even if the Market Impact model is the gold standard for effectiveness, it can in some instances prove to be fool's gold. An important problem with the Market Impact criterion is misattribution of success and poor understanding of failure. If a particular instance of transfer is not commercially successful, is it because the product or process transferred is of limited value? Perhaps. But the failure may be owing to such factors as the recipient organization's problems in development, manufacturing, marketing, or strategy. Thus, if a new drill bit project enables deeper drilling, opening up North Sea oil exploration (Link, 1995), how much does one credit the project versus prior science? If a firm that has been working for years on automobile battery technology finally, with the help of a federal laboratory CRADA-based partnership, works with a university consortium to produce a better battery and then brings it to market, how does one sort out the various contributions (Sperling, 2001; Sperling and Gordon, 2008)? How quickly would the technology have developed if not for the project? Most important, if a U.S.-developed technology provides great benefits abroad, what does that do to the accounting? Analytical precision and close accountings are nearly impossible.

A number of studies employ the Market Impact model in assessing technology transfer effectiveness. However, the studies are not recent ones. Among the older studies, Bozeman and colleagues (Bozeman et al., 1995, 1999; Crow and Bozeman, 1998; Bozeman, 1994, 1997) and Roessner and his colleagues (Feller and Roessner, 1995; Roessner and Bean, 1991) provide consistent evidence from different data sources that federal laboratory partnerships yield a great deal of economic value in the transfer of knowledge. Some studies (e.g. Bozeman et al., 1995; Link and Scott, 2001; Meyers et al., 2003) go so far as to offer cost–benefit estimates. Typical among these earlier studies is Bozeman et al.'s (1995) study of 219 federal laboratory partnerships, most of them based on CRADAs. They find that the mean value for company managers' estimates of net economic benefits to the firm is approximately $1.5 million per project, whereas the median estimate is zero. This implies that such partnerships yield a few "big winners" and quite a lot of "no impact" projects.
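The gap between a mean of roughly $1.5 million and a median of zero reflects a highly skewed benefit distribution. The toy numbers below (entirely invented, not the study's data) show how a handful of large winners can dominate the mean while the typical project shows no measurable impact.

```python
import statistics

# Invented project-level benefit estimates ($ millions): most projects report
# no net benefit, a few report very large ones.
benefits = [0, 0, 0, 0, 0, 0, 0, 0.2, 5, 12]

print(statistics.mean(benefits))    # ~1.72: a mean benefit well above $1M
print(statistics.median(benefits))  # 0: the typical project shows nothing
```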

During the past decade or so, several technology transfer evaluation studies have been produced using the Market Impact model and based on economic impact measures. Almost all of these studies have focused on university technology transfer, and many of these employ the AUTM database. Roessner and colleagues (2013) use the AUTM annual surveys from 1996 to 2010 and economic input–output models to find that the impact of university licensing to the U.S. economy during that period is in excess of $162.1 billion and that jobs created over the same period range from 7000 to 23,000 per year. Using those same AUTM surveys, Cardozo et al. (2011) examine aggregate university activity and find that growth in revenues seems to have crested as technology transfer processes have become more costly and less efficient. In one of the few recent publications using Economic Impact criteria and focusing on federal agencies, Rowe and Temple (2011) conduct a smaller-scale study focused on 11 firms from the semiconductor industry partnering with NIST. Their interviews and cost–benefit analysis show that the NIST projects had benefits well in excess of the full cost of the projects.
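Estimates such as Roessner et al.'s rest on economic input–output modeling, in which an increase in final demand attributable to licensed products is propagated through inter-industry linkages. A minimal Leontief-style sketch, with made-up coefficients and demand figures rather than anything drawn from the cited studies, illustrates the basic mechanics.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A: A[i, j] is the input
# from sector i required per dollar of output in sector j.
A = np.array([
    [0.10, 0.05, 0.02],
    [0.20, 0.15, 0.10],
    [0.05, 0.10, 0.05],
])

# Made-up increase in final demand ($M) attributed to licensed products.
delta_demand = np.array([10.0, 5.0, 2.0])

# The Leontief inverse converts the demand shock into total output across sectors,
# capturing indirect (supplier) effects as well as the direct sales.
leontief_inverse = np.linalg.inv(np.eye(3) - A)
total_output = leontief_inverse @ delta_demand
print(total_output, total_output.sum())
```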

3.3. Political reward criterion for technology transfer effectiveness

The Political Reward criterion receives relatively little attentionin the literature but is worth mentioning. Parties to technologytransfer think in terms of possible political rewards accruing from

compliance or from ‘good citizen’ activities. During various on-site interviews (Crow and Bozeman, 1998), university and federallaboratory officials have on many occasions made direct or, morefrequently, indirect reference to the political pay-offs expected

from technology transfer activities. Technology transfer activities are often seen as a way to curry favor or enhance political support rather than as a means of providing significant economic and social benefit. In this sense it is a means, not an end (Rogers et al., 2001; Guston, 2007).

As noted previously (Bozeman, 2000), there are at least three possible avenues to political reward. In the least likely of scenarios, a transfer agent is rewarded because the technology it has transferred has considerable national or regional socio-economic impact and the agent's role in developing and transferring the technology is recognized by policy superiors and, in turn, the transferring entity is rewarded with increased funding or other resources. This scenario is not unprecedented but does not commonly occur. In the first place, few technologies have such an impact. But even when there are huge impacts from technology transfer, funding processes usually do not respond to even documented 'big successes.'

Another way in which the Political Reward criterion may yield resource results for the transfer agent is through the transfer recipient. Under this scenario, the organization or industry benefiting from the technology transfer communicates to policymakers the value of its interaction with the technology transfer partner. The policymaker then, in turn, rewards the transfer agent for being a "good industrial partner." There is evidence of such political reward but, understandably, it is based on rumors and anecdotes.

Probably the most common and realistic rationale under the Political Reward criterion is for the transfer agent to be rewarded for the appearance of active and aggressive pursuit of technology transfer and commercial success. In this case, the Political Reward criterion turns out to be much the same as Out-the-Door: activity is its own reward. Much bureaucratic behavior seems to support this view. For example, often federal laboratories are as active in publicizing their technology transfer and economic development activities as in actually doing the transfer work.

3.4. Opportunity cost criterion for technology transfer effectiveness

When considering technology transfer activities it is well worth recognizing that technology transfer is one of many missions of an agency or organization, and often not the one viewed as the most important. For instance, in hundreds of interviews with federal laboratory scientists Crow and Bozeman (1998) found a wide range of perspectives on technology transfer, ranging from enthusiasm and avid participation to outright hostility and cynicism. Even as technology transfer activity is enhanced and nurtured, it remains important to understand that technology transfer takes its place, and often a secondary place, to missions such as the advance of basic research and scientific theory, providing equipment and infrastructure for the growth of scientific knowledge, training scientists and engineers, and, in the case of government agencies, ensuring the nation can perform its defense, national security, public health and energy missions.

While it is easy enough to understand the fact of opportunity costs in technology transfer, it is not so easy to draw practical lessons about technology transfer measures and metrics. The National Science Foundation (NSF) established the Innovation Corps (I-Corps) program in 2011 to speed technology transfer of NSF research, particularly in light of limitations on the ability of professors to successfully start up and run technology-based firms originating out of their publicly-funded research. I-Corps puts together teams of would-be entrepreneurs (such as students of these professors), the professors who were principal investigators on NSF grants, and mentors experienced in technology transfer, and takes them through a six-week training process using lean startup and customer discovery processes reflected in the popular business press (Ries, 2011; Blank, 2013). Success concerns not only "go" decisions, or validation from customers that there is a viable business model, but also "no go" decisions that there is no market for the technology, decisions reached more quickly and without significant expenditure of technology transfer resources, thereby presumably reducing opportunity costs.

The literature on university technology transfer gives attention to this criterion, especially in relation to possible impacts on individual researchers' research agendas (Bercovitz and Feldman, 2008), teaching responsibilities (Mendoza, 2007) and, more generally, organizational culture (Lee and Gaertner, 1994; Slaughter and Rhoades, 2004). Few recent studies focus directly on opportunity costs and technology transfer. However, Saavedra and Bozeman's (2004) study of federal laboratories and Woerter's studies of university–industry activity do employ contingency-oriented models and show that certain "portfolios" of technical activity are more productive than others. That is, while some federal laboratories are, because of their technical focus, able to engage in technology transfer activities with win-win results (for both the technology transfer and for their other technical missions), other labs suffer declines in effectiveness in some of their technical missions with an increase in technology transfer.

3.5. Scientific and technical human capital criterion for technology transfer effectiveness

A premise of the Scientific and Technical Human Capital model is that one of the most critical objectives in almost all aspects of science and technology policy is building human and institutional capabilities, even aside from particular accomplishments reflected in discrete knowledge and technology outputs (Bozeman et al., 2001). The focus of Scientific and Technical Human Capital (hereafter STHC) is on long-term capacity building. Indeed, a deep understanding of the value of scientific and technical knowledge requires a view of the role of scientific and technical human capital in the capacity for producing scientific work (Audretsch and Stephan, 1999; Corolleur et al., 2004; Canibano et al., 2008) and an understanding that all such work is produced in networks (Casper and Murray, 2005). The formal and informal networks of scientists, engineers and knowledge users depend upon the conjoining of equipment, material resources, organizational and institutional arrangements for work, and the unique human capital embodied in individuals (Dietz and Bozeman, 2005; Rigby and Edler, 2005; Ponomariov and Boardman, 2010). At any level, from the individual scientist to organizational actor, network, or entire fields, knowledge value is capacity: capacity to create new knowledge and technology (Bozeman et al., 2001).

Capacity is revealed through the changing patterns of the scientific and technical human capital footprints individuals leave behind throughout their careers. Dietz and Bozeman (2005) and Gaughan and Ponomariov (2008) define STHC as the sum total of personal skills, knowledge, and the social resources scientists and engineers bring to, and develop from, their work. Thus, STHC includes not only the individual human capital endowments traditionally included in labor models (e.g. Becker, 1964; Schultz, 1963), but also the individual scientist's tacit knowledge (Polanyi, 1969; Senker, 1995), craft knowledge, and know-how (Bidault and Fischer, 1994). STHC further includes the social capital (Coleman, 1988) that scientists inevitably draw upon in framing research and technological questions, creating knowledge, and developing social and economic certifications for knowledge (Fountain, 1998; Landry et al., 2002).

As mentioned, much of scientific and technical human capital is embedded in social and professional networks or technological communities (Liyanage, 1995; Murray, 2002). These networks integrate and shape scientific careers. They provide knowledge of scientists' and engineers' work activities, serve as resources for job

opportunities and job mobility, and reveal possible applications for scientific and technical work products. Increasing STHC generally enhances individuals' capacities while simultaneously increasing the capacity of networks of knowledge and technology producers.

Some technology transfer professionals, especially those in government agencies (Bozeman and Rogers, 2001; Rogers and Bozeman, 1997), take the view that technology transfer, even if it does not have immediate effects from discrete projects, helps build capacity within either a geographic area, a scientific and technical field or an institution (Fritsch and Kauffeld-Monz, 2010; Florida et al., 2010). For these reasons, among others, Autio and Laamanen (1995) and Sala et al. (2011) argue that evaluation of technology transfer is most appropriately directed to impacts on networks of interconnected scientific and commercial actors.

While there are no technology transfer assessments based exclusively on an STHC model, there are a few studies in which STHC plays a significant role. One study of Italian research centers (Coccia and Rolfo, 2002) focuses on the complementary roles of research, education, and training and documents interdependent impacts. Edler and colleagues (2011) find that 950 German academics' visits outside of their home country did not 'crowd out' but rather complemented knowledge and technology transfer activities to firms in Germany. Focusing on university researchers affiliated with interdisciplinary centers, Lin and Bozeman (2006) employ an

STHC model to identify the impacts of industrial interaction on university researchers' careers and their productivity. In another study employing an STHC model, but not for technology transfer assessment, Bozeman and Corley (2004) examine the impacts of university researchers' collaborations on their accumulated STHC. Perhaps the only full-scale STHC research assessments are those produced by Youtie and colleagues (2006) and by Gaughan and Ponomariov (2008), both focusing on knowledge impacts from NIH research centers. Youtie and colleagues employ qualitative methodologies to trace the growth of collaborations and network activity resulting from research sponsored by the NIH's National Institute of Child Health and Human Development. Gaughan and Ponomariov provide a quantitative, time-series analysis (hazard models) of university faculty curricula vitae to show the impacts of research center affiliation on the accumulation of STHC.
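Hazard models of this kind treat a career milestone extracted from a CV as a time-to-event outcome and ask whether covariates such as center affiliation shift the hazard of reaching it. The sketch below is a generic illustration of that logic, not a reconstruction of Gaughan and Ponomariov's specification: the milestone, variable names, and data are invented, and it assumes the lifelines package is installed.

```python
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines package is available

# Hypothetical CV-derived records: years from PhD until a chosen STHC milestone,
# whether the event was observed (1) or censored (0), and center affiliation.
cv = pd.DataFrame({
    "years_to_event":    [3, 7, 5, 10, 2, 8, 6, 4, 9, 5],
    "event_observed":    [1, 0, 1, 1,  1, 0, 1, 0, 1, 1],
    "center_affiliated": [1, 0, 1, 0,  1, 1, 0, 1, 0, 0],
})

# Cox proportional hazards model: remaining columns are treated as covariates.
cph = CoxPHFitter()
cph.fit(cv, duration_col="years_to_event", event_col="event_observed")
cph.print_summary()  # hazard ratio for center affiliation (illustrative only)
```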

3.6. Public value criterion for technology transfer effectiveness

The term "public value" has many meanings and implications (Bozeman, 2002, 2007; Benington and Moore, 2010). Some use the term as equivalent to the collective good, others in connection with the public interest, and still others as a sort of residual category for commodities not encompassed in either private value or markets (Jørgensen and Bozeman, 2007). At the broadest level, we can begin with, and then build upon, a public values definition provided elsewhere (Bozeman, 2007, p. 37):

"A society's "public values" are those providing normative consensus about (1) the rights, benefits, and prerogatives to which citizens should (and should not) be entitled; (2) the obligations of citizens to society, the state and one another; and (3) the principles on which governments and policies should be based."

While this definition has some merit for present purposes, it shows that public values may be the most fundamental criterion upon which to evaluate nearly any public policy. Its practical use as a criterion for technology transfer is quite limited, however. Could public value possibly be subverted in the case of technology transfer? A couple of examples will perhaps suffice. In the case of university–industry technology transfer, a cornerstone of so-called "academic capitalism," some critics (Kleinman, 2003; Slaughter and Rhoades, 2004; Henkel, 2005) have alleged that the increased commercialization of universities has undermined the core


educational mission of universities. In reflecting on possible impacts of universities' technology development and transfer roles, former Harvard University president Derek Bok (2003, p. 106) warns: "Even the appearance of hiring professors for commercial reasons will lower the morale of the faculty and diminish the reputations of the university[.]" The limited number of studies providing systematic empirical evidence (Stephan, 2001; Ponomariov, 2009; Bozeman and Boardman, in press) on the impact of university technology commercialization and transfer activities on university educational missions shows that impacts are diverse, sometimes undermining education but in other cases augmenting the mission. But the criticism remains worth noting: leaders must be vigilant that the primary public value of universities, education, not be undermined by the secondary economic value of technology commercialization and transfer. This thwarting of public values can happen in federal laboratories as well. For example, if technology transfer activities undermine national security then there has been a supplanting of public values (Mowery, 1988; Aronowitz, 1999; Jaffe and Lerner, 2001; Kassicieh et al., 2002; Evans and Valdivia, 2012). Likewise, if the private entrepreneurship enabled under the Stevenson-Wydler Act (U.S. Congress, 1980, 1984a,b, 1986) were to diminish the core research capabilities of federal laboratories' corporate research mission, here, too, would be a thwarting of public values (see Coursey and Bozeman, 1992; Butler and Birley, 1998).

Overall, the "public values" criterion can be thought of as the "keep-your-eye-on-the-prize" criterion in the sense that it focuses on provision of beneficial public outcomes as opposed to the lesser value of organizational goal achievement. To this end, as previously mentioned, the public values criterion is consistent with recent emphasis on responsible research and innovation. For example, von Schomberg (2013) anchored responsible innovation within the ethical promotion of social justice, sustainable development, and socially desirable quality of life. Rayner et al. (2013) put forth "The Oxford Principles" in the UK, which emphasized public good regulation, public participation, research disclosure, independent assessment, and governance of widescale deployment of geoengineering. Roco et al. (2011) emphasized four characteristics of responsible innovation: that it be transformative across disciplines and sectors; that it consider equitable access and environmental, health, and safety concerns; that it include participation across governmental agencies and other stakeholders; and that it take a long-term perspective with measures that anticipate and adapt. Randles et al. (2012) found that, in a Trans-Atlantic panel of European and US nanotechnology societal researchers, those from Europe were more apt to enter the concept through philosophy and principles while those from the US emphasized points of practical intervention, but both groups regarded beneficial outcomes beyond moving bench science along to be important aspects of responsible innovation.

Public value is a difficult and elusive criterion in terms of evaluation. Recently, there have been efforts to move from the realm of broad values discourse to application (Bozeman and Sarewitz, 2005; Slade, 2011; Valdivia, 2011). Bozeman and Sarewitz (2011) suggest that concerns about economic productivity have been dominant in science and technology policies and their assessment and that there is a need for greater infusion of public values in science and technology policy. These researchers ground their work in public value failure theory (Bozeman, 2002, 2007), and the theory also offers some purchase for practically evaluating technology transfer in terms of public values. For instance, Valdivia’s (2011) approach shows promise. He employs the Public Value model in connection with university technology transfer. Specifically, he evaluates the Bayh-Dole Act using public value failure theory and its attendant analytical approach, Public Value Mapping (PVM). The overarching idea is that public values can be identified by analyzing appropriate documents and other media. Once values are identified, it is then possible to ascertain whether a particular policy has failed to achieve them by analyzing the corresponding social outcome indicators (notably, these are sometimes difficult to measure and/or develop). Another application of PVM includes identifying the process that links a particular policy or program to a public value outcome. Maricle (2011) labels these sets of linkages an organization’s “public value logic”. This idea is particularly pertinent to developing a public value failure approach to evaluating technology transfer activities. Although these activities are not always expected to be direct mechanisms for achieving public value, they are linked to public value pursuits through the public value logic that the organization or entity employs. Public value failure theory, therefore, suggests mapping public value logics and using case study methodology to assess whether technology transfer activities successfully fulfill their specified role.

4. Conclusions and recommendations

In this concluding section a number of recommendations are provided on the basis of implications of the literature reviewed here. They focus on general issues in assessing technology transfer effectiveness.

(1) Making the most of Out-the-Door. While the Out-the-Door model of effectiveness is not ideal, it is realistic and useful (Geisler, 1994; Lepori, 2006). For agencies able to develop large-scale, contract-out, resource-intensive technology transfer assessment regimes, Out-the-Door criteria can be improved upon. But for agencies facing personnel scarcity, limited in-house evaluation personnel, and no budget increment for external evaluation contracting, it seems likely that the Out-the-Door model will continue to be the primary basis of any measurement activity (Geisler, 1994). Given these realities, the recommendation is for Out-the-Door done right. Ways to do this include the following:

In recognition of the fact that some technology transfer outcomes are going to occur in streams of benefits and costs realized over time, there is no more vital Out-the-Door activity than providing good periodic benchmarks. If measures of activity are going to dominate metrics, then those measures need to be as precise as possible and need to be tracked over time. A good number of the agencies’ responses recognize the importance of quality, valid benchmark measures. For example, in the U.S. Department of Energy’s plans (U.S. Department of Energy, 2012) for technology transfer metrics, one of the criteria is “patenting effectiveness.” But rather than simply reporting the number of patents, they plan to report the ratio of patents in a given year to patent applications filed for a three-year base period, using a rolling three-year average as new metrics are reported.
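To make the kind of benchmark described above concrete, the following is a minimal Python sketch; all counts are hypothetical illustrations, not DOE figures, and the function and variable names are ours. It computes the yearly ratio of patents granted to applications filed over the preceding three-year base period, along with a rolling three-year average of that ratio.

# Minimal sketch of a rolling "patenting effectiveness" benchmark: patents
# granted in a year relative to applications filed over a three-year base
# period, plus a rolling three-year average of the ratio.
# All counts below are hypothetical illustrations.

patents_granted = {2008: 40, 2009: 45, 2010: 52, 2011: 48, 2012: 60}
applications_filed = {2005: 110, 2006: 120, 2007: 125, 2008: 130,
                      2009: 140, 2010: 150, 2011: 145, 2012: 155}

def patenting_effectiveness(year, granted, filed, base_years=3):
    """Patents granted in `year` divided by applications filed in the
    `base_years` immediately preceding years (the base period)."""
    base = sum(filed[y] for y in range(year - base_years, year))
    return granted[year] / base

years = sorted(patents_granted)
ratios = {y: patenting_effectiveness(y, patents_granted, applications_filed)
          for y in years}

# Track the benchmark over time so year-to-year changes are visible.
for i, year in enumerate(years):
    window = [ratios[y] for y in years[max(0, i - 2): i + 1]]
    rolling = sum(window) / len(window)
    print(f"{year}: ratio = {ratios[year]:.3f}, rolling 3-yr avg = {rolling:.3f}")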

Surprisingly few sets of Out-the-Door measures and metrics developed thus far give any consideration to the resources agencies and their facilities bring to technology transfer activities. It is not useful, and may even be counterproductive, to show that the number of licenses has declined over a given time period when, in fact, that decline may be owing to a sharp reduction of the technology transfer personnel available. For any valid inference about effectiveness, activity measures must relate to resource measures.
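In the same spirit, a minimal sketch (again with hypothetical counts) of relating an activity measure to a resource measure, so that an apparent decline in licensing can be read against staffing levels:

# Relating an activity measure (licenses executed) to a resource measure
# (technology transfer staff FTEs). Hypothetical counts: the raw license
# count falls in 2012 while licenses per FTE actually rise after a staffing cut.

licenses = {2010: 30, 2011: 32, 2012: 20}
staff_fte = {2010: 10, 2011: 10, 2012: 6}

for year in sorted(licenses):
    per_fte = licenses[year] / staff_fte[year]
    print(f"{year}: {licenses[year]} licenses, {staff_fte[year]} FTE, "
          f"{per_fte:.1f} licenses per FTE")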

Perhaps it is time to move away from what are referred to here as Pure Out-the-Door measures. While it is sometimes exceedingly difficult to document particular causes and effects, it is possible and useful to at least develop measures of Out-the-Door Transfer Agent Impacts and Out-the-Door Transfer Recipient Impacts. These types of measures can likely be gathered and recorded even absent a large cadre of evaluation specialists available to that purpose. For example, in the case of Transfer Recipient Impacts there may be desirable changes that do not immediately and directly translate into market impacts. For instance, in working with a particular company a federal lab may have a strong impact on training firms’ personnel, benefits that will never show up directly and obviously in market indicators but that nonetheless have the potential to provide major advantages. Similarly, technology transfer recipients often benefit enormously from using state-of-the-art or even unique scientific equipment and instruments made available to them by a transfer agent. Such benefits are out-the-door impacts, not (direct) market impacts, and are well worth capturing. (For a discussion of the indirect impacts of federal laboratories on industry partners see Adams et al., 2003.)

(2) Identification of expected ranges of impact. A common problem for most evaluation efforts, including attempts to evaluate technology transfer impacts, is the failure to understand the domain of influence of the “intervention” (Midgley, 2006; Schalock and Bonham, 2003). If at the beginning of a technology transfer effort there is at least some attention to providing a rationale for the expected domain of influence of the transfer, then there is a guidepost to help one understand the diffusion of impacts. Absent such guideposts, it is altogether natural to claim impacts of great breadth when, in fact, the technology transfer activity is one significant event in a multi-causal chain of events. Equally important, having a pre-established hypothesis about domain of influence leads to subsequent cues for obtaining evidence of influence. An impact theory is a useful precursor to any attempt to measure impact. In developing impact measures, the analyst does well to ask questions such as: (1) “What set of causal assumptions need be true for impacts to occur?” (2) “What is the likely chronology of impact; when should benefits begin to occur and why then?” (3) “What are the alternative causes that could result in this impact that seems to be caused by our technology transfer efforts?” (i.e., alternative plausible hypotheses). Indeed, it may be worthwhile to routinely pose or even require answers to these and similar questions as part of any effort to measure out-the-door technology impacts or market impacts.

One way to accomplish this is to increase the use of logic models and mapping techniques. Systems of indicators are more valuable than lone, discrete indicators. Systems of indicators brought together in logic models or mapping systems are more valuable yet (Cooksy et al., 2001; Schalock and Bonham, 2003). Logic models require attention to explicit assumptions, requiring the analyst not simply to list but to show the presumed causal connections among inputs (e.g. federal laboratory technology), activities (e.g. marketing technologies), outputs (e.g. licenses), and impacts (e.g. new products developed by participating companies). Many textbooks on logic models include frameworks with specific templates that assure that temporally-relevant questions are asked and that causal assumptions are explicated and inter-related (see Frechtling, 2007, pp. 65–78).
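As a rough illustration of what an explicitly stated logic model might look like when encoded for analysis, the following sketch uses the example elements mentioned above; the structure, class name, and helper are illustrative assumptions on our part, not a prescribed template from the logic-modeling literature.

# Minimal sketch of a technology transfer logic model as a data structure:
# each stage lists its elements, and explicit links record the presumed
# causal connections among inputs, activities, outputs, and impacts.
# The entries are illustrative placeholders.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list
    activities: list
    outputs: list
    impacts: list
    # (from_element, to_element) pairs stating assumed causal connections
    links: list = field(default_factory=list)

    def unlinked_elements(self):
        """Flag elements no causal link touches, i.e. unstated assumptions."""
        touched = {e for link in self.links for e in link}
        all_elements = set(self.inputs + self.activities + self.outputs + self.impacts)
        return sorted(all_elements - touched)

model = LogicModel(
    inputs=["federal laboratory technology"],
    activities=["marketing technologies"],
    outputs=["licenses"],
    impacts=["new products developed by participating companies"],
    links=[
        ("federal laboratory technology", "marketing technologies"),
        ("marketing technologies", "licenses"),
        ("licenses", "new products developed by participating companies"),
    ],
)

print("Elements without an explicit causal link:", model.unlinked_elements())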

(3) Further development of scientific and technical human capital indicators. Research evaluators and program managers have known for some time that it is often at least as valuable to enhance the capacities of organizations or knowledge-producing communities as to provide beneficial direct outputs. If a small company develops the capacity to use computer-aided machine tools, that capacity may provide a stream of benefits stretching out for many years. Some R&D managers assume that if knowledge producers’ capacity is fully developed then good things happen with the level of production and the quality of outputs and, indeed, there is at least some evidence for this capacity focus (e.g. Ponomariov and Boardman, 2010). Furthermore, it is sometimes easier to develop valid measures of scientific and technical human capital than valid measures of economic impact in over-determined systems of interrelated economic producers and consumers. Thus, for example, one could trace the career trajectories of researchers who have interacted with a federal laboratory, comparing those researchers to a group similar in every other respect except that they have not interacted with a federal laboratory. Using the laboratory interaction as an inflection point in time, it is possible to compare differences in one set of researchers (those interacting with the labs) with the other (who have not). With a sufficient sample size for valid “treatment” and “comparison” groups, any difference between the two sets’ career accomplishments could be owing to the resources and activities of their interactions with the federal laboratories. Previous studies have used curricula vitae as a convenient means of examining the impacts of such events on researchers’ careers (for examples of such applications see Bozeman and Gaughan, 2007; Canibano et al., 2008; Lepori and Probst, 2009).
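A minimal sketch of the kind of treatment/comparison contrast described above, using hypothetical CV-derived publication counts; it computes a simple pre/post difference-in-differences of mean annual output. A real evaluation would of course require careful matching, much larger samples, and significance testing.

# Researchers who interacted with a federal laboratory ("treatment") are
# compared with otherwise similar researchers who did not ("comparison"),
# with the lab interaction year as the inflection point.
# Publication counts below are hypothetical.

from statistics import mean

treatment = [
    {"pre": [1, 2, 1], "post": [3, 4, 3]},
    {"pre": [2, 2, 3], "post": [4, 5, 4]},
]
comparison = [
    {"pre": [1, 1, 2], "post": [2, 2, 2]},
    {"pre": [2, 3, 2], "post": [3, 3, 2]},
]

def mean_change(group):
    """Average within-researcher change in mean annual publications."""
    return mean([mean(r["post"]) - mean(r["pre"]) for r in group])

did = mean_change(treatment) - mean_change(comparison)
print(f"Treatment change: {mean_change(treatment):.2f}")
print(f"Comparison change: {mean_change(comparison):.2f}")
print(f"Difference-in-differences estimate: {did:.2f}")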

(4) Correlate process reforms and activity measures. If we take these two categories (process and activity) of indicators together, they comprise at least 90% of typically used performance metrics. The problem is that the two are not, under most plans, brought together. While everyone recognizes that correlation is not causation, it is at least of heuristic value to track activity measures against implemented changes in technology transfer processes and managerial approaches.

(5) Distinguish the relative importance of indicators associated with technology transfer by universities, government laboratories, and industry. Studies of technology transfer programs should adapt and customize their assessments to reflect differences in the orientation of the organizational home for the technology being transferred. The Contingent Effectiveness Model can be helpful in this regard. While all metrics are important – particularly, for example, “out-the-door” metrics – one can expect differences in emphasis for the other metrics. Universities could reasonably be expected to prioritize scientific and technical human capital measures and economic development measures. Government laboratories could be presumed to place greater weight on political considerations and public value effectiveness. Companies could be expected to be particularly concerned with market impact and opportunity costs. A pathway for future research is to examine these anticipated differences and, if they are supported by such research, to incorporate them into more customized evaluation designs.

(6) Increase attention to and development of public value effectiveness criteria. As indicated by the adaptation to the Contingent Effectiveness Model, the idea of systematic evaluation of technology transfer activities in terms of Public Value is relatively new. However, the emphasis that many agencies place on education and outreach impacts is to a certain extent related to Public Value criteria, as are anecdotal reports of social impacts of transferred technologies. Challenges to further development of public value indicators lie in their inherent measurement difficulties (e.g. Bozeman and Sarewitz, 2011; Gupta, 2002) as well as the relatively resource-intensive methods necessary to capture a picture of an organization’s public value pursuits. In spite of these difficulties, efforts to analyze technology transfer through the lens of public value are bearing some fruit (Costa-Font and Mossialos, 2006; Sorensen and Chambers, 2008; Rubenstein, 2003), and theories such as public value failure theory point to potential pathways to systematically develop and explore public value effectiveness measures.

Appendix A. Technology transfer literature organized by categories of the revised contingent effectiveness model

Entries list the effectiveness criterion, the in-text citation, the full citation, and the relevant findings.
Out-the-Door. Rogers et al. (1998). Rogers, Everett, Carayannis, Elias G., Kurihara, Kazuo, Allbritton, Marcel M., 1998. Cooperative Research and Development Agreements (CRADAs) as technology transfer mechanisms. R&D Management 28 (2), 79. Findings: Firms are critical of the amount of time and complexity necessary to form a CRADA.
Out-the-Door. Bercovitz et al. (2001). Bercovitz, Janet, Feldman, Maryann, Feller, Irwin, Burton, Richard, 2001. Organizational Structure as a Determinant of Academic Patent and Licensing Behavior: An Exploratory Study of Duke, Johns Hopkins, and Pennsylvania State Universities. The Journal of Technology Transfer 26 (1–2), 21–35. Findings: Differences in organizational structure and capacity result in differences in technology transfer activities in terms of patenting, leveraging, and the likelihood that customer firms overlap across university units.
Out-the-Door. Jaffe and Lerner (2001). Jaffe, Adam B., Lerner, Josh, 2001. Reinventing public R&D: patent policy and the commercialization of national laboratory technologies. RAND Journal of Economics 32 (1), 167–198. Findings: Federal technology transfer legislation and initiatives since the 1980s have had a significant effect on the number of patents produced by DOE labs without a commensurate decrease in patent quality.
Out-the-Door. Thursby et al. (2001). Thursby, Jerry G., Jensen, Richard, Thursby, Marie C., 2001. Objectives, Characteristics and Outcomes of University Licensing: A Survey of Major U.S. Universities. The Journal of Technology Transfer 26 (1–2), 59–72. Findings: For patents and sponsored research, size of the technology transfer office is positively associated with higher levels. For licenses, number of disclosures, size of the technology transfer office, and whether the university has a medical school are statistically significant. Also, the stage of technology development, size of the tech transfer office, and quality of the researchers are associated with greater royalty values.
Out-the-Door. Thursby and Kemp (2002). Thursby, Jerry G., Kemp, Sukanya, 2002. Growth and productive efficiency of university intellectual property licensing. Research Policy 31 (1), 109–124. Findings: Licensing has increased for reasons other than overall increases in university resources.

Out-the-Door. Adams et al. (2003). Adams, James D., Chiang, Eric P., Jensen, Jeffrey L., 2003. The Influence of Federal Laboratory R&D on Industrial Research. Review of Economics & Statistics 85 (4), 1003–1020. Findings: CRADAs stimulate industrial patents and industrial R&D, and do so to a much greater extent than other tech transfer mechanisms.
Out-the-Door. Friedman and Silberman (2003). Friedman, Joseph, Silberman, Jonathan, 2003. University Technology Transfer: Do Incentives, Management, and Location Matter? The Journal of Technology Transfer 28 (1), 17–30. Findings: Incentives for researchers, university location within a region with a concentration of high technology firms, a clear technology transfer mission, and previous technology transfer experience are positively associated with technology transfer performance.
Out-the-Door. Powers (2003). Powers, Joshua B., 2003. Commercializing Academic Research: Resource Effects on Performance of University Technology Transfer. The Journal of Higher Education 74 (1), 26–50. Findings: Universities with older technology transfer offices, higher quality researchers, and higher levels of R&D funding produce more patents. Those with older and larger technology transfer offices produce more licenses. Researcher quality and technology transfer office size are positively associated with license revenue.
Out-the-Door. Siegel et al. (2003). Siegel, Donald S., Waldman, David, Link, Albert, 2003. Assessing the impact of organizational practices on the relative productivity of university technology transfer offices: an exploratory study. Research Policy 32 (1), 27–48. Findings: Invention disclosures are positively associated with both number of licenses and license revenue. The size of the technology transfer office staff results in more licenses but not more revenue. Spending on external lawyers reduces the number of agreements but increases license revenue.
Out-the-Door. Chapple et al. (2005). Chapple, Wendy, Lockett, Andy, Siegel, Donald, Wright, Mike, 2005. Assessing the relative performance of U.K. university technology transfer offices: parametric and non-parametric evidence. Research Policy 34 (3), 369–384. Findings: University technology transfer offices in the U.K. are found to have low levels of efficiency and decreasing returns to scale.
Out-the-Door. Link and Siegel (2005). Link, Albert N., Siegel, Donald S., 2005. Generating science-based growth: an econometric analysis of the impact of organizational incentives on university–industry technology transfer. The European Journal of Finance 11 (3), 169–181. Findings: When licensing activities are the dependent variable, organizational incentives (financial incentives) impact technology transfer performance.
Out-the-Door. Anderson et al. (2007). Anderson, Timothy R., Daim, Tugrul U., Lavoie, Francois F., 2007. Measuring the efficiency of university technology transfer. Technovation 27 (5), 306–318. Findings: There are both efficient and inefficient universities in terms of comparing research expenditures and technology transfer outputs. Universities with medical schools tend to be less efficient than those without medical schools.
Out-the-Door. Mowery and Ziedonis (2007). Mowery, David C., Ziedonis, Arvids A., 2007. Academic patents and materials transfer agreements: substitutes or complements? The Journal of Technology Transfer 32 (3), 157–172. Findings: Materials Transfer Agreements at universities do not appear to inhibit patenting and licensing activities.
Out-the-Door. Fukugawa (2009). Fukugawa, Nobuya, 2009. Determinants of licensing activities of local public technology centers in Japan. Technovation 29 (12), 885–892. Findings: Determinants of licensing activity vary based on the phase of technology transfer. Budget size and previous technology transfer experience do not affect licensing. Employing high quality scientists promotes licensing of granted patents. Organizational efforts aimed at encouraging scientists to understand the needs of small businesses increase royalty revenues.
Out-the-Door. Swamidass and Vulasa (2009). Swamidass, Paul M., Vulasa, Venubabu, 2009. Why university inventions rarely produce income? Bottlenecks in university technology transfer. The Journal of Technology Transfer 34 (4), 343–363. Findings: Lack of resources in terms of staff and budget results in universities focusing on filing patent applications rather than licensing technologies.
Out-the-Door. Park et al. (2010). Park, Jong-Bok, Ryu, Tae-Kyu, Gibson, David V., 2010. Facilitating public-to-private technology transfer through consortia: initial evidence from Korea. The Journal of Technology Transfer 35 (2), 237–252. Findings: Membership in research consortia can increase the technology transfer performance (in terms of invention disclosures, patents, licenses executed, and royalties) of participating public sector research institutions.
Out-the-Door. Heisey and Adelman (2011). Heisey, Paul W., Adelman, Sarah W., 2011. Research expenditures, technology transfer activity, and university licensing revenue. The Journal of Technology Transfer 36 (1), 38–60. Findings: The study finds conflicting evidence of the short-term effect of research expenditures on licensing revenues. Both early initiation of a technology transfer program and technology transfer staff size positively affect expected licensing revenues; however, they appear to be substitutes.
Out-the-Door and Market Impact. Bozeman and Crow (1991a). Bozeman, Barry, Crow, Michael, 1991. Red tape and technology transfer in US government laboratories. The Journal of Technology Transfer 16 (2), 29–37. Findings: Labs involved in tech transfer do not have higher levels of red tape than other labs. Out-the-door measures of tech transfer success are associated with low levels of perceived red tape, and measures of market impact are associated with low levels of actual red tape in obtaining project funding and low-cost equipment.
Out-the-Door and Market Impact. Bozeman and Coker (1992). Bozeman, Barry, Coker, Karen, 1992. Assessing the effectiveness of technology transfer from US government R&D laboratories: the impact of market orientation. Technovation 12 (4), 239–255. Findings: Multi-faceted, multi-mission labs with low bureaucratization, ties to industry, and a commercial focus in project selection perform better on out-the-door and market impact effectiveness measures.

Out-the-Door and Market Impact. Bozeman (1994). Bozeman, Barry, 1994. Evaluating government technology transfer: Early impacts of the cooperative technology paradigm. Policy Studies Journal 22 (2), 322–337. Findings: There is wide variation in labs in regards to out-the-door and market impact measures of effectiveness, with some evidence supporting a concentration of success in a few labs. Lab technology transfer strategy and lab mission are correlated with effectiveness. Different measures of success do not correlate well with each other.
Out-the-Door and Market Impact. Caldera and Debande (2010). Caldera, Aida, Debande, Oliver, 2010. Performance of Spanish universities in technology transfer: An empirical analysis. Research Policy 39 (9), 1160–1173. Findings: The size of the technology transfer office in 52 Spanish universities is associated with greater R&D income, spinoffs, and licensing activity, although not licensing revenue.
Out-the-Door and Market Impact. Siegel et al. (2007). Siegel, D.S., Veugelers, R., Wright, M., 2007. Technology transfer offices and commercialization of university intellectual property: performance and policy implications. Oxford Review of Economic Policy 23 (4), 640–660. Findings: The researchers collect both quantitative and qualitative data on the relative efficiency of university technology transfer offices. They find that differences in performance can be attributed to both environmental and institutional factors and may also depend on organizational practices.
Out-the-Door, Market Impact, and Economic Development. Rogers et al. (2001). Rogers, Everett, Takegami, Shiro, Yin, Jing, 2001. Lessons learned about technology transfer. Technovation 21 (4), 253–261. Findings: Articles in scientific journals are not an effective technology transfer mechanism. Spin-offs are an effective technology transfer mechanism. Organizations that provide assistance with technology transfer, coupled with favorable entrepreneurial leave policies at federal labs, facilitate the growth of spin-offs.
Out-the-Door and Economic Development. Carlsson and Fridh (2002). Carlsson, Bo, Fridh, Ann-Charlotte, 2002. Technology transfer in United States universities. Journal of Evolutionary Economics 12 (1/2). Findings: Organizational structure variables have an impact on technology transfer measures of licenses, patents, and start-ups. However, based on their findings, the authors argue for technology transfer success to be considered in a broader context, such as the overall goals of the organization.
Market Impact. Cohen et al. (2002). Cohen, Wesley M., Nelson, Richard R., Walsh, John P., 2002. Links and Impacts: The Influence of Public Research on Industrial R&D. Management Science 48 (1), 1–23. Findings: In general, public research plays an important role in private sector manufacturing R&D. This impact flows through a variety of formal and informal channels and tends to be greater for applied research rather than basic research. There are some differences in impacts across industries as well, but few, if any, systematic differences between high tech industries and other industries.
Market Impact. Hertzfeld (2002). Hertzfeld, Henry R., 2002. Measuring the Economic Returns from Successful NASA Life Sciences Technology Transfers. The Journal of Technology Transfer 27 (4), 311–320. Findings: For companies that developed spin-off products from NASA investments, the largest benefits accrued to large companies. Many small companies reported profitable products and benefits as well, but lacked the resources to expand to large scale production.
Market Impact. Cardozo et al. (2011). Cardozo, Richard, Ardichvili, Alexandre, Strauss, Anthony, 2011. Effectiveness of university technology transfer: an organizational population ecology view of a maturing supplier industry. The Journal of Technology Transfer 36 (2), 173–202. Findings: Conceptualizing universities engaged in technology transfer activities as an industry, results show that industry growth is slowing and technology transfer processes are becoming less efficient.
Market Impact. Roessner et al. (2013). Roessner, David, Bond, Jennifer, Okubo, Sumiye, Planting, Mark, 2013. The economic impact of licensed commercialized inventions originating in university research. Research Policy 42 (1), 23–34. Findings: Summing over a 15-year period, the authors estimate that the impact of university licensing on the U.S. economy is at least $162.1 billion. Estimates for jobs created per year over the period range from 7000 to 23,000. Models estimated with different substitution rates still yield large effects on GDP.
Market Impact and Economic Development. Hartmann and Masten (2000). Hartmann, G. Bruce, Masten, John, 2000. Profiles of State Technological Transfer Structure and Its Impact on Small Manufacturers. The Journal of Technology Transfer 25 (1), 83–88. Findings: Small manufacturers tend to have faster growth rates in states that focus technology transfer assistance on small firms.
Market Impact and Economic Development. Lindelöf and Löfsten (2004). Lindelöf, Peter, Löfsten, Hans, 2004. Proximity as a Resource Base for Competitive Advantage: University–Industry Links for Technology Transfer. The Journal of Technology Transfer 29 (3–4), 311–326. Findings: New technology based firms located in university science parks exhibit a competitive advantage over firms not located in science parks in terms of product development.
Market Impact and Scientific and Technical Human Capital. Coccia and Rolfo (2002). Coccia, Mario, Rolfo, Secondo, 2002. Technology transfer analysis in the Italian National Research Council. Technovation 22 (5), 291–299. Findings: Lab rankings change depending on which measure of technology transfer effectiveness is employed. Technological labs (applied science) perform better in terms of market-oriented tech transfer and non-technological labs (economics and natural sciences) perform better in terms of education-oriented tech transfer.
Market Impact and Opportunity Cost. Rowe and Temple (2011). Rowe, Brent R., Temple, Dorota S., 2011. Superfilling technology: transferring knowledge to industry from the National Institute of Standards and Technology. The Journal of Technology Transfer 36 (1), 1–13. Findings: Economic impact estimates suggest that the transfer of superfilling knowledge generated by NIST to industry was an efficient use of public resources.

Economic Development. Markusen and Oden (1996). Markusen, Ann, Oden, Michael, 1996. National laboratories as business incubators and region builders. The Journal of Technology Transfer 21 (1–2), 93–108. Findings: Barriers to business incubation and start-up at federal labs are identified and suggestions for improvement are offered.
Economic Development. Phillips (2002). Phillips, Rhonda G., 2002. Technology business incubators: how effective as technology transfer mechanisms? Technology in Society 24 (3), 299–316. Findings: Technology business incubators have widely varying rates of technology transfer, but overall levels are not as high as expected.
Economic Development. Shane and Stuart (2002). Shane, Scott, Stuart, Toby, 2002. Organizational Endowments and the Performance of University Start-ups. Management Science 48 (1), 154–170. Findings: Founders’ social capital is key to the outcome for the new venture; firms with founders that have direct and indirect relationships with venture investors are more likely to receive funding and less likely to fail.
Economic Development. O’Shea et al. (2005). O’Shea, Rory P., Allen, Thomas J., Chevalier, Arnaud, Roche, Frank, 2005. Entrepreneurial orientation, technology transfer and spinoff performance of U.S. universities. Research Policy 34 (7), 994–1009. Findings: Previous spinoff development, the presence of leading researchers, the magnitude and nature of financial resources, and the amount of resources invested in technology transfer office personnel at universities all increase current spinoff activity.
Economic Development. Golob (2006). Golob, Elyse, 2006. Capturing the Regional Economic Benefits of University Technology Transfer: A Case Study. The Journal of Technology Transfer 31 (6), 685–695. Findings: Universities that view their technology transfer functions as revenue generators produce fewer start-ups than universities that have economic development as an objective. Also, entrepreneurs make location decisions based on a variety of factors including existing relationships with the licensing entity.
Economic Development. Gulbranson and Audretsch (2008). Gulbranson, Christine A., Audretsch, David B., 2008. Proof of concept centers: accelerating the commercialization of university innovation. The Journal of Technology Transfer 33 (3), 249–258. Findings: The authors discuss the utility of proof of concept centers in facilitating transfer of university innovations.
Economic Development. Festel (2012). Festel, Gunter, 2012. Academic spin-offs, corporate spin-outs and company internal start-ups as technology transfer approach. The Journal of Technology Transfer, 1–17. Findings: Start-ups, spin-offs, and spin-outs are legitimate mechanisms for technology transfer.
Economic Development and Scientific and Technical Human Capital. Brown (1998). Brown, Kenneth M., 1998. Sandia’s Science Park: A new concept in technology transfer. Issues in Science & Technology 15 (2). Findings: Sandia’s science park presents a model of technology transfer that requires different evaluation metrics than technology transfer under a CRADA.
Scientific and Technical Human Capital. Edler et al. (2011). Edler, Jacob, Fier, Heide, Grimpe, Christoph, 2011. International scientist mobility and the locus of knowledge and technology transfer. Research Policy 40 (6), 791–805. Findings: German academics’ visits outside of their home country did not ‘crowd out’ but rather complemented knowledge and technology transfer activities to firms in Germany.
Market Impact/Opportunity Cost. Saavedra and Bozeman (2004). Saavedra, Pablo, Bozeman, Barry, 2004. The “Gradient Effect” in Federal Laboratory–Industry Technology Transfer Partnerships. Policy Studies Journal 32 (2), 235–252. Findings: Technology transfer effectiveness is increased when the lab and firm play different but not far removed roles on the basic-applied-development spectrum.
Opportunity Cost. Woerter (2012). Woerter, Martin, 2012. Technology proximity between firms and universities and technology transfer. The Journal of Technology Transfer 37 (6), 828–866. Findings: Technology proximity (work in the same patent class) fosters technology transfer intensity between firms and universities. This is the case especially for smaller firms. Also, if technology proximity is low but expertise at a university is high, then technology transfer intensity is increased.
Public Value. Rubenstein (2003). Rubenstein, Kelly Day, 2003. Transferring Public Research: The Patent Licensing Mechanism in Agriculture. The Journal of Technology Transfer 28 (2), 111–130. Findings: USDA’s patent licensing is not revenue driven, and it does not appear to have altered the agency’s research priorities. Licenses vary in terms of four social benefits: food safety, human nutrition, human health, and environmental/natural resource protection. No evidence is found of concentration of licenses in only a few firms.
Public Value. Costa-Font and Mossialos (2006). Costa-Font, Joan, Mossialos, Elias, 2006. The Public as a Limit to Technology Transfer: The Influence of Knowledge and Beliefs in Attitudes towards Biotechnology in the UK. The Journal of Technology Transfer 31 (6), 629–645. Findings: The knowledge and beliefs of individuals as well as information channels affect attitudes towards new applications of biotechnology in the UK.
Public Value. Sorensen and Chambers (2008). Sorensen, Jill Ann Tarzian, Chambers, Donald A., 2008. Evaluating academic technology transfer performance by how well access to knowledge is facilitated—defining an access metric. The Journal of Technology Transfer 33 (5), 534–547. Findings: The authors suggest metrics that could be used to evaluate technology transfer performance in terms of increased access to knowledge.
Public Value. Bozeman and Sarewitz (2011). Bozeman, Barry, Sarewitz, Daniel, 2011. Public Value Mapping and Science Policy Evaluation. Minerva: A Review of Science, Learning & Policy 49 (1), 1–23. Findings: Suggested framework for including public values in science policy evaluation.
Political. Bozeman and Crow (1991b). Bozeman, Barry, Crow, Michael, 1991. Technology transfer from U.S. government and university R&D laboratories. Technovation 11 (4), 231–246. Findings: Influence from political authority is a major determinant of technology transfer activity, specifically whether the technology is transferred to government or industry.

References

Adams, J.D., Chiang, E.P., Jensen, J.L., 2003. The influence of federal laboratory R&D on industrial research. Review of Economics and Statistics 85 (4), 1003–1020.
Adomavicius, G., Bockstedt, J., Gupta, A., Kauffman, R.J., 2008. Understanding evolution in technology ecosystems. Communications of the ACM 51 (10), 117–122.
Agrawal, A.K., 2003. University-to-industry knowledge transfer: literature review and unanswered questions. International Journal of Management Reviews 3 (4), 285–302.
Albors-Garrigos, J., Hervas-Oliver, J.L., Hidalgo, A., 2009. Analysing high technology adoption and impact within public supported high tech programs: an empirical case. The Journal of High Technology Management Research 20 (2), 153–168.
Albors, J., Hervas, J.L., Hidalgo, A., 2006. Analysing high technology diffusion and public transference programs: the case of the European game program. The Journal of Technology Transfer 31 (6), 647–661.
Anderson, T.R., Daim, T.U., Lavoie, F.F., 2007. Measuring the efficiency of university technology transfer. Technovation 27 (5), 306–318, http://dx.doi.org/10.1016/j.technovation.2006.10.003.
Aronowitz, J.D., 1999. Controlling Militarily Significant Emerging Technologies. Army War College, Carlisle, PA.
Audretsch, D.B., Stephan, P., 1999. How and why does knowledge spill over in biotechnology? In: Audretsch, D.B., Thurik, R. (Eds.), Innovation, Industry Evolution and Employment. Cambridge University Press, Cambridge, pp. 216–229.
Autio, E., Laamanen, T., 1995. Measurement and evaluation of technology transfer: review of technology transfer mechanisms and indicators. International Journal of Technology Management 10 (7–8), 7–8.
Becker, G.S., 1964. Human Capital Theory. Columbia, New York.
Bercovitz, J.E., Feldman, M.P., 2008. Academic entrepreneurs: organizational change at the individual level. Organization Science 19 (1), 69–89.
Beierle, T.C., Konisky, D.M., 2000. Values, conflict, and trust in participatory environmental planning. Journal of Policy Analysis and Management 19 (4), 587–602.
Benington, J., Moore, M.H. (Eds.), 2010. Public Value: Theory and Practice. Palgrave Macmillan, New York.
Bidault, F., Fischer, W.A., 1994. Technology transactions: networks over markets. R&D Management 24 (4), 373–386.
Blank, S., 2013. Why the lean start-up changes everything. Harvard Business Review 91 (5), 64–68.
Bok, D., 2004. Universities in the Marketplace: The Commercialization of Higher Education. Princeton University Press, Princeton, N.J.
Bozeman, B., 1994. Evaluating government technology transfer: early impacts of the ‘cooperative technology paradigm’. Policy Studies Journal 22 (2), 322–337.
Bozeman, B., 1997. Commercialization of federal laboratory technology: results of a study of industrial partners. In: Oakey, R.P. (Ed.), New Technology-Based Firms in the 1990s, vol. 3. Paul Chapman Publishing Limited, London, pp. 127–139.
Bozeman, B., 2000. Technology transfer and public policy: a review of research and theory. Research Policy 29 (4), 627–655.
Bozeman, B., 2002. Public-value failure: when efficient markets may not do. Public Administration Review 62 (2), 145–161.
Bozeman, B., 2007. Public Values and Public Interest: Counterbalancing Economic Individualism. Georgetown University Press, Washington, D.C.
Bozeman, B., Boardman, C., 2014. Academic faculty working in university research centers: neither capitalism’s slaves nor teaching fugitives. The Journal of Higher Education (in press).
Bozeman, B., Coker, K., 1992. Assessing the effectiveness of technology transfer from US government R&D laboratories: the impact of market orientation. Technovation 12 (4), 239–255.
Bozeman, B., Corley, E., 2004. Scientists’ collaboration strategies: implications for scientific and technical human capital. Research Policy 33 (4), 599–616.
Bozeman, B., Crow, M., 1991a. Technology transfer from US government and university R&D laboratories. Technovation 11 (4), 231–246.
Bozeman, B., Crow, M., 1991b. Red tape and technology transfer in US government laboratories. The Journal of Technology Transfer 16 (2), 29–37.
Bozeman, B., Gaughan, M., 2007. Impacts of grants and contracts on academic researchers’ interactions with industry. Research Policy 36 (5), 694–707.
Bozeman, B., Papadakis, M., Coker, K., 1995. Industry Perspectives on Commercial Interactions with Federal Laboratories: Does the Cooperative Technology Paradigm Really Work? Report to the National Science Foundation, Research on Science and Technology Program, January.
Bozeman, B., Dietz, J.S., Gaughan, M., 2001. Scientific and technical human capital: an alternative model for research evaluation. International Journal of Technology Management 22 (7), 716–740.
Bozeman, B., Rogers, J., 2001. Strategic management of government-sponsored R&D portfolios. Environment and Planning C 19 (3), 413–442.
Bozeman, B., Sarewitz, D., 2005. Public values and public failure in US science policy. Science and Public Policy 32 (2), 119–136.
Bozeman, B., Sarewitz, D., 2011. Public value mapping and science policy evaluation. Minerva 49 (1), 1–23.
Bradley, S.R., Hayter, C.S., Link, A.N., 2013. Models and Methods of University Technology Transfer. Now Publishers Incorporated, Boston.
Burgess, J., Stirling, A., Clark, J., Davies, G., Eames, M., Staley, K., Williamson, S., 2007. Deliberative mapping: a novel analytic-deliberative methodology to support contested science-policy decisions. Public Understanding of Science 16 (3), 299–322.
Butler, S., Birley, S., 1998. Scientists and their attitudes to industry links. International Journal of Innovation Management 2 (01), 79–106.

Breznitz, D., Taylor, M., 2009. The communal roots of entrepreneurial-technological growth? Social fragmentation and the economic stagnation of Atlanta’s IT cluster. In: Industry Studies Association Annual Meeting, Chicago.
Canibano, C., Otamendi, J., Andújar, I., 2008. Measuring and assessing researcher mobility from CV analysis: the case of the Ramón y Cajal Programme in Spain. Research Evaluation 17 (1), 17–31.
Caldera, A., Debande, O., 2010. Performance of Spanish universities in technology transfer: an empirical analysis. Research Policy 39 (9), 1160–1173.
Cardozo, R., Ardichvili, A., Strauss, A., 2011. Effectiveness of university technology transfer: an organizational population ecology view of a maturing supplier industry. The Journal of Technology Transfer 36 (2), 173–202.
Carlsson, B., Fridh, A.-C., 2002. Technology transfer in United States universities. Journal of Evolutionary Economics 12 (1/2).
Casper, S., Murray, F., 2005. Careers and clusters: analyzing the career network dynamic of biotechnology clusters. Journal of Engineering and Technology Management 22 (1), 51–74.
Chapple, W., Lockett, A., Siegel, D., Wright, M., 2005. Assessing the relative performance of U.K. university technology transfer offices: parametric and non-parametric evidence. Research Policy 34 (3), 369–384, http://dx.doi.org/10.1016/j.respol.2005.01.007.
Chesbrough, H., 2003. Open innovation: the new imperative for creating and profiting from technology. Harvard Business School Press, Boston.
Coccia, M., Rolfo, S., 2002. Technology transfer analysis in the Italian national research council. Technovation 22 (5), 291–299.
Cohen, W.M., Nelson, R.R., Walsh, J.P., 2002. Links and impacts: the influence of public research on industrial R&D. Management Science 48 (1), 1–23.
Coleman, J.S., 1988. Social capital in the creation of human capital. American Journal of Sociology 94, S95–S120.
Cooksy, L.J., Gill, P., Kelly, P.A., 2001. The program logic model as an integrative framework for a multimethod evaluation. Evaluation and Program Planning 24 (2), 119–128.
Corolleur, C.D., Carrere, M., Mangematin, V., 2004. Turning scientific and technological human capital into economic capital: the experience of biotech start-ups in France. Research Policy 33 (4), 631–642.
Costa-Font, J., Mossialos, E., 2006. The public as a limit to technology transfer: the influence of knowledge and beliefs in attitudes towards biotechnology in the UK. The Journal of Technology Transfer 31 (6), 629–645, http://dx.doi.org/10.1007/s10961-006-0019-3.
Coursey, D., Bozeman, B., 1992. Technology transfer in US government and university laboratories: advantages and disadvantages for participating laboratories. IEEE Transactions on Engineering Management 39 (4), 347–351.
Crow, M., Bozeman, B., 1998. Limited by Design: R&D Laboratories in the U.S. National Innovation System. Columbia University Press, New York.
Dietz, J.S., Bozeman, B., 2005. Academic careers, patents, and productivity: industry experience as scientific and technical human capital. Research Policy 34 (3), 349–367.
Edler, J., Fier, H., Grimpe, C., 2011. International scientist mobility and the locus of knowledge and technology transfer. Research Policy 40 (6), 791–805.
Etzkowitz, H., Leydesdorff, L., 2000. The dynamics of innovation: from National Systems and Mode 2 to a Triple Helix of university–industry–government relations. Research Policy 29 (2), 109–123.
Evans, S.A., Valdivia, W.D., 2012. Export controls and the tensions between academic freedom and national security. Minerva, 1–22.
Festel, G., 2012. Academic spin-offs, corporate spin-outs and company internal start-ups as technology transfer approach. The Journal of Technology Transfer, 1–17.
Feller, I., Roessner, D., 1995. Identifying and measuring the benefits of collaborative research: lessons from a study of the Engineering Research Centers. In: Proceedings of the Technology Transfer Society . . . Annual Meeting, International Symposium and exhibit, vol. 1995.
Florida, R., Mellander, C., Stolarick, K.M., 2010. Talent, technology and tolerance in Canadian regional development. The Canadian Geographer/Le Géographe canadien 54 (3), 277–304.
Fountain, J.E., 1998. Social capital: its relationship to innovation in science and technology. Science and Public Policy 25 (2), 103–115.
Franza, R.M., Srivastava, R., 2009. Evaluating the Return on Investment for Department of Defense to private sector technology transfer. International Journal of Technology Transfer and Commercialisation 8 (2), 286–298.
Frechtling, J.A., 2007. Logic Modeling Methods in Program Evaluation. Jossey-Bass, San Francisco.
Friedman, J., Silberman, J., 2003. University technology transfer: do incentives, management, and location matter? The Journal of Technology Transfer 28 (1), 17–30, http://dx.doi.org/10.1023/A:1021674618658.
Fritsch, M., Kauffeld-Monz, M., 2010. The impact of network structure on knowledge transfer: an application of social network analysis in the context of regional innovation networks. The Annals of Regional Science 44 (1), 21–38.
Fukugawa, N., 2009. Determinants of licensing activities of local public technology centers in Japan. Technovation 29 (12), 885–892.
Gaughan, M., Ponomariov, B., 2008. Faculty publication productivity, collaboration, and grants velocity: using curricula vitae to compare center-affiliated and unaffiliated scientists. Research Evaluation 17 (2), 103–110.
Geisler, E., 1994. Key output indicators in performance evaluation of research and development organizations. Technological Forecasting and Social Change 47 (2), 189–203.
Golob, E., 2006. Capturing the regional economic benefits of university technology transfer: a case study. The Journal of Technology Transfer 31 (6), 685–695.


Gulbranson, C.A., Audretsch, D.B., 2008. Proof of concept centers: accelerating the commercialization of university innovation. The Journal of Technology Transfer 33 (3), 249–258.
Gupta, A., 2002. Public Value Mapping in a Developing Country Context: A Methodology to Promote Socially Beneficial Public Biotechnology Research and Uptake in India. Prepared for the Rockefeller Foundation. Center for Science, Policy, and Outcomes (CSPO), Columbia University.
Guston, D.H., 2007. Between Politics and Science: Assuring the Integrity and Productivity of Research. Cambridge University Press, Cambridge.
Hartmann, G.B., Masten, J., 2000. Profiles of state technological transfer structure and its impact on small manufacturers. The Journal of Technology Transfer 25 (1), 83–88.
Heisey, P.W., Adelman, S.W., 2011. Research expenditures, technology transfer activity, and university licensing revenue. The Journal of Technology Transfer 36 (1), 38–60.
Hendriks, J., 2012. Technology transfer in human vaccinology: a retrospective review on public sector contributions in a privatizing science field. Vaccine 30 (44), 6230–6240.
Henkel, M., 2005. Academic identity and autonomy in a changing policy environment. Higher Education 49 (1), 155–176.
Hertzfeld, H.R., 2002. Measuring the economic returns from successful NASA life sciences technology transfers. The Journal of Technology Transfer 27 (4), 311–320.
Jaffe, A.B., Lerner, J., 2001. Reinventing public R&D: patent policy and the commercialization of national laboratory technologies. Rand Journal of Economics, 167–198.
John, P., 1998. Analysing Public Policy. Pinter Publishing Limited, London.
Jørgensen, T.B., Bozeman, B., 2007. Public values: an inventory. Administration & Society 39 (3), 354–381.
Kassicieh, S.K., Kirchhoff, B.A., Walsh, S.T., McWhorter, P.J., 2002. The role of small firms in the transfer of disruptive technologies. Technovation 22 (11), 667–674.
Kerr, C., 2001. The Uses of the University. Harvard University Press, Cambridge, MA.
Kingsley, G., Farmer, M.C., 1997. Using technology absorption as an evaluation criterion: case studies from a state research and development program. Policy Studies Journal 25 (3), 436–450.
Kitagawa, F., Lightowler, C., 2013. Knowledge exchange: a comparison of policies, strategies, and funding incentives in English and Scottish higher education. Research Evaluation 22 (1), 1–14.
Kleinman, D.L., 2003. Impure Cultures: University Biology and the World of Commerce. University of Wisconsin Press, Madison, WI.
Landry, R., Amara, N., Lamari, M., 2002. Does social capital determine innovation? To what extent? Technological Forecasting and Social Change 69 (7), 681–701.
Lee, Y., Gaertner, R., 1994. Technology transfer to industry: a large scale experiment with technology development and commercialization. Policy Studies Journal 22.
Lepori, B., 2006. Methodologies for the analysis of research funding and expenditure: from input to positioning indicators. Research Evaluation 15 (2), 133–143.
Lepori, B., Probst, C., 2009. Using curricula vitae for mapping scientific fields: a small-scale experience for Swiss communication sciences. Research Evaluation 18 (2), 125–134.
Lin, M.W., Bozeman, B., 2006. Researchers’ industry experience and productivity in university–industry research centers: a scientific and technical human capital explanation. The Journal of Technology Transfer 31 (2), 269–290.
Lindelöf, P., Löfsten, H., 2004. Proximity as a resource base for competitive advantage: university–industry links for technology transfer. The Journal of Technology Transfer 29 (3–4), 311–326, http://dx.doi.org/10.1023/B:JOTT.0000034125.29979.ae.
Link, A.N., 1995. Evaluation of the economic impacts associated with the NIST power and energy calibration services (No. PB-95-188850/XAB; NISTIR-5565). National Inst. of Standards and Technology (EEEL), Gaithersburg, MD (United States). Electricity Div.
Link, A.N., Scott, J.T., 2001. Public/private partnerships: stimulating competition in a dynamic market. International Journal of Industrial Organization 19 (5), 763–794.
Link, A.N., Siegel, D.S., 2005. Generating science-based growth: an econometric analysis of the impact of organizational incentives on university–industry technology transfer. The European Journal of Finance 11 (3), 169–181.
Liyanage, S., 1995. Breeding innovation clusters through collaborative research networks. Technovation 15 (9), 553–567.
Maricle, G., 2011. Prediction as an impediment to preparedness: lessons from the US hurricane and earthquake research enterprises. Minerva: A Review of Science, Learning & Policy 49 (1), 87–111.
Markusen, A., Oden, M., 1996. National laboratories as business incubators and region builders. The Journal of Technology Transfer 21 (1–2), 93–108.
Mendoza, P., 2007. Academic capitalism and doctoral student socialization: a case study. The Journal of Higher Education 78 (1), 71–96.
Meyers, S., McMahon, J.E., McNeil, M., Liu, X., 2003. Impacts of US federal energy efficiency standards for residential appliances. Energy 28 (8), 755–767.
Midgley, G., 2006. Systems Thinking for Evaluation. Systems Concepts in Evaluation: An Expert Anthology. Edge Press, Point Reyes, CA, pp. 11–34.
Mohammed, Y., Sax, U., Dickmann, F., Lippert, J., Solodenko, J., Voigt, G., Rienhoff, O., 2010. On transferring the grid technology to the biomedical community. Studies in Health Technology Information 159, 28–39.
Mowery, D.C., 1988. The changing structure of the US national innovation system: implications for international conflict and cooperation in R&D policy. Research Policy 27 (6), 639–654.

Mowery, D.C., Ziedonis, A.A., 2007. Academic patents and materials transfer agree-ments: substitutes or complements? The Journal of Technology Transfer 32 (3),157–172.

Murray, F., 2002. Innovation as co-evolution of scientific and technologicalnetworks: exploring tissue engineering. Research Policy 31 (8), 1389–1403.

O’Shea, R.P., Allen, T.J., Chevalier, A., Roche, F., 2005. Entrepreneurial orientation,technology transfer and spinoff performance of U.S. universities. Research Policy34 (7), 994–1009.

Park, J.B., Ryu, T.K., Gibson, D.V., 2010. Facilitating public-to-private technologytransfer through consortia: initial evidence from Korea. The Journal of Tech-nology Transfer 35 (2), 237–252.

Phillips, R.G., 2002. Technology business incubators: how effective as technologytransfer mechanisms? Technology in Society 24 (3), 299–316.

Piper, W.S., Naghshpour, S., 1996. Government technology transfer: the effective useof both push and pull marketing strategies. International Journal of TechnologyManagement 12 (1), 85–94.

Polanyi, M., 1969. Knowing and Being. Edited with an Introduction by MarjorieGrene. University of Chicago Press, Chicago.

Ponomariov, B., 2009. Student centrality in university–industry interactions. Indus-try and Higher Education 23 (1), 50–62.

Ponomariov, B.L., Boardman, P.C., 2010. Influencing scientists’ collaboration andproductivity patterns through new institutions: university research cen-ters and scientific and technical human capital. Research Policy 39 (5),613–624.

Powers, J.B., 2003. Commercializing academic research: resource effects on perfor-mance of university technology transfer. The Journal of Higher Education 74 (1),26–50.

Protogerou, A., Caloghirou, Y., Siokas, E., 2012. Twenty-five years of science-industrycollaboration: the emergence and evolution of policy-driven research networksacross Europe. The Journal of Technology Transfer, 1–23.

Ramakrishnan, S., 2004. An industrial ecology framework to assist transferringenvironmental technologies. International Journal of Technology Transfer andCommercialisation 3 (2), 147–165.

Randles, S., Youtie, J., Guston, D., Harthorn, B., Newfield, C., Shapira, P., Wickson, F.,Rip, A., von Schomberg, R., Pidgeon, N., 2012. A trans-atlantic conversation onresponsible innovation and responsible governance. In: Van Lente, H., Coenen,C., Fleischer, T., Konrad, K., Krabbenborg, L., Milburn, C., Thoreau, F., Zülsdorf, T.(Eds.), Little by Little: Expansions of Nanoscience and Emerging Technologies.Akademische Verlagsgesellschaft, Heidelberg, pp. 169–180.

Rayner, S., Heyward, C., Kruger, T., Pidgeon, N., Redgwell, C., Savulescu, J., 2013. TheOxford principles. Climatic Change 121 (3), 499–512.

Ries, E., 2011. The Lean Startup: How Today’s Entrepreneurs Use Continuous Inno-vation to Create Radically Successful Businesses. Random House Digital, Inc.,New York.

Rigby, J., Edler, J., 2005. Peering inside research networks: some observations onthe effect of the intensity of collaboration on the variability of research quality.Research Policy 34 (6), 784–794.

Roco, M.C., Harthorn, B., Guston, D., Shapira, P., 2011. Innovative and responsiblegovernance of nanotechnology for societal development. Journal of NanoparticleResearch 13 (9), 3557–3590.

Roessner, J.D., Bean, A., 1991. How industry interacts with federal laboratories.Research Technology Management 34 (July/August (4)), 22.

Roessner, J.D., Bond, J., Okubo, S., Planting, M., 2013. The economic impact of licensedcommercialized inventions originating in university research. Research Policy42 (1), 23–34.

Rogers, E., Carayannis, E.G., Kurihara, K., Allbritton, M.M., 1998. Cooperative Researchand Development Agreements (CRADAs) as technology transfer mechanisms.R&D Management 28 (2), 79.

Rogers, E., Takegami, S., Yin, J., 2001. Lessons learned about technology transfer.Technovation 21 (4), 253–261.

Rogers, J.D., Bozeman, B., 1997. Basic research and the success of federal lab-industrypartnerships. The Journal of Technology Transfer 22 (3), 37–47.

Rose, R., 1993. Lesson-Drawing in Public Policy: A Guide to Learning Across Timeand Space. Chatham House Publishers, Chatham, NJ.

Rowe, B.R., Temple, D.S., 2011. Superfilling technology: transferring knowledge toindustry from the National Institute of Standards and Technology. The Journalof Technology Transfer 36 (1), 1–13.

Rubenstein, K.D., 2003. Transferring public research: the patent licensing mechanism in agriculture. The Journal of Technology Transfer 28 (2), 111–130.

Saavedra, P., Bozeman, B., 2004. The gradient effect in federal laboratory–industry technology transfer partnerships. Policy Studies Journal 32 (2), 235–252.

Sala, A., Landoni, P., Verganti, R., 2011. R&D networks: an evaluation framework. International Journal of Technology Management 53 (1), 19–43.

Schalock, R.L., Bonham, G.S., 2003. Measuring outcomes and managing for results. Evaluation and Program Planning 26 (3), 229–235.

Schultz, T.W., 1963. The Economic Value of Education. Columbia University Press, New York.

Seely, B.E., 2003. Historical patterns in the scholarship of technology transfer. Comparative Technology Transfer and Society 1 (1), 7–48.

Senker, J., 1995. Networks and tacit knowledge in innovation. Economies et Societes 2 (9), 99–118.

Shane, S., Stuart, T., 2002. Organizational endowments and the performance of university start-ups. Management Science 48 (1), 154–170.

Siegel, D.S., Veugelers, R., Wright, M., 2007. Technology transfer offices and commercialization of university intellectual property: performance and policy implications. Oxford Review of Economic Policy 23 (4), 640–660.

Siegel, D.S., Waldman, D., Link, A., 2003. Assessing the impact of organizational practices on the relative productivity of university technology transfer offices: an exploratory study. Research Policy 32 (1), 27–48.

Slade, C.P., 2011. Public value mapping of equity in emerging nanomedicine. Minerva 49 (1), 71–86.

Slaughter, S., Rhoades, G., 2004. Academic Capitalism and the New Economy: Markets, State, and Higher Education. Johns Hopkins University Press, Baltimore, MD.

Sorensen, J.A.T., Chambers, D.A., 2008. Evaluating academic technology transfer performance by how well access to knowledge is facilitated—defining an access metric. The Journal of Technology Transfer 33 (5), 534–547.

Sperling, D., 2001. Public–private technology R&D partnerships: lessons from US partnership for a new generation of vehicles. Transport Policy 8 (4), 247–256.

Sperling, D., Gordon, D., 2008. Advanced passenger transport technologies. Annual Review of Environment and Resources 33, 63–84.

Stephan, P.E., 2001. Educational implications of university–industry technology transfer. The Journal of Technology Transfer 26 (3), 199–205.

Swamidass, P.M., Vulasa, V., 2009. Why university inventions rarely produce income? Bottlenecks in university technology transfer. The Journal of Technology Transfer 34 (4), 343–363.

Thursby, J.G., Jensen, R., Thursby, M.C., 2001. Objectives, characteristics and outcomes of university licensing: a survey of major U.S. universities. The Journal of Technology Transfer 26 (1–2), 59–72.

Thursby, J.G., Kemp, S., 2002. Growth and productive efficiency of university intellectual property licensing. Research Policy 31 (1), 109–124.

Tran, T.A., Kocaoglu, D.F., 2009. Literature review on technology transfer from government laboratories to industry. In: Portland International Conference on Management of Engineering & Technology, 2009. PICMET 2009, IEEE, pp. 2771–2782.

U.S. Congress, 1980. Stevenson-Wydler Technology Innovation Act of 1980, P.L. 96-517, United States Code, Title 15, Section 3701-3714.

U.S. Congress, 1984a. Uniform Patent Procedures Act of 1983. Public Law 98-620, enacted November 8.

U.S. Congress, 1984b. Stevenson-Wydler Technology Innovation Act of 1980. United States Code, Title 15, Section 3701-3714: 96-517. USGPO, Washington, DC.

U.S. Congress, 1986. Federal Technology Transfer Act of 1986. 99th Congress, 2nd Session. USGPO, Washington, DC.

U.S. Department of Agriculture Mission Statement, 2013. United States Department of Agriculture. http://www.usda.gov/wps/portal/usda/usdahome?navid=MISSION STATEMENT

U.S. Department of Energy, 2012. U.S. DOE Plan for Transfer and Commercialization of Technology. http://www.nist.gov/tpo/publications/upload/DOE-Tech-Transfer-Plan-3.pdf

U.S. General Accounting Office, 1989. Technology Transfer: Implementation Status of the Federal Technology Transfer Act of 1986. USGPO, Washington, DC.

U.S. White House, Office of the Press Secretary, 2011. Presidential Memorandum: Accelerating Technology Transfer and Commercialization of Federal Research in Support of High Growth Businesses. Downloaded November 10, 2012 from: http://www.whitehouse.gov/the-press-office/2011/10/28/presidential-memorandum-accelerating-technology-transfer-and-commerciali

Valdivia, W.D., 2011. The stakes in Bayh-Dole: Public values beyond the pace of innovation. Minerva 49 (1), 25–46.

von Schomberg, R., 2013. A vision of responsible research and innovation. In: Owen, R., Bessant, J., Heintz, M. (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. John Wiley & Sons, pp. 51–74.

Woerter, M., 2012. Technology proximity between firms and universities and technology transfer. The Journal of Technology Transfer 37 (6), 828–866, http://dx.doi.org/10.1007/s10961-011-9207-x.

Youtie, J., Libaers, D., Bozeman, B., 2006. Institutionalization of university research centers: the case of the National Cooperative Program in Infertility Research. Technovation 26 (9), 1055–1063.