
Submitted manuscript October 10, 2005

Exploring Causal Relationships in an Innovation Program with Robust Portfolio Modeling

Ahti Salo1, Pekka Mild1 and Tuomo Pentikäinen2

1 Helsinki University of Technology, Systems Analysis Laboratory, P.O. Box 1100, 02015 TKK, Finland
2 Finnish Science Park Association TEKEL, Innopoli 1, Tekniikantie 12, 02150 Espoo, Finland

[email protected], [email protected], [email protected]

Abstract: Many countries seek to foster the commercial exploitation of science-based research results through selective policy instruments. Typically, these instruments involve processes of follow-up data collection where the results of ex ante and ex post assessments are systematically recorded. Yet, several factors – such as the presence of multiple objectives, the predominance of qualitative data and missing observations – may complicate the use of such data for adjusting the management practices of these instruments. With the aim of addressing these challenges, we adopt Robust Portfolio Modeling1 (RPM) as an evaluation framework for the analysis of longitudinal data: specifically, we (i) determine subsets of outperforming and underperforming projects through the development of an explicit multicriteria model for ex post evaluation, and (ii) carry out comparative analyses between these subsets in order to identify which ex ante interventions and contextual characteristics may have contributed to later performance. We also report experiences from the application of RPM-evaluation to a Finnish innovation program and outline extensions of this approach that may provide further decision support to the managers of innovation programs.

Keywords: Innovation policy, data analysis, decision modeling, research and technology programs.

1 See, e.g., http://www.rpm.tkk.fi/.

1. Introduction

In knowledge-based economies, the development of new businesses is highly dependent on the pursuit of scientific and technological (S&T) research at universities, research institutes and industrial firms (Dosi et al., 1988). With the aim of facilitating the development of new businesses based on research results, several countries have established innovation programs that provide funds to proposed innovation projects on a competitive basis (Lundvall, 1992; Smits and Kuhlmann, 2004). These programs are quite dissimilar in their details (e.g., conditions placed on eligible projects), which reflects variations in the supply and instrumentation of risk capital, as well as various market and systemic imperfections that call for different policy responses and instruments (Kortum and Lerner, 1998). Yet, at the general level, innovation programs resemble each other in that they seek to promote projects characterized by promising business prospects and competent personnel (Mustar, 2001). They even share similarities with other instruments of innovation policy – such as research and technology development (RTD) programs – in which project proposals are solicited and assessed before some proposals are then promoted through financial support and possibly other actions, too (see, e.g., Salmenkaita and Salo, 2002).

Innovation programs are vital to the successful implementation of innovation policies. This is but one of the reasons why they have been studied extensively, with the aim of determining relationships among the factors that may contribute to successful innovation (see, e.g., Callan, 2001). Rigorous academic research in this area is typically characterized by the articulation of theoretical frameworks, careful operationalization of key constructs, formulation of testable hypotheses, collection of extensive data sets and the use of statistical analyses in the validation of stated hypotheses (see, e.g., Hall and Van Reenen, 2000). Such research does produce defensible claims as to what holds 'on the average' in a statistical sense. But from the viewpoint of any specific program, the relevance of these results remains suspect if the data sets are partly outdated, do not cover important contextual variables or stem from industrial or organizational contexts that are radically different from those of the innovation program at hand.

Motivated by pragmatic needs, funding agencies usually install systematic reporting procedures for program monitoring and follow-up. These procedures do not necessarily generate data that would meet the requirements of 'serious' research, especially if the data is replete with missing entries or if the definition of variables is not aligned with well-established theoretical research frameworks. Such deficiencies notwithstanding, follow-up data can impart valuable insights into the preconditions of successful innovation activities, because it is readily accessible and reflects the contextual characteristics of the program. This observation, then, leads to the question of how such reporting data can best be explored, or 'mined', in order to better understand which ex ante indicators (e.g., project characteristics, context description, program actions) contribute to later 'success', as measured by ex post indicators of later developments. Here, continuous learning processes may be best supported by offering 'standardized' analyses on a regular basis (cf. Porter, 2004), even if the data can be explored in numerous other (and therefore possibly perplexing) ways.

A key consideration in the analysis of innovation activities is that exceptional performance (or non-performance) is often of greatest interest to managers and policy makers (see, e.g., Kortum and Lerner, 1998). On the one hand, this is because most revenues from commercialization activities are generated by very few successful projects, while others result in modest revenues or none at all. On the other hand, the examination of downright failures may suggest 'lessons learned' that can be codified into managerial principles and guidelines. In particular, a comparative analysis of outperformers and underperformers may suggest possible relationships between the ex ante characteristics of projects and their ex post impacts. These possible relationships can be subjected to tests by managerial reflection and statistical analyses, with the aim of gaining empirically grounded insights into how the innovation program might be improved.

In this paper, we develop a novel evaluation methodology based on the recently developed Robust Portfolio Modeling (RPM; Liesiö et al., 2005) framework. The salient features of this methodology – which we call RPM-evaluation for short – are (i) the construction of an explicit multicriteria model for measuring project 'success' based on recorded ex post evaluations; (ii) the ability to admit incomplete information on preferences and indicator measurements; (iii) the determination of outperforming and underperforming projects; (iv) the exploration of these two subsets in view of their ex ante and ex post characteristics; and (v) the examination of such relationships by way of statistical tests and managerial judgment. We also report encouraging experiences from the deployment of RPM-evaluation in a Finnish innovation program. On the whole, RPM-evaluation is a general methodology and can be readily applied in the context of other instruments where a more thorough understanding of the relationship between ex ante and ex post indicators is sought.

The remainder of this paper is organized as follows. Section 2 discusses salient decision making perspectives in the management of innovation programs. Section 3 presents the RPM framework and describes how it can be deployed as an evaluation methodology. Section 4 describes an application of RPM-evaluation to a Finnish innovation program and presents illustrative results. Section 5 discusses possible extensions of RPM-evaluation in related contexts. Section 6 concludes.

2. Decision making perspectives into innovation programs

Innovation programs that support the establishment of spin-off companies are often similar in terms of their overall rationale and objectives (Callan, 2001). Because the rationale usually remains unchallenged for the duration of the program – and can thus be taken as a 'given' by the program management – the overriding management concern is to ensure that the program serves its objectives as effectively and efficiently as possible. Here, program management can be assisted by follow-up reporting procedures which accumulate data about (i) the ex ante characteristics of candidate projects, (ii) project decisions that are taken during the program, and (iii) ex post results and impacts after the completion of the projects.

The managers' legitimate demand for relevant decision support means that ex post variables should be derived from the program objectives. Such support can be provided by multicriteria decision models that capture program objectives through corresponding evaluation criteria and project-specific measurement scores; this, in turn, makes it possible to associate an aggregate performance measure with each project and to examine how well the projects have contributed to program objectives (see, e.g., Henriksen and Traynor, 1999). Yet, the construction of the 'right' evaluation model may be difficult due to ambiguous or conflicting stakeholder perceptions about how important the criteria are. Thus, instead of attempting to elicit 'correct' criterion weights (Salo and Hämäläinen, 2001), it may be better to admit incomplete information that subsumes multiple interpretations about which criteria matter most. A further reason for working with incomplete information is that the follow-up data collection may produce uncertain or missing entries for some projects.

In a multi-stakeholder organizational setting, continuous learning processes can be enhanced by inviting the managers to periodically deliberate on well-structured and even provocative analyses of key questions. In innovation programs, such questions include, above all, why some projects have been successful (or unsuccessful), and what implications the possible responses to this question have for the further improvement of the program. There are consequently close parallels to the project portfolio selection problem (see, e.g., Archer and Ghasemzadeh, 1999; Thore, 2002; Stummer and Heidenberger, 2003), except that in the evaluation framework it is of interest to identify the 'best' projects in view of their ex post indicators (rather than their ex ante characteristics). Once the 'best' projects have been identified, their earlier history can be used to explore the ex ante determinants of their success and to identify so-called 'success factors' (Di Benedetto, 1999; Calantone et al., 1999; Cooper et al., 1999).

Another benefit of regular reporting procedures is that they reduce the additional workload required by retrospective ex post evaluations. They even mitigate problems due to the 'biased causality' phenomenon in retrospective evaluations, meaning that if a project is identified as an outperformer ex post, there may be a bias towards over-positive statements about its ex ante premises (Di Benedetto, 1999). Furthermore, reflective analyses based on continuous reporting data can provide early decision support, in contrast to retrospective evaluations which offer results only at the end of the program – at a time when it may be too late to act upon the results (cf. Salo and Salmenkaita, 2002).

3. Identifying exceptional performance with RPM

3.1 RPM framework for project portfolio evaluation

The identification of particularly successful projects, based on ex post indicator data, is analogous to the conventional project portfolio selection problem where the organization seeks to choose projects that best meet its objectives in view of available ex ante indicators and resource constraints. In project portfolio selection, these objectives are typically captured through (multiple) quantitative and/or qualitative evaluation criteria, while budgetary constraints limit the number of projects that can be started.

Thanks to extensive methodological research, there exists a broad variety of approaches to project portfolio selection, ranging from simple scoring and checklist methods to complex optimization and dynamic programming models. While every key aspect of project portfolio selection is captured by one method or another, the methods differ in their data requirements, their mode of interaction with the decision makers and even their primary purpose of use (e.g., tentative screening of projects vs. selection of a unique portfolio). For reviews of project portfolio selection methods, see, e.g., Gustafsson and Salo (2005) and Martino (1995).

Scoring models, in particular, are widely employed in the evaluation and selection of projects and portfolios in settings where multiple objectives must be accounted for. These models comply with the theoretical foundation of Multiattribute Value Theory (MAVT) (Keeney and Raiffa, 1976), which provides a framework for priority-setting in the presence of multiple and incommensurate objectives. Variants of scoring models have been used, for example, in R&D project portfolio selection (Henriksen and Traynor, 1999), capital budgeting in healthcare (Kleinmuntz and Kleinmuntz, 1999), product launch evaluation (Di Benedetto, 1999; Calantone et al., 1999) and the ex post evaluation of a national technology program (Salo et al., 2004). From the practical point of view, scoring models are reasonably transparent and easy to use; moreover, they require only a moderate amount of data and can be readily adapted to the needs of different application contexts.

In its basic variant, Robust Portfolio Modeling (RPM) is a scoring model based on an additive value model where the projects' performance on each evaluation criterion is mapped onto a criterion-specific score, and weights are used to indicate the relative importance of the criteria. Without loss of generality, criterion weights are positive and usually scaled so that they add up to one, while scores range over the unit interval from zero to one. In technical terms, the weight of a criterion indicates how important a unit increase in the corresponding evaluation score is relative to similar unit increases with regard to the other criteria.

The overall value of a project is computed as the weighted average of its criterion-specific scores, meaning that more preferred projects have a higher overall value. The overall value of a portfolio is modeled as the sum of its constituent projects' values (cf. Golabi et al., 1981). Each project may consume several resources; if there is only a single monetary resource to be accounted for, this resource consumption is the cost of the project. If complete weight and score information is available, the most preferred portfolio can be obtained by maximizing the overall value of the portfolio subject to the resource constraints.
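With complete information, the additive model above can be sketched in a few lines of code. The scores, costs, weights and budget below are invented for illustration (they are not data from the paper), and the exhaustive search over subsets stands in for the optimization that a realistic implementation would use.

```python
from itertools import combinations

# Hypothetical ex post scores in [0, 1] on three criteria, and project costs.
scores = {
    "P1": [0.9, 0.4, 0.7],
    "P2": [0.5, 0.8, 0.6],
    "P3": [0.2, 0.3, 0.9],
    "P4": [0.7, 0.6, 0.1],
}
costs = {"P1": 3, "P2": 2, "P3": 2, "P4": 1}
weights = [0.5, 0.3, 0.2]   # positive criterion weights summing to one
budget = 5

def project_value(name):
    """Weighted average of the project's criterion-specific scores."""
    return sum(w * s for w, s in zip(weights, scores[name]))

def portfolio_value(portfolio):
    """Additive portfolio value: sum of constituent project values."""
    return sum(project_value(p) for p in portfolio)

# Feasible portfolios are subsets of projects whose total cost fits the budget;
# with complete weight and score information, the most preferred portfolio
# simply maximizes overall value subject to that constraint.
feasible = [
    frozenset(c)
    for r in range(len(scores) + 1)
    for c in combinations(scores, r)
    if sum(costs[p] for p in c) <= budget
]
best = max(feasible, key=portfolio_value)
print(sorted(best), round(portfolio_value(best), 3))
```

Note that the exhaustive enumeration is exponential in the number of projects; it is shown only to make the objective and constraint explicit.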

Often, however, the elicitation of complete weight and score information can be costly or even impossible (see Salo and Punkka, 2005, and references therein). The need for extensive sensitivity analyses with regard to these parameter values also suggests that complete information may be unnecessary. Motivated by these concerns, work on Preference Programming methods (Salo and Hämäläinen, 2001) has resulted in approaches to the modeling of incomplete information through set inclusion, whereby the results are based on feasible sets of weights and scores (instead of unique point estimates). Such incomplete weight information can be elicited, for example, through interval-valued ratio statements (Salo and Hämäläinen, 1992; Mustajoki et al., 2005) or (in)complete rank orderings (Salo and Punkka, 2005). Score information, in turn, can be modeled through intervals characterized by lower and upper bounds between which the 'true' score is assumed to lie.

Incomplete information leads to value intervals for projects and portfolios alike. Although no portfolio usually has the highest overall value for all feasible parameter values, the available information can still be analyzed to determine which portfolios a rational decision maker who seeks to maximize the overall portfolio value would be interested in. Towards this end, it is useful to establish the notion of dominance between portfolios: portfolio p dominates p' if (i) the value of portfolio p is higher than or equal to that of p' for all feasible weights and scores, and (ii) there exist some feasible weights and scores such that the value of portfolio p is strictly higher than that of p'. Non-dominated portfolios are feasible portfolios that are not dominated by any other feasible portfolio; they are consequently viable candidates in the search for the most preferred portfolio. Dominated portfolios, in contrast, can be eliminated from further analyses, because it would be possible to identify another portfolio which would yield no less overall value for all feasible weights and scores.
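The dominance relation can be illustrated with a small numerical sketch. Here the incomplete weight information is an interval for the first criterion weight, and the "for all feasible weights" condition is approximated by checking the value difference over a dense sample of feasible weight vectors; the projects, scores and interval are invented for illustration, and an exact method (such as that in RPM-Solver) would instead examine the extreme points of the feasible weight set.

```python
import random

random.seed(0)

# Hypothetical ex post scores of three projects on two criteria.
scores = {"P1": [0.9, 0.2], "P2": [0.4, 0.8], "P3": [0.6, 0.6]}

# Incomplete weight information: w1 lies in [0.3, 0.7] and w1 + w2 = 1.
samples = [(w1, 1 - w1) for w1 in (random.uniform(0.3, 0.7) for _ in range(1000))]

def value(portfolio, w):
    """Additive portfolio value under weight vector w."""
    return sum(w[0] * scores[p][0] + w[1] * scores[p][1] for p in portfolio)

def dominates(p, q):
    """p dominates q: its value is never lower, and strictly higher somewhere."""
    diffs = [value(p, w) - value(q, w) for w in samples]
    return min(diffs) >= 0 and max(diffs) > 0

# Neither single-project portfolio dominates the other: P1 is better when the
# first criterion weighs heavily, P2 when the second does.
print(dominates({"P1"}, {"P2"}), dominates({"P2"}, {"P1"}))   # False False
print(dominates({"P1", "P2"}, {"P3"}))                        # True
```

Because dominance requires superiority over the whole feasible set, incomparable pairs such as {P1} and {P2} above both remain non-dominated until further preference information shrinks the weight interval.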

A central concept in RPM is the Core Index, which maps information about non-dominated portfolios to the project level. The Core Index of a project is defined as the share of non-dominated portfolios in which the project is contained. Based on the Core Index values, the set of all projects can be partitioned into (i) core projects, which are included in all non-dominated portfolios, (ii) exterior projects, which are not included in any non-dominated portfolio, and (iii) borderline projects, which are included in some but not all non-dominated portfolios. From the viewpoint of robustness, an essential feature is that all core projects would (and exterior projects would not) belong to the recommended portfolio, even if additional information were to be given. In formal terms, additional information refers here to more restrictive preference statements on weights or scores, i.e., statements that lead to a smaller set of feasible parameter values in the sense of set inclusion.
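Building on the dominance relation, the Core Index and the resulting core/borderline/exterior partition can be sketched end to end. All numbers below are invented: four unit-cost projects, a budget that admits at most two of them, and dominance again approximated over sampled feasible weight vectors rather than computed exactly.

```python
import random
from itertools import combinations

random.seed(1)

# Hypothetical ex post scores on two criteria; unit costs; budget of two projects.
scores = {"P1": [0.9, 0.8], "P2": [0.7, 0.3], "P3": [0.3, 0.7], "P4": [0.2, 0.1]}
costs = {p: 1 for p in scores}
budget = 2

# Incomplete weight information: w1 anywhere in [0.3, 0.7], w1 + w2 = 1.
samples = [(w1, 1 - w1) for w1 in (random.uniform(0.3, 0.7) for _ in range(500))]

portfolios = [
    frozenset(c)
    for r in range(len(scores) + 1)
    for c in combinations(scores, r)
    if sum(costs[p] for p in c) <= budget
]

def value(portfolio, w):
    return sum(w[0] * scores[p][0] + w[1] * scores[p][1] for p in portfolio)

def dominates(p, q):
    diffs = [value(p, w) - value(q, w) for w in samples]
    return min(diffs) >= 0 and max(diffs) > 0

nondominated = [p for p in portfolios if not any(dominates(q, p) for q in portfolios)]

# Core Index: share of non-dominated portfolios that contain the project.
core_index = {p: sum(p in nd for nd in nondominated) / len(nondominated)
              for p in scores}
core = [p for p, ci in core_index.items() if ci == 1.0]        # outperformers
exterior = [p for p, ci in core_index.items() if ci == 0.0]    # underperformers
borderline = [p for p, ci in core_index.items() if 0.0 < ci < 1.0]
print(core, exterior, borderline)   # ['P1'] ['P4'] ['P2', 'P3']
```

In this toy instance the non-dominated portfolios are {P1, P2} and {P1, P3}, so P1 is a core project for every feasible weighting, P4 is exterior, and P2 and P3 remain borderline until further weight information is supplied.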

In project portfolio selection, the RPM framework allows for a staged process where the initial weight and score information is loose enough that it surely covers the 'true' parameter values. Based on this information, RPM computations (implemented in the RPM-Solver© software2) can be carried out to determine non-dominated portfolios, Core Index values for all projects, and even robustness measures which are derived from the portfolios' overall value intervals. If the decision maker is not willing to act upon these results, she is encouraged to supply additional information which, by construction, can only reduce the set of non-dominated portfolios. This means that some borderline projects typically become new core and exterior projects, while the value intervals of the remaining non-dominated portfolios become narrower. If necessary, a unique portfolio can be recommended by applying decision rules in connection with portfolio-level measures. To date, reported examples of RPM case studies include the development of a strategic product portfolio in a telecommunications company (Lindstedt et al., 2005) and the screening of innovation ideas in the Foresight Forum of the Ministry of Trade and Industry in Finland (Könnölä et al., 2005).

2 http://www.rpm.tkk.fi

3.2 Determination of outperformers and underperformers

RPM-evaluation is based on ex post portfolio selection where scores are derived from recorded multicriteria project evaluations and the projects' ex post overall value is represented by an additive weighting model of these scores. This set-up explicitly addresses the question of which projects have performed 'best' in view of the criteria that relate to the program objectives. In answering this question, one needs to specify how many projects are included in the subset of 'best' projects.

In the presence of incomplete weight and score information, the number of non-dominated portfolios (which are employed to establish the partition into core, exterior and borderline projects) can be large. By definition, core projects can be regarded as outperformers in the sense that they belong to the 'best' subset of projects in view of the available parameter information. Exterior projects, in turn, are underperformers in the sense that they would not belong to the optimal portfolio for any choice of feasible weights and scores. Borderline projects may or may not belong to non-dominated portfolios, depending on which feasible weights and scores are employed.

In RPM-evaluation, the resulting sets of core and exterior projects are taken to represent outperformers and underperformers, respectively. An intuitive justification for this is that a particular project is a core project only if it belongs to the subset of 'best' projects for all combinations of feasible weights and scores. Conversely, a project is one of the exterior projects if it cannot enter the subset of 'best' projects, no matter what feasible weights and scores are used. These two subsets are consequently 'robust', because they account for the presence of incomplete information.

A key design parameter in RPM-evaluation is the ex post resource constraint – or the 'budget' – which determines how many projects can be contained in non-dominated portfolios in ex post selection. This parameter also has an impact on how many projects are labelled as outperformers (core) and underperformers (exterior). Here, there are inherent trade-offs to be made: if the budget is tight, these two sets will be so small that the application of statistical tests in the comparison of ex ante indicators for core and exterior projects is unlikely to yield statistically significant results. On the other hand, if the budget is very large, each non-dominated portfolio will contain many projects, to the effect that the core may contain projects that seem 'average' relative to others3. It therefore follows that the ex post resource constraint should be set so that non-dominated portfolios contain a sizable, but not unduly large, fraction of all projects.

To sum up, the sets of outperforming core projects and underperforming exterior projects are constructed through the use of an explicit multicriteria model which accounts for the relative importance of the evaluation criteria and, moreover, allows for incomplete information about the criterion weights and the projects' ex post indicators. At the ex post selection stage, the size of these sets can be controlled by limiting the number of projects in non-dominated portfolios through a budget constraint. Other constraints can also be employed to account for additional restrictions (e.g., minimum quotas for projects that represent different regions or technologies), whereby the implications of such restrictions would be automatically reflected in the determination of the corresponding sets of core and exterior projects.

4. Application to a national innovation program

In the following, we describe how RPM-evaluation was applied to the analysis of the Finnish innovation program TULI (Research into Business4), a large pre-seed funding program for academic innovations. This program was selected for the case study for reasons of data availability and the willingness of the program management to apply the RPM approach. However, the objectives, instruments and target groups of TULI are representative of many other programs and initiatives that seek to foster academic spin-offs in other countries (see, e.g., Callan, 2001).

4.1 Characteristics of the TULI-program

The strengthening of innovation activities has been a focal policy objective in Finland since the 1960's (Lemola, 2001). The 1990's, in particular, were a period of active development of the innovation system through efforts which were partly inspired by Lundvall's (1992) and Porter's (1990) ideas. These efforts were central to the implementation of the strategic objectives of the Finnish science and technology policy (Hermesniemi et al., 1996; Lemola, 2001), where increasing attention was devoted to the commercial utilization of the results of academic research.

3 At the extreme, if the budget is equal to the total cost of all projects, then all projects would belong to the single non-dominated portfolio, meaning that all projects would belong to the core while the set of exterior and borderline projects would become empty.

4 http://www.tuli.info, http://www.tekel.fi/english/programmes_and_networks/research_into_business-tuli_pr/

In the 1990's, the National Technology Agency – whose core mission is to provide R&D project funding for applied technological research at industrial firms, research institutes and universities – established a new innovation development and funding program, TULI. This on-going national program employs full-time commercialization experts who work at universities, with the remit of seeking and evaluating new research-based business ideas. During its ten years of activity, the financial volume of TULI has grown to over 2.5 million euros per year. TULI presently operates through eight regional centers located near major universities and research institutes in Finland. Each year, more than 600 research-based business ideas are evaluated, out of which more than 200 are approved for funding. TULI offers pre-seed funding of up to 10 000 euros per project (see Kuusisto et al., 2004).

Funding decisions about research-based inventions are made by regional project groups which usually include up to ten members. The project manager of the respective TULI center acts as the secretary of the regional group, whose other members often include central IPR and innovation managers from universities and research institutes, regional financiers, and representatives from regional business development companies. The regional project groups follow common guidelines in project evaluation and decision making. They also use a national electronic database to document the decision process according to common rules.

The TULI-program is directed by a Steering Group which has ten high-level representatives from research and financial institutes, public authorities and innovation development organizations. The Steering Group sets the investment criteria and accepts the tools and processes used at the different stages of the investment and follow-up process. It also monitors the performance of the regional TULI centers and makes proposals about future budgets and the allocation of resources among the eight TULI centers. Each year, TULI facilitates the establishment of circa 30 to 40 academic spin-off companies and circa 25 to 35 licensing contracts (Kuusisto et al., 2004).

The challenges faced by TULI are common to most pre-seed or early-stage financiers: prospective investments are highly uncertain, while appropriate risk assessment, due diligence and other evaluations and investment calculations are complicated by the fact that practically all cases lack any sort of quantitative data. TULI investments are also quite small, so costly pre-investment evaluations are inappropriate. In consequence, TULI managers apply simple qualitative and partly subjective evaluation criteria to select eligible cases from the flow of incoming ideas. Approved cases, called TULI-projects, are subjected to a systematic follow-up procedure to obtain information that may be relevant for possible further investment rounds.

For completed TULI-projects, the follow-up procedure is carried out in four stages. The first assessment is made immediately after the completion of the project. At this stage, data about the ex ante characteristics of the project and the actions that have been taken during the project are collected. Subsequently, three follow-ups are performed for all projects: (i) six months, (ii) one year, and (iii) two years after the completion of the TULI-project. All this data is provided by the regional TULI managers, who fill in background and follow-up questionnaires via an electronic web-based interface. All follow-up questionnaires have the same structure.

    Insert Figure 1 around here

    External evaluations of the TULI-program have confirmed that the follow-up proce-

    dure is accepted by the program participants. However, this procedure is relatively

    laborious in some cases, because highly productive TULI managers may have up to

one hundred projects to follow up. The managers have also criticized the fact that
the resulting data has not been thoroughly analyzed, and that results based on this
data have not been sufficiently disseminated to them (Kuusisto et al., 2004).

    The TULI Steering Group has two major objectives for the follow-up. First, it is ac-

    countable for (and hence highly interested in) the impacts, effectiveness and results

    of TULI activities. Second, although TULI has been running for about ten years, it

    is still regarded as a pilot program and a testbed for the development of financial

    instruments for the early stages of innovation activities. The Steering Group is

    therefore keen on learning how successful the program has been and, moreover,

interested in more comprehensive questions about the early stages of academic in-

    novations. For example, what instruments are most appropriate in a pre-seed fund-

    ing program? Or what characteristics help distinguish between successful and less

    successful cases?

In this setting, we discussed with the Steering Group the possibilities for explor-
ing the follow-up data with the RPM methodology. The Steering Group agreed that an
exercise based on this data should be performed, subject to the following constraints:

    • No new data gathering will be made, i.e., the analysis must rely on existing data

    about the projects’ ex ante characteristics, TULI interventions, and later results.

    • Analysis should be restricted to start-up cases. This meant that licensing cases

were to be left out of the analysis, because license negotiations involve long lead times

    and the follow-up database did not contain sufficient data about licensed TULI-

    projects.

    • The exercise should distinguish between successful and less successful cases

    with regard to the program objectives, most notably the generation of new re-

    search-based business activities.

    • The exercise should support decision making activities (both about individual

    investments and the development of the TULI-program as a whole), in the rec-

    ognition of multiple program objectives.

    • The analysis should offer easily understandable visual presentations of results,

    in order to catalyze a constructive debate in the Steering Group and to motivate

    further data collection activities.

    4.2 Data set

    The data set consisted of all the 61 projects in the recently established follow-up

    database that had resulted in the establishment of a start-up company. This data

    had been supplied by the regional TULI managers in accordance with the follow-up

    procedure. In the RPM-evaluation, we used ex ante data (collected for pre-

    investment evaluation), intermediate data (collected to maintain a record of the

    TULI activity), and ex post performance data. In total, these data sets contained 59

ex ante and intermediate variables and 32 ex post variables. Most variables
pertained to typical early stage business plan development and investment evaluation

    practices, and they had been set up by the Steering Group earlier on.

    In the analysis, we used six-month follow-up assessments. One reason for this was

    that the six-month follow-up data set was larger than the data sets for the later fol-

    low-up periods, because the follow-up procedure had been initiated only one year

    before we carried out our RPM study. The selection of this short follow-up period

    seemed justified also because the interventions in TULI-like innovation programs

    are quite short (typically 3-5 months) and the managers have a large set of projects.

    In consequence, they have a good recollection of recently completed projects, but

    tend to forget the details of projects that have been completed earlier on: thus, they

    are in a better position to understand aggregate analytical results in view of recent

(rather than earlier) experiences. Furthermore, the majority of completed projects in

    TULI (just as in other early-stage high-volume innovation programs) often require

    further public support, financial investment rounds and business development ac-

    tivities. This means that it may be easier to attribute the impacts of the initial in-

    tervention to the program relatively soon after the project has been completed, at a

    stage when it is not yet necessary to consider to what extent these should be at-

    tributed to the other interventions.

    4.3 Evaluation model

    After thorough discussions, the TULI program manager noted that the ‘success’ of a

    project could be measured primarily through the following indicators contained in

    the follow-up data:

1. Financing is in keeping with the business plan (abbreviated ‘FI’),

    2. Cash-flow is in keeping with the business plan (‘CF’),

    3. The project team is in keeping with the business plan (‘TM’),

    4. Sales and marketing are in keeping with the business plan (‘SM’),

    5. The project has attracted a business angel (‘BA’),

    6. The project has attracted a major capital investment (‘CI’),

    7. The project is located at a technology incubator (‘TI’).

    At the project level, the program manager felt that these criteria were compensatory

    in the sense that “the accomplishment of any of them is beneficial regardless of

others – the more the merrier on every criterion”. At the portfolio level, the additiv-

    ity assumptions seemed appropriate, too, because TULI-projects are independent in

    the sense that the results of one project do not depend on the others. An additive

    evaluation model based on the above variables thus seemed warranted.

    In the follow-up procedure, information about the projects was recorded through

    three possible responses for each of the above variables, i.e. ‘Yes’, ‘No’, or ‘N/A’.

    These responses were converted to scores by associating ‘No’ and ‘Yes’ responses

    with scores of 0 and 1, respectively, while ‘N/A’ was associated with 0.2. The rather

    negative scoring of ‘N/A’ responses sought to exclude the possibility that projects

    with many (perhaps hastily) recorded ‘N/A’ responses would outperform more me-

    ticulously completed assessments with ‘Yes’ and ‘No’ responses.
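As a minimal sketch (function and variable names are our own), the response-to-score conversion described above can be written as follows:

```python
def response_score(response: str, na_score: float = 0.2) -> float:
    """Map a follow-up response to a criterion score: 'No' -> 0, 'Yes' -> 1,
    and 'N/A' -> 0.2 by default, as described in the text."""
    scores = {"No": 0.0, "Yes": 1.0, "N/A": na_score}
    return scores[response]

# A project's response profile over the seven criteria (FI, CF, TM, SM, BA, CI, TI)
profile = ["Yes", "N/A", "Yes", "No", "No", "N/A", "Yes"]
print([response_score(r) for r in profile])  # [1.0, 0.2, 1.0, 0.0, 0.0, 0.2, 1.0]
```

Parameterizing the ‘N/A’ score in this way also makes it straightforward to repeat the computations with alternative scores for the ‘N/A’ response.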

    Weight information for the seven criteria was acquired by eliciting an incomplete

    rank-ordering from the program manager (Salo and Punkka, 2005). Specifically, he

    stated that the criteria ‘TM’ and ‘SM’ were the two most important ones, but did not

    specify which one of them was the most important one: these criteria thus assumed

    rankings 1 and 2 (meaning that either criterion could assume either one of the two

top-most rankings). In the same way, the next two most important criteria

    were ‘FI’ and ‘CF’ with rankings 3 and 4, followed by ‘BA’ and ‘CI’ with rankings 5

    and 6. The least important criterion was ‘TI’ with ranking 7. The relevance of each

criterion was ensured by imposing a lower bound of 0.035 on the weight of

    each criterion. Mathematically, the above statements corresponded to constraints

on the feasible weights w = (wFI, wCF, wTM, wSM, wBA, wCI, wTI) so that (i) wTM and wSM

    had to be greater than or equal to any of the other weights; (ii) wFI and wCF were re-

    quired to be less than or equal to wTM and wSM, but greater than or equal to wBA,

    wCI and wTI, and (iii) wTI could not exceed any others.
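The feasible weight set defined by constraints (i)-(iii), together with the lower bound and normalization, can be sketched as a simple membership test (a hypothetical illustration of ours; the actual RPM computations operate on this set analytically rather than through point checks):

```python
def is_feasible(w: dict, lb: float = 0.035, tol: float = 1e-9) -> bool:
    """Check the incomplete rank-ordering constraints on the criterion weights."""
    top = [w["TM"], w["SM"]]
    mid = [w["FI"], w["CF"]]
    low = [w["BA"], w["CI"], w["TI"]]
    return (
        abs(sum(w.values()) - 1.0) < 1e-6                        # weights sum to one
        and all(v >= lb - tol for v in w.values())               # lower bound 0.035
        and min(top) >= max(mid + low) - tol                     # (i) TM, SM largest
        and min(mid) >= max(low) - tol                           # (ii) FI, CF above BA, CI, TI
        and w["TI"] <= min(v for k, v in w.items() if k != "TI") + tol  # (iii) TI smallest
    )

w = {"FI": 0.15, "CF": 0.15, "TM": 0.25, "SM": 0.25, "BA": 0.08, "CI": 0.08, "TI": 0.04}
print(is_feasible(w))  # True
```

Any weight vector passing this test is one admissible realization of the program manager’s incomplete rank-ordering.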

    4.4 Outperformers and underperformers

    Towards the identification of outperformers and underperformers (relative to the 61

    projects in the data set), it was necessary to limit the number of projects that would

    be contained in the non-dominated portfolios computed in ex post project portfolio

    selection. Based on his experience, the TULI-program manager estimated that

    about 25% of projects tend to flourish later on. This observation was employed as a

constraint so that all non-dominated portfolios would contain a fourth of all
projects. The corresponding ‘budget’ constraint was implemented by associating a unit

    cost with all projects and by assuming a total budget of 15 units for the selection of

    the optimal ex post portfolio (i.e., the nearest integer to 61/4). The assumption of

    equal costs seemed defensible, because (i) TULI had provided rather similar finan-

    cial and professional support to all projects and (ii) the projects had been evaluated

    through binary-valued indicators (rather than through absolute measurement

    scales).

The RPM computations resulted in 17 non-dominated portfolios which led to the

    identification of 12 core, 12 borderline and 37 exterior projects. Not surprisingly,

    the response profile of the core projects contained mostly ‘Yes’ responses, especially

    on the first four criteria, while exterior projects were characterized by several ‘N/A’

    and ‘No’ responses. In principle, close approximations for these two subsets con-

    taining outperforming core projects and underperforming exterior projects might

    have been obtained through the sequential specification of threshold rules (cf. Di

    Benedetto, 1999, for example). But because the data was processed through an ex-

    plicit value model and ex post portfolio optimization, the logic behind the subsets

    was more transparent and driven by the program objectives, as opposed to the in-

    troduction of ad hoc threshold levels.
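The core/borderline/exterior logic can be illustrated on toy data (this is our own sketch, not the exact RPM algorithm: the actual computations enumerate all non-dominated portfolios, whereas the sketch approximates them by sampling feasible weight vectors; with equal unit costs, the optimal portfolio for a fixed weight vector is simply the ‘budget’ highest-scoring projects):

```python
def classify(scores, budget, weight_samples):
    """Classify projects by how often they enter the optimal portfolio
    across the sampled weight vectors."""
    n = len(scores)
    counts = [0] * n
    for w in weight_samples:
        value = [sum(wi * si for wi, si in zip(w, proj)) for proj in scores]
        chosen = sorted(range(n), key=lambda i: value[i], reverse=True)[:budget]
        for i in chosen:
            counts[i] += 1
    m = len(weight_samples)
    return {
        "core": [i for i in range(n) if counts[i] == m],        # always selected
        "borderline": [i for i in range(n) if 0 < counts[i] < m],
        "exterior": [i for i in range(n) if counts[i] == 0],    # never selected
    }

# Toy data: 8 projects scored on 3 criteria, portfolio 'budget' of 2 projects
scores = [
    [1.0, 1.0, 1.0], [1.0, 1.0, 0.2], [0.2, 0.0, 0.0], [0.0, 0.0, 0.2],
    [1.0, 0.2, 1.0], [0.0, 1.0, 0.0], [0.2, 0.2, 0.2], [1.0, 0.0, 0.0],
]
samples = [[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]]  # two admissible weight vectors
print(classify(scores, budget=2, weight_samples=samples))
```

Here project 0 is in the core (selected under both weightings), projects 1 and 4 are borderline (each selected under only one weighting), and the rest are exterior.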

    4.5 Examples of results

    In order to explore possible relationships between ex ante and ex post responses,

    we performed a comparative analysis of several ex ante indicators for projects con-

    tained in the subsets of core and exterior projects, respectively. We also subjected

observed differences to statistical tests by using the chi-square test for homogeneity
at the 5% significance level. Depending on the ex ante indicator, we either compared the

    ‘Yes’, ’N/A’, ’No’ distributions when the responses were mutually exclusive (Figures

    2 and 5), or examined the share of ‘Yes’ responses on each option when several op-

    tions were allowed (Figures 3 and 4).
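As an illustration, the chi-square statistic can be computed from the counts shown in Figure 2 (core: 5 ‘Yes’, 6 ‘No’, 1 ‘N/A’; exterior: 11 ‘Yes’, 12 ‘No’, 14 ‘N/A’); the critical value 5.991 is the standard chi-square quantile for two degrees of freedom at the 5% level:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    return sum(
        (table[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
        for i in range(len(table))
        for j in range(len(table[0]))
    )

# Rows: core (12), exterior (37); columns: 'Yes', 'No', 'N/A' counts from Figure 2
stat = chi_square_stat([[5, 6, 1], [11, 12, 14]])
critical = 5.991  # chi-square critical value, df = (2-1)*(3-1) = 2, alpha = 0.05
print(round(stat, 2), stat > critical)  # 3.73 False
```

The statistic stays below the critical value, consistent with Figure 2 carrying no significance marker.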

    Furthermore, we carried out sensitivity analyses with respect to (i) the score that

    was associated with the ‘N/A’ response (by allowing it to vary in the range of 0.1 –

    0.3) and (ii) the budget constraint (by using ‘budgets’ of 12 and 20 units which cor-

    responded to one fifth and one third of all projects). These variations did result in

    some minor changes in the sets of core and exterior projects, but the conclusions of

all statistical tests on the ex ante indicators remained unchanged. There was no

    need to perform sensitivity analyses with regard to criterion weights because, by

    construction, the RPM analysis is based on the consideration of incomplete weight

    information.
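The sensitivity analysis can be repeated mechanically over the parameter grid; the sketch below (our own, simplified to a single fixed weight vector and toy response profiles) varies the ‘N/A’ score and the ‘budget’ and reports the selected set for each setting:

```python
from itertools import product

def top_set(profiles, weights, na_score, budget):
    """Score response profiles ('Yes'/'No'/'N/A') and return the indices of
    the top-'budget' projects under the given criterion weights."""
    score = {"No": 0.0, "Yes": 1.0, "N/A": na_score}
    value = [sum(w * score[r] for w, r in zip(weights, p)) for p in profiles]
    top = sorted(range(len(profiles)), key=lambda i: value[i], reverse=True)[:budget]
    return sorted(top)

profiles = [
    ["Yes", "Yes", "Yes"], ["Yes", "N/A", "N/A"], ["No", "No", "Yes"],
    ["N/A", "Yes", "No"], ["No", "N/A", "No"],
]
weights = [0.5, 0.3, 0.2]
for na_score, budget in product([0.1, 0.2, 0.3], [1, 2]):
    print(na_score, budget, top_set(profiles, weights, na_score, budget))
```

In this toy example the selected set is stable across the whole grid; in the case study the analogous variations caused only minor changes and left the statistical conclusions intact.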

    Insert Figures 2 – 5 around here

    Figures 2 through 5 show illustrative highlights from the RPM analysis, whereby

    statistically significant differences are indicated in the Figures:

    • The number of projects that would have been started without TULI-

    participation was higher among core projects (42%) than among exterior pro-

    jects (30%). This can be readily understood, because it is likely that the best

    cases – which ought to find their way to the core – would have proceeded even

    without TULI. However, 50% of projects in the core would not have been started

    without TULI. This is a strong positive indication of the ability of TULI to iden-

    tify promising and successful projects (Figure 2).

    • In the analysis of the prior funding and support services, it was found that sci-

    ence parks and innovation managers had often been involved in the successful

    core projects. Also, it seemed that the research institute’s own innovation ser-

    vices had been more often employed by the core projects than by the exterior

    projects, although this difference could not be statistically confirmed (Figure 3).

    • An analysis of ex ante characteristics showed that projects which were under-

    taken by a research group or team were more likely to find their way to the set

    of outperforming core projects. Also the presence of research and corporate co-

    operation seemed beneficial, even though this hypothesis was not confirmed by

    statistical tests. Other background characteristics (such as purely research-

    based background or international collaboration) did not seem to distinguish

    between core and exterior projects (Figure 4).

    • When looking at the role of science parks, the subset of outperforming core pro-

    jects contained (in proportionate terms) many more projects that had cooper-

    ated with these parks. This conclusion about the beneficial role played by the

    science parks was also confirmed through statistical tests (Figure 5).

    Taken together, the above statements illustrate the kinds of results that can be of-

    fered through RPM-evaluation. These results, together with further analyses on

other ex ante indicators, were presented to the TULI Steering Group which found

    them insightful and interesting. The distinction between outperformers and under-

    performers, in particular, seemed intuitive and conceptually appealing. The analy-

    sis also addressed statements that were important to the Steering Group in strat-

    egy formulation, such as “science parks and innovation managers tend to generate

    best performing projects”, suggesting that TULI-funding should perhaps be targeted

    even more to the flow of proposals from science parks.

    An intriguing observation was that the number of projects that had received sup-

port from the National Technology Agency (Tekes) was proportionately higher among

    underperforming exterior projects than among outperforming core projects. At this

    stage, we do not have a full explanation for this observation, but we can formulate

some viable hypotheses. For instance, these projects may have been characterized by

    particularly ambitious (and hence partly unrealistic) objectives, which may have

    been useful for securing Tekes funding, but which may have made it more difficult

    to achieve these objectives. Another hypothesis is that the transition from Tekes

    projects (which often have a strong research orientation) to successful new busi-

    nesses may be difficult unless there is an intermediate stage with a strong applied

    orientation. Arguably, one may also note that Tekes has quite correctly borne risks

    when selecting projects, in keeping with its role as a public funding agency for ap-

    plied technological research.

    5. Discussion and extensions

    Even though our case study focused on the TULI-program, RPM-evaluation is a

general approach that can be applied in much the same way to the analysis of many other data

sets on innovation processes and technology development activities. Examples include,
among others, data sets on RTD projects, spin-off firms and science parks in which
ex ante and ex post indicators are used to map contextual characteristics, program
interventions and the later impacts of these interventions.

    tween the recorded follow-up data and the objectives are indirect, RPM-evaluation

    can still be useful, provided that variables in the follow-up data serve as surrogates

    for fundamental objectives (see, e.g., Keeney, 1992).

    In our case study, the determination of outperforming core projects and underper-

forming exterior projects was based on the computation of non-dominated
portfolios that contained equally many projects each. In principle, this assumption can

    be relaxed by assuming that the projects are not of equal cost when non-dominated

    portfolios are computed in ex post portfolio selection. This would make it possible

    to identify, for example, non-dominated portfolios that consume one third (or some

    other proportion) of the available funding. Such an approach would be called for, if

    the program is primarily a funding instrument (rather than a source of advisory

    support), and the projects differ considerably in terms of their funding volume.

    However, the introduction of such an approach is likely to necessitate changes also

    in data collection, because the ex post performance indicators should be recorded

    on absolute scales, to ensure that the higher costs of larger projects can be com-

    pensated through absolute indicators which are capable of reflecting the volume of

    the results from these projects. For instance, the resulting business turnover

    should be recorded in monetary terms and not on qualitative scales such as ‘poor’ –

    ‘satisfactory’ – ‘excellent’, as any judgmental statements on such qualitative scale

    would be contingent on project size and cost. Apart from avoiding these question-

    able linkages, the use of absolute measurements seems preferable also because it

    offers possibilities for further analyses: for example, one can examine to what ex-

    tent project size (as an input indicator per se) is indicative of later performance per

    unit of investment.

    RPM-evaluation can also be extended to examine how projects that are in their ear-

    liest phases might perform in the future, in view of past data on some earlier ana-

    logues (see also Porter et al., 1991). Towards this end, it is first necessary to assess

    how important different ex ante indicators are for determining whether or not two

    projects are (dis)similar. Based on this assessment, an aggregate distance metric

    (based on the ex ante indicators) can be constructed to identify a reference set of

    earlier projects that have been ‘most similar’ to the new project, in the sense of the

    corresponding RPM core set. Then, recorded follow-up data on projects in the refer-

    ence set can be employed to generate a spectrum of corresponding ex post indica-

    tors, whereby this spectrum of later and actual realizations may be indicative of

    how the new project might perform. Preconditions for the warranted use of this

    kind of ‘RPM-forecasting’ include, among others, (i) the availability of sufficient data

    for the identification of a large enough reference set, and (ii) the existence of persis-

    tent causal relationships between the ex ante and ex post indicators. Conceptually,

this approach has parallels to Cooper’s (1985) NewProd model where the identifica-

    tion of the reference set, however, is based on other metrics.
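The reference-set idea can be sketched with a weighted mismatch (Hamming-type) distance over ex ante responses; the indicator weights and the data below are our own illustrative choices:

```python
def distance(a, b, weights):
    """Weighted mismatch count between two ex ante response profiles."""
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

def reference_set(new, past, weights, k=3):
    """Indices of the k past projects most similar to the new project."""
    ranked = sorted(range(len(past)), key=lambda i: distance(new, past[i], weights))
    return ranked[:k]

past = [
    ["Yes", "Yes", "No"],   # project 0
    ["No",  "Yes", "No"],   # project 1
    ["Yes", "No",  "Yes"],  # project 2
    ["No",  "No",  "No"],   # project 3
]
new = ["Yes", "Yes", "Yes"]
weights = [0.5, 0.3, 0.2]   # importance of each ex ante indicator for similarity
print(reference_set(new, past, weights, k=2))  # [0, 2]
```

The recorded ex post outcomes of the returned reference projects would then serve as the spectrum of possible realizations for the new project.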

    6. Conclusions

    The RPM-evaluation methodology developed in this paper provides decision support

    to the management of innovation programs and other policy instruments, based on

    systematic analyses of longitudinal data obtained from follow-up reporting proce-

    dures. Methodologically, the novelty of RPM-evaluation stems from (i) the develop-

    ment of an ex post multicriteria evaluation model, which is explicitly linked to the

    program objectives and recognizes the value of incomplete information in dealing

    with these objectives, and (ii) the identification of projects that can be incontestably

    regarded as outperformers (core projects) or underperformers (exterior projects) in

    view of the ex post evaluation model. By comparing the ex ante characteristics of

    projects in these two sets, one can explore and uncover relationships between the

    ex ante and ex post indicators which, in turn, may suggest ways in which the pro-

    gram could be improved upon.

    Although the data set in our case study was not large, RPM-evaluation can be ap-

    plied to much larger data sets. Our computational experiments with RPM-Solver©

    software suggest that the RPM approach can be readily applied to data sets con-

taining hundreds of projects, at the very least. The introduction of further ex ante
indicators does not increase the computational effort considerably, be-
cause core and exterior projects can still be analyzed separately for each indicator.

    The inclusion of additional ex post indicators in the evaluation model, on the other

    hand, may result in a heavier computational burden, particularly if little informa-

    tion is supplied about how important the criteria are relative to each other. But

even here, computational effort is unlikely to become a critical issue,

    because core and exterior projects need be determined only once before their corre-

    sponding ex ante indicators are subjected to a closer analysis.

Overall, RPM-evaluation is very much in the spirit of ‘TechMining’, because it serves

    to uncover new relationships in existing data (see, e.g., Watts and Porter, 1997).

    Indeed, once the ex post evaluation model has been developed in collaboration with

managers, it can be applied in an exploratory way to obtain visual presentations that

    illustrate how the ex ante and ex post indicators tend to differ for core and exterior

projects. If differences are observed, these can be validated either by carrying out

    statistical tests or by subjecting them to managerial judgment that helps address

    additional perspectives that are not necessarily contained in the data (cf. Linstone,

1999). In this way, RPM-evaluation can support the formulation of new hypotheses,
as opposed to the testing of pre-defined hypotheses. Furthermore, because the focus

    is on outperforming and underperforming projects – which the managers tend to

    remember well – the approach puts managers in a good position to understand be-

    ter why the observed relationships seem to hold, and how this enhanced under-

    standing can be leveraged to support the further improvement of the program.

    Acknowledgments

    This research has been supported by the National Technology Agency of Finland.

    7. References

    1. Archer, N., Ghasemzadeh, F. : An Integrated Framework for Project Portfolio Selection, International Journal of Project Management 17(4), 207-216 (1999).

    2. Calantone, R.J., Di Benedetto, C.A., and Schmidt, J.B. : Using the Analytic Hierarchy Process in New Product Screening, Journal of Product Innovation Management 16, 65-76 (1999).

    3. Callan, B. : Generating Spin-offs: Evidence from Across the OECD, OECD STI Review 26, 13-55 (2001).

    4. Cooper, R.G. : Selecting Winning New Product Projects: Using the NewProd System, Journal of Product Innovation Management 2, 34-44 (1985).

5. Cooper, R.G., Edgett, S.J., and Kleinschmidt, E.J. : New Product Portfolio Management: Practices and Performance, Journal of Product Innovation Management 16, 333-351 (1999).

    6. Di Benedetto, C.A. : Identifying the Key Success Factors in New Product Launch, Journal of Product Innovation Management 16, 530-544 (1999).

    7. Dosi, G., Freeman, C., Nelson, R., Silverberg, G. and Soete, L. (Eds.) : Technical Change and Economic Theory, Pinter, London, 1988.

    8. Golabi, K., Kirkwood, C.W., and Sicherman, A. : Selecting a Portfolio of Solar Energy Projects Using Multiattribute Preference Theory, Management Science, 27(2), 174-189 (1981).

9. Gustafsson, J., and Salo, A. : Contingent Portfolio Programming for the Management of Risky Projects, Operations Research, forthcoming (2005).

10. Hall, B., Van Reenen, J. : How Effective Are Fiscal Incentives in R&D? A Review of the Empirical Evidence, Research Policy, 29(4-5), 449-469 (2000).

11. Henriksen, A.D., and Traynor, A.J. : A Practical R&D Project-Selection Scoring Tool, IEEE Transactions on Engineering Management 46(2), 158-170 (1999).

    12. Hernesniemi, H., Lammi, M., and Ylä-Anttila, P. : Advantage Finland – The Future of Finnish Industries, ETLA Series B 113, Taloustaito, Helsinki, 1996.

13. Keeney, R. : Value-Focused Thinking: A Path to Creative Problemsolving, Harvard University Press, Cambridge, MA, 1992.

    14. Keeney, R., and Raiffa, H. : Decisions with Multiple Objectives: Preferences and Value Trade-offs, John Wiley and Sons, New York, 1976.

15. Kleinmuntz, C.E., and Kleinmuntz, D.N. : A Strategic Approach to Allocating Capital in Healthcare Organizations, Healthcare Financial Management 53(4), 52-58 (1999).

    16. Kortum, S., and Lerner, J. : Does Venture Capital Spur Innovation?, NBER Working Paper Series No. 6846, 1998.

17. Kuusisto, J., Kotala, S., Kulmala, R., Viljamaa, A., and Vinni, S. : Intermediary Evaluation of the TULI-program (in Finnish with an English summary), Tekes Technology Programs Reports 8/2004, Tekes, Helsinki, 2004.

    18. Könnölä, T., Brummer, V., and Salo, A. : Diversity in Foresight: Insights from the Fostering of Innovation Ideas, submitted manuscript, Helsinki University of Technology, Systems Analysis Laboratory, October 2005.

    19. Lemola, T. : Science, Technology and Innovation for the Best of Society. A Look at the Recent History of Finnish Science and Technology Policy (In Finnish), VTT, Group for Technology Studies, Working papers 57/01, VTT, Espoo, 2001.

    20. Liesiö, J., Mild, P., and Salo, A. : Preference Programming for Robust Portfolio Modeling and Project Selection, submitted manuscript, Helsinki University of Technology, Systems Analysis Laboratory, September 2005.

    21. Lindstedt, M., Liesiö, J., and Salo, A. : Participatory Development of a Strategic Product Portfolio in a Telecommunication Company, International Journal of Technology Management (forthcoming).

    22. Linstone, H.A. : Decision Making for Technology Executives: Using Multiple Perspectives to Improve Performance, Artech House, Norwood, MA, 1999.

23. Lundvall, B.-Å. (ed) : National Systems of Innovation: Towards a Theory of Innovation and Interactive Learning, Pinter, London, 1992.

    24. Martino, J.P. : Research and Development Project Selection, John Wiley and Sons, New York, 1995.

    25. Mustajoki, J., Hämäläinen, R.P., and Salo, A. : Decision Support by Interval SMART/SWING - Incorporating Imprecision in the SMART and SWING Methods, Decision Sciences 36(2), 317-339 (2005).

    26. Mustar, P. : Spin-offs from Public Research: Trends and Outlooks, OECD STI Review 26, 165-172 (2001).

27. Porter, A.L. : QTIP: Quick Technology Intelligence Process. In: New Horizons and Challenges for Future-oriented Technology Analysis, Proceedings of the EU-US Scientific Seminar: New Technology Foresight, Forecasting and Assessment Methods, Seville, May 13-14, 2004. European Commission, Joint Research Centre.

    28. Porter, A.L., Roper, A.T., Mason, T.W., Rossini, F.A., and Banks, J. : Forecasting and Management of Technology, Wiley, New York, 1991.

    29. Porter, M. : The Competitive Advantage of Nations, The Free Press, New York, 1990.

    30. Salo, A., and Hämäläinen, R.P. : Preference Assessment by Imprecise Ratio Statements, Operations Research, 40(6), 1053-1061 (1992).

    31. Salo, A., and Hämäläinen, R.P. : Preference Ratios in Multiattribute Evaluation (PRIME) – Elicitation and Decision Procedures under Incomplete Information, IEEE Transactions on Systems, Man, and Cybernetics 31(6), 533-545 (2001).

32. Salo, A., and Punkka, A. : Rank Inclusion in Criteria Hierarchies, European Journal of Operational Research 163(2), 338-356 (2005).

    33. Salmenkaita, J.-P., and Salo, A. : Rationales for Government Intervention in the Commercialization of Technologies, Technology Analysis & Strategic Management, 14(2), 183-200 (2002).

34. Salo, A., Gustafsson, T., Mild, P. : Evaluation of a Cluster Programme for Finnish Forestry and Forest Industries, International Transactions in Operational Research 11, 139-154 (2004).

    35. Smits, R., and Kuhlmann, S. : The Rise of Systemic Instruments in Innovation Policy, International Journal of Foresight and Innovation Policy, 1(1), 4-32 (2004).

36. Stummer, C., and Heidenberger, K. : Interactive R&D Portfolio Analysis with Project Interdependencies and Time Profiles of Multiple Objectives, IEEE Transactions on Engineering Management 50(2), 175-183 (2003).

    37. Thore, S.A. (ed) : Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, Kluwer Academic Publishers, Boston, 2002.

38. Watts, R.J., and Porter, A.L. : Innovation Forecasting, Technological Forecasting & Social Change 56, 25-47 (1997).

[Figure 1 timeline: a TULI-project typically lasts 4-6 months; pre and intermediate data are gathered at the end of the project; mutually identical follow-up surveys are administered six months, one year, and two years after completion.]

    Figure 1: Data collection activities in the TULI-program.

[Figure 2 pie charts. Core projects (12): ‘Yes’ 5 (42%), ‘No’ 6 (50%), ‘N/A’ 1 (8%). Exterior projects (37): ‘Yes’ 11 (30%), ‘No’ 12 (32%), ‘N/A’ 14 (38%).]

    Figure 2: Impact of TULI-activation on the initialization of the project; would the

project have been started without TULI-activation?

[Figure 3 bar chart (0-50%), core projects (12) vs. exterior projects (37): science parks**, innovation managers**, institute’s own innovation services, National Technology Agency, others (specification requested); ** statistically significant (p=0.05).]

    Figure 3: Prior funding and/or support services by different organizations.

[Figure 4 bar chart (0-100%), core projects (12) vs. exterior projects (37): purely research-based, research & corporate cooperation, one innovator, research group or a team**, international cooperation; ** statistically significant (p=0.05).]

    Figure 4: Differences in ex ante project characteristics.

[Figure 5 pie charts. Core projects (12): ‘Yes’ 7 (58%)**, ‘No’ 5 (42%). Exterior projects (37): ‘Yes’ 4 (11%)**, ‘No’ 33 (89%); ** statistically significant (p=0.05).]

    Figure 5: Co-operation with science parks during the TULI-project.
