
Deliverable D5.6.1 Dissemination Level (PU) Copyright SAFESPOT

Contract N. IST-4-026963-IP

SF_D5.6.1_EvaluationPlan_v1.6.doc Page 1 of 56 CoSSIB

SAFESPOT INTEGRATED PROJECT - IST-4-026963-IP

DELIVERABLE

SP5 – CoSSIB

Cooperative systems applications infrastructure based

Deliverable No.

D5.6.1

SubProject No. SP5 SubProject Title Cooperative systems applications infrastructure based

Workpackage No. WP6 Workpackage Title Evaluation

Task No. T1 Task Title Evaluation Plan

Authors

O. Fakler (TRV) with inputs from COFIROUTE, CRF, DIBE, LCPC, MIZAR, SODIT, TNO, TUM

Status (F: final; D: draft; RD: revised draft): F

Version No: 1.6

File Name: SF_D5.6.1_EvaluationPlan_v1.6.doc

Planned Date of submission according to TA: 30/06/2008

Issue Date: 29/01/2010

Project start date and duration 01 February 2006, 53 Months

Evaluation Plan


Revision Log

Version Date Reason Name and Company

0.1 16.11.08 Initial version after kick-off meeting WP5.6 O. Fakler (TRV)

0.2 02.01.09 Update templates, annex etc. O. Fakler (TRV)

0.3 13.01.09 Update methodology O. Fakler (TRV)

0.4 03.02.09 Update methodology, structure O. Fakler (TRV)

0.5 10.03.09 Update structure O. Fakler (TRV)

0.10 16.03.09 Update structure, discussion with TUM and TNO O. Fakler (TRV), T. Schendzielorz (TUM)

0.12 02.04.09 Adjustment to structure of WP4.6 O. Fakler (TRV)

0.15 09.04.09 Input of Test cases provided to date, consolidation O. Fakler (TRV)

0.17 17.04.09 Updates for WEST test cases F. Peyret (LCPC), O. Fakler (TRV)

0.20 20.04.09 Integration of pending test cases O. Fakler (TRV)

1.0 25.05.09 Separation of Annex and update O. Fakler (TRV)

1.1 02.06.09 Update O. Fakler (TRV)

1.2 29.06.09 Integration of comments by partners O. Fakler (TRV)

1.3 30.06.09 Update Final version (Note: remarks of third year review are not included, because not available at date of submission) O. Fakler (TRV)

1.4 02.11.09 Update Final version, adjustments on remarks of third year review are included O. Fakler (TRV)

1.5 23.12.09 Update Final version, remarks of internal review are included O. Fakler (TRV), G. Vivo (CRF)

1.6 29.01.10 Update Final version, remarks of peer review are included O. Fakler (TRV), K. Belhoula, H. Pu (CA), F. Peyret (LCPC), P. Mortara (MMSE)

Test cases provided by:

• F. Bonnefoi (COFIROUTE)
• F. Visintainer (CRF)
• A. Possani (DIBE)
• S. Glaser (LCPC)
• C. Torres (MIZAR)
• N. Etienne (SODIT)
• P. Feenstra, E. Wilschut, M. Kievit (TNO)
• T. Schendzielorz (TUM)


Abbreviation List

ACC Adaptive Cruise Control

AEV Assistance and Emergency Vehicle

CoSSIB Cooperative Safety Systems Infrastructure Based Applications

D Deliverable

FCD Floating Car Data

FOT Field Operational Test

H&IW Hazard and Incident Warning

HLO High Level Objective

HMI Human Machine Interface

HW Hardware

IRIS Intelligent Cooperative Intersection Safety

mo motorway road environment

RDep Road Departure

REQ Requirement

ru rural road environment

SF SAFESPOT

SMAEV Specifications for Safety Margin for Assistance and Emergency Vehicles

SP Sub-project

SpA Speed Alert

SP5O SP5 (high level) objective

SW Software

TA Technical Annex

TS Test site

tt Test track

ur urban road environment

UC Use Case

UN User Need

VRU Vulnerable Road User (pedestrian, cyclist)

WP Work package


Table of contents

Revision Log ....................................................................................................2

Abbreviation List ..............................................................................................3

Table of contents .............................................................................................4

List of Figures ..................................................................................................5

List of Tables ...................................................................................................5

EXECUTIVE SUMMARY .................................................................................6

1. Introduction ................................................................................................7

1.1. Contribution to the SAFESPOT Objectives.........................................9

1.2. Deliverable structure...........................................................................9

2. SAFESPOT SP5 applications and test sites ............................................11

2.1. Applications ......................................................................................11

2.2. Technologies and equipment used ...................................................13

2.3. Evaluation environment ....................................................................18

3. Methodology of assessment ....................................................................20

3.1. SP5 – design and assessment process based on the V-model ........20

3.1.1. Design cycle...............................................................................21

3.1.2. Assessment cycle (re-adjustment of design)..............................21

3.2. SP5 – Evaluation methodology.........................................................22

3.2.1. Assessment categories ..............................................................24

3.2.2. Evaluation methods....................................................................29

3.2.3. Definition of assessment objectives (Success criteria)...............33

3.2.4. Definition of expected impacts (pre-assessment).......................34

3.2.5. Study design ..............................................................................34

3.2.6. Statistical restrictions..................................................................37

3.2.7. Tests and execution of Evaluation .............................................37

4. SP5 Test cases........................................................................................38

4.1. Template for Test cases ...................................................................38

4.1.1. Header Section ..........................................................................39

4.1.2. Test Setup Frame.......................................................................39

4.1.3. Success Criteria .........................................................................41

4.1.4. Results reporting ........................................................................42

4.2. Summary of SP5 Test cases planned...............................................42

5. Standardised results reporting .................................................................53

5.1. Adjusted reporting.............................................................................53

5.1.1. Detailed result reporting .............................................................53

5.1.2. Highlighted result reporting ........................................................53

5.2. Non compliance reporting (adopted from D4.6.1) .............................54

6. Conclusions .............................................................................................55

7. References...............................................................................................56


List of Figures

Figure 1: Inputs for WP5.6 Evaluation.........................................................8

Figure 2: Location of test sites...................................................................18

Figure 3: V-shape cycle for SP5................................................................21

Figure 4: Evaluation approach of WP5.6...................................................23

Figure 5: Study design of WP5.6...............................................................35

Figure 6: Header section frame WP5.6 .....................................................39

Figure 7: Corresponding Use Case of test case/pilot ................................40

Figure 8: Test setup frame ........................................................................40

Figure 9: Success Criteria frame...............................................................41

Figure 10: Report on Non-Compliance and Results ....................................42

List of Tables

Table 1: SP5 sub-applications and required equipment/components..........13

Table 2: Summary table of SP5 applications and related test sites.............19

Table 3: Example of assessment parameters for Technical assessment....25

Table 4: Table of indicators and Evaluation methods for the assessment of the impact on road safety ..........................................................27

Table 5: Example of indicators and Evaluation methods for the assessment of the effect on traffic efficiency .................................28

Table 6: Example of indicators and Evaluation methods for the assessment of the User acceptance..............................................29

Table 7: Summary of SP5 Test cases planned ...........................................43


EXECUTIVE SUMMARY

D5.6.1 – Evaluation Plan presents the planning of the evaluation process for the SAFESPOT SP5 – CoSSIB applications:

• Speed Alert (SpA)
• Hazard & Incident Warning (H&IW)
• Intelligent Cooperative Intersection Safety (IRIS)
• Road Departure (RDep)
• Safety Margin for Assistance and Emergency Vehicles (SMAEV)

This is supported by a suitable assessment methodology, by common templates for the planning and execution of tests, and by the use of predefined objectives from a user point of view, in order to obtain a comparable evaluation process among the CoSSIB applications.

The particular aim of the evaluation process of WP5.6 (and WP4.6) is to apply predefined user-related demands to the evaluation of the applications, i.e. to prefer practical and comprehensible objectives over theoretical/technical ones. These objectives are the predefined (high level) objectives, user needs, requirements and risks developed within SAFESPOT/CoSSIB in earlier stages of the project, used to reach statements on the defined evaluation tasks:

• Technical assessment,
• Driver acceptance,
• Effects on safety and efficiency and
• Implementation aspects.

In WP5.6 Evaluation the SP5 applications, embedded in the fully functional system (including inputs of SP2 and SP3), will be tested predominantly in field trials in a real-life context, which has been stated as an important task for SAFESPOT tests. The field trials are realised in three different evaluation environments – motorway, rural and urban roads. Additionally there will be limited traffic simulation and driving simulator tests in order to obtain results where field trials cannot, e.g. if test drives are too dangerous, if special usability effects are to be tested, or if the limited fleet of test vehicles cannot produce significant results on the traffic safety impact.

The main outcome of D5.6.1 Evaluation Plan is the definition of the Test cases for the evaluation of the applications developed in SP5 CoSSIB. The relevant information about the test environment, equipment and configurations required, as well as the description of test procedures, indicators to be determined and thresholds to be met, is provided on the basis of the common WP5.6 (and WP4.6) template.


1. Introduction

The SAFESPOT project aims to create systems and applications that increase road safety by extending drivers' time/space horizon and their perception of safety-relevant information, and by improving the precision, reliability and quality of this information. Starting from the assumption that the driver is the person responsible and the sole decision maker in the vehicle, the strategy implemented is based on warning rather than intervention; for this reason the infrastructure must evaluate the safety margin taking into account the driver’s reaction time. Compared to simple sensor technologies, the SAFESPOT vehicle-to-vehicle and infrastructure-to-vehicle communication and the required data fusion methods create the basic framework for the applications performing this challenging task.

This document D5.6.1 – Evaluation plan is the first out of five tasks defined in WP5.6 Evaluation:

1. Evaluation plan
2. Evaluation on urban roads
3. Evaluation on highways, expressways and tunnels
4. Evaluation on rural and secondary roads
5. Evaluation report, implementation strategy and recommendations

The Evaluation Plan of WP5.6 offers a brief methodical introduction to the topic of evaluation. Its main focus is the provision of the evaluation tests/pilots for the SAFESPOT sub-project SP5 CoSSIB – Cooperative Safety Systems Infrastructure Based Applications. CoSSIB is intended to specify and develop a set of cooperative system applications with a strong emphasis on the support of roadside equipment (i.e. infrastructure-based sensing and actuation). Within CoSSIB five applications for the improvement of safety have been developed:

• Speed Alert (SpA)
• Hazard & Incident Warning (H&IW)
• Intelligent Cooperative Intersection Safety (IRIS)
• Road Departure (RDep)
• Safety Margin for Assistance and Emergency Vehicles (SMAEV)

The applications will be implemented, validated and evaluated on various test sites in Germany, Italy, the Netherlands and West (France and Spain) in different road environments (urban, rural, motorway) to create a link to a real-life driving context. Additionally there will be limited tests via traffic simulations and driving simulators, e.g. where test drives would expose the driver to dangerous situations, or to analyse effects on traffic efficiency which would need more test vehicles than available. According to the Technical Annex these virtual tests are to be carried out in WP5.5, but because the subject of the tests refers to the evaluation process, the outcome will be reported in WP5.6.

On the basis of the technical verification and validation located in SAFESPOT WP5.5, work package WP5.6 is dedicated to the evaluation of the operational applications of CoSSIB. The evaluation will include technical assessment, driver acceptance, effects on safety and efficiency as well as implementation aspects.

[Figure 1 shows WP5.5 Test and Validation (technical tests and validation of the SP5 components and the fused system, incl. SP2 and SP3; provision of a tested system as basis for the evaluation in WP5.6) feeding into WP5.6 Evaluation (evaluation of the SP5 applications and the fused system on the basis of the SAFESPOT and SP5 (high level) objectives, user needs, requirements and SP6 risks: technical assessment, driver acceptance, effects on safety and efficiency, implementation aspects), with inputs from SP2 INFRASENS, SP3 SINTECH and SP5 CoSSIB WP5.1-5.4.]

Figure 1: Inputs for WP5.6 Evaluation

A special challenge for the evaluation procedure of CoSSIB will be the conflation of the affiliated sub-projects of SAFESPOT – SP2 INFRASENS, which provides the infrastructure sensing and data platform, SP3 SINTECH, which develops e.g. the Local Dynamic Map used by CoSSIB, and to a certain extent also SP4 SCOVA, which represents the vehicles and HMIs for the user. On the one hand this means that the performance of CoSSIB strongly depends on the affiliated sub-projects. On the other hand all other sub-projects also have work packages for validation (and evaluation), which calls for a definite outline of tests to ensure clear statements for the evaluation of CoSSIB outputs. This leads to the following steps:

• Definition of the systems (and their borders) to be evaluated
• Selection of suitable evaluation methodologies
• Definition of significant test cases
• Definition of the objectives, user needs, requirements and risks to be tested in the test case
• Definition of performance indicators, thresholds and measurement methodologies to evaluate the objectives
• Collection/compilation of results and derivation of recommendations for the implementation

The evaluation process of WP5.6 is strongly correlated with the process of WP4.6 SCOVA to maximise the efficiency of the evaluation process across the SAFESPOT project. This means that test procedures, templates, definitions etc. are utilised by both work packages to meet this goal.


1.1. Contribution to the SAFESPOT Objectives

The Evaluation Plan of WP5.6 offers the basis for a coordinated planning and execution of the evaluation procedure, including the provision of basic methodological information about assessment and testing as well as a checklist for planning the field trials.

Right from the start it was a major goal of WP5.6 to define the evaluation process and tools to be compliant with the common SAFESPOT methodology and to continue the approach provided by SP2 INFRASENS (D2.5.1 – Plan for testing and validation activities). At the same time WP5.6 pursued a close collaboration with WP4.6, which is responsible for the task of evaluation in SP4 SCOVA. This resulted in a similar approach for the evaluation procedure and in particular in the utilisation of similar test procedures, templates, definitions etc.

The principal element of the evaluation procedures in WP5.6 (and WP4.6) is the use of key goals or objectives of the SAFESPOT project as predefined evaluation criteria, which are (for more see the Annex):

• SAFESPOT and CoSSIB (high level) objectives (HLO)
• User Needs (UN)
• Requirements (REQ) and
• Risks by SP6

This ensures common criteria for the evaluation of SAFESPOT applications by using these practical, user-related demands, which – unlike theoretical technical and functional aspects – need no further derivation or explanation. It is a major goal of the evaluation process of SP5 (and SP4) that the Test cases refer to the SAFESPOT objectives as much as possible.

In the case of the Requirements, User Needs and CoSSIB objectives this creates a common basis for the evaluation process within the CoSSIB sub-project. The SAFESPOT (high level) objectives and SP6 risks are valid for all SAFESPOT sub-projects and therefore create a certain overall standard within the project and a link to SP4 SCOVA.

1.2. Deliverable structure

Chapter 1 gives an overview of the Evaluation process to be carried out in SAFESPOT SP5.

In chapter 2 the SAFESPOT SP5 applications are briefly described, taking into account the technologies used for implementation and the evaluation environments they will be tested in.

Chapter 3 describes step by step the assessment process envisaged for SAFESPOT WP5.6 to develop the Test cases for the evaluation. It includes a basic description of the assessment methodology to be applied.

Chapter 4 is the core chapter of this document. It is introduced by a description of the templates used to develop the test cases, followed by a brief extract of the Test cases of the SP5 applications. The complete Test cases of SP5 are listed in the Annex.

Chapter 5 gives information about standardised results reporting including the non-compliance reporting – a procedure established for all SAFESPOT SPs to report problems that occurred.

Chapter 6 gives the conclusions of the document D5.6.1 Evaluation plan.

Chapter 7 includes the References.

The Annex includes a list of SAFESPOT objectives (HLO, SP5O, REQ, UN, Risks) and the complete Test cases of SP5, as far as provided. Additionally it offers a checklist for field trials that does not claim to be comprehensive but could help the partners to think of typical requirements or questions during the tests.


2. SAFESPOT SP5 applications and test sites

A prerequisite for successful assessment is a brief but clear description of the applications and their key characteristics to be evaluated. The ‘Guidebook for Assessment of Transport Telematics Applications’ (Zhang et al., 1998) recommends including at least the following information to maintain a comparative overview of the applications tested on different test sites:

• Application name or type
• Functionality or service offered
• Major technologies (hardware, software etc. used)
• Evaluation environment

2.1. Applications

The applications are the major output of the SAFESPOT project. A detailed description of the SP5 applications was prepared in deliverables D5.3.1-D5.3.5. The applications also include hardware and software components from SP2 and SP3, which means that the technical function of the SP5 applications relies on those components working correctly and being properly integrated.

The following tables give a brief overview of the relevant SP5 applications to be assessed, including their sub-applications:

Application: Speed Alert (SpA)

Function: Application SpA gives advice on speed limits and/or warns drivers of excessive speed to improve road safety. The scope of the application ranges from a simple alert on the legal speed limit to a warning on a dynamic limit generated using traffic conditions, environmental effects and road status.

SpA_01: Critical speed warning (legal)
The aim of this sub-application is to provide the driver at every moment with the legal speed limit, correlated with the roadside warning unit, in the area covered by that unit. Moreover, the application is a gateway for the infrastructure manager responsible for the road policy to modify a speed limit locally.

SpA_02: Critical speed warning (dynamic)
This sub-application is directly linked with the Hazard and Incident Warning application. It enhances the message generated by the Hazard and Incident Warning with a speed recommendation.

SpA_03: Excessive speed alert to all vehicles
This sub-application aims to provide the driver with a speed profile while approaching a static black spot of the infrastructure. The speed profile is generated taking into account the road surface and a detailed geometry of the road.


Application: Hazard and Incident Warning (H&IW)

Function: Application H&IW focuses on the non-urban road environment and hence on situations in which the high speed of vehicles is often an important factor in causing accidents, together with the variable geometry of the infrastructure and inclement weather conditions. The objective of the application is to generate appropriate warnings in cases of safety risk, such as obstacles on the roadway and dangerous conditions resulting from lack of visibility or friction. These include general safety alerts, guidance and speed recommendations for the driver.

H&IW_01: Obstacles
The sub-application alerts drivers to the presence of static or semi-static objects detected on the road (e.g. a vehicle involved in an accident, a queue, a slow moving vehicle, or a pedestrian on the road). Warnings will be given to approaching vehicles to allow them to reduce speed or change lane in time to avoid a collision.

H&IW_02: Wrong way driving
This sub-application warns drivers of the potential danger caused by a vehicle travelling in the same lane as themselves but in the wrong direction, either due to an overtaking manoeuvre or ‘ghost driving’ (i.e. travelling against the flow on a motorway).

H&IW_03: Abnormal Road Conditions
The aim of this sub-application is to alert drivers to the presence of a hazard due to weather conditions (e.g. rain, ice or fog) which result in reduced friction or low visibility.

Application: Intelligent Cooperative Intersection Safety (IRIS)

Function: Application IRIS aims to protect road users against safety-critical situations in the vicinity of urban intersections by providing recommendations and warnings in time. Typical critical situations addressed by IRIS are red light violations, left and right turns with intersecting traffic and approaching emergency vehicles.

IRIS_01: Basic Application
The objective of the basic version is to identify potential red light violators, to support drivers turning right in being aware of pedestrians and cyclists, and to assist left turning vehicles without a separate signal stage.

IRIS_02: Support of Emergency vehicle
The objective of this sub-application is to enable the emergency vehicle to cross the intersection safely and quickly by sending a warning message to other vehicles. In contrast, the SMAEV_02 application modifies the traffic lights to give priority to the emergency vehicle.


Application: Road Departure Prevention (RDep)

Function:

RDep_01: Road Departure Prevention
Application RDep prevents the driver from running off the road. If the application predicts that the trajectory of the vehicle will not remain within defined safety boundaries according to road geometry, vehicle dynamics and driver load, a warning is issued to the driver.

Application: Specifications for Safety Margin for Assistance and Emergency Vehicles (SMAEV)

Function: Application SMAEV aims to improve road safety for assistance and emergency vehicles (AEV) by sending crucial information (such as position of the incident, typology of the incident etc.) to the infrastructure and by warning all vehicles.

SMAEV_01: AEV signalling an event
This sub-application assists the operation of Assistance Vehicles (e.g. snow removal) or the management of events on motorways (detection and broadcasting of an event by the AV).

SMAEV_02: AEV crossing an intersection
The objective of this sub-application is to let the Emergency Vehicle (EV) go safely through the intersection, properly managing the crossing priorities. In contrast, the IRIS_02 application does not modify the traffic lights to give priority to the emergency vehicle, but sends a warning message to other vehicles.

2.2. Technologies and equipment used

The following tables describe the HW and SW components and the further equipment (test vehicles, RSU) required to test the SP5 applications.

Table 1: SP5 sub-applications and required equipment/components

(Each entry lists: Hardware (HW) components; Software (SW) components; Vehicle(s)/RSU; Test site (TS).)

Speed Alert (SpA)
- HW: TNO gateway on autobox; Main PC fused with application PC (laptop); positioning PC (laptop); router PC (self-assembled car PC with Qfree WLAN card); no laser scanner/radar
- SW: Datafusion; LDM; Positioning; Router; Application
- Vehicle(s)/RSU: SF vehicles (Citroën C4, VW Passat)
- TS: NL (additionally: ITS-Modeller, NL)

SpA_01: Critical speed warning (legal)
- HW: West RSU; VMS (optional)
- SW: Datafusion; LDM; Positioning; Router; Application
- Vehicle(s)/RSU: Renault Clio
- TS: WE

SpA_02: Critical speed warning (dynamic)
- HW: TNO gateway on autobox; Main PC fused with application PC (laptop); positioning PC (laptop); router PC (self-assembled car PC with Qfree WLAN card); no laser scanner/radar
- SW: Datafusion; LDM; Positioning; Router; Application
- Vehicle(s)/RSU: SF vehicles (Citroën C4, VW Passat)
- TS: NL (additionally: ITS-Modeller, NL)

SpA_03: Excessive speed alert to all vehicles
- HW: West RSU; VMS (optional)
- SW: Datafusion; LDM; Positioning; Router; Application
- Vehicle(s)/RSU: Renault Clio
- TS: WE

Hazard and Incident Warning (H&IW)
- HW: (1) SP1 platform with Main PC [NqLDM, DF, positioning software], application PC [application client], router PC [with Qfree card], GPS, Esposytor, display device; (2) Main PC [NqLDM, DF, positioning software], application PC [application client], router PC [with Qfree card]; (3) Main PC [NqLDM, DF, positioning software], application PC [H&IW_01], router [with Qfree card], GPS, Esposytor
- SW: (1, 2) DF-LDM-AppClient-MM-Vanet; (3) DF-LDM-H&IWapp-AppCoord-MM-Vanet
- Vehicle(s)/RSU: 1 RSU; 2 vehicles (LIVIC, COFIROUTE)
- TS: WE

H&IW_01: Obstacles (IT)
- HW: Using ECAID and WSN: Main PC [LDM, DF, positioning software, SP2 and SP5 applications]; router PC [VANET router with Qfree card]; GPS; display device; sensors: WSN (10 sensors + gateway); GUI on the vehicle. Using WSN: Main PC [LDM, DF, positioning software, SP2 and SP5 applications]; router PC [VANET router]; GPS; display device; sensors: WSN (10 sensors + gateway); application PC [H&IW_01]. Using thermal imaging camera: Main PC [LDM, DF, positioning software, SP2 and SP5 applications]; router PC [VANET router with Qfree card]; GPS; display device; sensors: thermal imaging cameras (provided by VTT)
- SW: At RSU (ECAID and WSN): SP2 data fusion (Data Receiver, MM, ECAID Extended Cooperative Automatic Incident Detection) in Main PC; LDM in Main PC. WSN and thermal imaging camera: SP2 data fusion (DR, OR, MM) in Main PC; LDM in Main PC. At Vanet: SP3 router software; application H&IW; SP5 message manager. In vehicle: SP1 data fusion; PG-LDM; SP3 in-vehicle positioning software
- Vehicle(s)/RSU: 2 FIAT Bravo vehicles; 1 RSU Main PC (ECW-281B-ATOM)
- TS: IT (additionally: ITS-Modeller, NL)

H&IW_01: Obstacles (WE)
- HW: (1) SP1 platform with Main PC [NqLDM, DF, positioning software], application PC [application client], router PC [with Qfree card], GPS, Esposytor, display device; (2) Main PC [NqLDM, DF, positioning software] [CofiGateway], application PC [H&IW_01], router [with Qfree card], GPS, Esposytor
- SW: (1) DF-LDM-AppClient-MM-Vanet; (2) DF-LDM-H&IWapp-AppCoord-MM-Vanet, Cofi-Gateway
- Vehicle(s)/RSU: 1 RSU; 1 vehicle (LIVIC)
- TS: WE

H&IW_02: Wrong way driving
- HW: Using RFID: Main PC [LDM, DF, positioning software, SP2 and SP5 applications]; 2 PCs connected to the RFID antennas; router PC [VANET router]; GPS; display device; RFID tags; sensors: RFID; RFID reader (PCMCIA). Using WSN: Main PC [TeleAtlas LDM, data fusion (Data Receiver), positioning software (SP2) and SP5 applications]; 2 router PCs [VANET router with Qfree card]; GPS; display device (GUI); sensors: 10 WSN + gateway, LEDs
- SW: At RSU: SP2 data fusion (DR, OR, MM) in Main PC; LDM in Main PC. At Vanet: SP3 router software; application H&IW; SP5 message manager. In vehicle: SP1 data fusion; PG-LDM; SP3 in-vehicle positioning software
- Vehicle(s)/RSU: 2 FIAT Bravo vehicles; RSU Main PC (ECW-281B-ATOM)
- TS: IT

H&IW_03: Abnormal road conditions
- SW: ITS-Modeller
- TS: NL

Intelligent Cooperative Intersection Safety (IRIS)
- HW: Roadside: Main PC fused with application PC (laptop); laser scanners + PC; router HW; traffic light controller; GUI laptop; LAN switch. In-vehicle: Main PC; positioning PC (laptop); router HW (self-assembled car PC with Qfree WLAN card); TNO gateway on autobox; HMI. Simulation: BMW driving simulator; StSoftware; Simulink
- SW: Roadside: SP2 data fusion (DR, OR & SR manoeuvre estimator) in Main PC; PG-LDM in Main PC; SP3 router software; application IRIS; SP5 message manager. In-vehicle: SP1 data fusion; PG-LDM; SP3 in-vehicle positioning software; HMI software
- Vehicle(s)/RSU: SF vehicles DE (1 Smart, Conti vehicle); RSU
- TS: DE (additionally: driving simulator with St-Software, Matlab Simulink and TNO driver simulator software, NL)

IRIS_01: Basic Application
- HW: TNO gateway on autobox; Main PC fused with application PC (laptop); positioning PC (laptop); router PC (self-assembled car PC with Qfree WLAN card); laser scanner/radar
- SW: Datafusion; LDM; Positioning; Router; Application
- Vehicle(s)/RSU: VW Passat
- TS: NL

IRIS_02: Support of Emergency vehicle
- HW: Roadside: Main PC; laser scanners + PC; router HW; traffic light controller; GUI laptop; LAN switch. In-vehicle: Main PC; positioning PC; router HW; HMI
- SW: At RSU: SP2 data fusion (DR, OR & SR manoeuvre estimator) in Main PC; PG-LDM in Main PC; SP3 router software; application IRIS; SP5 message manager. In-vehicle: SP1 data fusion; PG-LDM; SP3 in-vehicle positioning software; HMI software
- Vehicle(s)/RSU: SF vehicles DE (1 Smart, Conti vehicle); RSU
- TS: DE (additionally: driving simulator with St-Software, Matlab Simulink and TNO driver simulator software, NL)

Road Departure (RDep)

RDep_01: Road Departure Prevention
- HW: RSU application PC; laser scanner with PC; RSU Main PC; RSU router; RSU ESPOSYTOR PC; gateway to meteorological station; both SF vehicles with complete SP1-SP4 platform including CRF gateway, Main PC, router and onboard HMI
- SW: RSU RDep application; RSU laser scanner SW; RSU LDM; RSU VANET SW; RSU DF; RSU message manager; ESPOSYTOR SW; both SF vehicles with complete SP1-SP4 SW, including ExtMesApp/EMA
- Vehicle(s)/RSU: RSU; 1-2 SF vehicles (RED, BLUE); non-SF vehicle (under discussion)
- TS: IT, WE

Specifications for Safety Margin for Assistance and Emergency Vehicles (SMAEV)
- HW: Main PC; SMAEV PC; car gateway; Ethernet switch; vehicle sensors (odometer, yaw); GPS; variable message sign; VANET router
- SW: SMAEV application; HMI server module; LDM manager module; VMS panel module; VANET message module; SP1 framework; HMI client module; LDM; Positioning; car gateway; router software
- Vehicle(s)/RSU: CRF assistance vehicle; SP1 or SP4 vehicle (choice is irrelevant)
- TS: IT

SMAEV_01: AEV signalling an event
- HW: powerful Main PC; tablet PC; router; traffic light controller
- SW: SP2 data fusion (DR, OR & manoeuvre estimator); PG-LDM; SP3 in-vehicle positioning; SP3 router; application SMAEV01; SP5 message manager
- Vehicle(s)/RSU: RSU; SF vehicle; non-SF vehicle
- TS: WE

SMAEV_02: AEV crossing an intersection
- HW: powerful Main PC; tablet PC; router; traffic light controller
- SW: SP2 data fusion (DR, OR & manoeuvre estimator); PG-LDM; SP3 in-vehicle positioning; SP3 router; application SMAEV01; SP5 message manager
- Vehicle(s)/RSU: RSU; SF vehicle; non-SF vehicle
- TS: WE


2.3. Evaluation environment

The assessment of the SP5 SAFESPOT applications will be carried out on four SAFESPOT test sites to prove the functionality of the applications under real-life conditions as well as the interoperability among different countries:

• TS Germany (DE)
• TS Italy (IT)
• TS Netherlands (NL)
• TS West (WE), located in France

There will be no tests on the SAFESPOT TS Sweden (SWE).

The test sites take into account the different regional aspects of Europe (topology, weather, infrastructure etc.) as well as the different road environments (urban, rural, motorway). If tests are performed on a test track, the corresponding road environment is referenced additionally (ur, ru, mo).

In addition to the test sites, traffic simulations (ITS-Modeller, Delft, NL) and tests in a driving simulator (Soesterberg, NL) will be carried out to assess the effects on traffic efficiency or the usability in cases where real-life tests would be too extensive or too dangerous.

The location of the SP5 test sites is shown in Figure 2.

[Figure 2 shows a map with the test site locations: Dortmund (DE); Helmond, Soesterberg and Delft (NL); Torino and Brescia-Padova (IT); Satory, Brittany, Saumur and Lyon (West).]

Figure 2: Location of test sites

The following tables provide an overview of applications and related test sites. A more detailed description for each test case is given in chapter 4.2 – Summary of SP5 Test cases planned as well as in the annex.


Table 2: Summary table of SP5 applications and related test sites

(Columns: DE | IT | NL | WEST | Simulation/Simulator)

Speed Alert (SpA)
- SpA_01: Critical speed warning (legal): - | - | ru | tt (mo) | mo
- SpA_02: Critical speed warning (dynamic): - | - | ru | - | mo
- SpA_03: Excessive speed alert to all vehicles: - | - | - | tt (ru) | -

Hazard and Incident Warning (H&IW)
- H&IW_01: Obstacles: - | mo | - | mo, tt (ru, mo) | mo
- H&IW_02: Wrong way driving: - | mo, tt (ru) | - | - | -
- H&IW_03: Reduced friction or visibility: - | - | - | - | mo

Intelligent Cooperative Intersection Safety (IRIS)
- IRIS_01: Basic Application: ur | - | ur | - | ur
- IRIS_02: Support of Emergency vehicle: ur | - | - | - | ur

Road Departure Prevention (RDep)
- RDep_01: Road Departure Prevention: - | tt (ru, mo) | - | tt (ru, mo) | -

Specifications for Safety Margin for Assistance and Emergency Vehicles (SMAEV)
- SMAEV_01: AEV signalling an event: - | tt (mo) | - | tt (mo) | -
- SMAEV_02: AEV crossing an intersection: - | tt (ur, ru, mo) | - | tt (ur) | -


3. Methodology of assessment

This chapter provides basic methodological information about assessment and testing, without any claim to completeness. For more detailed information the respective literature should be consulted.

After the implementation phase, the steps of verification, validation and evaluation have to show whether the applications developed in SAFESPOT SP5 meet the expected demands.

The assessment of SP5 CoSSIB is divided into two parts:

• In WP5.5 the technical feasibility of the systems and their CoSSIB sub-units is validated, including technical tests of whether the systems work and behave as expected (and defined)

• WP5.6 focuses on the technical assessment, effects on safety and efficiency, driver acceptance and implementation impacts of the system as a whole.

This document describes the evaluation process of WP5.6.

The design of the assessment methodology of WP5.6 is based on the common methodology and tools defined within SAFESPOT for Verification & Validation and on previous related SAFESPOT documents such as D2.5.1 – Plan for testing and validation activities by SP2 INFRASENS.

3.1. SP5 – design and assessment process based on the V-model

The method of evaluation for WP5.6 is based, as in most European projects, on the guidelines developed in the CONVERGE project, namely the ‘Guidebook for Assessment of Transport Telematics Applications’ (Zhang et al., 1998 [16]) and the ‘Checklist for Preparing a Validation Plan’ (Maltby et al., 1998 [2]). Because the CONVERGE guidelines offer a very general approach that has to be adapted for the respective project, the evaluation procedure implemented in the project PReVAL has been drawn on as a second reference. It is also based on the CONVERGE methodology, but it offers a more dedicated approach to the evaluation of ADAS functions developed in the various PReVENT sub-projects, which is very similar to the tasks in SAFESPOT.

PReVAL/PReVENT uses the V-shape cycle – widely used in software development – to describe the ‘road map’ from an idea or problem to be solved to the design of a product or application, its implementation and subsequent assessment (which might also result in a readjustment of the application). This process is adapted for SP5 CoSSIB in Figure 3.

Following the ‘V’ from the upper left down to the bottom and up again to the right reflects the process applied in the SAFESPOT project for SP5 (and other SPs). Going down means going into more detail; going up means considering more general or superior aspects.


[Figure 3 shows the V-shape cycle adapted for SP5. The design cycle (left branch) leads from the SF (and SP5) high level objectives (TA) via the SF use cases (D5.2.1), the SF user needs and system requirements (D5.2.1, D5.2.3) and the SP6 risks down to the SP5 specifications (D5.3.1-5.3.5) and the SP2-3 integration/SP5 implementation. The assessment cycle (right branch) leads up through WP5.5 Verification and Validation (integration testing/verification, technical/functional tests, system functional verification, technical validation) to WP5.6 Evaluation (technical evaluation on the test sites, driver acceptance via observation, interviews, test drives and driving simulator (human factors validation), safety and efficiency via test drives and simulation (safety impact, efficiency analysis), implementation aspects), each assessment step comprising scenario definition, method selection and a test plan against system functional/performance criteria and hypotheses.]

Figure 3: V-shape cycle for SP5

3.1.1. Design cycle

The cycle starts from the upper left with the general intentions or ideas of the SAFESPOT project, which are reflected by the SAFESPOT High Level Objectives (HLO).

The definition of Use Cases (UC) is the first step down the left branch of the ‘V’. A Use Case defines a subset of the functionalities of a system. Use Cases are primarily used to define the behaviour of a system without specifying its internal structure or the context in which it should act.

Going one more step into detail (and down), the User Needs (UN) state what the system is expected to provide and the constraints under which it must operate, entirely from a user point of view. At the same level, the Requirements (REQ) provide the working characteristics of the system.

Close to the bottom of the left wing this leads to the SP5 specifications. Here the SP5 applications and their specifications are defined in such a way that they meet, as far as possible, all the demands (HLO, UN, REQ) described above.

3.1.2. Assessment cycle (re-adjustment of design)

The SP5 applications developed through the steps of the left branch of the ‘V’ are now verified, validated and evaluated on the right branch. On the one hand this could be called an ‘Assessment cycle’; nevertheless it is both ‘Assessment’ and ‘Design’, because the results of the assessment should be utilised to adjust the product or application.

The detailed basic test of an application is the System functional verification. This means that the components of the application are tested technically, predominantly on the basis of technical indicators.

One step above is the Technical validation. Here the validation criteria are defined more generally, e.g. whether the components, but also the entire system of an application, meet the requirements defined on the left branch of the ‘V’.

Both the System functional verification and the Technical validation are located in WP5.5. These two components are vital to provide technically tested and approved applications for the evaluation phase of WP5.6.

Beyond the System functional verification and Technical validation are the domains of WP5.6 Evaluation with the following categories of assessment:

An exceptional position is held by the Technical assessment – although it regards technical aspects, which are located in WP5.5, it belongs to WP5.6 because it takes a more general view.

On the basis of a working application, the Human Factors Validation concentrates on the behaviour of a user experiencing the SP5 application, e.g. whether a warning in case of a red light violation causes the expected reaction.

On top of this there is the question of the Safety Impact and Efficiency of the system: if an application works as expected and has an effect on the driver, will there also be an effect on road safety and traffic efficiency? In the case of the application ‘Hazard and Incident Warning’ this could be: does a warning to drive slowly in the case of fog have a positive or negative effect on the traffic flow?

3.2. SP5 – Evaluation methodology

The process of WP5.6 Evaluation is outlined in Figure 4.

A good way to start an Evaluation process, before going into detail, is to establish an ‘orientation phase’ without using predefined steps or categories:

• Think of the challenges to be met,
• the borders of the systems,
• required classifications (e.g. according to the different user groups involved),
• the impacts expected – before and after the implementation of the SAFESPOT applications – and
• the questions that lead to significant answers:
  o Which is better (comparison)?
  o How good (does it meet a threshold)?
  o Why bad (analysis of critical points)?

All together this enables the creation of a Test case ‘story’, which helps to keep the focus on the major goals to be detailed in the following steps.


[Figure 4 outlines the WP5.6 Evaluation approach: starting from the Use Cases and scenarios (D5.2.1) and the description of the applications (D5.3.1-5.3.5), an assessment category is chosen (technical assessment, safety and efficiency, driver acceptance, implementation aspects), followed by the choice of an evaluation method (field trial, traffic simulation, driving simulator, measurement, observation, data analysis, test drives, before-after analysis, actual-theoretical comparison, interview/questionnaire etc.). Next come the definition of the assessment objectives (HLO, UN, REQ etc.; D5.2.1, D5.2.3, Technical Annex), the definition of expected impacts (pre-assessment) and the study design for the Test case (indicators, reference case, data collection, measurement conditions, statistics, measurement plan), leading to the tests (measurement/estimation of effects) and the assessment of results.]

Figure 4: Evaluation approach of WP5.6 (on the basis of Zhang et al., 1998)

The Evaluation process itself is based on the Use Cases and scenarios defined in earlier stages of the SAFESPOT project (D5.2.1 [7]). There, typical situations of road traffic, their deficiencies and opportunities have been defined, as well as the improvements expected from the deployment of the SAFESPOT applications (D5.3.1-D5.3.5 [11]-[15]), which are briefly described in chapter 2.1 above.

According to the Technical Annex the Evaluation process of SP5 is distinguished into four Assessment categories – the Technical assessment, the evaluation of the impacts on road safety and transport efficiency, the assessment of driver acceptance, and the assessment of implementation aspects, the last of which is in principle not an additional category but the essence of the first three. The assessment categories are described in chapter 3.2.1.

The next step defines the methods of evaluation, i.e. how the variants of a test will be assessed. In SAFESPOT and SP5 a strong emphasis is laid on tests under real environmental conditions, represented by field trials mostly using the method of a ‘before-after analysis’ (with/without application). This will be supported by limited traffic simulations and driving simulator tests as well as by observations and interviews (user acceptance tests), as specified in chapter 3.2.2.

The next steps are dedicated to the reflection of the expected impacts, described in chapter 3.2.4, and the choice of assessment objectives (see chapter 3.2.3) qualified to investigate these. This stage shows whether the Test case is created in a way that answers the predefined SAFESPOT objectives (Technical Annex [6], D5.2.1 [7], D5.2.3 [10]) or whether the objectives/Test cases have to be adjusted.

These inputs lead to the Study design for the SP5 Test cases, or Pilots as they are named in SP4, including all relevant information needed to define the specific tests of the systems and applications of SP5 (see chapter 3.2.5 for further details). The description of the SP5 Test cases – how to use the WP5.6/4.6 templates and a summary of the Test cases – is reported in chapter 4.

The final steps – the execution of the Test cases as well as their evaluation – will be reported in the further deliverables D5.6.2-D5.6.5.

In the following, the basic steps to create the Test cases are described briefly, without any claim to be exhaustive.

3.2.1. Assessment categories

The assessment of applications can be carried out in several ways. Typical assessment categories are the Technical assessment, Impact assessment, User acceptance assessment, Socio-economic evaluation, Financial assessment and Market assessment.

According to the Technical Annex, WP5.6 Evaluation will concentrate on the non-monetary assessment categories:

• Technical assessment (system performance, reliability)
• Impact assessment (safety, transport efficiency)
• User acceptance assessment (driver and road users' opinions or reactions)

which will be recapitulated by the

• Implementation aspects (re-adjustment of the applications)

Technical assessment

The technical assessment is the most basic kind of evaluation. General assessment parameters like the functionality or performance of a system are assessed with the help of technical indicators. This process of assessing the parameters by the use of substitutional indicators or processes (e.g. assessment of the ‘actuality’ of a warning by measuring the duration/time of transmission) is called operationalisation. The indicators should be further specified by thresholds/target corridors, to state whether a test was successful or not, and by a description of an applicable test procedure (tool, method for testing). The technical assessment is limited to the technical aspects defined, without regarding other effects beyond these technical borders. Technical tests are conducted in all kinds of evaluation environments (laboratory, virtual tests, hardware/software in the loop, simulations, real tests on test tracks or test sites etc.). The focus of the WP5.6 tests is on test drives under real environmental conditions.

The following table gives an overview of typical parameters for a technical assessment, including the indicators used for operationalisation, a test procedure and thresholds.

Table 3: Example of assessment parameters for Technical assessment

| Assessment parameter | Indicators for operationalisation | Evaluation method | Definition of thresholds/target corridors |
|---|---|---|---|
| Actuality | Time stamps | Measurement of time needed for transmission using log files | e.g. < 1 second |
| Accuracy | Trajectories | Comparison of trajectory calculated and driven in reality | e.g. < +/- 0.5 m lateral shift |
| Consistency | Data | Comparison if data is processed as defined | e.g. 100 % |
| Correctness | Road friction | Comparison of friction measured by vehicle and by stationary professional tool | e.g. < 10 % |
| Robustness | Message | Test of performance of system when supplied with wrong or incomplete data | e.g. 24 h / 7 days a week, no breakdown in any case |
| Availability | Signal strength | Test of availability of W-LAN on test site by professional tool | e.g. complete test site covered |
| Completeness | Objects displayed | Comparison of objects located on test site with objects included in LDM | e.g. complete coverage of all vehicles and VRUs |
| … | … | … | … |

In the case of SAFESPOT the technical assessment is geared to the predefined SAFESPOT User Needs, Requirements and (High Level) Objectives. Nevertheless, if further objectives should be assessed, an adjustment, extension or additional consideration of the more general assessment parameters is explicitly desirable.

The technical assessment in WP5.6 will be performed on the basis of test drives on the SAFESPOT test sites.
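As an illustration of this operationalisation, the sketch below derives the 'actuality' parameter of Table 3 from matching time stamps in two log files. The log format, message IDs and threshold are assumptions for illustration, not the project's actual tooling.

```python
# Minimal sketch: transmission time derived from matching log time stamps.
# Log contents and message IDs are invented for illustration.
sent_log     = {"warn-001": 1612345678.200, "warn-002": 1612345680.450}  # RSU side
received_log = {"warn-001": 1612345678.950, "warn-002": 1612345681.900}  # vehicle side

THRESHOLD_S = 1.0  # e.g. < 1 second, cf. Table 3

for msg_id, t_sent in sent_log.items():
    latency = received_log[msg_id] - t_sent
    verdict = "OK" if latency < THRESHOLD_S else "FAIL"
    print(f"{msg_id}: transmitted in {latency:.2f} s -> {verdict}")
```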

Impact assessment (safety and traffic efficiency)

Impact assessment is related to the measurement or estimation of the effects of an application, e.g. on road safety, transport efficiency or environmental conditions. Here a distinction of different target groups is often needed because of their different behaviour or expectations. The major goal of this process is the analysis of the changes generated by a new system or application and the social benefits achieved hereby. This is equivalent to a 'before-and-after' study, which compares the state of the system before (or without) an implementation of modifications and after (or with) it. If these results are further compared e.g. to the costs of an investment, this leads to cost-benefit analyses, which are not part of WP5.6 but located in SP6.

The decision context has to determine which impacts should be assessed and which target groups will be considered, because this highly affects the choice of relevant indicators and thresholds. For a 'before-and-after' study it is essential that the surrounding or general conditions of the test site do not change (e.g. if tested on different test sites); otherwise the results of the assessment would be falsified. This requires a precise description of the tests and their surrounding conditions. If the comparability of tests cannot be ensured, the system impacts should be generalised to meet the aspect of the comparability of tests.

A major goal of SAFESPOT is the increase of safety. Main indicators of traffic safety are the numbers of injuries and fatalities that have occurred in traffic accidents. However, for research projects like SAFESPOT it is unrealistic to measure these on the basis of real-life traffic accident situations or statistics: it is very unlikely that the limited impact zones of the SAFESPOT test sites contain places that produce enough real accidents for the applications to be effectively tested.

Therefore the assessment of safety aspects has to rely on other indicators – direct ones such as conflicts avoided by the application, or indirect indicators like the difference between the individual and the current average speed, which relates to the number and seriousness of accidents (e.g. Ranta & Kallberg 1996 [1]). The choice of a suitable indicator depends essentially on the ways in which an ITS project is expected to affect safety, for instance by refocusing the driver's attention or by decreasing speeds. A recommendation list of indicators describing traffic safety is provided in the following table.


Table 4: Table of indicators and Evaluation methods for the assessment of the impact on road safety

| Assessment parameter | Indicators for operationalisation | Evaluation method | Definition of thresholds/target corridors |
|---|---|---|---|
| Traffic safety | Number of conflicts (near accidents) | Traffic conflict studies (observation) | e.g. reduction of 90 % |
| | Mean and standard deviation of speeds | Automatic traffic monitoring, radar studies, simulation | e.g. all speeds within +/- 10 % of mean speed |
| | Number of traffic violations | Observation, interviews, surveys | e.g. < 1 % |
| | Driver behaviour (alertness, focus of attention, feeling of safety) | Measuring of reaction time, eye movement etc. by instrumented car, traffic simulator, observation, interview | … |
| | Rate of short accepted time gaps | Field studies, driving in an instrumented car or traffic simulator | … |
| | Rate of short Time to Collision (TTC) values | Automatic traffic monitoring by detectors on test site road | … |
| | … | … | … |
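Where TTC-based indicators are used, the underlying computation is simple; the sketch below shows the basic relation on invented detector values (the 1.5 s criticality bound is an assumption for illustration, not a project-defined threshold).

```python
# Minimal sketch: Time to Collision (TTC) from gap and speeds (invented values).
def ttc_seconds(gap_m: float, v_follower_ms: float, v_leader_ms: float) -> float:
    """TTC = gap / closing speed; infinite if the follower is not closing in."""
    closing = v_follower_ms - v_leader_ms
    return gap_m / closing if closing > 0 else float("inf")

TTC_CRITICAL_S = 1.5  # assumed bound below which a TTC counts as 'short'

for gap, vf, vl in [(30.0, 25.0, 20.0), (12.0, 30.0, 18.0)]:
    ttc = ttc_seconds(gap, vf, vl)
    flag = "short TTC" if ttc < TTC_CRITICAL_S else "uncritical"
    print(f"gap {gap:.0f} m: TTC = {ttc:.1f} s ({flag})")
```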

A very interesting question is the use of individual systems in the first stages of implementation with only low penetration rates – e.g. will the recommendation of lower speeds lead to risky overtaking manoeuvres by unequipped vehicles that do not have this information?

The second aspect of the SAFESPOT impact analysis focuses on traffic efficiency. Like analyses of traffic safety, the assessment of traffic efficiency depends on large sample sizes, which cannot be covered by the limited test fleet of SAFESPOT. Therefore the SAFESPOT Technical Annex envisaged the use of limited simulator tests for selected SP5 applications.

Additionally, it is in the nature of some safety-relevant applications that e.g. a 'Speed Alert' warning to decrease the speed increases the safety of the road user but at the same time increases the travel time (which could be regarded as a negative effect on traffic efficiency). This means that a positive effect on traffic safety often goes together with what appears at first sight to be a negative impact for the driver. At second sight this can be regarded as positive, because it avoids accidents that would increase travel time much more.


Table 5: Example of indicators and Evaluation methods for the assessment of the effect on traffic efficiency

| Assessment parameter | Indicators for operationalisation | Evaluation method | Definition of thresholds/target corridors |
|---|---|---|---|
| Traffic efficiency | Travel time | Measurement of travel times via test drives, video detection or simulation | e.g. reduction of > 10 % |
| | Length or severity of congestion | Measurements via sensor data, test drives, simulation | e.g. reduction of > 10 % |
| | Increase of mean speed | Measurements via sensor data, test drives, simulation | e.g. increase of > 5 % |
| | … | … | … |
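A corridor such as 'reduction of > 10 %' can be checked directly from before/after runs; the sketch below does so with invented travel-time values.

```python
# Minimal sketch: checking a travel-time corridor from before/after runs
# (all values invented for illustration).
before = [612, 598, 630, 605, 620]   # travel times [s] without the application
after  = [540, 552, 538, 560, 545]   # travel times [s] with the application

mean_before = sum(before) / len(before)
mean_after  = sum(after) / len(after)
reduction = (mean_before - mean_after) / mean_before

print(f"travel time reduction: {reduction:.1%}")
print("target corridor met" if reduction > 0.10 else "target corridor not met")
```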

Like the Technical assessment, the Impact assessment of SAFESPOT is based on the predefined User Needs, (non-technical) Requirements and (High Level) Objectives, which have to be operationalised to enable a subsequent evaluation. If additional parameters should be applied, it is requested to include them in the approach.

For the Impact assessment of traffic safety and efficiency in WP5.6, tests will be performed on the basis of test drives on the SAFESPOT test sites, alongside limited tests with the TNO traffic simulation.

User acceptance assessment (Driver acceptance)

This category of assessment aims at the measurement or estimation of the behaviour and perception of the users of an application. In contrast to testing an 'objective' system, here 'subjective' human beings are involved in the assessment process. This places exceptional demands on the planning of tests to avoid results biased by systematic influences – such as a sufficient sample size of pre-selected test persons to identify and represent the different types of behaviour, the definition of dependent and independent variables, the use of a common, clearly comprehensible test design, and similar test conditions and measuring tools (especially if tests are executed on different test sites), to name just some of these.

The assessment of the User acceptance may be carried out through observations, measurements (e.g. driving parameters), interviews or questionnaires.

Users may be authorities, service providers, system operators, drivers, vulnerable road users (VRU) etc. Especially the diversity of user groups should be considered carefully because of their different intentions. Regarding the acceptance from an authority or system operator point of view, the acceptance of an overall system could be deduced from the improvement of traffic efficiency, e.g. the reduction of total congestion by an even distribution of vehicles on the network. From the driver point of view, the main focus lies on the individual travel time, without caring for the system optimum, or e.g. on whether the information delivered is understandable or driving comfort is enhanced.


Table 6: Example of indicators and Evaluation methods for the assessment of the User acceptance

| Result type | Assessment parameter | Indicators for operationalisation | Evaluation method | Definition of thresholds/target corridors |
|---|---|---|---|---|
| Final output | Acceptance | Compliance of instruction | e.g. question 'how would you react if this message is displayed' | e.g. > 90 % very good, 90-70 % fair, < 70 % bad |
| | Usability | Correct use of information offered | e.g. observation of action taken in test drive | e.g. 100 % |
| | | Use of information | e.g. observation of action taken in driving simulator | Description of actions taken |
| | Workload | Correct use of information offered without distraction from driving | e.g. observation of action taken in driving simulator | e.g. 95 % |
| | … | … | … | … |
| Intermediate result | Acceptance | Compliance of instruction | e.g. observation of action taken in test drive | Input for simulation |
| | Reaction time | Time between display of message and action taken by driver | e.g. observation of action taken in test drive | Input for simulation |
| | … | … | … | … |

The output of the user acceptance assessment may be used as a direct or final output as well as an intermediate result and input for further assessment. E.g. a very good 'acceptance' is a final result for a successful application, but it can also be used for further tests, such as traffic simulations that depend on the information how many drivers would use the application in order to calculate the effects on traffic efficiency. In this case no threshold is required; vice versa, the simulation may calculate how good the acceptance of an application has to be to create an effect.
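The rating scheme of Table 6 can be applied mechanically once the compliance rate is known; the sketch below uses invented questionnaire answers.

```python
# Minimal sketch: rating the 'acceptance' parameter of Table 6
# (questionnaire answers are invented).
answers = ["comply", "comply", "ignore", "comply", "comply"]

compliance = answers.count("comply") / len(answers)
if compliance > 0.90:
    rating = "very good"
elif compliance >= 0.70:
    rating = "fair"
else:
    rating = "bad"

print(f"compliance {compliance:.0%} -> {rating}")
# the same rate can also serve as an input parameter for a traffic simulation
```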

For the User acceptance assessment in WP5.6 tests will be performed on the basis of test drives on the SAFESPOT test sites besides limited tests in the TNO driving simulator (WP5.5).

3.2.2. Evaluation methods

The Evaluation methods describe possible techniques for performing the assessment of the applications. They represent a pool of quite heterogeneous elements and characteristics. E.g. a 'questionnaire' may be regarded as a 'stand-alone' method for evaluation – the description of a situation, associated questions and answers are all included in the questionnaire. But a questionnaire may also be used in combination with test drives; in this case the test drive creates the situation to be evaluated and the questionnaire is a subset of the whole test.

In the following the most common Evaluation methods applicable to the topics included in SP5 are listed and briefly described.


Field trial

A strong emphasis of the SAFESPOT project has been put on the test of the applications under real conditions. Depending on the SP5 applications, the field trials will be performed on the SAFESPOT test sites, which include test tracks and public test sites representing three typical driving contexts – motorway, rural and urban road environment. The term field trial stands for any test performed under real conditions with the SAFESPOT applications and systems integrated into this environment. Actions taken in a field trial can be observation (how do the systems or drivers react), interviews (does the system work sufficiently well in the real driving context), measurements (is the signal strength sufficient to broadcast the messages), test drives (good reception of messages in a moving vehicle) etc.

Test drives

Test drives are a main element of the evaluation process of SAFESPOT. Test drives may be utilised for the assessment of the:

• (technical) functionality of applications and systems implemented, like the broadcast of a warning from the SAFESPOT infrastructure to a moving SAFESPOT vehicle,
• impact on the driver/user, like the reaction on the basis of this warning (e.g. distraction, immediate braking, driving parameters), or the
• impact on the traffic situation (e.g. the avoidance of an accident because of immediate braking) generated through the application.

The benefit of test drives is the creation of a direct link to the real-life context: integrated systems communicate with real drivers in a real driving environment. At the same time this is the main drawback. Every real driving situation is unique (environment, situation, driver, type of vehicle, other road users involved, weather, daylight, traffic volume etc.); theoretically only a high number of samples allows classifying typical situations or declaring influencing values insignificant. For the limited testing actions in research projects this means that in the planning phase of test drives particular effort has to be made to identify or determine the most relevant basic and influencing conditions, in order to create the typical testing environments for testing the impacts of the application and, in the case of SAFESPOT, to enable comparable testing across the different test sites.

Traffic simulation

Traffic simulations are virtual reproductions of real environments in a computer environment that include the most relevant parameters controlling the system to be evaluated. Though simulations do not replace real tests, they quite easily allow testing changes in the traffic system by varying the values of parameters or the combinations of basic conditions, as well as by adding elements not yet established in reality (e.g. the effects of an additional lane without real implementation). Therefore simulations have proved to be very useful for pre-testing to identify the most relevant test scenarios in order to optimise real tests, or where real tests would be too complex, too costly or simply not feasible because systems are not yet available in the real environment.


The latter applies especially to the assessment of the traffic efficiency of SP5 applications. Because there is only a small fleet of SAFESPOT test vehicles (low penetration rate), it is most likely that the applications will have an impact on the individual vehicle but very little influence on the overall traffic situation. Some of the applications only work well if a sufficient number of vehicles is equipped (e.g. longitudinal equalisation of traffic by the SP5 application SpA in case of severe weather conditions on motorways). In this case limited traffic simulations performed in WP5.5 can virtually equip other vehicles with the functionality of the SP5 applications. An outcome could be the identification of minimum penetration rates for an effective SAFESPOT system as well as of its effects.

To obtain significant results that refer to the context of SP5 it is highly recommended to include the inputs from the SAFESPOT test drives for the calibration of the simulation model, to create a link to the real driving context.
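Identifying a minimum penetration rate amounts to sweeping the equipped share of vehicles and observing a proxy effect. The toy model below illustrates the procedure only; all probabilities are invented and it is in no way the WP5.5 simulator.

```python
# Toy sketch of a penetration-rate sweep: virtually 'equip' a share of
# vehicles and measure a proxy effect (all probabilities invented).
import random

random.seed(42)

def run_scenario(penetration: float, n_vehicles: int = 200) -> float:
    """Return the share of conflicts avoided in a toy traffic model."""
    avoided = conflicts = 0
    for _ in range(n_vehicles):
        meets_hazard = random.random() < 0.3          # 30 % of vehicles meet a hazard
        equipped = random.random() < penetration      # vehicle carries the application
        if meets_hazard:
            conflicts += 1
            # assumption: an equipped vehicle avoids the conflict 80 % of the time
            if equipped and random.random() < 0.8:
                avoided += 1
    return avoided / conflicts if conflicts else 0.0

TARGET = 0.4  # e.g. 40 % of conflicts avoided
for p in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]:
    effect = run_scenario(p)
    print(f"penetration {p:.0%}: {effect:.0%} conflicts avoided")
    if effect >= TARGET:
        print(f"-> minimum penetration for target effect: {p:.0%}")
        break
```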

Driving simulator

The driving simulator represents an evaluation method located between the real tests and the simulation. Though the driver 'drives' a dummy car in a virtual environment displayed on screens, it comes close to a real driving context because the virtual environment interacts with the driver's behaviour. The driving context can be specially designed for the relevant situations to be assessed, and a situation can be reproduced as many times as required, e.g. when testing with more test persons.

Limited driving simulator tests are planned for the assessment of SP5 applications, e.g. for Driver acceptance or the performance of applications under controlled and 'quasi-real' conditions. Typical reasons for using driving simulators are:

• to avoid endangering test persons in safety-critical situations (e.g. red light violations)

• to apply more complex methods of measurement (e.g. measurement of brain activity) that can only be provided in a laboratory

• to ensure a better comparability of tests by the utilisation of pre-defined driving and surrounding conditions (and with this the elimination of incidental effects which might occur in reality) and the repeated use of the same scenarios with different test persons

Interviews/questionnaire

The method of interviews/questionnaires is used to collect systematic information on the opinion, knowledge and behaviour of human beings. The method is utilised as a 'stand-alone' assessment method, such as a stated-preference interview or an online questionnaire – e.g. based on a textual description, picture or video, the effects of applications can be evaluated on a theoretical basis like 'watch the video and choose how you would react: a), b) or c)'. At the same time it can be applied as a subset of another evaluation method, like an interview performed during or after a test drive. For the performance of the tests a wide range of standardised tools is available. The use of these pre-defined questions, answers or scales simplifies the analysis of results and facilitates their comparability.

Data analysis

The data analysis is mostly related to the technical assessment. Information collected by simulations, experiments, measurements etc. and stored in result tables or log files (reaction times, eye movements etc.) is analysed to validate e.g. functions or reactions. By definition the data analysis is a very exact method of evaluation. Therefore it requires a proper definition of the framework as well as a well-defined execution of the analysis to allow its comparability across the different test sites. Typical risks of error regarding time response originate e.g. from wrong synchronisations (e.g. different reference times, latencies of systems) or from comparing 'equal' data that has been processed in a different way because of a lack of common definition (e.g. whether to use the mean or the median value).
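Both error sources can be made concrete in a few lines; the sketch below corrects an assumed clock offset between two logs and shows how mean and median react differently to a single outlier (log contents and offset are invented).

```python
# Minimal sketch of a latency analysis over two log files (assumed format:
# event id -> timestamp in seconds), illustrating the synchronisation risk.
from statistics import mean, median

rsu_log = {"msg1": 10.000, "msg2": 11.000, "msg3": 12.000}   # sender clock
veh_log = {"msg1": 10.135, "msg2": 11.090, "msg3": 12.710}   # receiver clock

CLOCK_OFFSET = 0.050  # assumed offset between the two reference clocks [s]

latencies = [veh_log[k] - rsu_log[k] - CLOCK_OFFSET for k in rsu_log]
print(f"mean latency:   {mean(latencies):.3f} s")
print(f"median latency: {median(latencies):.3f} s")  # robust against the msg3 outlier
```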

Measurement

Measurements are defined as planned actions to obtain quantitative information on an indicator. Measurements are mostly related to the assessment of technical aspects, e.g. whether a message was broadcast in time, but may also refer to other assessment categories. Like the data analysis, the measurement requires a proper definition of the framework as well as a well-defined execution to allow its comparability across the different test sites. Within the SAFESPOT activities measurements can be carried out through components already included in the applications (e.g. measurement of transmissions between two interfaces) or by additional/external ones, such as measuring the signal strength of the W-LAN broadcast with a professional tool.

Observation

Typically the method of observation is combined with another method of evaluation, mostly with test drives or driving simulator studies. An example performed in-vehicle is the 'Wiener Fahrprobe' (Risser, 1985 [5]). This observation method includes two observers in the vehicle watching how the driver behaves in real traffic. One of the observers registers standardised variables such as speed adaptation at junctions or obstacles, lane changes etc. The other observer makes free observations, e.g. of conflicts, communication, interaction and special events.

A very pragmatic use of observations can be made for test drives on prepared test sites. E.g. the reception of a warning in time can easily be assessed by the use of markers that indicate important points of action – e.g. the warning message has to be received within the zone between marker 1 and marker 2 to enable a safe braking (before reaching marker 3).
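The marker check itself reduces to a range test; the sketch below assumes marker positions in metres along the test track, purely for illustration.

```python
# Minimal sketch of the marker-based check (marker positions assumed):
# the warning must arrive between marker 1 and 2 so that a safe braking
# is possible before marker 3.
MARKER_1, MARKER_2, MARKER_3 = 100.0, 250.0, 400.0  # positions along track [m]

def warning_in_time(reception_pos_m: float) -> bool:
    """True if the warning was received inside the required zone."""
    return MARKER_1 <= reception_pos_m <= MARKER_2

for pos in (180.0, 310.0):  # observed reception positions of two test runs
    verdict = "OK" if warning_in_time(pos) else "outside zone"
    print(f"warning received at {pos:.0f} m: {verdict}")
```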

Before-after analysis

The before-after analysis is the typical method of evaluation for the SAFESPOT applications. The situation before (without the SAFESPOT application) is compared to the situation after implementing the SAFESPOT systems. For this purpose the relevant situations for testing the applications have to be defined, as well as the parameters and indicators qualified to assess the impacts. It is important to ensure that the surrounding conditions remain constant during the tests, so that the changes can be attributed to the implementation of the application.

Target-performance comparison

The target-performance comparison is an evaluation method often used in the context of controlling. The method corresponds to the general approach of assessment envisaged for WP5.6 (and 4.6), which is to evaluate the applications on the basis of the SAFESPOT objectives, operationalised through representative indicators and thresholds to be met.

An example of this process is shown in Table 3 of chapter 3.2.1. The first column of the table describes the goal of the application, e.g. the provision of a correct message. Columns two and three describe which indicators are to be used to test the correctness and how this should be done. The last column 'Definition of thresholds/target corridors' includes the threshold to be met. If the threshold is not met, this should lead to the derivation of measures to improve the result.
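Expressed in code, a target-performance comparison is just a list of measured values checked against their corridors; the sketch below uses invented indicator values and thresholds in the spirit of Table 3.

```python
# Minimal sketch of a target-performance comparison (values invented).
tests = [
    # (indicator, measured value, predicate encoding the threshold/corridor)
    ("transmission time [s]", 0.8, lambda v: v < 1.0),
    ("lateral shift [m]",     0.7, lambda v: abs(v) <= 0.5),
]

for name, value, meets_target in tests:
    verdict = "met" if meets_target(value) else "NOT met -> derive improvement measures"
    print(f"{name}: {value} -> threshold {verdict}")
```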

3.2.3. Definition of assessment objectives (Success criteria)

The most central element of evaluation, next to the application itself, is the definition of the objectives of assessment. According to the V-model, the SAFESPOT applications were developed on the basis of predefined objectives (see 3.1), from more general demands to detailed ones. Consequently, these objectives are used again to assess whether the applications meet the goals once defined. The success criteria are classified into four classes:

• SAFESPOT and CoSSIB (high level) objectives (HLO)
• User Needs (UN)
• Requirements (REQ) and
• Risks by SP6

The SAFESPOT (high level) objectives and SP6 risks are valid for all SAFESPOT sub-projects and therefore create a certain overall standard within the project (and to SP4 SCOVA). The Requirements, User Needs and CoSSIB objectives create a common basis for the evaluation process within the CoSSIB sub-project.

SAFESPOT and SP5 High Level Objectives

The SAFESPOT and CoSSIB (high level) objectives (HLO) have been extracted from the SAFESPOT Technical Annex and represent the major goals defined for the SAFESPOT project and CoSSIB (see also Annex Chapter 3.1). Most of them are also used by WP4.6.

The HLO are on a quite general level. To be assessed in the Test cases, the objectives have to be replaced by representative indicators.


SP5 User needs

The User needs have been developed in D5.2.1 [7] – User Needs and Requirements – for every SP5 application. Though they are more specific than the HLO, the instrument of operationalisation is still needed to translate the User needs (UN) into indicators to be measured or assessed. Because some of the UN are rather technical and thus more related to the technical validation of WP5.5, the original UN have been reduced to the evaluation-related ones. This also applies if applications do not address those UN anymore. Nevertheless the testers are free to test these as well (see Annex Chapter 3.2).

SP5 Requirements

The requirements (REQ) were developed for each application in D5.2.1 [7] and D5.2.3 [10] – Definition of use case and user requirements, and Application Scenarios and System Requirements. The requirements mostly refer to technical questions, which are part of WP5.5 – Technical test. Nevertheless some of the requirements also refer to a more general view or are only applicable in a real driving context; in this case they will also be included in WP5.6 (see Annex Chapter 3.3). Nevertheless the testers are free to also test the other REQ.

SP6 Risks

SP6 (BLADE), which deals with business models, legal aspects and deployment, compiled a list of possible risks that might influence the success of the deployment of the SAFESPOT applications. The risks address general problems that might occur during and after the implementation of the applications and therefore concern all SPs. Here too a selection was made to put the stress on the task of evaluation (see Annex Chapter 3.4) and to reduce the technically oriented risks.

3.2.4. Definition of expected impacts (pre-assessment)

It is the task of evaluation to assess the expected impacts of an application through well-defined test cases. Due to restraints in time and budget it is most important to focus not only on whether an impact is represented by a test case but also on how significant the result would be. Therefore a phase of 'pre-assessment' is essential to identify which combinations of

• application,
• assessment category,
• evaluation method,
• situation/surrounding condition,
• etc.

promise to generate the most significant results for the respective target groups to be addressed. This process is fundamental to avoid a distinction into too many subsets and the performance of insignificant tests at the cost of significant ones.

3.2.5. Study design


After the definition of the assessment objectives the assessment methodology has to be designed in detail which leads to the Study design for the Test cases. The affiliated steps according to the ‘Guidebook for Assessment of Transport Telematics Applications’ (Zhang et al., 1998 [16]) are listed in the following figure.

Steps to create the Test case: Definition of indicators, Reference case, Data collection, Conditions of experiment, Statistical considerations, Measurement plan and Integrity of experiment – leading to the Test case/Pilot.

Figure 5: Study design of WP5.6 (on the basis of Zhang et al., 1998)

Definition of Indicators

To check whether an objective has been met by an SP5 application, suitable indicators, thresholds and experimental methods have to be defined. This process of substitution is called operationalisation. The indicator or set of indicators has to reflect the performance or impact of the application and has to deliver reliable results for the measurement methods and tools chosen.

If applications are tested on more than one SAFESPOT test site, it is indispensable to define indicators and measurement methods in a common way to enable a cross-site comparison of results.

Reference case

Especially for the 'Before-after analysis' a reference case is needed to define a clear status before the application is integrated in the traffic system. Depending on the assessment category, and on whether the application is tested on more than one test site, it is more or less easy to define comparable reference cases for the evaluation process.

The technical assessment of standardised components principally represents an uncritical category of assessment, while e.g. a user acceptance test may lead to different results for different user types; in this case a proper classification of user types will be inevitable. If applications are tested on more than one test site, it is recommended to define the reference case as clearly as possible to facilitate the comparison of results. In fact this is not always feasible, in particular for tests on real test sites, each offering a different setup of the road environment.

Data collection

The most relevant evaluation methods for data collection are measurements, observations, interviews, questionnaires and simulations. For empirical statements, measurements and observations in real-life contexts are mandatory, but they are often laborious and too small-scaled for representative results. Simulations offer virtual results but enable the creation of the system under various and repeatable environmental conditions, especially for situations that would be dangerous or impossible to create in practice.

Conditions of experiment

The conditions of experiment include a more detailed view of the surrounding conditions for the experiments and are a continuation and detailing of the reference case described above. Their main task is to define a controlled and homogeneous situation for the tests by incorporating factors like required traffic volumes, weather conditions, availability of daylight etc.

Statistical considerations

Typically it is necessary to estimate the expected effects and the representativeness or statistical confidence of the experiment.

As a matter of fact, and typical for research projects, the limited test fleet of the SAFESPOT project and the therefore presumably small sample sizes will prevent a statistical approach for most tests (see also 3.2.6).

Measurement plan

The measurement plan represents the timing of the (measuring/observation) experiments, arranged such that neither the performance of the application to be tested nor the reference case is influenced.

Integrity of experiment

Limited resources for assessment often lead to concentrated testing of the impacts regarded as most important, with the risk of ignoring other important impacts. In addition, accidental effects may affect the results, like respondent fatigue, prototype HMI etc. The 'Guidebook for Assessment of Transport Telematics Applications' (Zhang et al. [16]) summarises this into three considerations to be made – about completeness, insularity and disturbance of the assessment process (see also 3.2.6).


3.2.6. Statistical restrictions

This evaluation is used to gain first insights into the effects of the systems on safety, efficiency, technical aspects, driving performance and workload. Tailor-made scenarios target the critical scenarios for each system. In that way the scenarios should indicate large positive or negative effects of the system in specific situations with predefined settings.

To be able to evaluate the total impact of the SAFESPOT systems, a complete Field Operational Test (FOT) would have to be executed; in fact, in SAFESPOT the amount of data that can be measured for each test case is limited by resources and time. For this reason the evaluation of the SAFESPOT systems aims at providing at least ten runs for every scenario to cover some of the variation between the test runs. Descriptive statistics will be used to illustrate the outcome of the evaluation tests.

Whenever the data set of the evaluation allows for hypothesis testing, these statistical tests should be executed. Hypothesis testing for the evaluation plan will generally take the form of testing a null hypothesis, "There is no effect of the SAFESPOT system", against an alternative hypothesis such as "The SAFESPOT system causes a decrease in speed variation of x %". To carry out such a test two data sets are needed: one set of data in which the participants drive with the system switched off (the baseline) and one data set in which participants drove with the system switched on, each with a sufficient sample size. Comparing the measures between the data sets, the analysis will be done using standard techniques such as t-tests, ANOVA and repeated measurements; in the case of categorical data, Chi-square testing will be used. Results will only be reported as significant at the p < .05 level.
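As a minimal sketch of such a test – here a two-sample t-test via scipy.stats, with invented per-run speed-variation values – the comparison of baseline and treatment runs could look as follows:

```python
# Minimal sketch of the hypothesis test described above (data invented).
# H0: the SAFESPOT system has no effect on speed variation.
from scipy import stats

# Hypothetical per-run standard deviations of speed [km/h]:
baseline  = [8.2, 7.9, 9.1, 8.5, 8.8, 9.4, 8.0, 8.7, 9.0, 8.3]  # system off
treatment = [7.1, 6.8, 7.5, 7.0, 7.3, 6.9, 7.6, 7.2, 6.7, 7.4]  # system on

t_stat, p_value = stats.ttest_ind(baseline, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # significance level used in the evaluation plan
    print("Reject H0: the system changes speed variation.")
else:
    print("No significant effect detected.")
```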

3.2.7. Tests and execution of Evaluation

The execution of the tests and the assessment of the results obtained will be reported in follow-up documents D5.6.2-5.6.6.


4. SP5 Test cases

4.1. Template for Test cases

This chapter presents the template used to define the SP5 Test cases in an integrated way. The template is commonly utilised by WP4.6 and WP5.6 and was created on the basis of the SAFESPOT Test Form provided by partner MMSE for the technical validation, to provide a common approach within the SF project. The WP5.6 template adopts the structure and main subjects of the SAFESPOT Test Form but includes additional sections that are not available in the Test Form. These adjustments are necessary for the evaluation process, which differs from the technical validation covered by the MMSE Test Form. The major added value of the WP5.6 template is a more detailed and structured main section for the success criteria, to ensure a better comparison of results across the SP5 tests.

A detailed description of the structure and contents of the WP5.6 Template is included in the annex.

The Test Form is divided into four sections:

• A Header Section that collects the general information like related task and application, authors, used hardware and software components, test type, test purpose, etc.
• A Test Setup Frame with the description of the test setup and scenario,
• A Statement of Success Criteria (SAFESPOT and SP5 Objectives, User Needs, Requirements and Risks, which will be the basis for the evaluation process) to be met,
• A Report of Non-Compliance (things that did not work as expected) and results.

In the following an example of a WP5.6 template completed for the SP5 IRIS application IRIS_01 is shown.


4.1.1. Header Section

The Header section, shown in Figure 6, includes the general information required to define the framework of the tests:

• Task: identifies the environment to which the test is related (motorway, rural, urban)
• Compiled by / Company, Date, Sheet number
• Application(s) tested: applications / sub-applications tested in the test case
• Vehicles / RSU, HW components, SW components
• Test Type: technical evaluation, human factors test or evaluation of traffic safety and efficiency
• Test purpose
• Test environment: tests on SAFESPOT test sites, traffic simulation, driving simulator, other

Figure 6: Header section frame WP5.6

4.1.2. Test Setup Frame

Next to the success criteria, the Test Setup Frame is the main content of the template; it contains the selection of the corresponding use case(s) and the description of the test setup, which in fact includes the description of the test (see Figure 7 and Figure 8).

• Use Cases: according to D5.2.1
• Multiple Applications


Figure 7: Corresponding Use Case of test case/pilot

• Test setup and scenario: with recommended contents
  – Heading
  – Brief description of test case and the goals of test
  – General/settings
  – Figure/sketch of test procedure
  – Description of tests/variations and goals of test

Figure 8: Test setup frame


4.1.3. Success Criteria

The section of success criteria defines the assessment criteria used to state whether the SAFESPOT applications were successfully developed and integrated or not. The tables foreseen in the template are subdivided into the different types of success criteria:

• SAFESPOT and SP5 (high level) objectives (HLO)
• User Needs (UN)
• Requirements (REQ)
• SP6 Risks
• success criteria not yet included

The relevant success criteria have to be extracted from the pre-defined HLO, SP5O, UN, REQ and Risks included in the annex of D5.6.1. Figure 9 shows the Success Criteria frame of the WP5.6 template. The objectives are to be further detailed in additional columns that should contain information on the operationalisation process, i.e. the choice of suitable indicators, thresholds and experimental methods to meet the objectives in a test. This may be difficult in the initial stage of test planning, but for the final evaluation it is important to have clearly defined methods and quantitative thresholds.

Figure 9: Success Criteria frame


4.1.4. Results reporting

The last section of the WP5.6 template already includes space for the Non-Compliance Reporting to be applied for all SAFESPOT applications, and for the results obtained. These are included for the sake of completeness but will be completed in the course of the tests or in other tasks.

Figure 10: Report on Non-Compliance and Results

4.2. Summary of SP5 Test cases planned

The following table gives a brief overview of the SP5 Test cases by the SAFESPOT partners for the applications:

• Speed Alert (SpA)
• Hazard & Incident Warning (H&IW)
• Intelligent Cooperative Intersection Safety (IRIS)
• Road Departure (RDep)
• Safety Margin for Assistance and Emergency Vehicles (SMAEV)


Table 7: Summary of SP5 Test cases planned

| No. | Sub-Application | Test site – Environment | Use case | Test case – title and/or short description | Test Type | Test Purpose |
|---|---|---|---|---|---|---|
| Speed Alert (SpA) | | | | | | |
| SpA_Satory-Test_WE_01 | SpA_01 | TS WEST – Satory test track – Motorway | SP5_UC32 | Speed warning in case of legal speed violation – Technical validation | Technical evaluation | Reliability, Correctness |
| SpA_Satory-Test_02_HF | SpA_01 | TS WEST – Satory test track – Motorway | SP5_UC32 | Speed warning in case of legal speed violation – Human factors evaluation | Human Factors | Usability, Acceptance |
| SpA_Satory-Test_03 | SpA_01 | TS WEST – Satory test track – Rural | SP5_UC41, SP5_UC42, SP5_UC45 | Speed warning in case of black spot – Technical validation | Technical evaluation | Reliability, Correctness |
| SpA_Satory-Test_04_HF | SpA_01 | TS WEST – Satory test track – Rural | SP5_UC22 | Speed warning in case of black spot – Human factors evaluation | Human Factors | Usability, Acceptance |
| SpA_N629_NL_01_HF | SpA_01, SpA_02 | TS NL – N629 – Rural (intersection) | SP5_UC42 | Human factors assessment of SpA – assessment by HF experts of the SpA application for different warning modalities | Human factors evaluation | Usability, Acceptance, Workload |
| SpA_Delft_NL_01_Sim | SpA_01 | Traffic simulation – NL, Delft – Motorway | SP5_UC32 | Traffic simulation: excessive speed – effects on safety and efficiency through the critical speed warning a driver will receive in case of an SpA/H&IW application for obstacles | Efficiency evaluation | Correctness |
| SpA_Delft_NL_02_Sim | SpA_01 | Traffic simulation – NL, Delft – Motorway | SP5_UC13, 14 | Traffic simulation: obstacles (accidents and jams) – effects on safety and efficiency through the critical speed warning a driver will receive in case of an SpA/H&IW application for obstacles | Efficiency evaluation | Correctness |
| Hazard & Incident Warning (H&IW) | | | | | | |
| H&IW_CRF-Test_IT_01 | H&IW_02 | TS IT – CRF test track – Rural | SP5_UC34 | Detection of wrong-way (ghost) driver using RFID sensors and warning of approaching SF vehicle | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| H&IW_Torino-Caselle_IT_02 | H&IW_02 | TS IT – Torino-Caselle – Motorway | SP5_UC34 | Detection of wrong-way (ghost) driver on the highway and warning of approaching SF vehicle using WSN | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| H&IW_Brescia-Padova_IT_03 | H&IW_01 | TS IT – Brescia-Padova – Motorway | SP5_UC13, 14, 15 | Detection of obstacle/stopped vehicle on motorway through analysis made by ECAID and warning of approaching SF vehicle using WSN | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| H&IW_Torino-Caselle_IT_04 | H&IW_01 | TS IT – Torino-Caselle – Motorway | SP5_UC13, 14, 15 | Detection of obstacle/stopped vehicle on road and warning of approaching SF vehicle using WSN | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| H&IW_Torino-Caselle_IT_05 | H&IW_01 | TS IT – Torino-Caselle – Motorway | SP5_UC17 | Detection of a simulated pedestrian/animal on road and warning of approaching SF vehicle using thermal imaging camera | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| H&IW_A85_WE_01 | H&IW_01 | TS WEST – A85 – Rural/Motorway | SP5_UC13 | Accident as an obstacle on motorway – information transmitted by the SF vehicle itself and forwarded to the approaching SF vehicles by the RSU | Technical evaluation, Safety evaluation | Correctness |
| H&IW_A85_WE_02 | H&IW_01 | TS WEST – A85 – Rural/Motorway | SP5_UC16 | Accident/roadworks on motorway – information transmitted by the SF vehicle itself and forwarded to the approaching SF vehicles by the RSU | Technical evaluation, Safety evaluation | Correctness |
| H&IW_A85_Satory-Test_WE_03 | H&IW_01 | TS WEST – A85, Satory test track – Rural/Motorway | SP5_UC13, 16 | Accident/roadworks/stopped vehicle on motorway – information transmitted by the SF vehicle itself and forwarded to the approaching SF vehicles by the RSU | Human Factors, Technical evaluation, Safety evaluation | Usability, Acceptance, Performance, Reliability, Correctness |
| H&IW_Delft_NL_01_Sim | H&IW_01, H&IW_03 | Traffic simulation – NL, Delft – Motorway | SP5_UC13, SP5_UC14 | Traffic impact assessment – for different configurations (communication, penetration etc.) the traffic efficiency, capacity and throughput will be quantified | Traffic impact evaluation | Correctness |
| IRIS | | | | | | |
| IRIS_Dortmund_DE_01 | IRIS_01 | TS DE – Dortmund – Urban | SP5_UC31 | Red light violation – prevention of potential crash with other vehicles or vulnerable road users at the intersection | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| IRIS_Dortmund_DE_02 | IRIS_01 | TS DE – Dortmund – Urban | SP5_UC31 | Red light violation – prevention of crash or at least mitigation of impact for SF vehicles at the intersection in case of red light violation | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| IRIS_Dortmund_DE_03 | IRIS_01 | TS DE – Dortmund – Urban | SP5_UC22 | Right turn – prevention of potential crash of SF vehicle with VRU at the intersection | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| IRIS_Dortmund_DE_04 | IRIS_01 | TS DE – Dortmund – Urban | SP5_UC22 | Left turn – prevention of potential crash of SF vehicle with oncoming vehicle at the intersection | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| IRIS_Dortmund_DE_05 | IRIS_02 | TS DE – Dortmund – Urban | SP5_UC55 | Emergency vehicle – prevention of potential crash of SF vehicles with the emergency vehicle | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| IRIS_Helmond_NL_01_HF | IRIS_01 | TS NL – Helmond – Urban | SP5_UC55 | Human factors instrumented vehicle study – to assess driver behaviour, acceptance etc. for various warning modalities in left and right turn situations | Human factors evaluation | Usability, Acceptance |
| IRIS_Soesterberg_NL_02_HF | IRIS_01, IRIS_02 | Driving simulator – NL, Soesterberg – Urban | SP5_UC55 | Human factors driving simulator study – assessment of driver behaviour for different warning strategies when emergency vehicles and red light violators approach | Human factors evaluation | Usability, Acceptance, Workload, Performance |
| Road Departure (RDep) | | | | | | |
| RDep_CRF-Test_IT_01 | RDep_01 | TS IT – CRF test track – Rural/Motorway | SP5_UC16, SP5_UC45 | Prevention of road departure caused by road works | Technical evaluation, Safety evaluation | Reliability, Correctness |
| RDep_Satory-Test_WE_01 | RDep_01 | TS WEST – Satory test track – Rural/Motorway | SP5_UC21, SP5_UC41, SP5_UC42, SP5_UC45 | Prevention of road departure due to a dangerous curve | Technical evaluation, Safety evaluation | Reliability, Correctness |
| Safety Margin for Assistance and Emergency Vehicles (SMAEV) | | | | | | |
| SMAEV_CRF-Test_IT_01 | SMAEV_01 | TS IT – CRF test track – Motorway | SP5_UC11 | Signalised slow vehicle – the incoming SF vehicles are warned of a slowly driving assistance vehicle | Technical evaluation, Safety evaluation | Reliability, Correctness |
| SMAEV_CRF-Test_IT_02 | SMAEV_01 | TS IT – CRF test track – Urban/Rural/Motorway | SP5_UC51 | Event warning – vehicle stopped – measurement of signalling performance | Technical evaluation, Safety evaluation | Reliability, Correctness |
| SMAEV_CG22_WE_01 | SMAEV_01 | TS WEST – CG22 – Motorway | SP5_UC51 | The incoming SF vehicles will be warned that an event occurred and will be able to adapt their speed or behaviour | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |
| SMAEV_Satory-Test_WE_02 | SMAEV_01 | TS WEST – Satory test track – Urban | SP5_UC52 | Prioritisation of EV at a signalised intersection to achieve safe and fast crossing of the EV | Technical evaluation, Safety evaluation | Performance, Reliability, Correctness |


5. Standardised results reporting

5.1. Adjusted reporting

After the completion of the development of the test cases, the evaluation tests will be further detailed and executed in the next months on the SAFESPOT test sites. This process will be accompanied by the related tasks and deliverables D5.6.2-5.6.4.

To continue the common approach pursued, an adjusted reporting of the results obtained in the evaluation tests is intended, through the definition of a conjoint document structure and contents in co-ordination with WP4.6. Where reasonable, a standardised way of reporting the results (e.g. by table or histogram) will also be proposed.

5.1.1. Detailed result reporting

D4.6.1 already proposes the following document structure for a detailed result reporting (max. 10 pages), together with some remarks on the contents to be included:

• Summary / Abstract
• Introduction
• Method
• Results
• Interpretation of results
• Discussion / Conclusion
• References
• Annexes

5.1.2. Highlighted result reporting

Especially for the comparison of results of different applications (also cross-SP) or for the task of presentation a brief overview of the results obtained would be desirable.

This should be a presentation of the test case or application, e.g.:

• Summarised on one page
• Pre-defined layout – picture of the field trial, short description, test executed, simplified result reporting, possibly more details including ideas for improvement or suggestions
• Simplified result reporting, e.g. using only a few categories to describe the test results (a minimal sketch of such a mapping follows this list):

o A (green) – threshold achieved (e.g. vehicle stops in time)
o B (yellow) – threshold not achieved → reduced positive impact, ideas for improvement (e.g. vehicle does not stop in time, but a reduced impact is achieved)


o C (orange/red) – threshold not achieved → ideas for improvement (e.g. vehicle does not stop in time)
o D (gray) – not tested (e.g. in the case of a comparison of different applications)
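
To make the categorisation unambiguous for the evaluators, the mapping above can be written down in a few lines of code. The following is a minimal, hypothetical sketch; the class names and fields are illustrative assumptions, not part of the deliverable or of any SAFESPOT tool:

```python
# A minimal, hypothetical sketch of the A-D result categorisation described
# above. Class names and fields are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    A = "green - threshold achieved"
    B = "yellow - threshold not achieved, reduced positive impact"
    C = "orange/red - threshold not achieved"
    D = "gray - not tested"


@dataclass
class TestResult:
    tested: bool
    threshold_achieved: bool = False
    reduced_impact: bool = False  # e.g. vehicle brakes late but still mitigates


def categorise(result: TestResult) -> Category:
    """Map a single test outcome to the simplified reporting category."""
    if not result.tested:
        return Category.D
    if result.threshold_achieved:
        return Category.A
    return Category.B if result.reduced_impact else Category.C


# Example: the vehicle does not stop in time, but the impact is reduced -> B
print(categorise(TestResult(tested=True, reduced_impact=True)).name)
```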

5.2. Non-compliance reporting (adopted from D4.6.1)

Problems occurring during the testing activities shall be reported in the common Non Compliance Management Tracking Form [3], so that the collection, submission, tracking and fixing of problems can be managed.

The Non Compliance Management Tracking Form covers all technical Sub Projects from SP1 to SP5. It is an Excel document consisting of one sheet per technical Sub Project.

It must be used whenever non-compliant behaviour occurs in a testing activity involving components provided by different sources whose integration caused a fault.

Whoever identifies the fault (the owner) must determine which Sub Project is involved and fill in a new row in the corresponding sheet of the Non Compliance Management Tracking Form. The owner then sets the status to OPEN and uploads the document to the BSCW area, at the same time sending the Non Compliance Management Tracking Form to the responsible person of the Sub Project involved (the addressee).

If more than one Sub Project is involved in the detected fault, the owner must send an email to the reference persons of all involved Sub Projects. The Sub Project most affected must coordinate the non-compliance fixing phase and set the status of the row to ACKNOWLEDGE.

When the addressee solves the problem (or coordinates the solution among the involved Sub Projects), he sets the status of the row to FIXED, uploads the document to the BSCW and sends an email back to the owner.

The owner must then verify the solution of the problem and set the status to CLOSED. The full status lifecycle is summarised in the sketch below.
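
The following sketch illustrates this lifecycle as a small state machine. The status names (OPEN, ACKNOWLEDGE, FIXED, CLOSED) come from the text above; the transition table and the function are illustrative assumptions, not part of the tracking form itself:

```python
# A minimal sketch of the non-compliance status lifecycle described above.
# The status names come from the text; the transition table is an assumption.
from enum import Enum, auto


class Status(Enum):
    OPEN = auto()         # owner detected the fault and filled in a new row
    ACKNOWLEDGE = auto()  # coordination started (several Sub Projects involved)
    FIXED = auto()        # addressee solved the problem
    CLOSED = auto()       # owner verified the solution


# Allowed transitions. ACKNOWLEDGE is only used when more than one
# Sub Project is involved, so a row may also go directly from OPEN to FIXED.
ALLOWED = {
    Status.OPEN: {Status.ACKNOWLEDGE, Status.FIXED},
    Status.ACKNOWLEDGE: {Status.FIXED},
    Status.FIXED: {Status.CLOSED},
    Status.CLOSED: set(),
}


def advance(current: Status, new: Status) -> Status:
    """Move a tracking-form row to a new status, rejecting invalid jumps."""
    if new not in ALLOWED[current]:
        raise ValueError(f"cannot go from {current.name} to {new.name}")
    return new


# Example: a fault involving a single Sub Project.
status = Status.OPEN
status = advance(status, Status.FIXED)   # addressee solves the problem
status = advance(status, Status.CLOSED)  # owner verifies and closes
print(status.name)  # CLOSED
```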

The Core Group will be in charge of supervising the application of the non-compliance process.

The entire process of downloading and uploading the document and filling in each field of the new row for the detected non-compliance is explained in the tutorial [4].


6. Conclusions

The Evaluation Plan of WP5.6 is the framework document for the definition of the test cases of the SP5 applications. The main contents of the document are the basic description of the assessment methodology to be applied and the WP5.6 templates for creating the SP5 test cases, together with the success criteria used to state whether the SP5 applications meet the goals initially defined by SAFESPOT. This leads to the core content of D5.6.1: the SP5 test cases, which were developed on this basis and are included in this document and its Annex.

An important step proved to be the co-ordinated development of the WP5.6 Evaluation Plan together with WP4.6 SCOVA, which maximised efficiency and comparability within the SAFESPOT project. This was achieved by using a common WP4.6/WP5.6 template for the definition of test cases (developed on the basis of the general SAFESPOT test form by MMSE) and by considering predefined evaluation criteria that reflect the key goals of the SAFESPOT project: the SAFESPOT and CoSSIB (high-level) objectives, the user needs, and the requirements and risks identified by SP6. This reflects a preference for practical objectives over purely theoretical/technical ones.

This document, D5.6.1 – Evaluation Plan, is the first of five tasks defined in WP5.6 Evaluation:

• Evaluation plan (D5.6.1)
• Evaluation on urban roads (D5.6.2)
• Evaluation on highways, expressways and tunnels (D5.6.3)
• Evaluation on rural and secondary roads (D5.6.4)
• Evaluation report, implementation strategy and recommendations (D5.6.5)

After the completion of the development of the test cases, the evaluation tests will be further detailed and executed in the coming months on the dedicated SAFESPOT test sites. This process will be described in the related deliverables D5.6.2–D5.6.4 (Evaluation on urban roads; Evaluation on highways, expressways and tunnels; Evaluation on rural and secondary roads). To assist this process, Chapter 3.7 of the Annex provides basic recommendations to support the performance of the practical field trials, and Chapter 5 gives a perspective on the results reporting.


7. References

[1] Kallberg, V.-P. (2004): Effects of minor speeding offences on road accidents; 17th ICTCT workshop, Tartu, Estonia

[2] Maltby, D., et al., (1998): Checklist for Preparing a Validation Plan: Updated Version, CONVERGE Project TR 1101, Deliverable D2.4.1

[3] Mortara, P., De Gennaro, M.: "Non Compliance Management Tracking Form", BSCW link: http://bscw.safespot-eu.org/bscw/bscw.cgi/187954

[4] Mortara, P., De Gennaro, M.: "Non Compliance Management Tracking Form – Tutorial" (internal report), BSCW link: http://bscw.SAFESPOT-eu.org/bscw/bscw.cgi/187949

[5] Risser, R. (1985): Behaviour in traffic conflict situations. Accident Analysis and Prevention Vol. 17, No. 2, pp 179-197

[6] SAFESPOT Technical Annex, SAFESPOT consortium, 2006

[7] SAFESPOT D4.6.1 – Pilot Plan

[8] SAFESPOT D4.6.1 – Pilot Plan – Annex

[9] SAFESPOT D5.2.1 – Definition of use case and user requirements

[10] SAFESPOT D5.2.3 – Application Scenarios and System Requirements

[11] SAFESPOT D5.3.1 – Specifications for Speed Alert

[12] SAFESPOT D5.3.2 – Specifications for Hazard & Incident Warning

[13] SAFESPOT D5.3.3 – Specifications for Intelligent Cooperative Intersection Safety

[14] SAFESPOT D5.3.4 – Specifications for the Road Departure Application

[15] SAFESPOT D5.3.5 – Specifications for Safety Margin for Assistance and Emergency Vehicles

[16] Zhang, X., et al., (1998): Guidebook for Assessment of Transport Telematics Applications: Updated Version, CONVERGE Project TR 1101, Deliverable D2.3.1