
Efficient Development of the Herschel-Planck Mission Data Systems

J.S.Dodsworth, G Di Girolamo, M Spada, D Verrier European Space Agency/European Space Operations Centre (ESA/ESOC)

Robert-Bosch Str. 5 64293 Darmstadt

Germany [email protected], [email protected], [email protected], [email protected]

Abstract—The European Space Agency’s Space Operations Centre (ESOC), in Darmstadt, Germany, has been involved in the development of operational spacecraft simulators and mission control systems (MCS) for ESA projects over the past 35 years. Together, these form the Mission Data Systems (MDS) for each project. This paper presents the Mission Data Systems for the Herschel-Planck project, which is part of ESA’s Astronomical Mission Family. The Herschel and Planck spacecraft are planned to be launched together by Ariane 5 in 2007 and will both be in Lissajous orbits around the L2 point, approximately 1.5 million km from Earth in the anti-sun direction.

Herschel is an observatory mission and Planck is a survey mission. At ESOC the two spacecraft will have a common Flight Control Team and the same Mission Data Systems. The Herschel-Planck missions represent a major step forward in terms of technology and technological demands. The spacecraft and instruments are designed for autonomous operations driven by an on-board schedule, relying on a single 3-hour telecommunication period per day. Huge amounts of data will be stored on-board and downlinked at data rates (1.5 Mbps) that are unprecedented for such missions. The short contact window and high data rates impose very high performance demands on the Mission Data Systems. In particular, there are very high monitoring and archiving requirements on the mission control system. Furthermore, emulation of the on-board processors (2 per spacecraft) is extremely demanding and is the main driver of the simulator design and platform selection. The large amount of on-board storage also impacts the simulator resources and performance. The Herschel mission has a planned lifetime of 3.5 years and Planck 2 years. This results in maintainability requirements for the Mission Data Systems from 2004, start of the development, until at least 2010. The paper discusses in detail the challenges related to the above and the solutions that have been identified.

Herschel and Planck are also the first ESA missions to adopt the promising concept of a “Smooth Transition”, involving both space and ground segment development, which can be summarized as “Reuse and share rather than redevelop”. This paper describes ESOC’s contribution to the implementation of this concept, the benefits this brings to the project and to the Mission Data Systems

developments in particular. The following areas are covered:

—The enlargement of the Spacecraft Control and Operations System (SCOS-2000) infrastructure user community: more specifically the adoption of SCOS-2000 across the Project as the central control and monitoring element of the various Electrical Ground Support Equipments and the use by the scientific community of the On-Board Software Maintenance System (OBSM) developed as part of the mission control system;

—Mission Data Systems commonality between the two missions and reuse from predecessor data systems and space segment SCOS-2000-based developments.

The Herschel-Planck project is a further step in the continuous improvement of the Mission Data Systems (MDS) development process at ESOC. The paper describes a new development strategy that emphasises the importance of efficient and effective validation in a world of scarce resources and less and less time to develop systems that are more and more complex. The approach adopted by the Herschel-Planck team maximises the usage of MDS infrastructure test tools for mission control system and simulator validation and introduces system automated testing based on new tools. It also adopts solutions aimed at alleviating the dependence of the simulator development on external inputs during the different stages of its development.

TABLE OF CONTENTS

INTRODUCTION
HERSCHEL & PLANCK MISSIONS
OPERATIONAL CONCEPT
THE SMOOTH TRANSITION
IMPLICATIONS OF THE OPERATIONS CONCEPT
MCS PERFORMANCE REQUIREMENTS
SIMULATOR CHALLENGES
REUSE AND COMMONALITY
IMPROVING THE DEVELOPMENT PROCESS
CONCLUSIONS

SpaceOps 2006 Conference AIAA 2006-5771

Copyright © 2006 by European Space Agency. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.


INTRODUCTION

ESOC has developed simulators and mission control systems for almost all of its missions. Historically ESOC has followed the approach of developing a core infrastructure that provides functions that can be tailored to meet the needs of a set of spacecraft missions.

ESOC has developed mission control system infrastructure on a range of platforms, and the current generation, SCOS-2000[1], supports distributed processing, packet telemetry and packet telecommanding, and runs under Sun/Solaris1 and Linux2. It has been designed to support a range of classes of mission, including deep-space missions, low-Earth orbiting missions, constellations and astronomical observatories.

Although SCOS-2000 was fundamentally conceived as a kernel spacecraft control system, it is now a set of generic applications that can be used ‘out of the box’ and also a set of object-oriented components that can be reused and tailored to other purposes. As it is based upon state-of-the-art technology using readily affordable platforms, its use has spread from purely a post-launch role to use in all mission phases [2].

Herschel and Planck are two of the European Space Agency’s forthcoming astronomy missions. This paper shows the process that has been developed in order to meet the project’s data processing needs and develop operational simulators.

HERSCHEL & PLANCK MISSIONS

Herschel will be launched together with Planck by Ariane 5 into a transfer orbit to a large Lissajous orbit at the L2 point. The transfers into their operational orbits will last 125 to 130 days.

The Herschel Space Observatory (Figure 1) is an observatory mission. It will perform photometry and spectroscopy in the far infrared and sub-millimetre part of the spectrum, covering the 60-670 µm band. Herschel is the only space facility dedicated to this wavelength range.

The Herschel science objectives target the “cold” universe. Black-bodies with temperatures between 5 K and 50 K peak in the Herschel wavelength range, and gases with temperatures between 10 K and a few hundred K emit their brightest molecular and atomic emission lines here. The key science objectives emphasise specifically the formation of stars and galaxies, and the interrelation between the two.

1 Sun and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and in other countries. 2 Linux is a registered trademark of Linus Torvalds.

Science operations (in which commissioning and Performance Verification are included) can start during the transfer to L2. In order to achieve the higher data rates, Herschel may have to orient the high gain antenna towards the Earth. The routine phase, planned to last at least 3 years at L2, will be conducted with long 3-axis stabilised observations interspersed with periods of data transfer, during which the attitude of the spacecraft will be restricted.

Planck (Figure 2) is a survey mission. Its scientific objectives are: to map the temperature anisotropies of the Cosmic Microwave Background (CMB) over the whole sky (at least 95%), at all angular scales larger than 10 arc minutes, with an accuracy set by fundamental astrophysical limits (sensitivity better than ∆T/T ~ 2×10⁻⁶); to map all major galactic and extragalactic sources of emission at the wavelengths measured by Planck (25 to 950 GHz), namely galactic synchrotron, free-free and dust emission, extra-galactic compact and point sources, and S-Z effects from clusters and galaxies, over the whole sky; and to characterise the polarisation state of the CMB.

Planck’s routine phase will be conducted with the spacecraft spin axis in the anti-sun direction, with the spacecraft spinning slowly at 1 rpm. Manoeuvres will be conducted approximately every hour to maintain this attitude. The routine phase is planned to last 15 months, or the time to make 2 full-sky surveys.

Figure 1 The Herschel Spacecraft


OPERATIONAL CONCEPT

The key points of the operations concept for Herschel and Planck are:

• The spacecraft and instruments are designed for autonomous operations and compliant with the operations interface requirements document

• In general, science operations will be on-board schedule (or Mission Timeline, MTL) driven except in special cases for commissioning/performance verification and troubleshooting.

• Real time operations will be restricted to 3 hour periods per spacecraft during the Daily Telecommunication Period (DTCP), and will be principally directed towards spacecraft maintenance (MTL loading, data dumping etc.)

• No Real Time science data will be required except in special cases: Commissioning / Performance Verification, troubleshooting

• Data dumped from the on-board Solid State Mass Memory (SSMM) during the DTCP will be stored at the ground station and transferred to ESOC at a lower rate than received.

• The on-board schedule shall cover at least 48 hours of operations.

Although the Herschel and Planck spacecraft have very different types of mission, the off-line nature of the operations and the fact that they are two spacecraft procured by one Project allow a common approach to spacecraft control and the definition of the interfaces with the science ground segments. The major difference between the two mission control systems lies in the mission planning process and the supporting flight dynamics services.

ESA’s deep space ground station at New Norcia, near Perth, Australia, will be scheduled to support the Herschel and Planck spacecraft with one DTCP of 3 hours per spacecraft each day. In general the operations will be scheduled such that a single shift of spacecraft controllers per day is sufficient for the real time operations. Engineering tasks and mission planning will be normally done during the working day, with on-call support to meet turn-around requirements.

Each spacecraft transfers about 24 hours of data to the ground station in about 3 hours. The spacecraft are capable of storing at least 48 hours of telemetry, in order to give some margin for data recovery if a pass is missed completely or partially.

The high rate data cannot be transferred to the Mission Operation Centre (MOC) in real time. In order to reduce the communications and processing costs, the telemetry transfer rate will be reduced such that the transfer is completed within about 16 hours, giving a reasonable margin for recovery of communications outages before the data from the next pass arrives. Thus operationally, housekeeping (HK) telemetry must be sufficient to judge spacecraft and instrument status, as this is transferred with highest priority. Non-periodic telemetry is required quickly for diagnosis (events etc.) and command verification. Recorded housekeeping is required within 1 hour and the aim is to provide the recorded science telemetry within 12 hours.
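As an illustration of the orders of magnitude involved (the 1.5 Mbps downlink, 3-hour DTCP and roughly 16-hour transfer window come from the scenario above; everything else is simple arithmetic), the following sketch computes the daily dump volume per spacecraft and the implied station-to-ESOC transfer rate:

```python
# Back-of-the-envelope check of the Herschel/Planck data flow figures quoted
# in the text. Only the 1.5 Mbps downlink, 3 h DTCP and ~16 h ground transfer
# window come from the paper; the rest is derived arithmetic.

DOWNLINK_BPS = 1.5e6          # space-to-ground rate during the DTCP
DTCP_HOURS = 3.0              # daily telecommunication period per spacecraft
GROUND_TRANSFER_HOURS = 16.0  # target time to move the dump from station to MOC

dump_bits = DOWNLINK_BPS * DTCP_HOURS * 3600          # one day's dump
dump_gbytes = dump_bits / 8 / 1e9

# Rate needed on the station-to-ESOC line so the dump completes in ~16 h,
# leaving margin before the data from the next pass arrives.
ground_rate_bps = dump_bits / (GROUND_TRANSFER_HOURS * 3600)

print(f"Daily dump per spacecraft : {dump_gbytes:.2f} GB")
print(f"Ground transfer rate      : {ground_rate_bps/1e3:.0f} kbps "
      f"(vs {DOWNLINK_BPS/1e3:.0f} kbps downlink)")
```

With these assumptions the station-to-ESOC link only needs a few hundred kbps, which is consistent with the delayed availability of the recorded housekeeping and science telemetry described above.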

THE SMOOTH TRANSITION

The “Smooth Transition” concept is shared at project level between scientists, the manufacturer and the operations team. The aim is to reuse and share rather than redevelop. The use of common subsystems will permit a reduction in the operations preparation effort compared to conventional missions. The use of the kernel control system developed by the European Space Agency, SCOS-2000, is widespread throughout the project, including the mission control system, the central checkout system and instrument workstations.

Figure 2 The Planck Spacecraft


The Central Checkout System (CCS) is a software system, part of the Electrical Ground Support Equipment (EGSE), used during spacecraft integration activities to command and monitor the spacecraft and the checkout equipment.

The instrument test systems and science data systems are being used in instrument prototyping, integration and test, and the same systems will be used after launch.

Since they are all based on ESA’s SCOS-2000 development, it is possible to combine the same components in different places to meet the different needs. For example, ESOC has developed an On-Board Software Maintenance System (OBSM) and this has been provided to the instrument teams for them to use in their development activities. This same OBSM is now being integrated as part of the Mission Control System at ESOC.

The use of a common database for the mission control system and checkout (including the instruments) will reduce the effort required for data base validation and system configuration. The Herschel-Planck System Databases (HPSDB) will be managed centrally and shared between all Project entities, including the space and ground segments.

IMPLICATIONS OF THE OPERATIONS CONCEPT

These missions impose some special challenges for the development of the Mission Data Systems. The development process needed to be able to both participate in, and contribute to, the smooth transition in a way that reduces life-cycle costs. For example, the mission control system was able to reuse the Central Checkout System features, since it was also based upon SCOS2000, but at the same time, by advancing the development, contribute the component for on-board software maintenance to the instrument EGSE development. This was needed by the instrument teams for instrument testing a long time before the flight control team needed the mission control system. Launch is planned for 2007 and the Herschel lifetime will be at least 3.5 years. It is necessary to keep the mission control system and simulator operational, so the MDS have to run on platforms with operating system and hardware support from 2004 until at least 2010.

Further challenges come from the design of the spacecraft themselves. The service module for each spacecraft has an autonomous design with 2 processors (2x ERC32, each providing approximately 14 Million instructions per second, Mips). In order to obtain the required simulator fidelity and to retain flexibility towards on-board software changes, these processors will be emulated. Running both emulators in parallel in real time is very demanding. Furthermore, the management of the large amount of on-board storage (8 GB) has to be considered. Similar amounts of storage have been flown on other ESA missions such as Rosetta, Mars Express and Cryosat, but the important difference here is that the flight software can access the entire memory address space as RAM, whereas on previous missions it was accessible via an external data bus with sequential access, similar to a tape or a magnetic disk drive with a file system. The random access has a significant impact on the simulator resources and performance. Each spacecraft also has a higher data storage rate and access rate than previous missions, which likewise impacts Mission Data System performance.

Some features of the operations scenario also make life difficult for the Mission Data Systems: the short contact window leads to a high telemetry downlink rate (1.5 Mbps) which, combined with a short turn-around time to process telemetry, leads to very high processing requirements on the mission control system and impacts the required performance of the simulated ground equipment.
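One common way of keeping such a large, randomly accessible memory affordable inside a simulator is to allocate host memory page by page, only when the emulated software actually touches an address. The following is an illustrative sketch only, not the Herschel-Planck simulator design; the class and page size are invented for the example:

```python
# Minimal sketch (not the actual Herschel-Planck simulator design) of a
# page-on-demand memory model: the emulated software sees a flat 8 GB
# address space, but host RAM is only allocated for pages that are touched.

PAGE_SIZE = 64 * 1024  # illustrative page size

class SparseMemory:
    def __init__(self, size_bytes):
        self.size = size_bytes
        self.pages = {}  # page number -> bytearray, created on first access

    def _page(self, addr):
        if addr >= self.size:
            raise IndexError(f"address 0x{addr:x} outside modelled memory")
        return self.pages.setdefault(addr // PAGE_SIZE, bytearray(PAGE_SIZE))

    def read(self, addr):
        return self._page(addr)[addr % PAGE_SIZE]

    def write(self, addr, value):
        self._page(addr)[addr % PAGE_SIZE] = value & 0xFF

ssmm = SparseMemory(8 * 1024**3)       # 8 GB address space
ssmm.write(0x1_2345_6789, 0xAB)        # random access anywhere in the range
assert ssmm.read(0x1_2345_6789) == 0xAB
print(f"host pages actually allocated: {len(ssmm.pages)}")
```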

MCS PERFORMANCE REQUIREMENTS

MCS Monitoring Requirements

Tests show that the key performance factor is the telemetry packet rate. The data flow estimates show that the average packet rate is below 150 packets/sec, which is the demonstrated processing capability of SUN/Solaris SCOS2000-based mission control systems. (The newest version of SCOS2000, version 3.1, is specified to handle 300 packets/sec.) However, as hinted at in the mission operations scenario, the data flow is not uniform. First of all, the non-periodic event and report telemetry is dumped. These command acknowledgements, reports and exceptions are usually very short packets. Since the overall bit rate remains the same, each DTCP has periods of data containing extremely short packets at a rate of 700 packets/sec.
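The relation between the fixed downlink bit rate and the packet rate can be illustrated with a small sketch. The packet lengths used below are illustrative assumptions, chosen only to show how a constant 1.5 Mbps stream yields roughly 150 packets/sec for long packets and roughly 700 packets/sec for very short ones:

```python
# Why a constant 1.5 Mbps downlink produces very different packet rates:
# the rate is fixed in bits, so shorter packets mean more packets per second.
# The packet lengths below are illustrative assumptions, not mission values.

DOWNLINK_BPS = 1.5e6

for label, packet_bytes in [("science/HK packets (example)", 1200),
                            ("short event/ack packets (example)", 260)]:
    packets_per_sec = DOWNLINK_BPS / (packet_bytes * 8)
    print(f"{label:35s}: {packets_per_sec:6.0f} packets/sec")
```

This is why the short-packet periods of the DTCP, rather than the average rate, drive the monitoring design.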

The high processing demand impacts mission control system clients and server(s) differently. On the server side the potential performance bottleneck has been identified in the packet distribution services between server tasks. On the client side, the potential performance bottleneck is the cache, which is the interface mechanism between server(s) and clients.

Solutions to the problem of a high peak rate (12 minutes of VC2 part 1 at 700 packets/sec) under consideration are:

• Implement a direct interface between the telemetry receivers and some of the server tasks - for example, the behavioural limit checker, the status consistency checker, the synthetic parameter packet generator and the parameter interface. This has been successfully implemented on a previous control system also based on SCOS2000 (for the Radarsat project).

• If one server is not powerful enough to run all the server tasks, they may be distributed amongst different machines using the inherent scalability of SCOS2000.

• Short spikes of 700 packets/sec or more can be handled by tuning the buffering of the SCOS2000 General Packetiser.

MCS Archiving Requirements

The driver for the archiving performance is again the telemetry packet rate. The major bottleneck is expected to be disk I/O. For archiving the average packet rate is critical. The Herschel-Planck packet rate peaks can be absorbed during periods of less intense archiving activity. Previous missions have shown this to be possible. The Rosetta/Mars Express control system (running on SUN/Solaris) can handle up to 300 packets/sec, whilst the HP CCS (on PC/Linux) easily copes with 200 packets/sec - and a similar system, the Radarsat MCS (also on PC/Linux), supports up to 2000 packets/sec.

Further considerations included separating the archives on different disks (application level control vs. RAID), or ultimately migrating to SCOS2000 5.0, which will support 1000 packets/sec. The latter option depends on the compatibility of the SCOS2000 5.0 delivery schedule with the Herschel-Planck schedule.
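The argument that packet-rate peaks can be absorbed, as long as the average rate stays below the sustainable archiving rate, can be sketched as a simple buffering calculation. The numbers below are illustrative, not measured Herschel-Planck figures:

```python
# Minimal sketch (not the SCOS-2000 archiver) of how short packet-rate peaks
# can be absorbed by buffering, provided the *average* rate stays below the
# sustainable disk write rate. All numbers are illustrative.

SUSTAINED_WRITE_PPS = 300          # what the archiver can sink continuously
buffer_depth = 0
worst_depth = 0

# One 3-hour DTCP, per-minute packet rates: 12 min peak at 700 pps, then ~110 pps.
profile = [700] * 12 + [110] * 168

for pps in profile:
    buffer_depth += (pps - SUSTAINED_WRITE_PPS) * 60   # packets queued this minute
    buffer_depth = max(buffer_depth, 0)                # buffer never goes negative
    worst_depth = max(worst_depth, buffer_depth)

print(f"worst-case backlog: {worst_depth} packets "
      f"({worst_depth * 260 / 1e6:.1f} MB at 260-byte packets)")
```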

MCS Platform

The Herschel-Planck mission control system has been made platform independent with very limited overhead, suitable for deployment on Solaris, Linux or a hybrid Solaris/Linux platform. The baseline platform for the Herschel-Planck mission control system is SUN/Solaris, in common with other ESOC control systems, although compatibility with Linux or Solaris/Linux is guaranteed at compile/link level.

Initially, only validation on SUN/Solaris was foreseen, for cost reasons. However, after an architectural review it was decided to perform more accurate performance measurements based on the mission-specific workload, and at the end of 2005 it was decided to exercise options (included in our baseline) for a hybrid Solaris/Linux platform.

The new platform baseline foresees that server tasks run on PC/Linux, whilst client applications run on Sun/Solaris hardware.

Figure 3 Mission Control System Overview

(The diagram shows the New Norcia, Kourou and Vilspa ground stations connected via the NCTRS and SLE services to the mission control system at ESOC, together with the flight dynamics system (attitude monitoring, orbit determination, planning), the DDS and the interfaces to the Herschel and Planck science ground segments over the ESA OPSNET.)

Legend: TM - telemetry; TC - telecommand; STC - station computer; SLE - space link extension; DDS - data dissemination system; HSC - Herschel science centre; IFMS - intermediate frequency modulation system; NDIU - network data interface unit; OBSM - onboard software maintenance system; PSO - Planck science office; RT I/F - real time interface


The above set-up derives from an ESOC constraint on the configuration homogeneity of the shared ESOC operational infrastructure.

Onboard Software Maintenance

ESA’s mission control system infrastructure includes a set of customisable components for onboard software management. These include standard facilities for generating PUS telecommands for loading memory areas (e.g. patches and software images), downloading memory areas (dumping), comparison with reference images and the calculation of checksums.

The instrument developers needed precisely this capability in order to permit them to integrate and test their instruments, which are sophisticated multi-processor assemblies driven by large amounts of software. As part of the smooth transition, ESOC provides this component for use by the instrument teams. Since the request for this component came at quite short notice, and the delivery schedule was challenging (the final delivery of the OBSM was needed well before the first delivery of the mission control system) the development of the OBSM was kicked-off under a separate contract, and it is currently being integrated within the main mission control system.

Simulator Development

ESA has an established simulation infrastructure consisting of a simulation kernel, generic models, models of ground equipment and real-time emulations of space processors. Before the start of development, lack of industrial competition was a major concern, as a significant amount of experience was concentrated in one of the two consortia within the ESOC simulator frame contract.

This was resolved by the following measures that permitted both consortia to build experience on SIMSAT Windows3 infrastructure and gain knowledge about the spacecraft:

• The less experienced consortium was given a chance to gain experience on infrastructure work that was not so time critical and performed the majority of porting of SIMSAT to Windows 2003®.

• In a change to the normal process, a preliminary design phase was funded to allow both consortia to get more insights on the mission and simulator issues. Both Consortia were required to produce a simulator Software Budget Report, a draft Software Architectural Design and to identify problem areas.

Both Consortia identified issues in the areas of emulation, Simsat (ESOC’s simulation kernel) and ground model performance, and were able to reuse this information in their proposals for the simulator development. As each Contractor had different strengths and weaknesses, ESOC was also able to use both studies to refine requirements and focus the formal tender action with a Request for Proposals and Statement of Work in the most appropriate direction.

3 Windows is a registered trademark of Microsoft Corporation in the United States and other countries.

These actions had enormous value to the industrial competition and were a positive benefit for ESOC. Two good proposals were received, with good prices. This considerably reduced the risk and cost of the project for a small investment.

SIMULATOR CHALLENGES

All ESOC spacecraft simulators are based on a series of core components: a simulator infrastructure, a set of ground models and usually an emulator of the onboard processor(s). A simplified diagram of the proposed architecture is shown in Figure 4. It can be seen that the infrastructure provides fewer directly reusable sub-systems than the control system infrastructure. This can be attributed to the wider differences between different spacecraft and instruments than between the control systems that are specified by the user community. However, there are trends to standardise the interfaces between spacecraft sub-systems and also to expand the simulation infrastructure to make it easier to reuse models from one simulator to another. As shown in Figure 4, the simulator infrastructure currently consists of:

• a kernel that schedules the models,

• a Man-Machine Interface (MMI) that allows the user to visualise data within the simulator, set values and run scripts that provide a simple level of automation, and

• models of ground station equipment.

Some generic models are also supplied, such as a position and environment model, an event-driven network model that can be used for electrical and thermal sub-systems, and the emulator suite. The emulator suite provides tools for developing an emulator of an arbitrary processor, customising the memory model and interfacing the emulator to the rest of the simulated Input-Output devices.
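The division of labour between the kernel and the models can be pictured with a highly simplified sketch; the class and method names below are invented for illustration and are not the SIMSAT interfaces:

```python
# Highly simplified sketch of the kernel/model split described in the text:
# a scheduler steps registered models at fixed simulated-time intervals.
# Class and method names are illustrative, not the SIMSAT interfaces.

class Model:
    name = "model"
    def step(self, sim_time):            # advance the model to sim_time (seconds)
        raise NotImplementedError

class ThermalModel(Model):
    name = "thermal"
    def __init__(self):
        self.temperature_k = 293.0
    def step(self, sim_time):
        self.temperature_k -= 0.001      # toy cooling law for illustration

class Kernel:
    def __init__(self, step_s=0.1):
        self.models, self.step_s, self.sim_time = [], step_s, 0.0
    def register(self, model):
        self.models.append(model)
    def run(self, duration_s):
        while self.sim_time < duration_s:
            self.sim_time += self.step_s
            for m in self.models:        # fixed-order scheduling of all models
                m.step(self.sim_time)

kernel = Kernel()
kernel.register(ThermalModel())
kernel.run(duration_s=10.0)
```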

Problems

When the project started, the original major technical concerns were in the areas of support for the simulator kernel, problems with emulators, and the performance of the ground models.


Simulation Kernel—The SIMSAT infrastructure lifetime was limited (it ran under a version of Windows for which support was scheduled to stop in 2004).

Processor emulator—The users specified that both active processors must be emulated. At that time there were two possibilities, neither of which was adequate for the project needs. ESOC’s infrastructure emulator had insufficient performance on a 32-bit PC: for real-time performance, 14 Million instructions per second (Mips) are needed, whereas it could only provide 7 Mips.

The ESOC emulator was tested on the new 64-bit IA64 processor but it only had marginal performance (13.8 Mips on a 1.3 GHz IA64, compared to the 14 Mips needed). If the emulator had been ported to native assembler, we were confident that the performance would have been much better; however, this would have made it very platform specific and we did not want to decrease the portability of our infrastructure software.

Another alternative was a Commercial Off-the-Shelf (COTS) product. At the time of testing, it was only possible to run one processor emulator at a time. Since the Herschel-Planck simulator needs 2 emulators to run in parallel, this would have needed more work. It was also felt undesirable to increase our reliance upon a single-source COTS product which was not ESA intellectual property.

Ground Models—The high telemetry downlink rate also puts high demands on the simulation of parts of the ground segment. Previous ESA operational simulations have used software models of the telemetry and telecommand equipment in the ground station. Tests showed that the performance of the simulated ground models would not be adequate for the high data rates of the Herschel and Planck spacecraft. Moreover, the intention was to transfer telemetry and telecommand data using the CCSDS Space Link Extension (SLE) services, including two new services that were not modelled by the simulation infrastructure: Forward Space Packets (FSP) and Return Operational Control Field (ROCF).

Solutions for the Simulator

Simulation Kernel— The SIMSAT infrastructure has been ported to Windows 2003. This platform has official support sufficient for Herschel-Planck (EOL 2010).

Emulator—As part of the risk management associated with the MDS development, the project purchased a modern machine (AMD644). The infrastructure developers supported us in using the latest tools and recompiling the infrastructure software with the latest gcc compiler. The results with the new compiler were very pleasing. The Intel 32-bit platform had sufficient performance (14 MIPS on a 3 GHz IA32) but the margin was small, although this would be expected to improve with faster chips. The results demonstrated very high performance on the new platform - 33 MIPS on a 2.0 GHz AMD64 64-bit platform - providing a large performance margin. The optimism about the new hardware platform had to be tempered by the knowledge that the software environment was far from mature, with the OS not on a formally supported version, and the development tools were also unstable.

4 AMD is a trademark or registered trademark of Advanced Micro Devices, Inc. in the US and other countries.

Figure 4 Simulator Architecture

(The diagram shows the infrastructure components - the Ground TC/TM SLE models, SIMSAT kernel, MMI, SMI, the MCS Direct Interface (DIF) and the TMTCS Interface (TIF) - and the mission specific components: the instruments and their TC-TM interface, the position and environment model, sensors and actuators, dynamics, electrical and thermal models, TC decoder, TM encoder, the CDMU and ACC emulators, the SSMM and the TM/TC interface.)

Legend: ACC - attitude control computer; CDMU - central data management unit; MCS - mission control system; SLE - space link extension; SMI - simulation model interface; SSMM - solid-state mass memory; TMTCS - telemetry & telecommand subsystem

Ground Models—The solution found was to use the real back-end equipment (TMTCS) instead of the existing models. This offers high performance and also supports ESA’s technology harmonisation strategy, which aims, where practical, to reduce the number of different systems that perform fundamentally the same task. To do this, a new infrastructure ground model, the TMTCS Interface (TIF), will be developed to enable the simulator to be plugged into the real ground station equipment. The current SLE models remain as a backup, since their performance improved significantly after the porting to C++ and Windows 2003. Another new development was the Direct Interface (DIF) ground model, which provides a very simple interface between the mission control system and the simulator without the use of intermediate protocols and systems. The DIF will be used for the early deliveries, to make it easier for the users to set up and operate the simulator and mission control system together, whilst relaxing the pressure on the TIF development schedule. The resulting lack of fidelity in the area of the ground systems is certainly acceptable at this stage of the development.

REUSE AND COMMONALITY

The Herschel-Planck mission control system design assumes no substantial differences between the missions apart from the interfaces to the Science Ground Segment. The system is assumed to be two customised instances of the same application software. This analysis is based upon the fact that only 20 software requirements out of more than 1000 are mission specific. However, the design does not prevent the introduction of differences that may be identified later.

The Herschel-Planck simulator requirements reflect the differences in the spacecraft and instrument designs. Basically, Herschel is three-axis pointing for long periods of time, and Planck is slowly spinning. The payload scientific instruments are of course completely different, although they use a common hardware and software interface. The software requirements are 85% common to both missions (HP), 10% Herschel specific and 5% Planck specific.

The simulator will consist of one set of software that can be installed on all the machines shared by the project, and the user will be able to select at run-time whether to simulate the Herschel or Planck spacecraft.

Mission Control System Re-usage

Re-usage of the Central Checkout System (CCS) is a key element in the Herschel-Planck mission control system development strategy. For the first time, a copy of the CCS was available at ESOC to the users for evaluation during the Requirements Engineering. This permitted an evaluation of SCOS2000 extension features and a selection of those features to be “repatriated” into ESOC’s mission control system infrastructure. It also permitted the Flight Control Team (FCT) to familiarise themselves with a SCOS2000 based system.

As mentioned above, some of the CCS-specific functionality is planned to be retrofitted into the MDS infrastructure. Specifically, version 4.0 of SCOS2000 (due to be released in the first quarter of 2005) will contain:

• A Database editor modified to support the extensions

• Calibration Curve selection,

• Delta limits

• Online Database Patch and MMI

• Enable/inhibit of parameters, packets and groups

• Automatic testing

Since the Central Checkout System (CCS) was developed for ESA to control the Assembly, Integration and Test (AIT) process and these two spacecraft on the ground, it would have been tempting to mandate that the control system should be based upon the CCS. However, the life-cycle for the CCS and the MCS is different, and this places demands on the long term maintainability of the MCS where it is clearly an advantage to follow the infrastructure evolution as far as possible to optimise the level of infrastructure support available throughout the programme. Hence the development was baselined to start on an extended SCOS 2000 system with plans to migrate to later versions. This also had the advantage of providing a level playing field to the bidders for the development, resulting in a number of very competitive proposals.

IMPROVING THE DEVELOPMENT PROCESS

For many years ESOC used to develop simulators and control systems on-site, using contractor manpower. This provided a very flexible development process, and meant that the technical officers could retarget the development effort when necessary.

The dramatic increase in the amount of software and the use of programmable elements such as Field-Programmable Gate Arrays (FPGA) on spacecraft has meant that it has become more and more difficult to define the exact behaviour of the spacecraft and the simulator early on in the programme.

When the work was performed in a flexible fashion under the direct control of the technical officer, changes could be introduced relatively easily, although of course the effort would have to be taken from other areas. This meant that simulators were often late, expensive and did not meet the users’ expectations in different areas.

As part of a general trend, ESOC started developing the simulator and mission control systems off-site under fixed price conditions. For mission control systems this has become very effective due mainly to standardised requirements from the users, and a major investment in a portable, extensible control system infrastructure. However, in areas where the developments show a strong dependence on the availability and maturity of external inputs such as spacecraft design information, onboard software and the spacecraft database, e.g. mission planning and simulators, this is not so much the case. Recent efforts to make spacecraft programmes go ‘faster, cheaper, better’ have often been associated with shortfalls in quality and completeness as well as the timeliness of these items. Whilst tighter iterations of these items may be beneficial for the spacecraft development, the impact on the development of simulators and control systems is not so positive.

Simulators, in particular, did not benefit from cost savings in the same way that mission control systems did. Over the last few simulator developments, costs typically doubled from the first estimate, largely due to the quality of the inputs. In contrast, the cost of developing mission control systems has fallen sharply. Mission control systems now cost about the same as simulators to develop, although the maintenance and operations costs are often much higher due to the high availability requirements and high usage throughout the mission.

One point on which all theorists agree is that it is much cheaper to detect errors and inconsistencies in a product early on. High quality testing is required to trap problems as early as possible. On the developer side, there is a clear need for a representative test environment. On the customer side, we need to be able to perform effective and efficient validation. Unfortunately, the turn-around time between MDS deliveries decreases as launch approaches and as the users identify problems with the deliveries, and the scarce resources available for testing (both manpower and time) are insufficient.

Old Strategy

In the past an early simulator delivery was used as a data source/sink for mission control system testing. This usually meant that there were two new systems to debug, the simulator and the mission control system, at the same time. This obviously made it difficult to localise problems and to determine who should be responsible for their correction. Since the simulator was often burdened with problems in the on-board software and databases, it was not an easy test tool to use, and the more realistic it becomes, for example by including spacecraft hardware elements, the less appropriate it becomes as a test tool. After all, very few people would want to use the spacecraft as a test tool for the mission control system!

Figure 5 System Testing

This figure shows how the mission control system can first be tested using the direct interface to the test and validation tools; the other components can then be added, such as the NCTRS via the simulated ground models, and finally the real ground station back-end equipment, the TMTCS. Within the station a simple data source and sink, the portable satellite simulator (PSS), can be used. The simulator can also be tested stand-alone, then with the standard test and validation tools, and then connected to the mission control system, either directly or using the more realistic interfaces, such as the direct interface (DIF) or the interface to the back-end equipment (TIF).

Legend: NCTRS - network command & telemetry routing system; PSS - portable satellite simulator; SLE - space link extension; TIF - TMTCS interface; TMTCS - telemetry & telecommand system

Clearly there was a need for a validated test tool available early in the programme that was simple to operate and could be used to repeat the same scenario again and again.

New Strategy

It is now possible to modify the integration process, since many parts of the system have become standardised. First of all, the spacecraft telemetry and telecommand systems are now all based upon the use of packet standards. New missions are increasingly using the Packet Utilisation Standard as a basis for their data exchange, albeit after customisation. This means that not only do their packets have a similar structure but they are also used in a similar way. When this is combined with the use of well-defined standards such as the Space Link Extension (SLE) for communication between the control centre and the ground station, the interfaces also become standardised. This made it possible to develop a standard set of infrastructure tools that have been validated on other missions. This means that it is now possible to test a mission control system, at least initially, without a dedicated simulator.
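The packet-level standardisation referred to above is what allows generic tools to parse any mission’s telemetry from the same primary header. The sketch below builds a CCSDS source packet carrying a simplified PUS-style data field header; it is illustrative only and omits most of the real PUS header fields:

```python
# Sketch of the packet structure standardisation the text relies on: PUS
# telemetry is carried in CCSDS source packets, so a generic tool can parse
# any mission's packets from the same 6-byte primary header. The data-field
# header below is simplified (service type/subtype only) for illustration.

import struct

def ccsds_tm_packet(apid, seq_count, service, subtype, user_data):
    data_field = bytes([0x10, service, subtype]) + user_data   # simplified PUS header
    word0 = (0 << 13) | (0 << 12) | (1 << 11) | (apid & 0x7FF) # version|type|sec.hdr|APID
    word1 = (0b11 << 14) | (seq_count & 0x3FFF)                # unsegmented | sequence count
    length = len(data_field) - 1                                # CCSDS length convention
    return struct.pack(">HHH", word0, word1, length) + data_field

pkt = ccsds_tm_packet(apid=0x64, seq_count=1, service=3, subtype=25,
                      user_data=bytes(8))                       # e.g. a housekeeping report
print(pkt.hex())
```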

The integration testing can be further simplified by first testing the mission control system with the test tools via a Direct Interface (DIF). The infrastructure supports a standard database structure and content, so that even if the spacecraft schedule is delayed, the developers can still develop and test new components. The other components, such as the Network Command and Telemetry Routing System (NCTRS) and the more complicated (but realistic) ground models, can be added later, and the same tests can be repeated to ensure that the system still performs as desired.

Now mission control system testing at the developer site will be based on the MDS infrastructure Realistic Test Environment (RTE) and Test and Validation Tools (TVT). The RTE provides a reference set of Mission Information Base (MIB) data and tools for managing the data. Automated testing is required as far as possible for the mission control system and simulator.

The Automated Testing solution for the mission control system is based on the re-usage of the HP CCS Automated Testing capabilities and the MDS infrastructure Test and Validation Tool. It is based on the Test and Operation Procedure Environment (TOPE) operational language, complemented by an automated sequence editor and test execution engine. This tool will be particularly effective for regression testing, where time and reproducibility are at a premium.
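The kind of automated regression test this enables can be pictured with a short, generic sketch; it is not the TOPE language, and the commands, parameter names and the FakeSystem stand-in are invented for illustration:

```python
# Generic illustration of the kind of automated regression test described
# (not the TOPE language itself): replay a fixed command sequence against a
# system under test and compare the resulting parameters with expectations.

def run_regression(system, steps):
    failures = []
    for command, expected in steps:
        observed = system.execute(command)          # send TC, collect resulting TM
        for parameter, value in expected.items():
            if observed.get(parameter) != value:
                failures.append((command, parameter, value, observed.get(parameter)))
    return failures

# Hypothetical scenario: the same steps are re-run after every delivery.
scenario = [
    ("SWITCH_ON_HEATER_A", {"HEATER_A_STATUS": "ON"}),
    ("LOAD_MTL_SEGMENT",   {"MTL_LOAD_STATUS": "ACCEPTED"}),
]

class FakeSystem:                                   # stand-in for the real interface
    def execute(self, command):
        return {"HEATER_A_STATUS": "ON", "MTL_LOAD_STATUS": "ACCEPTED"}

print(run_regression(FakeSystem(), scenario) or "all steps passed")
```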

The availability of standard tools for testing the mission control system means that it is possible to delay the first simulator delivery so that the schedule for developing the simulator can be driven by FCT needs rather than the needs of the mission control system developers.

For the Herschel-Planck simulator development, an additional step was introduced to try to decouple the development from some of the more volatile Customer Furnished Items (CFIs). In particular, to permit the development of the onboard software to slip several months, we decided to request that the first delivery contain functional models of the Command and Data Management Unit (CDMU) and Attitude Control Computer (ACC), but not emulated versions of these processors. This was to permit the models of the sub-systems to be tested in isolation before the introduction of the complexity of the emulators. Furthermore, we decided to maintain the functional models as an option for simulator regression testing and problem isolation after the emulators had been introduced.
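The design idea of interchangeable functional models and emulators can be sketched as a common interface with two implementations selected at configuration time; the names below are illustrative and are not the actual simulator interfaces:

```python
# Minimal sketch of the design idea behind delivering functional models of
# the CDMU/ACC first and swapping in processor emulators later: both sit
# behind the same interface to the rest of the simulator, so the choice can
# be made at configuration time. Names are illustrative only.

class OnboardComputer:
    def process_tc(self, tc_packet) -> bytes:
        raise NotImplementedError

class FunctionalCdmuModel(OnboardComputer):
    """Hand-written behaviour; available before the flight software is."""
    def process_tc(self, tc_packet):
        return b"ACK" + tc_packet[:2]

class EmulatedCdmu(OnboardComputer):
    """Would run the real on-board image on a processor emulator (stubbed here)."""
    def __init__(self, image_path):
        self.image_path = image_path
    def process_tc(self, tc_packet):
        raise NotImplementedError("would hand the packet to the emulator")

def build_cdmu(use_emulator, image_path=None) -> OnboardComputer:
    return EmulatedCdmu(image_path) if use_emulator else FunctionalCdmuModel()

cdmu = build_cdmu(use_emulator=False)   # early deliveries / regression testing
print(cdmu.process_tc(b"\x18\x64\xc0\x01"))
```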

The simulator automated testing is based on a new tool for exporting procedures from Assembly, Integration and Test (AIT) and the Flight Operations Plan (FOP) (both produced using a tool called MOIS) into simulator procedures. If the procedures for AIT are available, this will greatly facilitate acceptance testing of the simulator.

CONCLUSIONS

This paper has illustrated how the mission data systems contributed to and benefited from the Herschel-Planck smooth transition concept. However, the overall process could perhaps be improved further by having a closer link between EGSE and mission control system development.

This is challenging given the different lifecycles for the different developments, but could lead to a closer alignment of versions of infrastructure and platform used by the different parties. Nevertheless, the use of the OBSM subsystem by the instrument teams was beneficial to the whole project, although more lead time would have been needed in order to make the most cost effective development.

The use of the mission control system infrastructure by the spacecraft contractor and the instrument teams reduces costs and increases synergies across the project, but requires minor investments in support arrangements for the external users of the ESOC MDS infrastructure.


The Herschel-Planck mission represents a technical challenge for the mission data systems in terms of performance and maintainability. Solutions have been identified for both the mission control system and the simulator.

We have shown an innovative procurement and development approach, and demonstrated how, particularly for the simulator, pre-tender actions were used to obtain competitive prices by improving the market.

The overall development process is targeted at reducing the development risk. To this end, test tools (GSTVi) have been introduced consistently so as to optimise the overall procurement, and automated testing has been incorporated.

We have shown that we have selected a path towards a platform-independent mission control system which can be run on our current Sun/Solaris hardware or on PC/Linux systems.

References

[1] Pecchioli M., Haag S., Alberti M., “Developing a Generic Mission Control System Infrastructure Based on a Distributed Architecture: The SCOS-2000 Experience”, SpaceOps 2000, Toulouse.

[2] Peccia N., “SCOS-2000 ESA’s Spacecraft Control for the 21st Century”, 2003 Ground System Architectures Workshop, March 4-6, 2003, Manhattan Beach, California.

Many of the products and the names of companies mentioned are trademarks or registered trademarks of their respective owners. Their use neither constitutes a claim of the trademarks by ESA nor affiliation of the trademark owners with ESA.


Biographies

Gianpiero Di Girolamo studied Computer Science at the University of Pisa and graduated in 1986. In 1987, Gianpiero joined the European Space Operations Centre of the European Space Agency, initially in the Data Processing Division and then in the System Engineering Section. Following the re-organisation of 1996, he was detached as engineering support to TOS-O, where he was responsible for the INTEGRAL MCS design and development. In 2000 he re-joined the Data Processing Division (now OPS-GD), where he currently works. In this position, he was Technical Officer for several projects including the Hipparcos, EURECA, CLUSTER-I and Integral Mission Control Systems. He is now taking care of the Integral routine operations support and maintenance, and of the Herschel-Planck Mission Data Systems. He is keen on sports (he played soccer and basketball), is very active in watching sports on TV, and is a well-reputed painter, with a few art exhibitions and his own art book.

Mariella Spada is a graduate in Computer Science from the University of Pisa, Italy (1989), with a post-graduate Master’s Degree in Business Administration (MBA) from Bradford University (UK), 2001. She started her career in Italy as a researcher, then continued at ESA/ESTEC as a Young Graduate Trainee (1991-1992) and then joined ALENIA Italy as a software engineer. She came to ESOC in 1994 to work as a contractor on the Artemis and XMM projects in the simulation domain. She moved to the European Central Bank (ECB) in 1997 and worked there as IT Technical Co-ordinator. After joining ESA in 1999, she worked as a software engineer in the Simulation Section of the ESOC Engineering Department, where she was responsible for the XMM, Integral, Cryosat and Goce simulators, plus some simulation infrastructure maintenance activities. She was acting head of the Simulation Section before being appointed Head of the Science Mission Data Systems Section in September 2002. Since December 2003, she has been Co-Chairman of the ESA Board for Software Standardisation and Control (BSSC).

Leisure pursuits/hobbies: reading (novels, management and scientific publications) and cooking.

David Verrier graduated in physics at Sheffield University in 1986 and then completed a Master’s degree in Astronautics and Space Engineering at Cranfield University in 1987, where he started to fly. After several years developing spacecraft simulators in the UK and Germany, he moved into a training position as a simulations officer at ESOC, where he put the flight control teams of Italsat, ERS-1 and Eureca through their paces. He then took an operational role in the Cluster-1 mission and also took time out to get a commercial pilot’s licence and instrument rating. In 1997 he moved to Paris to manage the operational aspects of the procurement and entry into service of a Russian satellite built for the satellite operator Eutelsat. He completed a doctorate in parallel to his day job and is now studying for an International Master’s in Management, an executive MBA programme. He has been working for ESA since 2001 and is responsible for the Herschel-Planck simulator development. His hobbies include economics, cooking and foreign languages.

John Dodsworth studied Chemistry at Leeds University, leaving with a doctorate in 1973. He started his career with the British Aircraft Corporation and moved to ESOC in 1977, where he worked in the manned spaceflight programme on the Spacelab 1 instrument operations. He has since been involved with many ESA missions, notably ERS, Huygens, ISO, Cluster, Envisat and the Meteosat series, as an operations engineer, Spacecraft Operations Manager, Ground Segment Manager and Flight Operations Director. He is currently the Head of the Astronomical Observatory and Survey Missions Division, responsible for XMM, Integral, Herschel/Planck, GAIA and LISA Pathfinder, and will shortly be leading the Mission Control team for the MetOp launch. In his spare time, he and his family look after his small herd of sheep, a horse and other domestic animals, and he enjoys rambling and relaxing with a good book.
