Page 1: The VMEbus processor hardware and software infrastructure in ATLAS

Markus Joos, CERN-PH/ESS

11th Workshop on Electronics for LHC and future Experiments, 12-16 September 2005, Heidelberg, Germany

Page 2: Talk outline

VMEbus systems in ATLAS

VMEbus controller H/W
- Basic requirements for the H/W
- Lessons learnt by evaluating SBCs
- Final choice of VMEbus controller

VMEbus S/W
- Analysis of third-party VMEbus I/O packages
- The ATLAS VMEbus I/O package

Status

Conclusions

Page 3: VMEbus in ATLAS front-end DAQ

[Diagram of the ATLAS front-end DAQ: ~100 9U crates (RODs and some other modules, e.g. ROD-busy) and ~55 6U crates (TTC and auxiliary modules, some RODs), linked to the LVL1/LVL2 systems and the ROS PCs by event data and timing and trigger (TTC) signal paths]

ATLAS (LHC) VMEbus crates:
- VME64x, 6U or 9U (with 6U section)
- Air or water cooled
- Local or remote power supply
- Connected to DCS via CAN interface

VMEbus used (by ROD-crate DAQ) for:
- Initialisation of slave modules
- Status monitoring
- Event data read-out for monitoring and commissioning

More on TTC: Talk by S. Baron

Page 4: VMEbus controller: basic requirements

Controllers have to be purchased by the sub-detector groups

Decision to standardise on one type of controller
- To bring down price by economy of scale
- To ease maintenance and provision of spares
- To avoid S/W incompatibilities
- Keep technology evolution in mind

Main technical requirements
- Mechanical: 6U, 1 slot, VME64x mechanics
- VMEbus protocol: support for single cycles, chained DMA and interrupts
- VMEbus performance: at least 50% of the theoretically possible bus transfer rates
- Software: compatibility with CERN Linux releases

Not required: 2eVME & 2eSST

Page 5: Basic choice: Embedded SBC or Link to PC

The basic requirements can be met by both an embedded SBC and an interface to a PC system.

SBC
- Space conservative (important in the ATLAS underground area)
- Typically better VMEbus performance (especially single cycles)
- Cheaper (system price)

Link to PC
- Computing performance can be increased by a faster host PC
- Possibility to control several VMEbus crates from one PC
- Vendors usually provide both a C library and LabVIEW VIs

An SBC is better suited to the requirements of ATLAS.

Page 6: Finding an appropriate SBC

Type of CPU
- Intel: higher clock rates, better support for (CERN) Linux
- PowerPC: big-endian byte ordering (matches VMEbus), vector processing capability
- Let the market decide

Other technical requirements (just the most important ones, formulated in 2002)
- 300 MHz (PowerPC) / 600 MHz (Intel) CPU
- 256 MB main memory
- One 10/100 Mbit/s Ethernet interface
- One PMC site (e.g. for additional network interfaces)
- VME64 compliant VMEbus interface
- 8 MB flash for Linux image (network independent booting)
- Support for diskless operation

Page 7: Issues with the Universe chip

Many of today's SBCs are based on the Tundra Universe chip, which was designed in ~1995. Evaluations of several SBCs have identified a few shortcomings that still apply to the current revision of the device (Universe II D):

Lack of built-in endian conversion
- Intel-based SBCs have to have extra logic to re-order the bytes

Performance
- Single cycles: ~1 μs (arbitration for and coupling of the CPU bus, PCI and VMEbus)
- Posted write cycles: ~500 ns
- Block transfers: ~60% of the theoretical maximum (i.e. 25 MB/s for D32 BLT, 50 MB/s for D64 MBLT)

VMEbus bus error handling
- Errors from posted write cycles are not converted to an interrupt

Lack of constant-address block transfers for reading out FIFOs
- May be required to read out a FIFO-based memory
- It is possible to have the Universe chip do it with (slow) single cycles (~13 MB/s)
- Danger of losing the last data word on BERR*-terminated transfers
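To make the endian-conversion and FIFO-readout issues more concrete, the following is a minimal sketch of how a FIFO sitting behind a single VMEbus address could be drained with programmed single cycles from user code, including the software byte swap a little-endian Intel SBC would need if its board logic did not re-order the bytes. The mapped pointer and function names (fifo_window, be32_to_host, read_fifo) are hypothetical illustrations, not part of any vendor library.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical pointer to a memory-mapped PCI window that the Universe
 * chip translates into VMEbus A32/D32 single cycles; in a real system
 * the mapping would be set up via a driver or vendor library. */
static volatile uint32_t *fifo_window;

/* Big-endian VMEbus word -> host order on a little-endian (Intel) CPU.
 * On a PowerPC SBC this step would be a no-op. */
static inline uint32_t be32_to_host(uint32_t w)
{
    return ((w & 0x000000ffu) << 24) | ((w & 0x0000ff00u) << 8) |
           ((w & 0x00ff0000u) >> 8)  | ((w & 0xff000000u) >> 24);
}

/* Drain 'nwords' words from a FIFO that sits behind one VMEbus address:
 * every read hits the same location, so an incrementing block transfer
 * cannot be used and each word costs a full single cycle. */
void read_fifo(uint32_t *dest, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        dest[i] = be32_to_host(*fifo_window);  /* constant-address read */
}
```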

Page 8: Other issues

Concurrent master and slave access
- If an SBC is VMEbus master and slave at the same time, deadlock situations on PCI are possible. Some host bridges resolve them by terminating the inbound VMEbus cycles with BERR*.

Remote control and monitoring
- Most SBCs do not support IPMI
- Some vendors put temperature or voltage sensors on their SBCs, but there is no standard way of reading this information
- Remote reset: via a 2-wire connector (in the front panel) or SYSRESET*

Mechanics
- The VME64x alignment pin and P0 are incompatible with certain crates. Most vendors provide alternative front panels or handles.
- EMC gaskets can be "dangerous" for solder-side components on the neighboring card. Use solder-side covers.

Page 9: Choice of the operating system

Requirements
- Unix compatible environment (to leverage existing experience)
- No hard real time: interrupts are used in some applications, but their latency is not crucial
- The SBC has to be able to run under LynxOS (just in case...)
- Full local development and debugging environment
- Cheap (ideally no) license costs

Linux is the obvious choice
- The ATLAS default SBC has to work with the SLC3 release
- Only minor modifications to the kernel configuration to support diskless operation
- "Look and feel" as on a Linux desktop PC (X11, AFS, etc.)

Page 10: Final choice of VMEbus SBC

In 2002 a competitive Call for Tender was carried out

Non-technical requirements
- Controlled technical evolution
- 3 years warranty
- 10 years support
- Fixed price for repair or replacement

All major suppliers contacted, ~10 bids received, 1 supplier selected

Features of the default SBC
- 800 MHz Pentium III
- 512 MB RAM
- Tundra Universe VMEbus interface
- ServerWorks ServerSet III LE host bridge

Since 2005 a (software compatible) more powerful SBC has been available as an alternative
- 1.8 GHz Pentium M
- 1 GB RAM
- KVM ports
- Two Gigabit Ethernet ports
- Intel 855GME host bridge

Page 11: VMEbus S/W

Analysis of third-party VMEbus I/O libraries

Design
- There is (despite VISION, ANSI/VITA 25-1997) no standard API for VMEbus access
- Board manufacturers define their own APIs
- Some libraries (unnecessarily) expose details of the underlying hardware to the user

Implementation
- The tradeoff between speed and functionality may not suit user requirements

Completeness
- Sometimes less frequently used features are not supported

Support
- Not always guaranteed (especially for freeware)

We decided to specify our own API and to implement a driver & library

Page 12: The ATLAS VMEbus I/O package

Linux allows the development of an I/O library (almost) fully in user space
- Low S/W overhead (no context switches)
- Multi-tasking and interrupt handling difficult

Our solution:

Linux device driver
- Support for multi-tasking
- SMP not an issue (SBC has one CPU, no Hyper-threading)
- Interrupt handling
- Block transfers

User library
- C-language API
- Optional C++ wrapper

Comprehensive S/W tools for configuration, testing, debugging and as coding examples

Independent package for the allocation of contiguous memory (for block transfers)
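To make the split between the user library and the driver more concrete, the declarations below sketch the shape such a C API could take. All names and types are hypothetical illustrations, not the actual ATLAS API; they only show which operations can stay in user space (mapped single cycles) and which go through the driver (chained DMA, interrupts).

```c
#include <stdint.h>

/* Hypothetical API sketch of a driver/library split; 0 = success. */

/* Single cycles: the library programs a PCI-to-VMEbus window once and
 * returns a handle; subsequent accesses are plain memory reads/writes
 * from user space, with no context switch into the driver. */
int vme_master_map(uint32_t vme_addr, uint32_t size, int address_modifier,
                   int *handle);
int vme_read32(int handle, uint32_t offset, uint32_t *value);
int vme_write32(int handle, uint32_t offset, uint32_t value);

/* Chained block transfers: the caller fills a list of descriptors; the
 * driver programs the DMA engine and blocks the calling process until
 * the chain completes. The PCI-side buffers must be physically
 * contiguous, hence the separate contiguous-memory allocation package. */
struct vme_dma_item {
    uint32_t vme_addr;   /* VMEbus start address                    */
    uint64_t pci_addr;   /* physical address of a contiguous buffer */
    uint32_t size;       /* transfer size in bytes                  */
};
int vme_dma_chain(const struct vme_dma_item *items, int nitems);

/* Interrupts: block until a given level/vector fires (synchronous) or
 * ask for a signal to be delivered when it does (asynchronous). */
int vme_interrupt_wait(int level, int vector, int timeout_ms);
int vme_interrupt_notify(int level, int vector, int signal_number);
```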

Page 13: Main features of the ATLAS VMEbus I/O package

Single cycles
- Static programming of the PCI to VMEbus mapping
- Fast access (programmed I/O from user code)
- Safe functions (full synchronous BERR* checking via the driver)
- Special functions for geographical addressing (CR/CSR space access)

Block transfers
- Support for chained block transfers
- Fixed-address single-cycle block transfers for FIFO readout

Interrupts
- Support for ROAK and RORA
- Synchronous or asynchronous handling
- Grouping of interrupts

Bus error handling
- Synchronous or on request

Performance
- Fast single cycles: 0.5 to 1 μs
- Safe single cycles: 10 to 15 μs
- Block transfers (S/W overhead): 10 to 15 μs
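The difference between the fast and the safe single-cycle paths can be sketched as follows. The file descriptor, window pointer, ioctl code and structure (vme_fd, vme_window, VME_IOCTL_SAFE_READ32, struct safe_read) are hypothetical and only illustrate why the safe variant costs roughly an order of magnitude more time than the fast one.

```c
#include <stdint.h>
#include <sys/ioctl.h>

/* Hypothetical names, not the actual ATLAS driver interface. */
extern int vme_fd;                    /* file descriptor of the VMEbus driver  */
extern volatile uint32_t *vme_window; /* user-space mapping of a VMEbus window */

struct safe_read {
    uint32_t offset;                  /* offset inside the mapped window */
    uint32_t value;                   /* filled in by the driver         */
};
#define VME_IOCTL_SAFE_READ32 _IOWR('V', 1, struct safe_read)

/* Fast access: programmed I/O straight from user code (0.5 to 1 us).
 * A BERR* on the VMEbus is not reported here; it has to be collected
 * later via the "bus error on request" mechanism. */
static inline uint32_t fast_read32(uint32_t offset)
{
    return vme_window[offset / sizeof(uint32_t)];
}

/* Safe access: the driver performs the cycle and checks synchronously
 * for BERR* before returning; the system call and context switch are
 * what push the cost to 10 to 15 us. */
static int safe_read32(uint32_t offset, uint32_t *value)
{
    struct safe_read req = { offset, 0 };
    int rc = ioctl(vme_fd, VME_IOCTL_SAFE_READ32, &req);
    if (rc == 0)
        *value = req.value;
    return rc;                        /* non-zero: cycle failed (e.g. BERR*) */
}
```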

Page 14: Status

Supply contract for ATLAS established in 2003; so far ~170 SBCs purchased

Successfully used in:
- The 2004 ATLAS combined test-beam
- Many small test-benches and laboratory systems

Installation of SBCs in the final ATLAS DAQ system has started

The CERN Electronics Pool is phasing out the previous generation of PowerPC / LynxOS based SBCs in favor of the ATLAS model

The ALICE Experiment will use the same model in the VMEbus based part of the L1 trigger system

Page 15: Conclusions

ATLAS had no special requirements (such as 2eSST support) for the SBC

The time spent on the evaluation of SBCs was well invested: some important lessons were learnt that helped with the Call for Tender

Specifying our own VMEbus API and implementing the software from scratch paid off in terms of flexibility and performance

Both the SBC and the software have been used successfully in a large number of systems

Page 16: Acknowledgements

I would like to thank:
- Chris Parkman, Jorgen Petersen and Ralf Spiwoks for their contribution to the technical specification of the ATLAS SBC and VMEbus API
- Jorgen Petersen for his assistance with the implementation of the software
- The members of the ATLAS TDAQ team for their contributions and feedback