André Augustinus, 10 October 2005
ALICE Detector Control Status Report
A. Augustinus, P. Chochula, G. De Cataldo, L. Jirdén, S. Popescu
the DCS team, ALICE collaboration, CERN, Geneva, Switzerland
Outline
• The ALICE experiment at CERN
• Organization of the controls activities in ALICE
• Design goals and strategy
• DCS architecture
• Key concepts
• DCS infrastructure
• Summary and conclusion
Introduction
ALICE: A Large Ion Collider Experiment
• ALICE is one of the four LHC experiments, located at point 2 of the LHC at CERN
• 18 different sub-detectors, 2 magnets
• Dedicated to heavy ion physics; also participates in pp runs
• 1000 members, 86 institutes, 29 countries
Introduction
• Many sub-detector teams have limited expertise in controls, especially in large-scale experiments
• The ALICE Controls Coordination (ACC) team therefore puts strong emphasis on coordination and support
• The Joint COntrols Project (JCOP) is a collaboration between CERN (IT/CO) and all LHC experiments to exploit commonalities in their control systems
[Diagram: JCOP connecting CERN (IT/CO) with ALICE (ACC), ATLAS, CMS and LHCb; the ACC in turn serves the ALICE sub-detectors]
Design goals
The DCS shall ensure safe and efficient operation
• Intuitive, user friendly, automation
Many parallel and distributed developments
• Modular, yet coherent and homogeneous
A changing environment, both in hardware and in operation
• Expandable, flexible
Operational outside data taking, safeguarding the equipment
• Available, reliable
A large world-wide user community
• Efficient and secure remote access
Data collected by the DCS shall be available for the offline analysis of the physics data
Strategy and methods
Common tools, components and solutions
• Strong coordination within the experiment (ACC)
• Close collaboration with the other experiments (JCOP)
In ALICE there are many similar sub-systems; commonalities are identified through user requirements
• Collected in lightweight User Requirement Documents (URDs) and overview drawings
• Gathered through meetings and workshops
Hardware Architecture
Three layers: supervisory, control and field layer
• Supervisory: operator nodes, server nodes
• Control: worker nodes connecting to the devices
• Field: devices, sensors and actuators
Reduce the sharing of equipment between sub-detectors
Standard hardware for computers
Limit the diversity of devices in the field layer
• Dependent on the sub-detector hardware
• Use common hardware for similar tasks, e.g. a General Purpose Monitoring System
Interlocks and the DSS (Detector Safety System) protect the equipment
• The DSS is the safe and reliable part of the DCS
[Diagram: the three-layer hardware architecture. Supervisory layer: central and local operator nodes (ON), file and database servers (SE), system tasks, external users. Control layer: worker nodes (WN) on a LAN (Ethernet). Field layer: power supplies, a VME crate and a PLC reached over fieldbuses, down to the detector and experiment equipment, with protection paths. External systems and services (LHC, electricity, safety, etc.) connect to the supervisory layer.]
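Purely as an illustration of this layering (the class, node and layer names below are invented for the sketch, not taken from the ALICE DCS software), the three layers can be pictured as a node inventory in which every connection goes exactly one layer down:

```python
from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):
    SUPERVISORY = 1   # operator nodes (ON) and server nodes (SE)
    CONTROL = 2       # worker nodes (WN) connecting to the devices
    FIELD = 3         # power supplies, crates, sensors, actuators

@dataclass
class DcsNode:
    name: str
    layer: Layer
    downstream: list = field(default_factory=list)  # nodes one layer below

# Hypothetical fragment: a worker node controls a power supply
# over a fieldbus; a central operator node supervises the worker node.
ps = DcsNode("hv-supply-1", Layer.FIELD)
wn = DcsNode("wn-det1-hv", Layer.CONTROL, [ps])
on = DcsNode("on-central", Layer.SUPERVISORY, [wn])

# Sanity check: links never skip or share a layer.
for node in (on, wn):
    for child in node.downstream:
        assert child.layer.value == node.layer.value + 1
```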
Software Architecture
A tree-like structure representing sub-detectors, sub-systems and devices
• Leaves (Device Units) 'drive' the devices
• Nodes (Control Units) model and control the sub-tree below them
• Commands flow down the tree, states flow up
Operation is done from the root node
• Any sub-tree can be removed from the tree and operated independently and concurrently: partitioning
The behaviour and functionality of a control unit is modelled as a Finite State Machine (FSM)
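A minimal sketch of this command/state flow, with invented class, state and command names (the real DCS implements the hierarchy with PVSSII and the JCOP FSM framework, not Python):

```python
class DeviceUnit:
    """Leaf of the tree: drives one device and reports its state."""
    def __init__(self, name):
        self.name = name
        self.state = "OFF"

    def command(self, action):
        # A real device unit would act on hardware here.
        self.state = {"GO_ON": "ON", "GO_OFF": "OFF"}.get(action, self.state)

class ControlUnit:
    """Node of the tree: sends commands down, combines states from below."""
    def __init__(self, name, children):
        self.name = name
        self.children = children
        self.enabled = set(children)   # partitioned-out children are excluded

    def command(self, action):         # commands flow down
        for child in self.children:
            if child in self.enabled:
                child.command(action)

    @property
    def state(self):                   # states flow up, logically combined
        states = {c.state for c in self.children if c in self.enabled}
        return states.pop() if len(states) == 1 else "MIXED"

    def exclude(self, child):
        """Partitioning: detach a sub-tree for independent operation."""
        self.enabled.discard(child)

# Hypothetical fragment of the hierarchy
hv = ControlUnit("Det1/HV", [DeviceUnit("ch0"), DeviceUnit("ch1")])
det1 = ControlUnit("Det1", [hv])
det1.command("GO_ON")
print(det1.state)   # -> ON
```

Excluding hv from det1 would let a sub-detector team operate that sub-tree concurrently from its own operator node, which is the partitioning idea described above.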
Software Architecture
[Diagram: the DCS control tree. The DCS root node fans out to detector Control Units (Det 1 ... Det N), which fan out to sub-system Control Units (Sub1.1, Sub1.1.2, ...) and finally to Device Units driving the devices (Dev1 ... DevN). Commands flow down the tree; states and alarms flow up. Operators can attach at the root or at any Control Unit. Each CU logically combines states and distributes commands.]
DCS key concepts
The FSM concept is fundamental to the DCS
• An intuitive and generic method to model the behaviour of a system or a device
• An object has a well defined collection of states
• It moves between states by executing actions, triggered by an operator or by an external event
The DCS will interface to a variety of Front End Electronics
• The Front End Device (FED) concept hides the implementation details behind a common client-server interface (based on DIM); see the sketch after this list
Use common software tools
• PVSSII, JCOP framework
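As a conceptual sketch of the FED idea only (the interface and class names below are invented, and plain method calls stand in for the actual DIM client-server communication):

```python
from abc import ABC, abstractmethod

class FEDServer(ABC):
    """Common interface that every Front End Device server exposes.
    In the real DCS the services and commands travel over DIM;
    here plain method calls stand in for that transport."""

    @abstractmethod
    def configure(self, params: dict) -> None: ...

    @abstractmethod
    def read_status(self) -> dict: ...

class TPCFrontEnd(FEDServer):
    """Hypothetical detector-specific implementation, hidden from clients."""
    def configure(self, params):
        # Would translate generic parameters into detector-specific
        # register writes on the front-end electronics.
        self._cfg = params

    def read_status(self):
        return {"state": "CONFIGURED", "temperature_C": 28.5}

def supervise(fed: FEDServer):
    # The supervisory layer sees only the common interface,
    # whatever front-end electronics sit behind it.
    fed.configure({"threshold": 20})
    print(fed.read_status())

supervise(TPCFrontEnd())
```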
Common solutions
Standardization does not stop with the selection of common tools and standard hardware
• Define a standard behaviour for each class of devices (e.g. HV power supplies) and provide the sub-detectors with a standard state diagram (see the sketch after this list)
• Define standard states, actions and operational sequences (automation) that can be used when defining the behaviour of a sub-detector
• Guidelines for development, naming, numbering, look and feel of user interfaces, etc.
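Purely as an illustration of such a standard state diagram for an HV channel (the state and action names here are invented, not the official ALICE ones):

```python
# Hypothetical standard state diagram for an HV power supply channel:
# allowed (state, action) -> next-state transitions.
HV_TRANSITIONS = {
    ("OFF",     "GO_STBY"): "STANDBY",  # ramp to an intermediate, safe voltage
    ("STANDBY", "GO_ON"):   "ON",       # ramp to nominal voltage
    ("ON",      "GO_STBY"): "STANDBY",
    ("STANDBY", "GO_OFF"):  "OFF",
    ("ERROR",   "RESET"):   "OFF",      # operator recovery after a trip
}

def next_state(state, action):
    """Apply an operator action or an external event to a channel state."""
    if action == "TRIP":                # external event: hardware trip
        return "ERROR"
    return HV_TRANSITIONS.get((state, action), state)  # ignore illegal actions

assert next_state("OFF", "GO_STBY") == "STANDBY"
assert next_state("ON", "TRIP") == "ERROR"
```

Because every sub-detector adopts the same diagram for the same device class, the central operator can drive any sub-detector's HV with the same commands.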
DCS infrastructure
The DCS needs an adequate infrastructure (computers, network, ...)
• Installation and maintenance of the network will be done by the CERN networking group (IT/CS)
• All computers installed for the DCS will be procured, installed and maintained by a central team, using highly standardized hardware
• Operation of the network and computers will follow the rules and guidelines, and use the tools, of the "Computing and Network Infrastructure for Controls" (CNIC) working group
Network
The controls network will be a separate, well protected network
• Without direct access from outside the experimental area
• With remote access only through application gateways
• With all equipment on secure power
A first estimate shows the need for around 350 network connections, two thirds of them in the experimental cavern
• Not including the ~50 switches connecting ~800 embedded processors on the detector
Current installations use the CERN campus network; the controls network will be operational starting the 2nd quarter of 2006
Remote access
With the large world-wide user community, remote access is an important aspect
• Remote users will access the PVSSII projects through a remote user interface via a Terminal Server
• By default users get observer rights only; higher privileges can be granted to experts for specific, well defined tasks
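A minimal sketch of this default-observer policy (the role names, grant table and function below are invented; the actual access control is provided by the Terminal Server setup and the PVSSII access rights, not by code like this):

```python
# Hypothetical privilege table: everyone starts as an observer;
# expert rights are granted per user for specific, well defined tasks.
DEFAULT_ROLE = "observer"
GRANTS = {("jsmith", "Det1/HV"): "expert"}  # invented example grant

def allowed(user, subsystem, action):
    role = GRANTS.get((user, subsystem), DEFAULT_ROLE)
    if action == "view":
        return True              # observers may always look
    return role == "expert"      # only granted experts may act

assert allowed("anyone", "Det1/HV", "view")
assert not allowed("anyone", "Det1/HV", "switch_on")
assert allowed("jsmith", "Det1/HV", "switch_on")
```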
Remote access
This strategy has been tested with 60 remote users simultaneously running a user interface
• No degradation of performance, neither of the project nor of the Terminal Server
• Tested successfully from several places around the world
Computers
2U rack-mounted PCs, in specially equipped racks
• Cooling doors, power control, on secure power
The baseline operating system is Windows
• Linux is used in specific cases
The DCS will run as one large distributed PVSSII system
• Based on several performance tests on large distributed systems
• More detailed performance tests on several components are being performed
Computers
A first distribution of tasks across computing nodes has led to an estimated need of 80-90 nodes
• Including servers and system management nodes
• Combining tasks with low resource demands
• Maintaining separation between sub-detectors
A core DCS system was installed this summer
• 5 machines, to be used by the sub-detectors for equipment tests during the first installations
• More worker nodes and devices will be installed soon
• 50% installed by the 1st quarter of 2006, the rest by the 3rd quarter of 2006
Further activities at the site
The DSS is being commissioned
• Experimental area surveillance
Interfaces to the first gas systems and to the site infrastructure (CERN safety system, power control, environment monitoring, ...) are installed and made available to users
• They will be extended gradually as the installation of the services (cooling, electricity, etc.) progresses
Coordinated operation of the online systems (DAQ, Trigger, DCS) will start early in 2006
• Cosmic runs with the TPC and other detectors
Summary
Many sub-detectors have implemented parts of their control system and used them in lab and beam tests
• They could profit from the coordination and collaboration
• Their very valuable feedback allowed us to optimise and improve the DCS design
• The chosen architecture proved to be well adapted to the sub-detector needs
This process will continue with the first installations; together with extensive performance tests, it will help us to further optimise and refine the system
Conclusion
The results so far make us confident that the ALICE Detector Control System will be fully operational at the beginning of 2007, well in time to allow safe and efficient operation of the experiment to record the first collisions at the LHC.