
The Fermilab Control System

Jim Patrick - Fermilab
CERN AB Seminar

October 5, 2006


Outline

- Accelerator Complex Overview
- Control System Overview
- Selected Control System Features
- Migration Efforts
- Summary


Fermilab Accelerator Complex

[Diagram: map of the Fermilab accelerator complex (J. Patrick, September 19, 2006), showing the Linac, Booster, Main Injector, Recycler, Tevatron, antiproton source (pbar), electron cooling (e-cool), and the miniBooNE, NUMI, and fixed-target beamlines]


Fermilab Accelerator Complex

- 400 MeV Linac (up to 15 Hz)
- 8 GeV Booster synchrotron (up to 15 Hz)
- 120 GeV Main Injector synchrotron (~0.5 Hz)
- 980 GeV Tevatron superconducting synchrotron/storage ring
- 8 GeV Debuncher/Accumulator for antiproton production
- 8 GeV "Recycler" for antiproton accumulation (same tunnel as the Main Injector), with electron cooling


Operating Modes

- 1.96 TeV proton-antiproton collider (24-30 hour stores)
- 8 GeV fixed target from the Booster (miniBooNE): ~5 Hz
- 120 GeV fixed target from the Main Injector: ~1/minute
- 120 GeV fixed target for NUMI (~0.5 Hz)
- 120 GeV from the Main Injector for pbar production (0.25-0.5 Hz)

Cycles are interleaved; a Main Injector load contains batches for both NUMI and pbar production.


The Run II Challenge

- Operate the Tevatron collider at more than 10x the luminosity of the previous (1992-96) run (began March 2001): collider stores + pbar production
- Incorporate the Recycler ring (May 2004), with electron cooling (July 2005)
- Operate miniBooNE (December 2001)
- Operate 120 GeV fixed target (February 2003)
- Operate the NUMI high-intensity neutrino beam (December 2004)
- Get all of this working, all at the same time
- Meanwhile, modernize the control system without affecting commissioning and operation of the above

Simultaneous collider and fixed-target operation was new to Fermilab.


Outline

- Accelerator Complex Overview
- Control System Overview
- Selected Features
- Migration Efforts
- Summary


Control System Overview

Known as "ACNET". A unified control system for the entire complex:
- All accelerators, all machine and technical equipment
- A common console manager can launch any application
- Common software frameworks
- Common timing system

A three-tier system: applications, services, front-ends.

Distributed intelligence, enabled by front-ends and special hardware; the system is tolerant of high-level software disruptions.


Control System Overview

[Diagram: three tiers connected by ethernet. Applications: console, Java, and web applications. Central services: database, servlets, request consolidators, Open Access Clients. Front-ends: MOOC, IRM, and Labview front-ends attached to field hardware (CAMAC, VME, PMC, IP, Multibus, CIA, GPIB, ...) over field buses (VME, SLD, Arcnet, ethernet, ...). Counts shown: >500, 120, 70+70.]


ACNET History

- Originally developed for the Tevatron in the early 80's
- Substantial evolution over the years: from CAMAC field hardware to a very broad diversity
- Major modernization in the 90's: PDP-11/Mac-16 front-ends -> x86 Multibus -> PPC VME; DEC PCL network -> IEEE 802.5 token ring -> ethernet; VAX database -> Sybase; PDP-11 applications -> VAXstations
- VAX-only through 2000; the Java framework and a port to Linux since then


Device Model

- Maximum 8-character device names, e.g. T:BEAM for Tevatron beam intensity; informal conventions provide some rationality; a more verbose hierarchy layer was created, but is not utilized/maintained
- Each device has one or more of a fixed set of properties: reading, setting, analog and digital alarms, digital status and control
- Device descriptions are stored in a central Sybase database
- Data is monotype; arrays are transparently supported; mixed-type structures are done, but support is not transparent
- The application (library) layer must properly interpret the data from the front-end and perform the required data translations
- ~200,000 devices / 350,000 properties in the system
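
A minimal sketch of this device model in Java. The class and enum names are illustrative only, not the actual Fermilab library; they simply encode the 8-character name limit and the fixed property set described above.

```java
// Sketch of the ACNET device model (hypothetical types, not the real API).
import java.util.EnumSet;

public class DeviceModelSketch {
    // The fixed set of properties a device may carry.
    enum Property { READING, SETTING, ANALOG_ALARM, DIGITAL_ALARM, STATUS, CONTROL }

    static class Device {
        final String name;                  // max 8 characters, e.g. "T:BEAM"
        final EnumSet<Property> properties; // which of the fixed properties exist
        Device(String name, EnumSet<Property> properties) {
            if (name.length() > 8)
                throw new IllegalArgumentException("ACNET names are <= 8 chars");
            this.name = name;
            this.properties = properties;
        }
    }

    public static void main(String[] args) {
        Device tevBeam = new Device("T:BEAM", EnumSet.of(Property.READING, Property.ANALOG_ALARM));
        System.out.println(tevBeam.name + " supports " + tevBeam.properties);
    }
}
```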


ACNET Protocol

- A custom protocol is used for intra-node communication
- A layered, task-directed protocol developed in the early 80's: old, but ahead of its time
- Currently uses UDP over ethernet; timeout/retry mitigates UDP unreliability (sketched below), and there is very little packet loss in our network
- Get/set devices: single or repetitive reply; immediate, periodic, or on clock event; multiple requests in a single network packet
- Fast Time Plot (FTP): blocks of readings taken at up to 1440 Hz, returned every 0.5 sec
- Snapshot: a block of 2048 readings taken at an arbitrary rate
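
The timeout/retry pattern mentioned above can be sketched with plain UDP in Java. This only illustrates the recovery idea; the packet contents and any resemblance to the real ACNET wire format are hypothetical.

```java
// Sketch of timeout/retry over UDP: send, wait with a timeout, resend on silence.
import java.net.*;
import java.util.Arrays;

public class UdpRetrySketch {
    static byte[] requestWithRetry(DatagramSocket sock, byte[] request,
                                   InetAddress host, int port,
                                   int timeoutMs, int maxTries) throws Exception {
        sock.setSoTimeout(timeoutMs);                    // bound each wait
        for (int attempt = 1; attempt <= maxTries; attempt++) {
            sock.send(new DatagramPacket(request, request.length, host, port));
            try {
                byte[] buf = new byte[1500];             // one ethernet MTU
                DatagramPacket reply = new DatagramPacket(buf, buf.length);
                sock.receive(reply);                     // throws SocketTimeoutException
                return Arrays.copyOf(reply.getData(), reply.getLength());
            } catch (SocketTimeoutException lost) {
                // UDP gives no delivery guarantee; a resend masks occasional loss.
            }
        }
        throw new Exception("no reply after " + maxTries + " tries");
    }
}
```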


ACNET Protocol

- ACNET has proven to be a very reliable, robust, scalable, and extensible protocol over many years; for example, a new GET/SET protocol adds precise time stamps, collection on event+delay, states, and a larger device address space
- The main weakness is the lack of transparent mixed-type support, but this is often handled manually
- Difficult to program, but a high-level API hides the details
- Difficult to port to new platforms, but it has been done


Timing and Control Networks

- TCLK (clock events): triggers various actions in the complex; a dedicated 10 MHz serial network; 256 possible events defined for the complex; includes a 1 Hz GPS time sync
- MDAT (machine data): a general mechanism for rapid data transfer; 36 slots updated at 720 Hz
- Beam sync: revolution markers derived from the RF
- States: discussed later
- Both TCLK and MDAT are multicast at 15 Hz for central-layer processes that don't require hard real-time response (see the sketch below)
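
For a central-layer process, the 15 Hz multicast amounts to joining a multicast group and decoding event frames. A Java sketch, assuming a placeholder group address, port, and one-byte event format (the real values and frame layout are not given in this talk):

```java
// Sketch: a non-real-time central-layer process listening to multicast clock events.
import java.net.*;

public class TclkMulticastSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3");  // hypothetical group
        try (MulticastSocket sock = new MulticastSocket(5000)) { // hypothetical port
            sock.joinGroup(group);
            byte[] buf = new byte[64];
            while (true) {
                DatagramPacket pkt = new DatagramPacket(buf, buf.length);
                sock.receive(pkt);
                int event = pkt.getData()[0] & 0xFF;  // one of the 256 TCLK event codes
                System.out.println("TCLK event 0x" + Integer.toHexString(event));
            }
        }
    }
}
```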


Application Environment

- Console environment: VAX-based, now mostly ported to Linux; an integrated application/graphics/data-access architecture; C language (+ legacy FORTRAN)
- Java environment: platform independent (Windows/Sun/Linux/Mac are all utilized); full access to the entire control system
- Both environments have a standard application framework that provides a common look and feel
- All applications are launched from "Index Pages", separate for the Console and Java environments
- Tightly managed code in both environments
- There are also some web browser applications of varying styles


Index Page


Parameter Page


Device Database GUI


Synoptic Display (Web)

Drag-and-drop builder; uses AJAX technology.


Alarms

Front-ends post (push) alarms to a central server for distribution and display.


Central Layer

- Database
- Open Access Clients ("virtual front-ends")
- Java servlets: bridge device requests from outside the firewall and for web apps
- Request consolidation: requests from different clients are consolidated into a single request to the front-end (sketched below)
- Alarms server
- Front-end download: loads the last settings on reboot
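
The consolidation idea is easy to sketch: the first subscriber to a device triggers one real front-end request, and later subscribers share its replies. Illustrative Java with hypothetical names; the real middle layer is not shown in this talk.

```java
// Sketch of request consolidation: identical device requests from many clients
// share one outstanding front-end request.
import java.util.*;
import java.util.function.Consumer;

public class ConsolidatorSketch {
    private final Map<String, List<Consumer<Double>>> listeners = new HashMap<>();

    // A client asks for periodic readings of a device.
    public synchronized void subscribe(String device, Consumer<Double> client) {
        List<Consumer<Double>> l = listeners.computeIfAbsent(device, d -> {
            startFrontEndRequest(d);   // first subscriber -> one real request
            return new ArrayList<>();
        });
        l.add(client);                 // later subscribers piggyback on it
    }

    // Called when a reading arrives from the front-end; fan out to all clients.
    synchronized void onReading(String device, double value) {
        for (Consumer<Double> c : listeners.getOrDefault(device, List.of()))
            c.accept(value);
    }

    private void startFrontEndRequest(String device) {
        /* issue the single ACNET request here (not shown) */
    }
}
```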


Database

The central Sybase database has three parts:
- All device and node descriptions
- Application information, used by applications as they wish
- The save/restore and shot data database

MySQL is used for the data loggers; that data volume is too high for Sybase, and it is not for general application use. Many older programs still use a central "file sharing" system.


Open Access Clients (OACs)

"Virtual" front-ends that obey the same ACNET communication protocol as real front-ends. Several classes:
- Utility: data loggers, scheduled data acquisition, virtual devices (e.g., consolidating devices across front-ends into a single array)
- Calculations: both database-driven and custom
- Process control: pulse-to-pulse feedback of the fixed-target lines
- Bridges: to ethernet-connected scopes, instrumentation, etc.

Advantages: easy access to devices on multiple front-ends; a friendlier programming environment than the VxWorks front-ends; the framework transparently handles ACNET communication; access to the database and other high-level operating system features. They do not provide hard real-time response; clock events arrive by multicast. There are more than 100 OACs, and almost all run in the Java framework.


Front-Ends

- MOOC (Minimally Object-Oriented Communication): a very wide diversity of functionality; processors are all VME (VXI) based; a very wide diversity of field hardware is supported (CAMAC, Multibus, VME, VXI, CIA, ethernet-attached instruments, etc.); VxWorks operating system; ~325 in the system
- IRM (Internet Rack Monitor): a turnkey system with a 64-channel ADC + DAC + digital I/O; pSOS operating system; ~125 in the system
- Labview: used for one- or few-of-a-kind instrumentation; a front-end to the control system; only experts use the Labview GUI


Outline

- Accelerator Complex Overview
- Control System Overview
- Selected Features
- Migration Efforts
- Summary


Sequencer Overview

The Sequencer is an accelerator application that automates the very complex sequence of operations required to operate the collider (and other things).
- Operation is divided into "aggregates" for each major step: "Inject Protons", "Accelerate", etc.
- Aggregates are described in a simple command language: easily readable, not buried in large monolithic code; easily modified by machine experts; no need to compile/build, etc.
- The sequencer program provides an editor to assist in writing aggregates (pick from a list of commands)
- Aggregate descriptions are stored in the database


Tevatron Collider Operation

[Diagram: states of a Tevatron collider cycle, including Proton Injection porch, Proton Injection tune-up, Inject Protons, Cogging, Inject Pbars, Pbar Injection porch, Before Ramp, Ramping, Acceleration, Flattop, Squeeze, Initiate Collisions, Remove Halo, HEP, Pause HEP, Proton Removal, Unsqueeze, Deceleration, Extraction porch, Extract Pbars, Eject Protons, Flattop2, Recovery, and Reset]


Sequencer Overview

Each aggregate contains a number of commands:
- Set, get, and check devices
- Wait for conditions
- Execute more complex operations implemented as ACL scripts or specially coded commands
- Start regular programs (for example, Tevatron injection closure)
- Start plots
- Send data to the shot log
- ...

The Sequencer program steps through the commands, stops on error, and allows the operator to restart at the failed command (a minimal sketch of this loop follows). The Fermilab sequencer is not a formal finite state machine, and it is not just for collider operations; it also configures other parts of the complex and is used for studies.
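
A minimal sketch of the step/stop/restart behavior just described. This is not the real Sequencer; the Command interface and the run loop are illustrative only.

```java
// Sketch: step through an aggregate's commands, stop on error, and let the
// operator rerun from the failed step.
import java.util.List;

public class AggregateRunnerSketch {
    interface Command { String name(); boolean execute(); } // true = success

    // Runs commands from 'start'; returns the index of the failed command,
    // or -1 if the whole aggregate completed.
    static int run(List<Command> aggregate, int start) {
        for (int i = start; i < aggregate.size(); i++) {
            Command c = aggregate.get(i);
            System.out.println("step " + i + ": " + c.name());
            if (!c.execute()) {
                System.out.println("stopped on error at step " + i);
                return i;   // operator may fix the problem and rerun from here
            }
        }
        return -1;
    }
}
```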


Sequencer GUI

[Screenshot: the "Inject Protons" aggregate]


Sequencer Overview

The Fermilab complex employs "distributed intelligence":
- Separate sequencer instances for the Tevatron, Main Injector, pbar source, and Recycler
- Lower-level programs control complex operations such as collimator movement for scraping; the sequencer just issues high-level control commands
- Waveforms are preloaded to ramp cards and triggered by clock events
- The "states" concept is used to provide synchronization


States

State devices are ACNET devices owned by the STATES OAC process. When a state device is set, it is reflected to the rest of the control system by the STATES process:
- Multicast
- Registered listeners

States can thus be used to trigger actions and provide synchronization, and the condition persists in time, unlike clock events (see the sketch below). Examples:
- Collider state: Inject Protons, Inject Pbars, Accelerate, HEP, etc.
- Collimators: commands for collimator movement (Go to Init, Remove Halo, Retract for Store)
- Wire-scanner profile calculation complete
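
A sketch of the registered-listener half of this in Java. The names are illustrative; the point is that a state change is both pushed to listeners and retained for late readers, unlike a momentary clock event.

```java
// Sketch of a STATES-like process: set a state device, notify listeners,
// and keep the current value queryable afterwards.
import java.util.*;
import java.util.function.IntConsumer;

public class StatesSketch {
    private final Map<String, Integer> current = new HashMap<>();        // state devices
    private final Map<String, List<IntConsumer>> listeners = new HashMap<>();

    public synchronized void addListener(String stateDevice, IntConsumer l) {
        listeners.computeIfAbsent(stateDevice, k -> new ArrayList<>()).add(l);
    }

    // Called when a state device is set; the value persists in time.
    public synchronized void set(String stateDevice, int value) {
        current.put(stateDevice, value);
        for (IntConsumer l : listeners.getOrDefault(stateDevice, List.of()))
            l.accept(value);
    }

    // Late readers can still query the condition, unlike a clock event.
    public synchronized OptionalInt get(String stateDevice) {
        Integer v = current.get(stateDevice);
        return v == null ? OptionalInt.empty() : OptionalInt.of(v);
    }
}
```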


Waveform Generators

Magnets need to do different things at different times in the machine cycle. Rather than redownloading on each cycle, custom function-generator cards that preload up to 32 waveforms are used:
- The correct waveform for each part of the cycle is triggered by a clock event
- Waveforms are functions of time and of MDAT frame values
- The on-board processor does smooth interpolation (see the sketch below)

Waveforms therefore need not be downloaded at each phase of the machine cycle, which enhances reliability and reduces the dependence on high-level software during stable operation.
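
What a preloaded table plus on-board interpolation buys can be shown in a few lines. Linear interpolation stands in for the card's smoothing, whose actual algorithm is not given in this talk; the breakpoint table and trigger-relative time are illustrative.

```java
// Sketch: a preloaded (time, value) waveform played back after a clock-event
// trigger, with interpolation between breakpoints.
public class WaveformSketch {
    private final double[] t;  // breakpoint times (s since trigger), ascending
    private final double[] v;  // corresponding output values

    WaveformSketch(double[] t, double[] v) { this.t = t; this.v = v; }

    double valueAt(double time) {
        if (time <= t[0]) return v[0];
        for (int i = 1; i < t.length; i++)
            if (time <= t[i]) {
                double f = (time - t[i - 1]) / (t[i] - t[i - 1]);
                return v[i - 1] + f * (v[i] - v[i - 1]);  // linear segment
            }
        return v[v.length - 1];   // hold the last value after the table ends
    }

    public static void main(String[] args) {
        // e.g. a ramp up and back down; triggered by a TCLK event in practice
        WaveformSketch w = new WaveformSketch(new double[]{0, 10, 20},
                                              new double[]{0, 1, 0});
        System.out.println(w.valueAt(5.0));   // prints 0.5
    }
}
```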

Clock Events and Waveform Generators

[Plot: an example of preloaded hardware (a waveform generator) triggered by clock events]

Details of a Waveform Generator

[Diagram: clock events trigger the time table, energy table, and squeeze table; all three tables play simultaneously]


Finite State Machine (Java)

- Graphical drag-and-drop builder
- State machine descriptions are stored in the database
- The FSM OAC runs the machines
- A viewer application for control and monitoring
- The primary current usage is e-cool Pelletron control: it keeps track of HV sparks and beam-related trips and automates the recovery sequence
- A more formal state machine than the sequencer, though perhaps less scalable


SDA

Sequenced Data Acquisition / Shot Data Analysis:
- Records data at each step of collider operation in a dedicated database
- Computes and tracks injection efficiencies, emittance growth, luminosity, ...
- A variety of tools to extract and analyze the data: web reports created in real time; the "Supertable" summary (web page + Excel spreadsheet); many predefined plots available on the web; the SDA Viewer, a spreadsheet-like Java application; the SDA Plotter for snapshots and fast time plots; a Java Analysis Studio input module; and a Java API for custom programs


SDA Reports


Data Loggers – aka “Lumberjack”

- 70 instances run in parallel on individual central nodes
- Allocated to departments to use as they wish, with no central management; there is continual demand for more
- ~48,000 distinct devices are logged, at rates up to 15 Hz or on a clock or state event (+ delay)
- 80 GB of storage per node in a MySQL database; data is stored as compressed binary time blocks keyed by device and block start time; a circular buffer whose wraparound time depends on device count and rates
- Data at a maximum 1 Hz rate is extracted from each logger to a "backup" logger every 24 hours; currently 850 million points are extracted per day; the backup logger is available online, so data is accessible forever


Data Loggers

- Logger tasks are Open Access Clients; reads go via the logger task rather than by direct database access
- A variety of retrieval tools is available: standard console applications in both the Console and Java environments, web applications, and Java Analysis Studio programmatic APIs
- Retrieval rates are typically ~10-30K points/second


Data Logger Plots


Security

Limit access to the control system:
- The control system network sits inside a firewall with limited access in/out

Limit setting ability:
- Privilege classes (roles) are defined
- Roles are assigned to users, services, and nodes
- Devices are tagged with a map of which roles may set them
- The logical AND of the user, service, node, and device masks is taken (sketched below)

This allows fine-grained control, but in practice device masks allow all roles to set (maintaining a flat device hierarchy is tedious), and only a limited number of roles are actually used.

- Applications outside the MCR start with settings disabled; settings must be manually enabled and time out after some period
- Applications started outside the MCR exit after some time limit
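
The mask check reduces to a bitwise AND across the four role masks: a setting is allowed only if at least one role survives. A sketch with illustrative role bits (the actual roles and encoding are not given in this talk):

```java
// Sketch of the setting-privilege check: AND the user, service, node, and
// device role masks; any surviving bit means the setting is permitted.
public class SettingPrivilegeSketch {
    // Each bit position represents one role (illustrative roles).
    static final long OPERATOR = 1L << 0, MACHINE_EXPERT = 1L << 1, SERVICE = 1L << 2;

    static boolean maySet(long userMask, long serviceMask, long nodeMask, long deviceMask) {
        return (userMask & serviceMask & nodeMask & deviceMask) != 0;
    }

    public static void main(String[] args) {
        long user = OPERATOR | MACHINE_EXPERT;
        long service = OPERATOR | MACHINE_EXPERT | SERVICE;
        long node = MACHINE_EXPERT;
        long device = OPERATOR | MACHINE_EXPERT;   // roles allowed to set this device
        System.out.println(maySet(user, service, node, device));  // true via MACHINE_EXPERT
    }
}
```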


Remote Access

- Console environment via remote X display; settings are generally not allowed
- A Java applet version of the console; Java apps can be launched from anywhere via Web Start
- Access to the inner parts of the control system is only via VPN
- Servlets provide read access for some applications
- "Secure Controls Framework": Kerberos authentication for low-level communication allows read access outside the firewall
- Access from outside the standard application environments: the XML-RPC protocol is used, with bindings to many languages (see the sketch below); readings of any device; setting of select "virtual" devices only, with no setting of hardware-associated devices; used by all current experiments, both to fetch and to input data
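
An external XML-RPC read might look like the following, using the Apache XML-RPC 3.x client library. The endpoint URL and the method name are placeholders; the actual Fermilab service interface is not given in this talk.

```java
// Sketch: reading one device from outside via XML-RPC (Apache XML-RPC 3.x).
import java.net.URL;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

public class XmlRpcReadSketch {
    public static void main(String[] args) throws Exception {
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        config.setServerURL(new URL("http://example.fnal.gov/xmlrpc")); // hypothetical
        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);
        // Hypothetical method: fetch one device's reading property by name.
        Object value = client.execute("getReading", new Object[]{"T:BEAM"});
        System.out.println("T:BEAM = " + value);
    }
}
```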


Accountability

Considerable usage information is logged (who/what/where/when), and most reports are web accessible:
- Settings
- Errors (reviewed each morning)
- Database queries
- Device reads from Java clients
- Application usage
- Clock (TCLK) events
- State transitions


Minimize Restarts

Minimize the need to restart parts of the system:
- Changes to the device or node database are multicast to the control system; processes do not need to be restarted to pick them up
- For many front-ends, devices can be added without a reboot

When something is restarted, minimize what else must be restarted:
- Restarts set a state transition
- Monitor (periodic) requests are automatically re-established
- Applications automatically recover from a middle-layer restart


Redirection

The console manager may cause I/O requests to be redirected away from the normal hardware. Possible alternatives are:
- An OAC process; a default "MIRROR" process reflects settings into readings
- A save file
- An SDA file

This allows the basic functionality of some applications to be tested without risk. It also provides a potentially good framework for integrating accelerator models, but that hasn't been done...


Some Issues…

- Short device names, no hierarchy
- Structured device properties are not transparent
- An old-fashioned GUI in the console environment
- Lack of a standard machine configuration description
- Lack of integration with models
- Lack of integration with Matlab and related tools
- Weak inter-front-end communication
- Specific to the Fermilab environment, with no attempt at portability


Outline

- Accelerator Complex Overview
- Control System Overview
- Some Selected Features
- Migration Efforts
- Summary


Migration Strategy

By the late 90's, VAXes had been eliminated from Fermilab except in the accelerator control system, so some migration strategy was essential: CDF/D0 would run until ~2006-8, and BTeV until ~2012+ (since cancelled). The strategy would have to be incremental:
- It was not possible to replace the entire control system in one shutdown; it should be possible to replace a single application at a time
- New and old parts would have to interoperate
- It should require no or minimal changes to the front-ends, most of which had been migrated from Multibus/i386/RTOS to VME/PPC/VxWorks in the 90's
- This implied that the ACNET protocol would remain the primary communication mechanism


Migration Plan

An architecture was developed using the Java language, in multiple tiers:
- Applications
- Data Acquisition Engines (DAE, the middle layer)
- Front-ends (unmodified)

Applications were divided into a thin GUI client with the business logic in the DAE, predating J2EE and related ideas, with communication via Java RMI; the design later drifted back to a heavy client, with the DAE doing only device access. Open Access Clients run in the DAE (the same Java virtual machine), and device requests are consolidated across all Java clients.


Java Migration Progress

The required infrastructure was created prior to the 2001 run start: the ACNET protocol, the DAE framework, the OAC framework, the device access library, the database access library, the application framework, etc.

- SDA was implemented and has continually evolved to meet needs (data acquisition, analysis tools)
- Most VAX OACs were converted to Java and new ones written
- Java versions of many utility applications were written


Java Migration

- SDA has been very successful in tracking the performance of the accelerators
- The data loggers' greatly expanded capacity and rate capability has been valuable
- The ease of creating new OACs has been valuable
- Much more information is now available over the web
- Some machine applications have been written, but the machine departments did not in general embrace the Java application framework: some key Tevatron applications were done, there is extensive operational usage in electron cooling, and the pbar source department also has many expert Java applications


Java Application Migration Issues

- Accelerator performance was a higher priority
- The number of active applications was badly underestimated
- The authors of many applications were long gone
- A long transition period in division management
- The development environment was much more difficult than on the VAX (the two-tier framework, the device access API, the development environment); these issues were mostly corrected, but late
- Separate console managers, not easy to merge
- The legacy console's primitive graphics make it fast, with fast application startup
- "If the Java application works differently, I don't want to learn how to use it."
- "If the Java application works the same, then why should I use it?"


Console Migration to Linux

- In the fall of 2003, a division-wide group was formed to assess the strategy for controls migration
- The base of active applications was determined to be much larger than generally believed: more than 800
- Porting all of these to Java with the resources available, while operating and upgrading the accelerator complex, was deemed impractical
- A port of the VAX console infrastructure and all active VAX applications to Linux was initiated in ~March 2004
- The Java framework continues to be supported and new applications are written using it; nothing was backported!


Linux Migration Phases

Phase I:
- Console infrastructure, the ACNET communication protocol, and core libraries
- Emulation of the VAX development environment (MECCA)
- ~1M lines of code, done entirely by the controls department

Phase II:
- Applications, done incrementally rather than en masse
- In practice most porting was done by controls department personnel; some porting and extensive testing was done by other departments

Phase III:
- The remaining VAX services


Linux Port

Much harder than anticipated:
- A large volume of code: ~3.5M non-comment lines
- Extensive interdependencies: primary/secondary application communication
- K&R C -> ANSI C++: a C++ compiler was required for the variable-argument capability; the stricter compiler did catch some actual bugs; we insisted that all compiler warnings be addressed
- Extensive use of VAX system services
- "BLOBs" in the database, in flat files, and over the network
- Floating-point issues
- Structure alignment issues


Port Mechanics

- Scripts were developed to make many changes automatically; required manual changes were flagged
- Nightly builds with detailed web-accessible reports
- A liaison was assigned from each department for Phase II
- A database was developed with information on each application: a sensitivity-to-operations rating, and a status (imported, compiles, builds, all manual issues addressed, initializes, tested, certified, default Linux)
- Testing of sensitive applications requires scheduling
- Certification of sensitive applications requires keeper + department liaison + operations sign-off
- A dual-hosted console manager was developed that runs each application on the correct platform (maintained in the database)


Application Tracking


Status

[Chart: applications certified versus time, with the start of beam operation marked]

- Phase I was completed ~late 2005
- Phase II is expected to be complete by the end of 2006
- Linux usage in operations began at the end of the shutdown in June 2006
- Very modest impact on operations; no lost collider stores!
- Phase III is expected to be complete by mid 2007 (next shutdown)


Modernizing BPMs and BLMs

BPMs in the Recycler, Tevatron, Main Injector, and transfer lines:
- Performance issues, obsolete hardware technology, and a need for more BPMs in the transfer lines
- Replaced with a digital system using Echotek VME boards

Beam loss monitors in the Tevatron and Main Injector:
- Obsolete hardware technology; additional features needed with more beam in the MI and Tevatron
- Replaced with a new custom system (still in progress)

All were replaced in a staged manner without interruption to operations; library support routines provided a common data structure for both the old and new systems during the transition period. The result was an enormous improvement in performance and reliability.


Outline

- Accelerator Complex Overview
- Control System Overview
- Some Selected Features
- Migration Efforts
- Summary


Collider Peak Luminosity


Collider Integrated Luminosity

Recent observation of Bs mixing by CDF


NUMI Performance

Total NuMI Protons Starting 3 Dec 2004

[Chart: cumulative protons on target (x10^17, scale 0-2000) versus date, May 2005 through late September 2006]

Best current measurement of neutrino oscillation, despite problems with the target, horn, etc.


MiniBooNE Performance


Accelerator Future

- The Tevatron collider is scheduled to run for ~3 more years; the emphasis for the complex will then be the neutrino program
- The Recycler and pbar accumulator will be converted to proton stackers for NUMI, for up to ~1 MW average beam power
- The NOvA experiment has been approved, with a second far detector in Minnesota
- ILC accelerator test facilities are under construction; these will not use ACNET, but instead probably some combination of EPICS and the DESY DOOCS system used at TTF
- The strategy is not yet determined; it will be a broad collaborative facility
- There are some interoperation efforts with EPICS, including a bridge in the middle layer, and the EPICS 4 "Data Access Layer" (DAL) is being evaluated


Java Parameter Page/EPICS Devices


Summary

The control system has met the Run II challenge:
- The accelerator complex has gone well beyond previous runs in performance and in the diversity of operational modes
- A unified control system with solid frameworks and distributed intelligence was instrumental in this success

During accelerator operation the control system has been substantially modernized:
- An incremental strategy was essential
- It is difficult to upgrade a large system while operating!
- It is difficult to get people to upgrade a system that works!
- The control system will finally be VAX-free at the next major shutdown, currently scheduled for June 2007
- The control system will be viable for the projected life of the complex