
Project Documentation
Document SPEC-0021
DRAFT - Revision C

TCS Software Design Description

Authors: Chris Mayer, David Terrett & Pat Wallace
Date: 11th September 2006

Advanced Technology Solar Telescope
950 N. Cherry Avenue, Tucson, AZ 85719
Phone 520-318-8102   Fax 520-318-8500
[email protected]   http://atst.nso.edu


Revision Summary:

Revision     Date                  Changes
A            16th December 2004    Original version
B            9th May 2006          Updated to reflect Panguitch release
Draft of C   11th September 2006   Updated for TCS CDR


Table Of Contents

1. Introduction
   1.1 Purpose
   1.2 Scope
   1.3 Definitions, Acronyms and Abbreviations
       1.3.1 Definitions
2. System Overview
   2.1 Introduction
   2.2 Deployment Diagram
   2.3 Implementation Language
   2.4 Source Code
   2.5 Use of ATST Common Services
3. System Context
   3.1 Context Diagram
   3.2 External Interfaces
   3.3 Time System
4. System Design
   4.1 Configurations
   4.2 Events
   4.3 The Device Model
   4.4 The Controller Model
   4.5 Controller Commands
       4.5.1 Load
       4.5.2 Init
       4.5.3 Startup
       4.5.4 Shutdown
       4.5.5 Uninit
       4.5.6 Submit
       4.5.7 Pause
       4.5.8 Resume
       4.5.9 Get
       4.5.10 Set
   4.6 Engineering Screens
   4.7 TCS Components
   4.8 The "OSL" Controllers
       4.8.1 OslController2
       4.8.2 HeadController
   4.9 Logging
   4.10 Pointing and Tracking
       4.10.1 The Interface to the Telescope Control System
       4.10.2 Supported Coordinate Systems
       4.10.3 Solar Ephemerides
   4.11 The sollib Library
   4.12 Scanning
       4.12.1 Random
       4.12.2 Grid
       4.12.3 Scan
   4.13 Controlling Devices That Track
       4.13.1 Change of Mode to Follow
       4.13.2 Change of Track Identifier
   4.14 Position Feedback
   4.15 Telescope and Shutter Alignment
   4.16 Handling the Zenith Blind Spot
5. Detailed Design
   5.1 Loading and Initialization
   5.2 Startup
   5.3 Operational Startup
       5.3.1 Observing Startup
   5.4 Configurations
       5.4.1 Sequencing
       5.4.2 Complex TCS Attributes
   5.5 Interlocks
   5.6 Pointing Kernel
       5.6.1 Sub-components
       5.6.2 Operating Modes
   5.7 Pointing Update Tool
   5.8 Environment
   5.9 Thermal Control Package
       5.9.1 Consolidate Thermal Data
       5.9.2 Seeing Reduction
       5.9.3 M1 Thermal Model
       5.9.4 Thermal Control
6. Compliance Matrix
7. Acknowledgements
8. References
9. Appendices
   9.1 Container Manager
       9.1.1 Overview
       9.1.2 Description
       9.1.3 ATST Application Control


1. INTRODUCTION

1.1 PURPOSE

The intention of this document is to describe the structure of the software that constitutes the ATST Telescope Control System, how it interfaces to the remainder of the ATST control system and how the requirements expressed in [1] are met. The intended audiences of this document are:

- The reviewers of the TCS Critical Design
- The developers of the TCS work package
- The developers of the TCS sub-system work packages

The layout of this document is as follows: Section 2 provides a brief overview of the whole TCS system. It describes what the TCS is and how it is structured and deployed. Section 3 sets the TCS within the context of the remainder of the ATST control system and in particular specifies which other systems the TCS must interface with. The detail of what passes over each of these interfaces is described in separate Interface Control Documents (ICDs). Section 4 is an introduction to the infrastructure that the TCS will be built with, known as the ATST Common Services. Finally, Section 5 moves on to the detailed design of the TCS. It describes the packages that make up the TCS together with specifics of the TCS components and classes. A compliance matrix can be found in Section 6, cross-referencing how the design described in this document meets the TCS design requirements [1].

Between the TCS PDR in March 2005 and the CDR in October 2006, there were major changes to the ATST infrastructure software. In particular, the Controller class was introduced and the Device class was dropped. The design presented here is based on the version of the ATST Common Services known as "Panguitch_base" available from the ATST CVS repository. Note that Panguitch_base does not yet include a C++ implementation of the common services. The consequences of this are discussed later.

1.2 SCOPE

The software described by this design is the ATST Telescope Control System. It is referred to throughout this and other ATST documentation as the Telescope Control System or, more usually, by its acronym the TCS. The purpose of the TCS is to provide a high quality stable image of a specified point on the solar disk or corona to instruments at the Gregorian, Nasmyth or Coudé focal planes. The TCS achieves this by coordinating and controlling the activities of its subsystems under instruction from the Observatory Control System (OCS). Note that as defined here the ATST Telescope Control System does not include direct control of any ATST hardware. That job is the responsibility of the TCS subsystems, which are described in separate design documents.

1.3 DEFINITIONS, ACRONYMS AND ABBREVIATIONS

Specific definitions, acronyms and abbreviations used in this document are described below. For a more general glossary and acronym list as used in ATST see [4].

1.3.1 Definitions


Telescope subsystems – the subsystems of the TCS are the Mount Control System (MCS, also known as the Telescope Mount Assembly TMA), the M1 Control System (M1CS), the M2 Control System (M2CS), the Feed Optics Control System (FOCS), the Acquisition Control System (ACS), the Wavefront Correction Control System (WCCS), the Heat Stop Assembly (HSA), the Enclosure Control System (ECS) and the Polarization Analysis and Calibration System (PAC, also sometimes referred to as the Gregorian Optical Station GOS).

Virtual telescope – the astrometric building block used to construct the pointing kernel. The virtual telescope is an ideal telescope that responds instantaneously to new demands. The demands can be in the form of a changed sky target or image position. The virtual telescope can also predict target or image coordinates knowing the mount encoder readings.

Slew – a discontinuous change in position or velocity demand from the TCS.

2. SYSTEM OVERVIEW

2.1 INTRODUCTION

The ATST control system consists of four principal systems: the telescope control system (TCS), the observatory control system (OCS), the data handling system (DHS) and the instrument control system (ICS). The OCS is responsible for high level observatory operations like scheduling, allocating resources and running "experiments". Experiments consist of a series of observations with a particular instrumentation setup. The data from these experiments is stored and displayed by the DHS. The role of the TCS in this system is to:

• point and track the telescope in a range of coordinate systems
• monitor and control the thermal loads on the telescope
• perform scans and offsets coordinated with other observatory activities
• monitor and control the active and adaptive optics systems
• provide interactive control for the observatory operators

An overview of how these requirements are met by the TCS software is shown in the package diagram below. The util package at the top of the diagram contains the utilities used by the TCS to build the software system. Although all the other packages shown are dependent on util, the dependences are not shown so as not to clutter the diagram. Amongst other tools, the util package contains in particular the sub-classes of the Common Services Controller class that will be used throughout the TCS software.

Below util in the diagram are the jes and cm packages. These contain respectively the Java Engineering Screens (JES) and the TCS Container Manager (CM). The JES is an extension of the Component class that can be used to graphically lay out an engineering user interface using the Swing widget set. Once designed, the screens can be activated such that they connect to other components as well as register for events. This allows configurations to be issued as well as status to be displayed. Full details of the JES and how it can be used can be found in [24]. The TCS Container Manager is used to load, initialize and start up the various components of the TCS. Depending on how it is configured, this can be done automatically or step by step. The manager is flexible enough that it can start up any container of the overall ATST control system and then load and initialize any components into these containers. The selection of containers and components is done graphically. A more detailed description of a current CM prototype can be found in the Appendix (see
section 9.1). Together, the JES and CM provide the interactive control of the TCS for the observatory operators. The remaining packages in the diagram constitute the TCS itself.

Figure 1 Top level TCS packages
(Package diagram, not reproduced: util at the top, the jes and cm packages below it, and the tpk, thermal, environment, configs and subsys packages that make up the TCS itself.)

At the heart of the TCS lies the pointing kernel package called tpk. This software system produces a stream of demands to the tracking mechanisms of the ATST in order to accurately follow the current science target and guide object. Its main role therefore is to meet the requirements of the first and third bullet points above. The exact value of the demands produced will depend on how the TCS is configured. The pointing kernel is a generic package that can be configured to control a range of different telescopes. In order to keep telescope specific code out of the kernel a separate package called "subsys" is provided to translate the outputs of the kernel into the signals and controls required by the ATST. Amongst other roles the package will ensure the kernel data is in the correct format and units for the TCS subsystems, clamp demands, track the current wrap states etc.

Configuration is handled by the Configs package. This package verifies and manipulates the configurations received from the OCS and ICS. Configurations consist of a set of attribute value pairs that the TCS will match by sending configurations to its subsystems and to its own internal components. A configuration might consist of a heliocentric coordinate pair plus a required focal station for example. In response to such a configuration the TCS would adjust its internal state such that suitable streams of position demands were generated for the mount and focal station. The practical result of this would be
that the telescope would slew to the target such that it would appear stationary at the specified point of the required focal station.

Due to the substantial heat loads on the enclosure and mirrors, an important task for the TCS is to manage and monitor the thermal environment of the telescope. This task is performed by the thermal control package. This thermal control package is dependent on the current configuration of the TCS which in turn is under the control of the operator and/or the OCS. Finally, the Environment package handles the reading and averaging of the environmental parameters needed by the TCS such as temperature, humidity and pressure.

2.2 DEPLOYMENT DIAGRAM

Another view of the TCS can be found by looking at the TCS deployment diagram. This is shown in Figure 2.

Figure 2 TCS Deployment Diagram
(Deployment diagram, not reproduced: an engineer's workstation running the TCS GUI, the TCS Controller machine running the TCS itself, and a weather server running the Environment package, linked by the event and command channels.)

The deployment diagram shows where the main components of the TCS will run. As can be seen, the deployment diagram is quite straightforward. The bulk of the TCS will run on a dedicated Linux workstation and be written in a combination of Java and C++. The operating system used will be Red Hat Enterprise Linux (the version is TBD) or the equivalent CentOS release. The standard hardware for running ATST Linux applications has not yet been specified. The baseline machine we have assumed here is the Advantech ACP-2000 with a PCA-6187 processor card, which supports up to a 3.4 GHz Pentium 4 class processor. The machine will be configured with 512 Mbyte of memory and an 80 GB local disk. The Advantech is a 2U industrial rack mounted PC with at least two spare PCI slots for the GIS and time bus interfaces. It has 300W dual redundant power supplies that are hot swappable.


The Environment package is shown as running on a separate machine. There are two reasons for this. The first is that we want the environmental data collection to be interrupted as little as possible. In particular we want to make it available even if the TCS is shut down. Secondly, we do not yet know what hardware will be needed to read the environmental data. It may be that a Linux workstation such as is used for the TCS is not appropriate.

The TCS engineering interface will be written in Java and communicate with the TCS using the ATST common services. It is shown running on a separate machine although this is not a requirement. It could be run on any workstation supporting the Java and ATST common software environments. The current expectation however is that the TCS GUI will execute on a separate workstation from the TCS itself to avoid any potential problems with the graphical display starving the TCS of resources. For further details refer to Section 4.6.

The TCS is expected to be robust and run continuously. It will only be shut down for engineering purposes and not, for example, at the beginning of each observing day. If the TCS is shut down it will return all system resources to the operating system, i.e. there will be no memory leaks, unreturned buffers etc.

The TCS will assume that all communications within this environment are secure, i.e. it will not implement any firewall, encryption or password access. All security considerations will be dealt with by the ATST common services and the observatory wide computer systems.

2.3 IMPLEMENTATION LANGUAGE

Earlier versions of the TCS design envisioned the TCS being written completely in C++. The main reasons for this were that the pointing kernel of the TCS, although requiring modification for the ATST, was written as a combination of C/C++ and that the ATST Common Services supported C++ containers and components. To date only a Java implementation of the ATST Common Services has been available and this has meant all prototyping has had to be via Java components and containers. As a result of this prototyping it was realized that there was no compelling reason to write the remainder of the TCS in C++ just because the pointing kernel was in that language. Instead, the pointing kernel could be wrapped in Java. There are two ways this wrapping can be achieved within the ATST Common Services. The first is to write a Java controller that then uses the Java Native Interface (JNI) to call the pointing kernel methods. The second is to embed the pointing kernel into a C++ component and container and then implement a C++ controller that invokes the appropriate kernel calls in response to configurations received. The design presented here assumes that the first method will be adopted. The reason for this is simply that it is the only method currently available and it has been shown to work reliably in the prototypes developed so far. However, given the period that is likely to elapse between now and the construction of the TCS and the maturing of the common services over the same period, this design decision should be reviewed before construction begins.

2.4 SOURCE CODE

The latest source code for the TCS can be found in the CVS repository on maunder.tuc.noao.edu. The file RELEASE.NOTES in the top level directory provides a brief summary of the capabilities of each tagged release. The templates directory provides standard templates for files, include files and classes. The templates are constructed so as to allow documentation production using doxygen or Javadoc.


In order that the TCS is not precluded from being ported to another operating system, all Linux operating system calls will be wrapped in a TCS operating system independent library. All the wrappers will do is simply call the Linux system call unconditionally. The purpose of this is to clearly identify where the operating system specific calls are located, not to provide an operating system independent application. Similar wrappers will be provided for any CPU architecture specific features, although it is not intended that any such features should be used.

2.5 USE OF ATST COMMON SERVICES

The TCS is one of the major systems of the ATST. As such it must work seamlessly with the other components that make up the overall ATST control system. In particular it must accept and act on configurations sent by the OCS or ICS and in turn must send configurations to its subsystems. The TCS is therefore built using the ATST Common Services which in turn constrains its design. At the highest level the TCS will consist of two containers. One container will be used for the environmental package and the other for the remainder of the TCS. The TCS container will then hold a number of components. Each of these components will be initialized and then started by a container manager via the init and startup methods of the top level TCS component. During this phase the TCS components will attempt to make connections to the other ATST components with which they need to communicate and will retrieve their initial state from the ATST run time database. The starting assumption in the TCS design is therefore that the various pieces of the system will be implemented as ATST components or subclasses of ATST components. Particular use will be made of the controller class, which is derived from component and adds the command/action/response behaviour needed to handle configurations. Details of the controller model in general and the particular controllers and components within the TCS can be found in [23] and Section 4.2 respectively.

3. SYSTEM CONTEXT

In order to see where the TCS sits in relationship to the remainder of the ATST control system, the TCS context diagram is shown below in Figure 3.

3.1 CONTEXT DIAGRAM


Figure 3 TCS Context diagram
(Context diagram, not reproduced: the TCS is connected to its subsystems (MCS, M1CS, M2CS, FOCS, WCCS, ACS, ECS, HSA and PAC) by configurations and status, to the OCS and ICS by configurations and TCS status, to the DHS by TCS status, to the database server for defaults and TCS state, and to the GIS for interlocks.)

The diagram shows the TCS with each of the systems with which it interacts. TCS subsystems are shown in green. To avoid clutter, only a single double headed arrow is shown and only the ECS arrow is labeled. For each subsystem interaction the TCS sends configurations to the subsystem and retrieves status from it. The status is retrieved either by subscribing to an event channel or by using "get" methods on component properties. In turn, subsystems will subscribe to event channels provided by the TCS. All these interactions use the common services.

3.2 EXTERNAL INTERFACES

Each of the interactions of the TCS with its subsystems is governed by an ICD. The ICDs and their document numbers are tabulated below.

Subsystem      ICD         Reference
Mount          1.1/4.4     [9]
M1             1.2/4.4     [8], [16]
M2             1.4/4.4     [7]
Feed Optics    1.5/4.4     [11]
Wavefront      2.3/4.4     [10]
Acquisition    1.8/4.4     [12]
Enclosure      4.4/5.0     [5]
EMS            4.4/6.3.2
HSA            1.3/4.4     [6]
PAC            3.1.1/4.4   [13]

Table 1 TCS subsystem interface control documents

    The Environmental Monitoring System is not shown as an external device in the context diagram as certain aspects of it are part of the TCS.


The ICD for each subsystem lists the attributes that can be set as part of a configuration along with subsidiary details like units and range. The first part of the ICD describes those attributes that are of most interest to the TCS. These are sent to the top level component in the subsystem for distribution to the subsystem's lower level components. The second part of the ICD describes those attributes that are associated with subsystem sub-controllers. These will also be sent to the top level subsystem controller but are likely then to be passed direct to the sub-controller. The final part of the ICD lists those events that the subsystem will monitor. Most of these events will be generated by the TCS but some may be generated by other subsystems.

The real time database server is used by all ATST systems as a persistent store to save state between reboots. The TCS will access the server via the ATST Common Services to retrieve default parameters on startup and save its state periodically.

The Global Interlock System (GIS) is the hardware safety system that protects the ATST and its personnel from damage. In the presence of interlock signals from the GIS, subsystems may be prevented from coming on line, for example, or individual mechanisms will not be able to be moved. The interface to the TCS is described in [14]. The hardware board used to interface to the GIS will be taken from the list of supported ATST hardware. As a baseline, the 48 channel Adlink PCI-7348 has been adopted for the time being.

The remaining interactions shown in the context diagram are between the TCS and its peer systems in the ATST control hierarchy. Both the OCS and the ICS will send configurations to the TCS which it will verify and then attempt to match by controlling its subsystems. The intention is that the TCS will operate independently of the OCS and ICS and will not therefore subscribe to any event channels that those systems may provide (this is TBC with regard to the ICS). The final interaction is between the TCS and the Data Handling System. The DHS will not control the TCS but may need to subscribe to data provided by it (this is TBC). The interactions described above are covered by the following ICDs.

System       ICD         Reference
Instrument   3.1.6/4.4
OCS          4.2/4.4     [15]
DHS          4.3/4.4
GIS          ??          [14]

3.3 TIME SYSTEM

The TCS acts as the time master for the ATST. Ideally the time standard will be provided by a standalone system to which the TCS and each subsystem can be directly connected. Failing this, the TCS will be provided with a bc637PCI card from Symmetricom that will allow direct connection to a GPS antenna to lock the TCS to TAI. Time will then be distributed to the other systems via IRIG-B. On startup of the TCS application, time will be read from the time card and used to set the system clock. Periodically after this the time will be re-read and the system clock reset to the time standard. Within the TCS pointing kernel all timestamps etc. will be read directly from the time standard, but by ensuring the system clock is also closely locked to this standard we will also synchronize less critical events. TAI will be the timescale of choice for the TCS and all its subsystems.
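As an illustration of the periodic re-synchronization just described, the sketch below shows a housekeeping task that compares the system clock against the time card and corrects it when the offset grows too large. The TimeCard interface, the threshold and the correction step are hypothetical placeholders; they are not part of the ATST common services or of the Symmetricom driver API.

```java
// Hedged sketch of the periodic clock discipline described above.
// TimeCard is an assumed stand-in for the bc637PCI driver interface.
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public final class ClockSync implements Runnable {

    /** Hypothetical wrapper around the GPS/IRIG-B time card. */
    public interface TimeCard {
        /** TAI from the card, expressed as seconds since the Unix epoch. */
        double readTaiSeconds();
    }

    private final TimeCard card;

    public ClockSync(TimeCard card) { this.card = card; }

    @Override
    public void run() {
        double cardTime = card.readTaiSeconds();                  // authoritative time standard
        double systemTime = System.currentTimeMillis() / 1000.0;  // free-running system clock
        double offset = cardTime - systemTime;
        if (Math.abs(offset) > 0.001) {
            // In the real TCS the clock would be stepped or slewed through a
            // privileged wrapper around the OS clock-setting call; only logged here.
            System.out.printf("System clock offset %.4f s; correction would be applied%n", offset);
        }
    }

    public static void main(String[] args) {
        // Stand-in card that pretends the system clock is 2 ms slow.
        TimeCard fake = () -> System.currentTimeMillis() / 1000.0 + 0.002;
        Executors.newSingleThreadScheduledExecutor()
                 .scheduleAtFixedRate(new ClockSync(fake), 0, 60, TimeUnit.SECONDS);
    }
}
```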


4. SYSTEM DESIGN

This section describes the system design of the TCS and in particular its use of the ATST common services. The TCS design is constrained by the need to interact seamlessly with the rest of the ATST control system and to use the ATST Common Services in its operation [1]. The first part of this section therefore recaps and summarizes the ATST design with regard to the use of configurations and in particular the controller model. Section 4.4 then describes the controller model as applied to the TCS.

4.1 CONFIGURATIONS

Configurations lie at the heart of the ATST Common Software. Configurations consist of a list of attributes and values that the system to which the configuration has been sent must match. This matching is handled by an ATST controller which is described later. A configuration can be set up by issuing a series of set commands (Section 4.5.10) or more usually by sending a complete configuration with a submit command (Section 4.5.6). The class diagram for a configuration is shown below.

Figure 4 Class diagram for configurations
(Class diagram, not reproduced: Configuration specializes AttributeTable and adds an id and header tag; an AttributeTable holds a map of Attributes, each with a name, value and definition; the corresponding IConfiguration, IAttributeTable and IAttribute interfaces come from atst.cs.interfaces.)

    The diagram shows that a configuration is a specialized form of AttributeTable that contains a unique configuration id and header tag. An AttributeTable consists of a collection of Attributes each of which has a name and value.
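To make this concrete, the sketch below assembles a small configuration and submits it to a controller. The class and method names (Configuration, Attribute, insert, submit) are taken from the class diagrams in this document, but the package paths, exact signatures and the attribute values used are illustrative assumptions rather than the definitive interface.

```java
// Illustrative sketch only: class and method names follow the diagrams in this
// document; package paths, signatures and attribute values are assumptions.
import atst.cs.data.Attribute;            // assumed location of the concrete classes
import atst.cs.data.Configuration;
import atst.cs.interfaces.IController;

public final class SubmitExample {

    public static void submitFollow(IController tcs) {
        // A configuration is an attribute table with a unique id and header tag.
        Configuration config = new Configuration("atst.tcs.c0001");
        config.insert(new Attribute("atst.tcs.mode", "follow"));   // demanded tracking mode
        // An optional start time causes the configuration to be queued rather than
        // acted on immediately (see the discussion of queued configurations below).
        config.insert(new Attribute("atst.tcs.starttime", "2006-09-11T18:00:00"));

        int status = tcs.submit(config);   // immediate accept/reject response
        if (status != 0) {
            // The configuration was rejected outright; completion of accepted
            // configurations is reported later via the configstatus event.
        }
    }
}
```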


Configurations go through a set of states, as illustrated below, on receipt of commands and events from the underlying devices. The commands are submit, cancel, pause and resume. The events are Done or Aborted. When a configuration is first created it enters the initialized state. It remains in this state while it is sent to a controller with a submit command. On receipt of the configuration it will immediately be checked for validity. If the configuration is not accepted the controller will issue a rejection to the submitter and the configuration will be discarded. What happens next will depend on whether the configuration contains a "starttime" attribute. If starttime is present and the time has not already expired, the configuration will be placed on a queue in time order. If starttime is not present then the configuration will immediately be passed to an action thread for execution. The configuration is then running. If the underlying devices can match the demand configuration then it will eventually end and a successful done signal is passed back to the submitter. If on the other hand it cannot be matched then an abort will be sent back. In either case the configuration is then destroyed. Configurations that are queued will be allocated an action thread when their starttime matches the current time.

    Figure 5 Configuration states

    Another way in which the configuration can end is via a cancel command. Earlier designs of the ATST common services envisioned an attribute that would control how severe the cancel is. This is no longer the case. The TCS will take the following actions on receipt of a cancel command

1. If the configuration is queued then it will be removed from the queue and an abort event sent for it. This behaviour is provided by the common services.


    2. If the configuration is running then the action thread propagates the cancel command to all the components involved in the configuration and then aborts and destroys the configuration. This behaviour is also provided by the common services.

3. If the TCS is running a sequence then, after executing the actions in 2 above, no further steps in the sequence will be executed and the sequence will be destroyed.

The existence of queued configurations has some consequences for the TCS. In particular, configurations that are valid now may not be valid later or vice versa. The obvious configurations where this is a problem are configurations that contain target parameters. The target (typically a point on the Sun) may be below the horizon when the configuration is submitted or it may have set by the time the configuration is de-queued. The TCS handles this by use of the starttime attribute and a TCS specific "horizonchecking" attribute. When a configuration is received that contains target attributes then the TCS will first look for the starttime attribute. If starttime is present then the TCS will use that time to validate the configuration, otherwise it will use the current time. If the result of this check is a demand elevation that is below the horizon then whether the configuration is rejected or not will depend on whether horizonchecking is on or off. With horizonchecking off, the configuration will be accepted but the demands to the mount and enclosure shutters will be clamped at the lower elevation limit. The telescope will thus track along the lower elevation limit until the target rises.

With regard to the starttime attribute, the TCS is not expecting to use this in any configurations that it generates for its subsystems. The only controller that will make use of starttime will be the top level TCS head controller if it receives a starttime from the OCS or ICS. If a queue-able configuration is received then this will be sent for immediate execution by the subsystems once it is de-queued by the TCS. Note that the above does not prevent the OCS or ICS from submitting queue-able configurations to TCS subsystems provided they do not also include the atst.tcs.starttime attribute as well as the subsystem specific starttime attribute in the same configuration.

4.2 EVENTS

As well as providing configurations that can be sent from one component or controller to another, the ATST common services also provides events that can be used to signal changes of state or status. Although both configurations and events can be used to pass data between components, they do so by very different mechanisms. Configurations require that there is a connection maintained between the two communicating components. Events on the other hand use a "publish-subscribe" mechanism. The source of the data publishes it and then any number of clients can subscribe. The publisher does not care how many or who the subscribers are and there is no permanent connection between the two. If a publisher is not present then a subscriber receives no events until the publisher is started. In the ATST common services the data sent with events is encoded as an attribute table. Arbitrary amounts of data can therefore be sent as an event; all a subscriber needs to know is the name of the event so that it can register to receive it. Events are used internally by the common services to signal the matching of configurations and will be used extensively by the TCS to report status and also to send trajectory streams to its subsystems.

There is currently one issue with the event system that will affect the final design of the TCS and that is the problem of what to do about non-periodic or low frequency events. Suppose for example the status of the M1 mirror cover was signaled by an event each time it was opened or closed. Suppose also that after
it has been opened the TCS is shut down and then restarted. Currently in this situation, the TCS would not discover the state of the M1 cover until it was closed again. There are a number of ways to deal with this:

• insist that all events are periodic at some "reasonable" rate, e.g. 1 Hz
• modify the event system such that when a system subscribes for an event it is immediately sent the last value published
• insist that every non-periodic event, or periodic but low frequency event, has a corresponding attribute that can be accessed via a "get"

Each method has advantages and disadvantages. Currently it is being assumed that option 2 will be implemented before construction starts but if not then option 3 will be used. This will have implications for the implementation of each of the TCS subsystems.

4.3 THE DEVICE MODEL

The section on the TCS use of the device model that was present in earlier versions of this document has been removed following the decision that a device class would not be implemented in the common services. It is currently believed that the controller model (see below) provides all the facilities needed by the TCS.

4.4 THE CONTROLLER MODEL

An ATST Controller implements what is called the command/action/response model. In this model "commands" are separate from the "actions" that they trigger. In this way many commands may be sent to a controller resulting in many simultaneous actions and in particular a controller is not blocked whilst waiting for a previous command/action to complete. On receipt of a command, the controller will send an immediate response to the sender saying whether the attributes sent with the command are acceptable or not. It will then queue the command for either immediate or later action. Once queued, the controller is ready to accept another command. The actions started by commands are handled by separate threads under the control of an action manager. Actions can complete as either "done" or "aborted". Normal completion for an action would be "done" but if an error occurred then it would be "aborted". This response is advertised by an event called configstatus which senders can monitor for completion.
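The command/action/response flow can be illustrated with the sketch below, in which a client submits a configuration together with a callback that is invoked when the action completes. The submit(configuration, callback) overload and the IActionCallback methods (done, abort, report) follow the class diagram shown below in Figure 6; the package paths and the getId call are assumptions taken from the configuration class diagram earlier and may differ in the actual common services.

```java
// Hedged sketch of the command/action/response model; interface and method
// names follow the Controller class diagram, package paths are assumed.
import atst.cs.interfaces.IActionCallback;
import atst.cs.interfaces.IConfiguration;
import atst.cs.interfaces.IController;

public final class CallbackExample {

    public static void submitWithCallback(IController controller, IConfiguration config) {
        // The submit call returns immediately with an accept/reject status; the
        // action itself runs on a separate thread managed by the action manager.
        int status = controller.submit(config, new IActionCallback() {
            @Override
            public void done(IConfiguration c) {
                // Action completed successfully ("done" configstatus).
                System.out.println("Configuration " + c.getId() + " done");
            }

            @Override
            public void abort(IConfiguration c) {
                // Action failed or was cancelled ("aborted" configstatus).
                System.out.println("Configuration " + c.getId() + " aborted");
            }

            @Override
            public void report(IConfiguration c) {
                // Optional progress reports while the action is still running.
            }
        });
        if (status != 0) {
            // The controller rejected the configuration outright.
        }
    }
}
```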

The class diagram of the ATST Controller is shown below.

Figure 6 Controller class diagram
(Class diagram, not reproduced: the Controller class with its lifecycle methods (init, startup, shutdown, uninit, remove), the submit, get, set and cancel commands, and the IController, IControllerLifeCycle, IControllerAdmin and IActionCallback interfaces from atst.cs.interfaces, the last of which declares done, abort and report methods.)

Controllers accept configurations as the parameters of their commands. It is important for the TCS and its subsystems to distinguish between the completion of an action and the state of an underlying piece of hardware. An ATST configuration is in the running state until it is done or aborted. This may or may not coincide with an underlying piece of hardware being physically stationary or not. For example, if a filter wheel is moved from one position to another then when the hardware reaches the new filter demand a signal must be sent to the action thread to tell the configuration it is done. In this case the hardware device stopping coincides with the configuration being done. In the case of slewing the mount to a new astronomical target, however, the action is done when the mount, coudé and enclosure first match the position and velocity tolerances. The mechanisms will continue to track and it would be incorrect to consider the configuration as still running because of this. Another way of looking at this is to consider that the configuration is done when the mount is stationary in the frame in which the demand coordinates are specified. The life cycle of a controller is shown in Figure 7.

Figure 7 Controller life cycle
(State diagram, not reproduced: the states are Loaded, Initialized and Running, reached from Unknown by load and left by remove, with init/uninit moving between Loaded and Initialized and startup/shutdown moving between Initialized and Running.)

The next section discusses the commands that trigger movements between these states.


4.5 CONTROLLER COMMANDS

4.5.1 Load

This is not a command to the controller itself but the action of a container or container manager that loads the controller from disk. A controller should do very little at load time and constructors should be kept to an absolute minimum. Once loaded, the controller will be connected to the ATST Common Services and it will be able to perform low level functions through those services such as setting and getting attribute information, logging, checking health and posting alarms. A controller should execute no functional behaviour nor allocate any resources at this level. If it is in control of hardware it must certainly not attempt to move it.

4.5.2 Init

Standard behaviour during the init phase will be to start any housekeeping services, allocate any resources needed for later phases etc. This is the state that many subsystems will be brought to at night when they are not being used operationally. It is essentially a standby mode where the system is activated but not fully operational. For example, the M1 control system's thermal control would still be running but the mirror itself would not be actively or passively controlled. A controller will always reach the initialized state as a result of an init command. This is guaranteed by the common services. Should some error occur during the initialization actions the controller should set its health to bad or ill and raise one or more alarms depending on the severity of the failure. As can be seen from Figure 6, the init command can take an attribute table that can be used to control the form that the initialization takes.

4.5.3 Startup

Following an init, the next stage will be to receive a startup. This command takes the controller into the running state which is the state for operational use. Only in the running state is the controller able to act upon and execute arbitrary configurations. Whether the controller actually acts on a submitted configuration will depend on whether any errors were encountered on reaching the initialized and then running state. It may be that due to problems some or all configurations will be rejected. An example of what an MCS controller might do during the startup phase is to turn on the brakes and oil pumps and enable the servos. Controllers should avoid unexpected hardware movements during startup, particularly those that might be hazardous. For example it would certainly not be appropriate to automatically datum the mount as part of the startup phase. Reaching the running state is a necessary condition for a controller to be operational but for some controllers it may not be fully sufficient. Again, as with init, the startup command can take a configuration as a parameter allowing the startup process to be customized when needed.

4.5.4 Shutdown

Shutdown is essentially the reverse process of startup, so any actions undertaken as part of the startup process should be undone during the shutdown. The aim of the shutdown is, however, to get back to the initialized state. Whilst in the running state the controller may have executed many configurations that leave mechanisms active. These should also be halted as part of the shutdown. For example, if the mount was tracking it should be stopped.
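The lifecycle commands described so far are normally driven by a container manager rather than invoked by hand. A hedged sketch of that sequence, using the lifecycle method names from the Controller class diagram (init, startup, shutdown, uninit), is shown below; the package paths and the attribute passed to init are purely illustrative.

```java
// Hedged sketch of a container manager driving a controller through its life
// cycle; method names follow the Controller class diagram, attribute names
// and package paths are illustrative assumptions.
import atst.cs.data.Attribute;
import atst.cs.data.AttributeTable;
import atst.cs.interfaces.IController;

public final class LifeCycleExample {

    public static void bringUp(IController mcs) {
        // Loaded -> Initialized: start housekeeping, allocate resources.
        AttributeTable initArgs = new AttributeTable();
        initArgs.insert(new Attribute("atst.tcs.mcs.simulate", false)); // illustrative only
        mcs.init(initArgs);

        // Initialized -> Running: ready to accept arbitrary configurations.
        mcs.startup(new AttributeTable());
    }

    public static void bringDown(IController mcs) {
        // Running -> Initialized: halt anything left active (e.g. stop tracking).
        mcs.shutdown();
        // Initialized -> Loaded: release everything acquired during init.
        mcs.uninit();
    }
}
```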


4.5.5 Uninit

This command is the reverse of init and should undo any actions that resulted from the init. As a result of the uninit command the controller should be back in the same state as if it had just been loaded from disk.

4.5.6 Submit

Once a controller is in the running state then it is ready to act on any configurations sent to it. Configurations are sent with the submit command. On receipt of a submit, the controller will verify that the configuration is valid. Within the TCS, validation will consist of a number of stages:

    1. The name of the attribute (if it is for the TCS and not one of its subsystems) will be checked to ensure it is known about. If any TCS attribute name is unknown then the configuration will be rejected.

    The alternative of simply ignoring unknown names could lead to a situation where a complex configuration is submitted with one of the attribute names spelt incorrectly. The user might not notice that the configuration had been accepted but that one attribute was actually discarded in the process.

    2. The value of the attribute will be range checked. Range checking is provided through the common services and the property database. If the value is out of range then the whole configuration will be rejected.

3. Conflict checking. This is a check on the current configuration as sent to ensure there are no conflicting demands. For example, setting the telescope tracking mode to On but also setting the mcsmountmode to move would be a conflict as the mount would simultaneously be asked to move and follow.

4. A self consistency check. This is to validate whether the current configuration, when applied to the current state of the telescope, will result in an overall configuration that is valid. The obvious example here is where the telescope is tracking an RA and Dec. and the new configuration consists simply of a new RA. The new RA may be perfectly valid and well formed but combined with the current declination may result in an elevation demand below the horizon limit. Another example would be where the enclosure shutter was open and the TCS was tracking the Sun and an attempt was made to turn off the cooling loops.

If all these validation checks are passed then the configuration is handed to an action thread for execution.
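The four validation stages could be organised as a simple pipeline along the lines of the sketch below. This is a hypothetical illustration: the helper methods (isKnownTcsAttribute, inRange, conflicts, consistentWithTelescopeState) stand in for the name, range, conflict and self consistency checks and do not exist in the common services; getNames and get are taken from the AttributeTable class diagram.

```java
// Hypothetical sketch of the four-stage validation pipeline described above.
// The abstract helper methods are placeholders for TCS code, not common
// services calls.
import atst.cs.interfaces.IAttribute;
import atst.cs.interfaces.IConfiguration;

public abstract class ValidationSketch {

    /** Returns true if the configuration should be handed to an action thread. */
    public boolean validate(IConfiguration config) {
        for (String name : config.getNames()) {
            IAttribute attribute = config.get(name);
            // Stage 1: unknown TCS attribute names reject the whole configuration,
            // rather than being silently ignored.
            if (name.startsWith("atst.tcs.") && !isKnownTcsAttribute(name)) return false;
            // Stage 2: range check against the property database.
            if (!inRange(attribute)) return false;
        }
        // Stage 3: conflicting demands within the configuration itself.
        if (conflicts(config)) return false;
        // Stage 4: self consistency against the current telescope state
        // (e.g. a new RA combined with the current declination).
        return consistentWithTelescopeState(config);
    }

    protected abstract boolean isKnownTcsAttribute(String name);
    protected abstract boolean inRange(IAttribute attribute);
    protected abstract boolean conflicts(IConfiguration config);
    protected abstract boolean consistentWithTelescopeState(IConfiguration config);
}
```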

4.5.7 Pause

This command pauses the actions associated with an earlier submit. For components within the TCS this command is not generally useful as the configuration state will only be running for a very short time, typically less than 0.05 s. This is because the components within the telescope, like the virtual telescope, are ideal. Internal TCS components will therefore essentially always be "done" and the concept of pausing and then resuming a component that responds on such a short timescale is not relevant. The pause will be significant for TCS actions that involve a sequence of steps. For such sequences the next step will not be taken until a resume is received (see below).


It will be necessary for the TCS to propagate the pause to any subsystems that are part of the current configuration.

4.5.8 Resume

Resume restarts the actions that were previously paused. For internal TCS components this will generally be irrelevant as their actions will already have completed by the time the pause is given. The TCS will forward the resume to any components that are part of the specified configuration.

4.5.9 Get

This is a low level command provided to any sub-class of component. Get can be used to retrieve an attribute from the current configuration. A get without any attributes will return all the attributes in the current configuration.

4.5.10 Set

Set will similarly specify attributes for the default configuration. These attributes may be overridden by an attribute provided with a subsequent preset. Since set only affects the default configuration and not the current one, it is not possible to affect a configuration that has been checked and transferred ready for a start.

4.6 ENGINEERING SCREENS

It is a requirement [1] that the TCS provide an engineering display that is capable of exercising the full functionality of the TCS, although these screens need not necessarily use the principal systems interface described in the TCS/OCS ICD [15]. For the design of the engineering screens presented here we have chosen to use the principal systems interface as this will allow consistent testing of that interface prior to the delivery of the other principal systems. The design of the engineering interface component should meet the following requirements:

    1. Screens should be easy to construct and modify. Ideally the screens should be laid out graphically and it should be possible to group and/or align widgets.

    2. General widgets should be provided that can be configured to send arbitrary configurations to the TCS

    3. As well as general widgets, it should be possible to send fixed configurations where the user only has to supply the attribute values without having to know the attribute names.

    4. It should be possible to attach a widget to an arbitrary event and extract one or more values from the attribute table associated with that event.

    5. It should be possible to use color, both to enhance the appearance of the display as well as providing color rules so that widgets can alter their appearance dependent on the values they are displaying.

6. It should be possible to save screen layouts such that they can be recovered later.

A prototype engineering display has been written that demonstrates these features. The prototype is known as the JES (Java Engineering Screen) and is described fully in [24]. The following paragraphs provide an overview and some screen shots. The JES is a graphical tool for laying out a graphical display. It has been written using Java Swing to enable it to be as platform independent as possible. As long as Java is installed on your machine it should be possible to use the JES. An earlier prototype of the JES used the SWT widget set but this requires the appropriate shareable libraries for the platform being used to be installed. The JES tool is implemented as an ATST Common Services component and so can make use of all the standard services provided and in
particular, the connection and event service. The tool operates in two modes, edit and execute. In edit mode the mouse can be used to select and position a range of different widget types anywhere on the screen display area, as shown in Figure 8.

    Figure 8 JES screen in edit mode

The figure shows a new widget being added to the screen. The initial size of the widget is indicated by the rectangle and the available widgets by the drop down menu. Once the widget has been placed on the screen it can be selected and further edited to tune its size, color etc. The edit screen for the Static Text widget is illustrated below.


For all widgets the x, y position plus the height and width can be set manually, as shown here for the text widget, or by dragging the widget using the cursor. Each widget then has some widget-specific settings. For the text widget displayed here these are the text to be displayed, the font size, the foreground color and the font style. The preferred setting, if selected, will make the widget just large enough to contain the text to be displayed.

Once a screen has been designed it can immediately be activated by placing it into execute mode. This makes for a very quick design cycle: if the executing display isn't quite what is wanted it can immediately be switched back to edit mode, the changes made, and then switched back to execute. It is not necessary to save the screen and then launch another application to see the effect of the changes.

Once the design of the screen is satisfactory it can be saved. The saved layout is stored as XML, so it could potentially be edited with a standard text editor before being reloaded. This might be useful if some global change were required to all the widgets in a display.

A more complex example, put together to control a prototype TCS demonstrator, is shown in the figures below. Figure 9 shows the engineering screen running in execute mode and Figure 10 shows the same screen in edit mode. Some of the widgets visible in this screen, apart from the static text, are:

    • Configuration widget. This is the rectangle to the left of the screen under the text “TCS Configuration”. It consists of a drop down list of available attributes plus a field in which the attribute value can be entered. Clicking the add button will add it to the current configuration which is then tabulated on the right hand side of the widget. Currently the atst.tcs.mode attribute has been set to a value of “follow”. Additional attributes could be added and then the apply button pressed to send the configuration to the TCS.

    • Sub-screen buttons. Underneath the configuration widget are four buttons labeled “MCS Control”, “ECS Control” etc. These buttons launch new screens which in this case provide direct access to the MCS, ECS etc.

• Text update widgets. These widgets are recognized in the edit mode figure by the presence of text starting $(top).tcs. These widgets are attached to the events whose names are displayed in the text. Whenever the event is received the corresponding widget on the screen is updated.


The $(top) prefix is a macro which is specified when the JES application is started. The macro is substituted at startup to create the full name of the event. Macros have been introduced to cope with the development environment, where multiple users may want to run copies of the TCS at the same time. To avoid namespace clashes, all names in the TCS can be altered by providing a new value for the "top" macro. Engineering screens must then similarly be able to register for the new names.
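To illustrate the macro mechanism, the sketch below shows one way the $(top) macro could be expanded before an event name is registered. It is a minimal sketch only: the MacroExpander class and the way the macro value is supplied are illustrative assumptions, not the actual JES implementation, and the final subscription to the event service is only indicated in a comment.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal sketch of $(name) macro expansion for JES-style event names. */
public class MacroExpander {
    private static final Pattern MACRO = Pattern.compile("\\$\\((\\w+)\\)");
    private final Map<String, String> macros;

    public MacroExpander(Map<String, String> macros) {
        this.macros = macros;
    }

    /** Replace every $(name) in the template with its configured value. */
    public String expand(String template) {
        Matcher m = MACRO.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = macros.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // "top" would normally come from a command line option or property.
        MacroExpander exp = new MacroExpander(Map.of("top", "atst"));
        String eventName = exp.expand("$(top).tcs.mcs.position");
        System.out.println(eventName); // atst.tcs.mcs.position
        // A text update widget would then subscribe to eventName via the event service.
    }
}
```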

    Figure 9 An example TCS Engineering Screen

    Figure 10 The same screen in edit mode


Perhaps the most interesting widget on the prototype screens is that in the lower right hand corner. The tabbed panel widget allows separate tabbed panes to be defined. Each of these tabbed panes can be treated and edited like a separate screen and have other widgets added to it. The edit window for the tabbed panel is shown in Figure 11. It is slightly different from the main screen edit window but the controls work in the same way. In particular, it is possible to copy and paste from the main screen to the tabbed panel.

    Figure 11 Edit window for a tabbed panel

The advantage of the tabbed panel is that it allows a lot of detailed information to be packed into a small amount of screen space, with easy selection of the details of interest. The tabs here provide detailed views of the MCS and ECS but it is easy to switch to a view of the Feed Optics Controller or a graph of the azimuth demand and achieved positions.

4.7 TCS COMPONENTS

In this section we use the term component to refer not only to the ATST component class but also to sub-classes of it. In fact most TCS components will be controllers or sub-classes of controllers; using the super class name, however, avoids constantly switching terminology. The ATST component model imposes a strict hierarchy on the control flow through the ATST control system. This means that any attribute destined for a sub-component must pass through a parent, and so the name of the attribute maps exactly onto the component hierarchy. For example, if there is an attribute atst.tcs.seq.mcs.mode then the OCS (which implements the top level atst component) will send this attribute to the top level tcs component, which in turn will send it to its seq component. The TCS will then send the attribute to the mcs subsystem. A component in the hierarchy may intercept an attribute and as a result generate its own configuration.


For example, if the top level tcs component had an attribute atst.tcs.mode then it might, on receiving this value, generate a configuration that contained atst.tcs.seq.mcs.mode. This attribute would then be sent to the seq and mcs components just as in the previous example. The atst.tcs component could not send the mode attribute directly to the mcs component, bypassing the seq. The TCS component hierarchy is shown below.

Figure 12 TCS Component hierarchy (tcs at the top; thermal, ems, tpk and seq directly below it; the subsystem top level components acs, m1cs, ecs, mcs, hsa, m2cs, focs and wccs at the bottom level)

Those items in light blue are TCS components whereas those in green are the top level components of each of the TCS subsystems. The roles of these components are as follows:

thermal – this device handles the thermal control aspects of the TCS. Apart from some configuration parameters its main role will be to monitor the thermal state of the telescope and raise warnings etc. if there are problems.

ems – this is the environmental monitoring system controller. Again it will play a mainly monitoring role and provide suitably massaged data for the pointing kernel.

tpk – this is the TCS pointing kernel. This component will provide the trajectory event streams that will be subscribed to by TCS subsystems to control the tracking devices, e.g. the mount axes.

seq – this is the main sequencing device within the TCS. Its role is to do the straightforward sequencing needed to bring the telescope into line with the current demand configuration. Under the seq component we currently have most of the TCS subsystems so that there is a place where such sequencing can occur if it is needed. Currently the only component that does not need to be sequenced in some way is the ACS, but this may change if it re-acquires its guiding role as well as its acquisition role. For more complex sequencing than can be handled at the component level, and particularly for any sequencing that involves operator intervention, we are looking to the OCS level to provide the necessary facilities.
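To make the attribute-to-hierarchy mapping concrete, the sketch below shows one way a parent component could group an incoming configuration by the child each attribute is destined for. It is a minimal illustration under the assumption that attribute names are dot-separated paths rooted at the component's own name; the class and method names are invented for the example and are not part of the ATST Common Services API.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch: split a flat set of attributes into per-child groups based on the
 * next path element after this component's own name.
 */
public class AttributeRouter {
    private final String myName; // e.g. "atst.tcs"

    public AttributeRouter(String myName) {
        this.myName = myName;
    }

    /** Group attribute name/value pairs by the immediate child they belong to. */
    public Map<String, Map<String, String>> route(Map<String, String> attributes) {
        Map<String, Map<String, String>> perChild = new HashMap<>();
        String prefix = myName + ".";
        for (Map.Entry<String, String> att : attributes.entrySet()) {
            String name = att.getKey();
            if (!name.startsWith(prefix)) {
                continue; // not addressed to this subtree
            }
            String remainder = name.substring(prefix.length());
            int dot = remainder.indexOf('.');
            // "atst.tcs.seq.mcs.mode" seen by "atst.tcs" routes to child "seq".
            String child = (dot < 0) ? remainder : remainder.substring(0, dot);
            perChild.computeIfAbsent(myName + "." + child, c -> new HashMap<>())
                    .put(name, att.getValue());
        }
        return perChild;
    }

    public static void main(String[] args) {
        AttributeRouter tcs = new AttributeRouter("atst.tcs");
        Map<String, String> config = Map.of("atst.tcs.seq.mcs.mode", "follow");
        // Prints {atst.tcs.seq={atst.tcs.seq.mcs.mode=follow}}
        System.out.println(tcs.route(config));
    }
}
```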


4.8 THE "OSL" CONTROLLERS

Given the hierarchy in the preceding section and the functionality provided by the standard ATST controller, it will be necessary to sub-class the controller for use by the TCS. The two new features we need to implement are:

    1. The ability for a controller to pass configurations to sub controllers without returning a done signal until all the child controllers are done

2. The ability for a controller to init, startup, shutdown etc. a range of other components

The first ability is needed by any controller that passes configurations to a sub-controller. If you think of controllers forming a tree-like structure, then only the leaf nodes do not require this ability. The second ability is needed by the controller at the top of the tree. Without it the OCS would have to initialize, startup, shutdown and uninit every component within the TCS. This in turn would require it to know what components the TCS implemented, and new components added to the TCS would require modifications to the OCS. The class diagram for the TCS controllers is shown below.

Figure 13 TCS Controller classes (class diagram: OslController2, in atst.tcs.util, extends the common services Controller class from atst.cs.controller and provides the queue handling methods issue, __action, addToQueue, signalDone, signalAbort, checkAbort and queueFinished; HeadController extends OslController2, holds a _subcomponents attribute table and overrides doInit, doStartup, doShutdown and doUninit; _ActionCallback in atst.tcs.util extends the common services ActionCallback and holds references to its owning OslController2 and to the queue attribute table; the private inner classes InitTask, StartupTask, ShutdownTask and UninitTask of HeadController all derive from java.util.TimerTask)

    The ability to pass configurations to sub controllers and yet maintain a busy state is implemented within OslController2. The ability for a controller to init, startup etc. a range of other controllers is implemented


in HeadController. A HeadController inherits from OslController2 as in general it will also need the ability to pass configurations to its sub-controllers. Note that the four classes derived from TimerTask all belong to atst.tcs.util.HeadController, i.e. they are internal private classes used only by HeadController.

4.8.1 OslController2

The OslController2 class extends the functionality of the common services Controller class. Most importantly, it can be used to forward attribute tables to other controllers and then hold itself in a busy state until all of the other controllers have replied. It then completes with a state equal to the most severe returned by the other controllers. A sequence diagram showing how controllers collaborate is shown below.

Figure 14 Controller collaboration (sequence diagram: the OCS ConfigSender submits a configuration to the TCS configHandler, an OslController2, which acknowledges with Controller.OK; within its doAction the TCS controller creates an AttributeTable queue, calls __action, issues the sub-configuration to the subsystem configHandler and waits; when the subsystem signals completion the TCS controller destroys the queue and reports Controller.DONE back to the OCS)

    In this diagram the OCS sends a configuration to the TCS and the TCS in turn sends a configuration to one of its subsystems. In general there could be several layers of controllers in the TCS as well as in the subsystems and the TCS might simultaneously be sending configurations to multiple subsystems. Although this would complicate the situation, the essential principles are captured in this simpler example. The main requirement is that the OCS sees only two signals returned. The first is an acknowledgement from the receiver that it has received and accepted the configuration that was sent and the second is a signal that the configuration has been acted on and is now matched. There are two alternate scenarios not shown here. The first is that the configuration is rejected. In this case there will be no further signals. The second is that the configuration was accepted but never matched due to some error. In this case the second signal will be an error response rather than a done. For the TCS to provide this behaviour to the OCS requires in turn that all controllers under it behave the same way, hence the decision to extend the standard Controller class to encapsulate the required behaviour. The way this must be done is for a parent controller to remain within its doAction method until all its sub controllers have both acknowledged (or rejected) the configurations they were sent and completed their actions.


    The OslController2 class achieves this by extending the Controller class and overriding its doAction method. When an action happens on this class it enters the doAction method and performs the following operations.

    1) A queue is created. This is in fact just an attribute table but will be used to store the status of all Controllers that this controller has invoked actions on. From now on these will be known as sub-controllers. Each attribute stored in the queue will be named according to the attribute table passed to the sub-controllers, and the value it contains will be the current status of that action.

    2) __action is now called and passed the configuration that this method was passed. __action is in fact an empty method that is present to allow subclasses to alter the configuration before it is considered for forwarding. Subclasses will override the __action method, add, remove or alter attributes within the configuration and then return a status value. This status value is also eventually added to the queue and combined with the others to form the overall status of this action.

3) This class now has a configuration that may contain many attributes destined for many different sub-controllers. This needs to be split into separate configurations, one for each sub-controller for which there is an attribute. The attributes are looped over by name, and the name decides where each attribute is going. A hash map of configurations is built up: whenever a new destination is encountered a new configuration is added to the hash map and the attribute placed into it. If a configuration destined for that sub-controller already exists, the attribute is simply placed into the existing configuration.

    4) All attributes have now been iterated over and placed in their correct configurations ready to be submitted to the sub-controllers. The controller now loops through the hash map of configurations, submitting each one and at the same time placing an entry for it on the queue.

    5) After all configurations have been sent the controller updates its own entry on the queue to say that it has finished. It then enters a loop that checks for all entries on the queue to finish. The thread of execution will not pass this point until answers from all sub-controllers have arrived. (A future version will allow cancellation here in case of error, so that the controller doesn’t get stuck in a busy state).

    6) Each time one of the sub-controllers has finished the action callback will get executed with the id of the configuration passed to it. This id is used to notify the queue that the particular sub-controller has completed its action. After all queue entries have been updated with done or abort the controller will itself complete with the most severe status of all the entries in the queue.
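The following sketch illustrates the flow described in steps 1 to 6. It is a simplified, hypothetical rendering: the real OslController2 works with the Common Services IConfiguration, IAttributeTable and ActionCallback types, whereas here plain maps, a CountDownLatch and an invented ChildSubmitter interface stand in for those services.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

/** Simplified sketch of the OslController2 doAction flow (not the real API). */
public class ForwardingControllerSketch {

    /** Stand-in for asynchronous submission to a sub-controller. */
    public interface ChildSubmitter {
        void submit(String child, Map<String, String> config, Runnable onDone);
    }

    private final String myName;          // e.g. "atst.tcs"
    private final ChildSubmitter submitter;

    public ForwardingControllerSketch(String myName, ChildSubmitter submitter) {
        this.myName = myName;
        this.submitter = submitter;
    }

    /** Steps 1-6: split, forward, then stay "busy" until every child is done. */
    public String doAction(Map<String, String> config) throws InterruptedException {
        // Step 3: group attributes by the immediate child named after this component.
        Map<String, Map<String, String>> perChild = new HashMap<>();
        for (Map.Entry<String, String> att : config.entrySet()) {
            String rest = att.getKey().substring(myName.length() + 1);
            int dot = rest.indexOf('.');
            String child = (dot < 0) ? rest : rest.substring(0, dot);
            perChild.computeIfAbsent(child, c -> new HashMap<>())
                    .put(att.getKey(), att.getValue());
        }

        // Steps 1 and 4: the "queue", here a status map plus a latch of pending children.
        Map<String, String> status = new ConcurrentHashMap<>();
        CountDownLatch pending = new CountDownLatch(perChild.size());
        for (Map.Entry<String, Map<String, String>> entry : perChild.entrySet()) {
            String child = entry.getKey();
            status.put(child, "busy");
            submitter.submit(child, entry.getValue(), () -> {
                status.put(child, "done");   // step 6: callback marks the child done
                pending.countDown();
            });
        }

        // Steps 5/6: do not complete until every child has reported back; the real
        // class then combines the most severe status (done/abort) from the queue.
        pending.await();
        return "done";
    }
}
```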

4.8.2 HeadController

A HeadController extends OslController2 and adds the ability to initialize, startup, shutdown and un-initialize any number of sub-controllers. It is possible to wait (with a time out) for a sub-controller to complete its operation before issuing the operation to the next controller, or the HeadController can simply issue the operations in parallel. This class works by extending the OslController2 class and overriding the doInit, doStartup, doShutdown and doUninit methods. As can be seen from the class diagram, it contains an attribute table called _subcomponents that is used to store the names of all of the sub-components it controls. This table is populated during the initialization of the controller; any attributes passed in that have the prefix "subcomponent" are placed into this table. After that, the sub-components table is used whenever one of the four methods is invoked on this controller: basically, it will connect to each of its sub-components and invoke the same method.
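A minimal sketch of the HeadController idea follows, assuming sub-component names arrive as attributes prefixed with "subcomponent" and that each sub-component exposes an init operation. The SubComponent interface and connect method here are illustrative placeholders rather than the real Common Services connection service.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Simplified sketch of a HeadController driving lifecycle calls on sub-components. */
public class HeadControllerSketch {

    /** Placeholder for whatever interface a connected sub-component exposes. */
    public interface SubComponent {
        void init(Map<String, String> attributes);
    }

    private final List<String> subcomponents = new ArrayList<>();

    /** Record every attribute prefixed "subcomponent" as a sub-component name. */
    public void configure(Map<String, String> attributes) {
        for (Map.Entry<String, String> att : attributes.entrySet()) {
            if (att.getKey().startsWith("subcomponent")) {
                subcomponents.add(att.getValue());   // e.g. "atst.tcs.tpk"
            }
        }
    }

    /** Invoke init on every registered sub-component, here sequentially. */
    public void doInit(Map<String, String> attributes) {
        for (String name : subcomponents) {
            SubComponent sub = connect(name);
            sub.init(attributes);
            // The real class can either wait (with a timeout) for each one to
            // finish or fire the operations off in parallel.
        }
    }

    /** Placeholder for the connection service lookup. */
    private SubComponent connect(String name) {
        return attrs -> System.out.println("init " + name);
    }
}
```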


4.9 LOGGING

The TCS will use the ATST Common Services Log Service [23] to log its system activity. In accordance with the ATST Common Services, the TCS will recognize messages of two types: status messages, which will always be logged, and debug messages, which will only be logged during diagnostic checks. Both types of message will be stored in a relational database. The ATST Log Service enumerates a number of different message categories. All TCS status messages will be placed in the default category whereas debug messages will be placed into the most appropriate category, e.g. flow, init, timer etc. For a full list see [23]. It is expected that the ATST component model (and its sub-class the controller model) will provide logging of such things as state and configuration changes, commands and configurations. The TCS will log all alarms, errors and warnings, and will make use of the debug level within the ATST log service to log its debug messages in various degrees of detail.

4.10 POINTING AND TRACKING

Note that most of the following sections have been extracted from versions of ATST Pointing and Tracking [25] with minor formatting differences. For full details of the pointing and tracking strategy used by the ATST TCS, and in particular the details of how to use the library sollib, refer directly to that paper. The pointing problem for ATST has two aspects:

1. Given the coordinates of a solar surface or coronal feature (the "target"), we want to control the telescope mount and movable optics so that the image of the feature appears in the right place in the focal plane.

2. Given the coordinates to which the mount and movable optics have been set, we want to know the solar coordinates imaged at a given place in the focal plane.

These two are complementary, linked by the same transformation chain and differing only in the direction in which the chain is traversed. The first aspect is about pointing and tracking; the second addresses displayed coordinates and World Coordinate Systems. The extreme ends of the chain, so far as this document is concerned, are (i) the heliographic longitude and latitude (Ψ, Φ) and (ii) the demanded coordinates sent to the telescope control system (TCS). There are a number of intermediate coordinate systems, some of which have uses in their own right. The following section deals first with the TCS pointing kernel, a relatively self-contained component from whose internal complexities the solar observer can be insulated. The next reviews the choice of supported coordinate systems, the dominating influence on the observer's perception of the ATST pointing controls. The final section discusses methods for generating the required solar ephemerides, both orbital and physical.

4.10.1 The Interface to the Telescope Control System

Because of the need to calibrate the pointing of ATST using stars, and to reduce development costs, it is proposed to use a proprietary telescope pointing kernel (TCSpk). This is principally intended to track celestial targets, and so solar observing will be treated as an extension, a specific case of tracking any solar-system target.


4.10.1.1 POINTING AT SOLAR TARGETS

The interface to the pointing kernel will be topocentric apparent place, continuously recalculated in order to generate the requisite non-sidereal tracking rates. The starting point for this calculation will usually be heliographic coordinates, but sometimes heliocentric coordinates. The pointing calculation will be quite rigorous, taking properly into account planetary aberration (i.e. light time) and diurnal parallax. Light deflection (i.e. the Sun's gravitational lens effect) will probably be neglected, as for solar features it is always below 4 mas, though it could be up to 1.75″ for a coronal feature distant from the Sun and seen against the limb.¹

Working in apparent (α, δ) has the disadvantage that the coordinates of the main object of interest, namely the Sun, are continuously changing. The motion in latitude could be reduced were the kernel to support ecliptic coordinates, and the longitude drift could be reduced by working in terms of the mean Sun. However, they would never quite go away: diurnal, monthly, synodic and annual terms would always be present, making such strategies less advantageous. Moreover, the rates in (α, δ) change so smoothly that keeping the coordinates and their rates of change up-to-date to the required accuracy is straightforward.

¹ As with other ephemeris minutiae, consistency with other solar observers will be an important consideration.
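As an illustration of why the smooth (α, δ) rates make continuous recalculation straightforward, the sketch below estimates the differential tracking rates by evaluating the apparent place at two nearby times; the SolarEphemeris interface is a hypothetical stand-in for whatever ephemeris/SLALIB-based routine actually supplies the topocentric apparent place, and one minute is just an illustrative sampling interval.

```java
/** Sketch: derive differential tracking rates from two apparent-place samples. */
public class ApparentPlaceRates {

    /** Hypothetical provider of topocentric apparent (RA, Dec) in radians at time t (TAI, MJD). */
    public interface SolarEphemeris {
        double[] apparentRaDec(double mjdTai);
    }

    /** Apparent place plus its rates of change (radians and radians per day). */
    public static double[] placeAndRates(SolarEphemeris eph, double mjdTai) {
        double dt = 1.0 / 1440.0;                      // one minute, in days
        double[] now = eph.apparentRaDec(mjdTai);
        double[] later = eph.apparentRaDec(mjdTai + dt);

        double dRa = later[0] - now[0];
        // Guard against the RA sample straddling the 0/2π wrap.
        if (dRa > Math.PI) {
            dRa -= 2.0 * Math.PI;
        } else if (dRa < -Math.PI) {
            dRa += 2.0 * Math.PI;
        }
        double raRate = dRa / dt;                      // finite-difference RA rate
        double decRate = (later[1] - now[1]) / dt;     // finite-difference Dec rate
        return new double[] { now[0], now[1], raRate, decRate };
    }
}
```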

4.10.1.2 TCSPK ARCHITECTURE

TCSpk is a suite of just over 50 ANSI C functions around which a telescope control system can be developed. It has been used on a number of projects including SOAR and LBT. The TCSpk functions implement the astrometric pointing-kernel part of a TCS in a rigorous, general and modular way, insulating the TCS designer from many intricacies. The algorithms used by TCSpk are described in Wallace (2002) [22]. Because TCSpk does a specialized job and is intended to be merely a part of a complete TCS, it is designed to place as few constraints on the TCS developer as possible. There is no preferred operating system or user-interface style, for example, and even the real time requirements can be met in a variety of ways. This flexibility will make TCSpk easy to integrate within an ATST control system design. For example, TCSpk lends itself to use in a C++ design, providing the mathematical algorithms without dictating the way they are grouped and interconnected. Experience with the SOAR and LBT control systems has provided many insights into the best ways of dealing with the object-oriented design issues. TCSpk uses the SLALIB/C library for all its positional-astronomy transformations and a subset of the TPOINT pointing-analysis software (see below) for calculating pointing corrections. Both are de facto standards, reducing maintenance concerns.

4.10.1.3 TPOINT

Pointing analysis will be via the proprietary TPOINT system. This is a straightforward command-driven system that can be used interactively or run automatically using scripts. The rigorous TCSpk transformations are built into TPOINT so that analysis and control are tautologically linked. TPOINT is used by most large professional telescopes.

4.10.1.4 SLALIB/C

The proprietary SLALIB/C positional-astronomy library comprises 182 functions coded in about 23K lines of ANSI C. Among other capabilities, the library can perform milliarcsecond predictions of line-of-sight to celestial targets.



It is central to both TPOINT and TCSpk. The library is completely platform-independent, and is compatible with applications written in either C or C++. The current development version of SLALIB/C additionally includes support for the IAU 2000 precession-nutation and Earth rotation models, in both classical (equinox-based) and new (CIO-based) forms. At present, the TCSpk software uses the more traditional equinox-based approach (and a different, but almost as accurate, precession-nutation model) and a decision has yet to be made on whether to introduce the new methods into TCSpk. The only practical difference this will make to ATST users is whether apparent right ascensions are recorded using the new zero-point, or the equinox, or both.

4.10.1.5 POINTING-KERNEL FEATURES

    It would of course be possible to write a simple ATST pointing kernel from scratch, using standard positional-astronomy transformations to compute mount encoder demands that correspond to the solar feature to be studied. However, starting with TCSpk should make this basic job much easier to do properly (and get right first time), while offering important additional capabilities, such as the following.

• All telescope control systems provide as their basic function the ability to point and track the mount in order to follow the sky coordinates of a specified target, in this case a solar feature. The ATST design will go a step further by additionally allowing the image position in the focal plane to be specified. With this explicit control over image (x, y), the ATST control system will deliver, as a matter of course, rapid transfer of images from acquisition device to instrument, fast dithering, precise blind positioning on slits and fibers and so on. In non-TCSpk systems, the way this is traditionally done is to make ad hoc changes to the pointing, either by perturbing the target (α, δ) or (Ψ, Φ) coordinates or by introducing spurious pointing-model offsets. A cascade of further ad hoc adjustments is then required, for example to correct the behavior of an autoguider. In contrast, the proposed ATST system has all these capabilities simply as part of the way the system works.

• The target image remains centered on an off-axis instrument "hot-spot" even when the rotator (which in the case of ATST includes the coudé floor) is turned.

• ATST's various moving optical elements will be treated not as free-standing devices but as components of an integrated pointing system. Limb-guiding while roaming or scanning will happen naturally, and, should it be required, bandwidth splitting between the slow mount and a tip/tilt mirror would be easy to implement.

• The ATST pointing kernel will have inbuilt handling of field orientation at Nasmyth and coudé foci. The fact that a given experiment is being done at, say, coudé will not, beyond cable wrap issues, be something the observer needs to be particularly aware of.

• For limb guiding, the pointing kernel will automatically take care of differential refraction and atmospheric dispersion.

• The kernel will provide for the accurate logging of target positions, and there will be support for World Coordinate System mapping.


• Control of instrument rotator angle will be handled rigorously, ensuring accurate results even near the zenith. The kernel reckons field orientation at the "pointing origin" (which is in general not at the rotator axis), enabling precise alignment between solar and instrument coordinates.

• The basic pointing transformations that the kernel uses are rigorous and glitch-free even near the zenith (and even for large mechanical misalignments, not that this will be a concern for ATST).

• High computational efficiency is built into the design of the kernel. On a typical PC, hundreds or even thousands of highly accurate astrometric transformations per second are possible, should the ATST control system design require it.

• After commissioning, the design of the kernel is such that the pointing model can be upgraded without writing new code. Moreover, because the calibration and operational implementations of the pointing model use identical code, there are no hidden sign errors or subtle distortions.

• The ATST control system can use as many simultaneous pointing models as necessary. This technique could support multiple foci, via switching or dichroics, and allow for geometrical misalignments and flexures peculiar to one part of the telescope: a limb guider, the occulter and so on.

    4.10.1.6 ARCHITECTURAL CONSIDERATIONS

    Designing a TCSpk-based ATST pointing control system is mainly a matter of managing a data context and reacting to events by introducing appropriate changes to it. The kernel design is essentially modeless, producing different effects as a consequence of the passing of time and the changing data context, not because set-piece maneuvers are carried out. The TCS designer has to address the following:

• Access to TAI date and time has to be provided. The GPS offset, and UTC leap seconds, are handled outside the kernel.

• Starting the system begins by setting up a data context containing the items needed to drive the kernel. This contains fixed site-specific information and locations for all the changing parameters.

• Among many other tasks, the user interface must provide ways for the operator to change the kernel's context, in particular by announcing a new target.

• The ATST system must schedule a call to the kernel's "slow update" function once every minute and a call to the "medium update" function once every 5 s or so, to refresh different parts of the context. The timing of these events is not critical, and the update frequencies are merely suggestions.

• The system must also call a "fast update" routine at, say, 20 Hz, to generate the tracking demands. Even this is not especially time-critical, because of the kernel's use of timestamps. However, a fixed and reliable update rate will make for easier fault diagnosis and generally smoother results.


    • In addition to satisfying the TCSpk housekeeping needs, the ATST’s slow, medium and fast updates will perform the solar coordinate transformations necessary to supply the updated topocentric apparent places and differential rates that drive the kernel proper. A cascaded series of ephemeris calculations (for example the heliocentric Earth position and velocity might be refreshed once a minute, the solar rotation perhaps every five seconds) will ensure th
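Purely as an illustration of the slow/medium/fast update scheme listed above, the sketch below schedules three periodic tasks at roughly the suggested rates using a standard Java executor; the Runnable bodies are placeholders for the real context-refresh and demand-generation calls into the pointing kernel.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch: slow/medium/fast update loops at the suggested (non-critical) rates. */
public class KernelUpdateScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService exec = Executors.newScheduledThreadPool(3);

        // Slow update: refresh long-timescale context roughly once a minute.
        exec.scheduleAtFixedRate(() -> System.out.println("slow update"),
                0, 60, TimeUnit.SECONDS);

        // Medium update: refresh shorter-timescale context every ~5 seconds.
        exec.scheduleAtFixedRate(() -> System.out.println("medium update"),
                0, 5, TimeUnit.SECONDS);

        // Fast update: generate timestamped tracking demands at ~20 Hz.
        exec.scheduleAtFixedRate(() -> System.out.println("fast update"),
                0, 50, TimeUnit.MILLISECONDS);
    }
}
```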