DICOM DATA ABSTRACTION INTERFACES
A CBSE Based Approach
By
Lolke B. Dijkstra
A DISSERTATION
Submitted to
The University of Liverpool
in partial fulfillment of the requirements
for the degree of
MASTER OF SCIENCE
26th March 2006
ABSTRACT
DICOM DATA ABSTRACTION INTERFACES
By
Lolke B. Dijkstra
In the clinical environment, DICOM is one of the most widely adopted standards for ensuring
connectivity and easing interoperability. The DICOM standard comprises the information objects
required to precisely record medical information, and the services available to these objects,
across a vast number of medical disciplines. DICOM precisely defines both the syntax and
semantics of DICOM objects and messages, but it is up to the application to decide how to use
and implement these and how to manage the objects. Although a wide variety of Open Source
toolkits for manipulating DICOM data exists, there is no uniform approach to handling
DICOM-related objects in applications. This causes DICOM-based domain objects to be treated in
an opportunistic way by different applications, even within the same institution.
This dissertation presents the results of applying a Component Based Development
approach to the DICOM-RT domain using CORBA. The design is based on a common
foundation framework whose generic nature makes it beneficial to many projects dealing with
families of components. In our project we created a series of DICOM data components based on
the DICOM real world model. To illustrate interoperability, we developed a WEB based prototype
using J2EE technology that allows browsing of DICOM studies. We used the AJAX pattern to
improve the responsiveness of the user interface.
We examine our approach and show how using the common foundation framework
benefits extensibility, dependability and maintainability. We also demonstrate how our extended
interface specification contributes to creating concise and precise component specifications.
DECLARATION
I hereby certify that this dissertation constitutes my own product, that where the language
of others is set forth, quotation marks so indicate, and that appropriate credit is given where I
have used the language, ideas, expressions or writings of another.
I declare that the dissertation describes original work that has not previously been
presented for the award of any other degree of any institution.
Signed,
Lolke B. Dijkstra
This dissertation contains material that is confidential and/or commercially sensitive. It is included
here on the understanding that this will not be revealed to any person not involved in the
assessment process.
ACKNOWLEDGEMENTS
During the years of working as a software professional, many people have influenced the
way I approach software development. My studies at the University of Liverpool have contributed
significantly to approaching software development from a broader perspective. Past experience
continues to influence my work, my dissertation being no exception.
My studies have not only had a profound impact on my mindset as an IT professional;
they have also required dedication and persistence from my family and me.
Here I would like to express special thanks to:
• My wife Karin, for her support; my daughter Anna Karlien (8), for her understanding;
and my son Lourens Jan Pieter (5), for having coped with it.
• My dissertation advisor, Lelia Livadas, for her continuous support and for being
persistent but considerate, and demanding, but to my benefit.
• Gail Miles, for doing such an excellent job on the Software Engineering course.
Lolke B. Dijkstra
TABLE OF CONTENTS
TABLE OF CONTENTS
LIST OF FIGURES

PART I - CONTEXT AND BACKGROUND

INTRODUCTION
    Component Based Development
    DICOM
    Project Definition
    Database Technology
    Component Model
    Common Framework
    Summary
DICOM-RT domain
    Introduction
    DICOM Information Model
    DICOM Real world entity model

PART II - REVIEW OF CBD LITERATURE

CBD Fundamentals
    Introduction
    Common Definitions
        Component
        Interface
        Service
        Operation
        Design by Contract
Component Specification
    Introduction
    Syntactical Specification
    Semantical Specification
        Pre- and post conditions
    Levels of Semantics
    Levels of Component Specification
    Extended Service Specification
Component Interactions
    Introduction
    Options for Inter-component interaction
        Option 1
        Option 2
        Option 3
    Design Decision – Inter-component interactions
    Design Decision – Data consistency

PART III - DATABASES AND PERSISTENCY

Persistency: Database Options
    Introduction
    Object Oriented Databases (ODBMS)
    Relational SQL Based Databases (RDBMS)
    Object Relational Databases (ORDBMS)
    Databases - Summary
    DICOM-RT data
    Conclusion
Strategies for RDBMS Interaction
    Introduction
    SQL based data access
    Data mapping – Table Mapper pattern
    Dedicated persistency layer
    CBD and Interface based data access
    Conclusion

PART IV - ARCHITECTURE AND DESIGN

Component Based Development and Quality
    Introduction
    Quality Attributes
    Non Functional Requirements
    Conclusion
Architectural Design Model
    Introduction
    Overview
        Database
        DataManager
        CORBA Server Components
            DICOMDSArchive
            DICOMDSServer
        WEB Application Components
        WEB Client Components
        StoreSCP
        Monitor
        NotificationManager
    Component View
        DICOMServices and DataServices
        DICOM Import
Logical Design Model
    Introduction
    Overview
    Data Primitives
    SQL Primitives
    Transaction Primitives
    Service primitives
        Generic Services Overview
        CRUD Services
        Generic Services
        Selective inclusion of operations
    Summary

PART V - EVALUATION

Evaluation Plan
    Introduction
        Qualitative Analysis versus Quantitative Analysis
    Dependability
    Extensibility and Maintainability
    Ease of Integration and Portability
    Performance and Concurrency
    Assessment
        Fault avoidance, maintainability and extensibility
        Fault tolerance (external) and detection
        Assessing ease of integration and portability
Evaluation Specification and Execution
    Interface Specification
        Level of Abstraction
            Technical
            Semantical
        Precise and Robust specification
        Consistency of Interface specification
        Conclusions
    Fault avoidance, Extensibility and Maintainability
        Introduction
        Analysis
            Defining a data module
            Consistency
            Defining additional services
            Providing low level access
            Complex transactions
            Component Services
        Conclusions and suggestions
Conclusions and Recommendations
    Introduction
    Evaluation Results
        Common Framework
        Interface Specification
    Conclusions
    Recommendations
Future Work
    Future Framework Extensions
        Concurrent transactions
        Portability
        SQL Primitives
        Exception Handling
        Extension of Interfaces
    Applying Component Based Development to other subsystems
    Scalability
    The Framework and Beyond
References Cited
WEB References
Concluding Remarks

APPENDICES
    Overview
Appendix A - SCREENSHOTS
    Name service
    DICOM archive
    DICOM data server
    DICOM web application
    Browser window
Appendix B - DATABASE DESIGN
Appendix C - DATA DICTIONARY
    Data dictionary
    Data mapping
Appendix D - STORED PROCEDURES
Appendix E - IDL
Appendix F - CLIENT DESIGN (AJAX)
    Interaction Model – Using XMLHTTP (AJAX)
    XML Response Messages
        Patient collection
        Selection Event (Patient, Study, Series)
        Patient Details
    Thumbnails and Preview
        Thumbnails
        Preview
    Web References
Appendix G - CLIENT HTML
    index.html
APPENDIX H - JAVA SCRIPT
    prototype.js
    image.js
APPENDIX I - CLIENT CASCADING STYLESHEETS
    IMAGESTYLE.CSS
    STYLE.CSS
APPENDIX J - WEB APPLICATION JSP
    patient_list.jsp
    thumbnails.jsp
    preview.jsp
APPENDIX K - FRAMEWORK DETAILS
    SQL Primitives
        SQL Statement
        SQL Insert Statement
    Transaction Primitives
APPENDIX L - C++ DATA SERVICES SUBSYSTEMS
LIST OF FIGURES
Figure 1 - DICOM Information model (fragment)
Figure 2 - DICOM Real world model (fragment)
Figure 3 – OMG-IDL Metamodel
Figure 4 - Component Specification
Figure 5 – OMG-IDL Fragment of Patient Module
Figure 6 – Formal Description of Substitutability (Findler et al., 2001)
Figure 7 – Extended Service Specification
Figure 8 - Type definitions
Figure 9 – Interaction II
Figure 10 - Interaction III
Figure 11 - Overview
Figure 12 - DataManager details
Figure 13 - Notification Manager
Figure 14 - Component View
Figure 15 - Import Process
Figure 16 - Design Model Overview (truncated)
Figure 17 - Abstraction Layers
Figure 18 - Detailed Design Model
Figure 19 - Data Transfer Object
Figure 20 - Data Record Family
Figure 21 - SQL generation classes
Figure 22 - Transaction CRUD primitives
Figure 23 - Generic Data Services
Figure 24 - CRUD Services
Figure 25 - Generic Services
Figure 26 - Selective Inclusion
Figure 27 - Operations for Objects with an ID
Figure 28 - Operations for Objects with a UID
Figure 29 - DICOM Object
PART I - CONTEXT AND BACKGROUND
In the Introduction we present an overview of the project, motivate the need for such a
project within the DICOM domain, and summarize the choices made during design. The section
DICOM-RT domain briefly introduces the concepts and model underlying the DICOM-RT domain.
CHAPTER 1
INTRODUCTION
Component Based Development
Component Based Development (CBD) and Component Based Software Engineering
(CBSE) constitute a modern methodology that aims to address many of the complexities of
developing intricate software systems. The promises of CBD are relatively well known:
increased reusability, maintainability, replaceability, manageability and cost effectiveness, to
name a few. In this project we show how to deliver on these promises using a CBSE approach,
and how to tackle the underlying challenges in a practical and repeatable manner. We chose
to fully encapsulate persistency by applying a component based design approach. Using our
extended service specification, the coupling between the consumer (the application) and the supplier
(the persistency layer) becomes precisely defined.
DICOM
DICOM (Digital Imaging and Communications in Medicine) is the de facto standard
for the exchange of medical information, notably imaging data. Its primary uses are to store and
exchange information related to patient treatment and diagnostics. Although a wide variety of Open Source and commercial toolkits for manipulating DICOM data exists, there is a lack of a uniform approach to handling DICOM related objects in applications. This causes DICOM based
domain objects to be treated in an ad hoc manner by different applications even within the same
institution.
PACS (Picture Archiving and Communication System) systems are dedicated to long term storage and provide query/retrieve services in a DICOM networking environment. Although a
PACS system constitutes a crucial element in a DICOM networking environment, applications will
generally need to add functionality that goes beyond the support of DICOM and thus PACS
systems. Because of this, applications need to deal with different types of data in a different
manner. Having a consistent approach for sharing data amongst applications benefits
applications that use both DICOM and additional related data.
Project Definition
This project defines a set of interfaces for DICOM within the RT (radiotherapy) domain, enabling applications within it to share information in a consistent and managed manner. The design of the interfaces was founded on a common framework encapsulating access to widely available and standardized RDBMS technology. An important benefit of such a common framework is that it serves as the foundation for future extensions. Additionally, implementations of services can be updated relatively easily to utilize any standard RDBMS solution, since the changes involved are encapsulated within the framework. To validate our approach and the framework, a prototype was developed that implements part of the core DICOM real-world model applicable to the RT domain.
Database Technology
The choice of an RDBMS as opposed to an ORDBMS or ODBMS was deliberate and is explained in the section Persistency: Database Options.
Component Model
The emergence of component middleware has enabled architects to create distributed
architectures in which multiple components collaborate. Based on the following requirements and
observations, we selected the Object Management Group’s Common Object Request Broker
Architecture (CORBA) 2.x as the middleware for our design:
• Integration of components based on different implementation technologies (programming languages) and platforms is a common challenge within the DICOM-RT field.
• Design by contract is best realized using a supporting technology that enforces strict separation between interface and implementation. The CORBA 2.x specification defines the interface as the contract between consumer (client) and supplier (server).
• The need for a proven, industry standard technology. This is a requirement since the applications are used by numerous institutions.
Additionally, other software components using various technologies may be integrated by
designing component wrappers based on CORBA. This is particularly interesting when multiple
(possibly distributed) heterogeneous components need to be integrated. It is worth mentioning that the latter observation is based on our practical experience in the field.
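For illustration, the wrapper idea can be sketched in plain Java; all class and method names below are hypothetical and merely indicate the role such a wrapper plays (in the actual design the published interface would be defined in OMG-IDL and the wrapper implemented as a CORBA servant):

```java
// Hypothetical wrapper: clients depend only on the published interface,
// never on the legacy implementation behind it.
interface IArchiveWrapper {
    byte[] getImage(String seriesUid, int instanceNumber);
}

// Legacy component with its own, technology-specific API.
class LegacyArchive {
    byte[] fetch(String key) {
        return key.getBytes(); // stand-in for real image retrieval
    }
}

// The wrapper adapts the legacy API to the published interface; in a
// CORBA design this role is played by a servant delegating to the
// wrapped implementation.
class ArchiveWrapper implements IArchiveWrapper {
    private final LegacyArchive delegate = new LegacyArchive();

    public byte[] getImage(String seriesUid, int instanceNumber) {
        return delegate.fetch(seriesUid + "/" + instanceNumber);
    }
}
```

The client depends only on the interface, so the legacy API can change or be replaced without affecting consumers.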
Common Framework
The various data abstraction components have much in common; a common framework as the foundation for these components dramatically decreases the effort required to realize such an isomorphic family of components. Although the contents of this project are very specific, the approach followed is applicable to many projects dealing with families of components. A highly iterative design methodology is well suited to developing components since it allows testing and tuning a design at an early stage. In particular, when designing a family of components (such as Patient, Study, Series, Image) a lot of time can be saved by creating a reusable foundation from the start: the first component serves as a proof of concept for the foundation, and the foundation is tuned to meet all requirements during that component's development. Subsequently, new components are designed based on the common foundation, which is fine-tuned where necessary.
Summary
This project clearly demonstrates some important advantages of CBD, notably implementation technology independence (services implemented in C++ integrated into a Java application) and precise and verifiable contracts. Further, this project endeavours to demonstrate that:
• CBD is well suited for encapsulating part of a domain.
• Precise and verifiable specifications of interfaces can be achieved using OMG-IDL (Interface Definition Language) for the syntactical specification, annotated with semantics expressed in pre- and post-conditions and exceptions indicating error conditions (see: Extended Service Specification).
• A common framework as the foundation for an isomorphic set of components decreases the effort required to realize such components. Because of the inherently high level of reuse, the resulting design provides increased maintainability.
• Time to market of additional data components is dramatically decreased since a proven approach exists (in the form of concrete realizations).
• Consistency of data usage in the client applications is increased.
CHAPTER 2
DICOM-RT domain
Introduction
In this section we briefly summarize the definitions that are essential to understanding
how DICOM captures medical information. For an introduction to the DICOM standard the reader
is referred to Digital Imaging and Communications in Medicine (DICOM) Part I.
DICOM Information Model
Figure 1 - DICOM Information model gives an overview limited to the parts of the DICOM
information model relevant to our project. To capture medical information DICOM uses the
concept of the Information Object Definition (IOD). Normalized IOD classes include only those attributes inherent to the represented real-world entity; composite IOD classes, on the other hand, comprise information inherent to the various related real-world entities. DICOM uses this
composite model to ensure that information transmitted across system boundaries is
self-contained. This is essential because related information may not be (and probably is not) available on another system. In the words of the DICOM committee:
“Composite Information Object Classes provide a structured framework for expressing the
communication requirements of images where image data and related data need to be closely
associated” [DICOM Standard Part-I,15].
In practice this means that DICOM files sent between systems contain all related
information required to process the data.
Figure 1 - DICOM Information model (fragment)
DICOM Real world entity model
The DICOM real world entity model comprises many classes. The core classes relevant to the RT-image perspective are depicted in Figure 2 - DICOM Real world model (fragment). These are the classes required to represent the information coming in from a DICOM modality (e.g. a scanner).
Figure 2 - DICOM Real world model (fragment)
OVERVIEW
In CBD Fundamentals we deal with differences and inconsistencies among definitions and define a common vocabulary used throughout the project. The section Component Specification introduces the concepts of syntactic and semantic specification used to express the design of the components and interfaces in the project. Finally, Component Interactions compares different options for inter-component interactions and motivates the choices we made in our design.
CHAPTER 3
CBD Fundamentals
Introduction
One of the problems in the CBSE discipline is the existence of various, not fully mutually consistent, definitions of terms (such as component). This section clarifies the usage of terminology within the context of this project and relates it to existing definitions found in industry.
Common Definitions
Component
In industry many different definitions of component are given. Szyperski [1998, cited in
Crnkovic et al, 2002] defines a component by enumerating its characteristics:
“A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties.”
The most important common characteristic is that a component is a unit of software that provides services to its environment exclusively through one or more specified interfaces (the contract). A component never exposes its implementation (a characteristic known as black-box). The notion of design by contract is the cornerstone of the Eiffel methodology [Meyer, 2002]. Additionally, Szyperski (as well as others) mentions independence of deployment. Although CBSE practices (such as design by contract) can add significantly to the reusability and quality of software regardless of this characteristic, we believe its benefits are best demonstrated by including it.
In our design we mapped components to unique CORBA modules. This mapping implies
the use of different interfaces (see Figure 3 – OMG-IDL Metamodel). In our project Patient, Study,
Series, Image and Archive are examples of components.
Figure 3 – OMG-IDL Metamodel
Interface
The component provides services to its clients, grouped into interfaces. The interface
constitutes the contract between the consumer (client) and the supplier (component) of the
services.
Figure 4 - Component Specification
Figure 4 - Component Specification shows the Patient component with its interfaces IPatient
and IPatientS, implemented by respectively IPatient_i and IPatientS_i.
Service
A service is part of an interface. It specifies an operation in terms of pre- and post-conditions, input and output, and exceptions. It is part of the contract between consumer and supplier. Examples of services are getPNByOID and getDTOByOID.
PN getPNByOID( in OID poid )
raises(DataError, ObjectNotFound)
PatientDTO getDTOByOID( in OID poid )
raises(DataError, ObjectNotFound)
In the above example the input, output and return values are structures defined in
OMG-IDL. We often use the acronym DTO, which stands for Data Transfer Object [Fowler,
2002]. DTOs group data such that related data can be transferred in one roundtrip.
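As an illustration, a DTO can be sketched in Java as a simple structure-like class; the field names below are hypothetical (the actual DTOs in this project are defined as structures in OMG-IDL):

```java
// Hypothetical PatientDTO: groups related patient attributes so that a
// client obtains them in a single roundtrip instead of one call per field.
final class PatientDTO {
    final String oid;          // internal object id
    final String patientName;  // DICOM PN value, e.g. "DOE^JOHN"
    final String birthDate;    // DICOM DA value, e.g. "19700101"

    PatientDTO(String oid, String patientName, String birthDate) {
        this.oid = oid;
        this.patientName = patientName;
        this.birthDate = birthDate;
    }
}
```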
Operation
An operation is the implementation of a service that is defined in the interface.
Design by Contract
The design by contract concept was first introduced by Dr. Bertrand Meyer (1991), the inventor of the Eiffel method. In the chapter Component Specification, section Extended Service Specification, we illustrate how we applied Design by Contract in our design.
CHAPTER 4
Component Specification
Introduction
The advantages of CBD are relatively well known. However, without a clear means to specify what a component delivers and under what conditions it operates, these proclaimed advantages remain promises only; what is needed is a concise and precise specification of the component’s behaviour. An adequate specification of a component comprises at least two parts: a syntactical and a semantical part. According to Lüders, “it is widely acknowledged that semantic information about a component’s operations is necessary to use the component effectively” [Lüders et al, 2002] (see: Levels of Component Specification). In this section we present the concepts and conventions that we used to specify our components.
Syntactical Specification
We express the syntactical specification using OMG-IDL, e.g.:
PN getPNByOID( in OID poid ) raises(DataError, ObjectNotFound)
This specification defines the method’s full signature: its input and output parameters, return values and possible exceptions. See Figure 5 – OMG-IDL Fragment of Patient Module.
Figure 5 – OMG-IDL Fragment of Patient Module
Semantical Specification
Semantical Specification specifies what a service does (see: Figure 7 – Extended Service
Specification).
Pre- and post conditions
Preconditions specify the conditions under which a component can be assumed to keep its contract. Pre- and post-conditions are part of the semantical specification.
Levels of Semantics
In Semantic Integrity in Component Based Development [Blom Nordby, 2002] Eivind
Nordby and Martin Blom define five levels of semantics: no semantics, intuitive semantics,
structured or pragmatic semantics, executable semantics and formal semantics.
Our notation is best classified as structured semantics. The syntax of our specification neither imposes knowledge of nor relies on a specific specification language; obviously this makes it more accessible to human readers. However, there is a trade-off: if automatic parsing of semantics is required, a formally defined specification language (such as OCL) should be used [Blom, 2002].
Levels of Component Specification
In Making Components Contract Aware, Beugnard et al. [1999] describe four levels of specification:
1. Basic or syntactic
2. Behavioural (semantic)
3. Synchronization (between method calls)
4. Quality of Service (adaptability of QoS)
It is clear that we require both specification levels one and two: we must know how to invoke an operation (1) and we need to understand what it does when we do, and under which conditions (2). Level four (QoS) requires a supporting component framework and runtime environment and considerable additional effort throughout all phases of the development cycle (specification, design, implementation and test). Level four is mainly concerned with runtime adaptability of QoS. We have not found any requirements that justify the additional effort required for level four in our project. Whether we require level three depends on the runtime perspective of our application and the design of our components. Specifically we need to
answer the following question: are component instances shared amongst concurrent threads or clients, and if they are, is there any risk related to data inconsistency?
Firstly, it is important to note that all of our components are stateless; secondly, all of our data services are transactional. Clearly, in our design concurrency does not impact data integrity. Although we cannot exclude that situations exist in which concurrency plays an important role (performance), we foresee that the level of concurrency will be modest (see: Quality Attributes). Additionally, substitutability of our implementation allows us to upgrade our components to a more restrictive specification in the future - known as Liskov’s substitution principle [Findler et al, 2001, Lüders et al. 2002, Nordby et al. 2002] - should the need arise. As a consequence there is no immediate need to specify at the synchronization level, since we cannot justify the additional design effort based on our current requirements.
Figure 6 – Formal Description of Substitutability (Findler et al, 2001)
Extended Service Specification
In the IArchive::getPreview service, the syntactical part of the specification is
expressed in IDL:
ImagePixelDTO getPreview(
in OID seriesid, in IS instanceNum,
in SIZE sz, in Q quality )
raises(DataError, ObjectNotFound, InvalidInput);
In Figure 7 – Extended Service Specification we illustrate how we annotated the IDL
description to specify its semantics.
Figure 7 – Extended Service Specification
Not only do we use pre- and post-conditions to specify the contract, we also specify what happens when the pre-conditions are not satisfied. In the example above, not keeping pre-condition 1 or 2 could simply result in an exception indicating contract breach; however, we extended the specification to indicate which part of the contract was broken. This approach is inspired by the discussion of weak contracts in relation to external errors [Nordby, 2002]. Nordby distinguishes internal errors (caused by incorrect behaviour of the system itself) from external errors (caused by incorrect behaviour of the actors of the system) and relates these to strong and weak contracts. Weak contracts should be used where external actors interact with the system, thus making the system more tolerant and robust. In fact, the notion of a weak contract formally removes the issue of misuse, since it moves the responsibility for checking the pre-conditions from the caller to the callee. In our specification we use pre-conditions to indicate under what precise conditions the operation can be expected to produce a normal result and under what conditions it produces an exception. In the above example the ObjectNotFound exception is only thrown if precondition 1 is not respected, whereas InvalidInput is thrown if the input parameter ranges are invalid (that is, not compliant with precondition 2). Finally, DataError is used to communicate remaining sources of exceptions (such as a broken connection to the database or a framework error). Generally DataError is used to trap external errors, but it may incidentally report unexpected exceptions from the framework.
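To make the weak-contract style concrete, the following Java sketch (with hypothetical names, loosely modelled on the getPreview specification above) shows the callee checking each precondition itself and mapping each violated precondition to a distinct exception:

```java
// Hypothetical weak-contract sketch: the callee checks the preconditions
// and reports which part of the contract was broken via distinct exceptions.
class ObjectNotFound extends Exception {}
class InvalidInput extends Exception {}

class PreviewService {
    static final int MAX_SIZE = 1024; // assumed upper bound for precondition 2

    static String getPreview(java.util.Set<String> knownSeries,
                             String seriesId, int size)
            throws ObjectNotFound, InvalidInput {
        if (!knownSeries.contains(seriesId)) {
            throw new ObjectNotFound();            // precondition 1 violated
        }
        if (size <= 0 || size > MAX_SIZE) {
            throw new InvalidInput();              // precondition 2 violated
        }
        return "preview:" + seriesId + ":" + size; // normal result
    }
}
```

Because the callee performs the checks, a caller that passes bad input receives a precise diagnosis instead of undefined behaviour.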
To define the meaning of parameters and results even more precisely, where possible we used a direct mapping of interface types to the DICOM domain, and we defined additional types where no mapping was possible.
Figure 8 - Type definitions
In [Blom et al, 2002] semantic integrity of a software system is defined as: “the degree to
which its semantic properties are preserved”. Not respecting the specification is considered a
violation of the contract.
CHAPTER 5
Component Interactions
Introduction
Data components and their interfaces encapsulate a concept. The interfaces are
completely independent of their implementations [Szyperski 1995]. The data model comprises the
entities that represent these concepts in the database. The relationships in the underlying data
model need to be somehow reflected in the data components. To increase modularity and
reusability of components we modelled inter-component relationships (inherent relationships
between two concepts) explicitly in the interfaces. The components model a concept and are
context independent; they are reactive and possess no knowledge of the collaborations that they
take part in. Joon-Sang Lee and Doo-Hwan Bae stress the importance of this separation in their
proposal of a collaboration-based framework [Joon-Sang2002] for developing component-based
software.
In this chapter we discuss three different possible options to model interactions involving
multiple components.
Options for Inter-component interaction
Option 1

IPatient ipatient = IDicom::getPatient( in: oid )
ipatient.getStudies( out: sequence<DTOStudy> )

Code fragment 1
In Code fragment 1 the IPatient interface represents an instance of Patient. The client requests the associated studies from the patient. Although at first sight it may seem attractive to ask the patient what studies it has undergone, the result is that Patient has to deal with both the Patient and Study concepts. When this design is consistently applied to the model, many of the concepts become interwoven (tightly coupled). Preventing multiple components from becoming concerned with the same concept is important to our design, because duplication of concepts negatively impacts maintainability.
Option 2

Figure 9 – Interaction II illustrates a collaboration involving IPatient and IStudy. The interaction respects the separation between the Patient and Study concepts.

Figure 9 – Interaction II

IStudy istudy = IDicom::getIStudy()
IPatient ipatient = IDicom::getIPatient()
ipatient.getStudyIDs( in: patientOID, out: sequence<studyOID> )
FOR ( ALL studyOID IN sequence )
LOOP
istudy.getStudy( in: studyOID, out: DTOStudy )
END LOOP

Code fragment 2

In Code fragment 2 DTOStudy is a data transfer object for Study. PatientOID is the key that represents the relationship between the Patient and Study relations. This key represents the minimal coupling between the two concepts.
Option 3

In the final example, Code fragment 3, the Study component collects the studies.

IStudy istudy = IDicom::getIStudy()
istudy.getStudiesForPatient( in: patientOID, out: sequence<DTOStudy> )

Code fragment 3
Figure 10 - Interaction III
Design Decision – Inter-component interactions
As discussed, option 1 implies that both the Patient and Study components have to deal with the Study concept. Maintainability is clearly negatively impacted if we allow such multiple dependencies. In option 2 we combine the two concepts at the controller level. As a result the concepts can be dealt with independently of their usage. Consequently the concepts become more reusable. In option 3 the controller delegates the get-studies-for-patient operation to IStudy; as a result the controller layer has become redundant (in fact option 3 can be regarded as an optimization of option 2). In the resulting design both components deal with only one concept, dependencies (between concepts) have moved to the interfaces of the components, and reusable services are modelled at the level of the reusable concepts they relate to. We consider this a clear advantage and thus based our designs on option 3.
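A minimal Java sketch of the option 3 design (all names illustrative; the real interfaces are specified in OMG-IDL) shows how the Study component owns every operation returning Study data, keyed by the patient OID, so the Patient component never touches the Study concept:

```java
// Hypothetical option 3 sketch: the Study component owns the
// "studies for patient" lookup, so Patient never deals with Study data.
interface IStudy {
    java.util.List<String> getStudiesForPatient(String patientOid);
}

class StudyComponent implements IStudy {
    // stand-in for the persistent Patient -> Study relation
    private final java.util.Map<String, java.util.List<String>> studiesByPatient;

    StudyComponent(java.util.Map<String, java.util.List<String>> studiesByPatient) {
        this.studiesByPatient = studiesByPatient;
    }

    public java.util.List<String> getStudiesForPatient(String patientOid) {
        return studiesByPatient.getOrDefault(patientOid, java.util.List.of());
    }
}
```

The only coupling between the two concepts is the patient OID passed as the lookup key.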
Design Decision – Data consistency
The persistency layer implements transactions and data access. To enforce data consistency, integrity constraints and additional rules were implemented in the RDBMS using triggers and stored procedures. We identified three main reasons for this design decision:
• Performance: since checks are performed where the data resides, less traffic and fewer interactions with the database are required.
• Clarity of Policy: all procedures related to database consistency are in one place; both database integrity rules and additional procedures are implemented in the database.
• Maintainability: procedures in the database are defined using a procedural scripting language, which allows changing policies without having to recompile and link any client program.
Additional reasons:
• Reusability: a general design policy is to promote reusable procedures to the lowest possible layer.
• Data Consistency: although components are provided to access the data, no component based approach can prevent applications from accessing the database directly (bypassing the component). If an application accesses the database directly, integrity is still guaranteed.
Data consistency is covered in detail in Appendix O.
OVERVIEW
In Persistency: Database Options we compare the different database models and explain the rationale for the choices we made to manage the DICOM-RT data. Strategies for RDBMS Interaction deals with the different design options for interacting with the database and studies the pros and cons of the various options in relation to our project.
CHAPTER 6
Persistency: Database Options
Introduction
In this section we briefly discuss the mainstream database options in industry and some
of their relevant characteristics. Given the mandatory limitations on the overall size of this report, instead of providing an in-depth analysis of the different qualities of these systems, we focus on providing sufficient context for making an informed decision within the specific context of the DICOM-RT domain.
Object Oriented Databases (ODBMS)
Persistency utilizing an ODBMS is one possible strategy. The major advantages of an ODBMS commonly mentioned are:
• Using an ODBMS the designer does not have to bother with the design of persistency, since it is solved by the ODBMS technology in a fully transparent manner
• Support for complex types or objects
Particularly the first point also constrains the design: it only holds when adding persistency directly to the business (or domain) objects, thus omitting a separate persistency layer; an approach that works if the business model is stable. In our experience it is the latter implicit assumption (stability of the business model) that often proves incorrect: generally the business model will change over time; more precisely, the structure of the business objects, their relationships and dependencies will evolve during the lifecycle of the model. In part this is caused by the fact that the design reflects not only the structural aspects of the model but also the behavioural ones, causing relationships between objects to change when collaborations amongst them are redesigned. A closely related, frequently encountered problem is the change of the inheritance structure when refactoring a design. As a consequence the model of the database keeps changing with it, resulting in complex upgrading procedures. Although to a certain extent upgrading issues also play a role in RDBMS (and of course ORDBMS) systems, it is the tight coupling between business objects and persistency that makes this type of problem particularly hard to deal with.
Another problem with ODBMS systems is that implementations are vendor dependent.
Relational SQL Based Databases (RDBMS)
The use of an RDBMS offers some clear advantages: such systems are widely available
and standardized on many different platforms. RDBMS are particularly good at managing
traditional data such as numbers and character strings. Additionally many current
implementations allow for object extensions. Integrity constraints can be defined using a standard design methodology, and additional procedures using procedural languages such as PL/SQL (or PL/pgSQL) can be developed to implement more complex integrity constraints, at the expense of sacrificing some (well localized) portability. Furthermore SQL provides a standardized and flexible
means of querying the database in a declarative manner. However there are several possible disadvantages to using SQL or ODBC as the application interface to the data:
• Notwithstanding the fact that standards are well-defined, different products implement different subsets of the standard or sometimes do not comply
• Using SQL causes applications to become tightly coupled to the underlying data model, making changes to the data model costly
• Application developers may access data in unintended ways, for example by implementing time consuming queries
• Data access is rather low level. The data structures do not map well to the objects that are managed by the application. In literature this phenomenon is often referred to as impedance mismatch.
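The impedance mismatch can be illustrated with a small Java sketch (column and class names hypothetical): relational data arrives as flat, loosely typed columns that application code must look up and convert by hand to obtain a typed domain object:

```java
// Hypothetical illustration of the impedance mismatch: a flat, loosely
// typed relational row must be converted by hand into a typed object.
final class PatientRow {
    final String name;
    final int studyCount;
    PatientRow(String name, int studyCount) {
        this.name = name;
        this.studyCount = studyCount;
    }
}

class RowMapper {
    // each column is looked up by name and parsed individually
    static PatientRow fromRow(java.util.Map<String, String> row) {
        return new PatientRow(row.get("patient_name"),
                              Integer.parseInt(row.get("study_count")));
    }
}
```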
Object Relational Databases (ORDBMS)
Some RDBMS systems provide extended and custom types and support for sub-typing. ORDBMS systems essentially combine characteristics of ODBMS and RDBMS systems. They are commonly implemented on top of an RDB engine. As a consequence such systems tend to support both SQL-99 and ODBMS features, providing ODBMS-like access and integration and SQL-99 access to the underlying data at the same time.
Compared to ODBMS systems, ORDBMS systems may seem very attractive; however, the bi-directional conversions between object representations and relational representations tend to degrade performance.
Databases - Summary
ODBMS are the first choice when applications use complex and custom data types that
are difficult to map onto tables. RDBMS systems excel in providing ad-hoc access to conventional
data. ORDBMS systems are excellent when integration of complex data with existing relational
data is required. ORDBMS systems also enable existing database engines to be utilized.
The use of both ODBMS and ORDBMS systems leads to designs in which the domain
model is tightly coupled with the database. This can result in complex upgrading issues when
changes to the domain model are required. RDBMS require the use of SQL to implement
persistency. As a result applications that directly use SQL need to know the data structure and
are therefore tightly coupled to the underlying data structure.
DICOM-RT data
The DICOM images which are generated by modalities in a clinical environment contain
both descriptive and picture data. It is the descriptive data that we need to access through
services. The original DICOM images need to be preserved; the DICOM standard does not allow DICOM images to be changed.
DICOM descriptive data comprises conventional data, and as such an RDBMS is very well suited for managing the storage of these objects. A second argument for the selection of an RDBMS lies with the organizations that use DICOM data: most clinics possess one or more RDBMS systems and have at least some knowledge of how to manage and maintain these systems. A third argument is the need to easily upgrade the database, a requirement that can be satisfied by separating persistency from the domain model, as we will see in the section Strategies for RDBMS Interaction.
Conclusion
In conclusion we summarize the rationale for choosing an RDBMS:
Since extensibility is one of the dominant quality attributes for our design, we have to decouple persistency and domain. Although strictly speaking separating persistency from the domain model does not limit us to using an RDBMS system, it is the only option that makes sense; using an ODBMS, such separation would run counter to one of the major advantages of such systems: persistency as an extension to the business objects that require it. Additionally, an RDBMS is well suited for managing (traditional) DICOM-RT descriptive data and is often available in clinical environments.
CHAPTER 7
Strategies for RDBMS Interaction
Introduction
In this section we discuss some of the strategies for designing persistency using an RDBMS.
SQL based data access
SQL stands out when it comes to performing ad-hoc queries, but there are some limitations that we should be aware of. Firstly, even though SQL is well standardized, its implementations differ amongst vendors. Secondly, SQL makes it possible to perform all sorts of ad-hoc queries, which is excellent for reporting purposes, but SQL access is limited in the extent to which it allows database access to be restricted. Thirdly, knowing the structure of the tables easily results in writing applications that are tightly coupled to the underlying data model; clearly a maintenance hazard.
Data mapping – Table Mapper pattern
Often in Object Oriented designs a separate abstraction layer is used to provide access to, and implement the mappings between, the relational data and the objects within the domain. Such designs typically rely on patterns to achieve the mapping. A clear advantage of the table mapper pattern [Fowler, 2002] is the localization of knowledge regarding the organization of the data and the isolation of database (datasource) dependencies; changes in the underlying data structure do not affect the domain objects. A disadvantage is that, in essence, the coupling between domain objects and persistency, however indirect and unidirectional, still exists: each domain object is mirrored in the mapping layer. As a consequence each change in the domain model leads to subsequent changes in the mapping layer. Additionally, each change in the relations in the database requires the mapping layer to be updated. Finally, with this approach it is common to combine persistency operations with business operations in the same business class. This has two potentially important architectural side effects:
• It limits the distribution of the involved objects
• It forces a one to one mapping between business object and unit of persistence
Both may conflict with requirements.
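A minimal Java sketch of the table mapper idea (names hypothetical; an in-memory map stands in for a real JDBC-backed table) shows how all knowledge of the table layout is localized in the mapper, so domain code never sees column names:

```java
// Hypothetical Table Mapper sketch: all knowledge of the PATIENT table
// layout lives in PatientMapper; domain code never sees column names.
final class PatientRecord {          // domain-side object
    final String oid;
    final String name;
    PatientRecord(String oid, String name) {
        this.oid = oid;
        this.name = name;
    }
}

class PatientMapper {
    // in-memory stand-in for the PATIENT table: oid -> patient_name column
    private final java.util.Map<String, String> table = new java.util.HashMap<>();

    void insert(PatientRecord p) {
        table.put(p.oid, p.name);
    }

    PatientRecord findByOid(String oid) {
        String name = table.get(oid);
        return name == null ? null : new PatientRecord(oid, name);
    }
}
```

Note the one-to-one mirroring of domain class and mapper class mentioned above: a change to PatientRecord forces a corresponding change in PatientMapper.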
Dedicated persistency layer
The Layers architectural pattern [Buschmann et al, 1996] is useful for separating persistency related responsibilities more firmly from domain layer responsibilities. The persistency layer provides the operations to manage object persistency, whereas the domain layer uses these operations. The operations are designed to fulfil all persistency requirements regardless of the user of the operations. This approach promotes consistency and reuse. Although the approach has many advantages, it does not enforce separation, and reuse is typically constrained by programming language boundaries.
CBD and Interface based data access
CBD takes separation one step further. Essential is the notion of contract; the
formalization of the relationship between supplier and consumer plays a central role. It is
important to note that there is a shift of focus: away from the application, towards components
providing reusable operations organized in coherent and precisely defined interfaces.
Formalization and enforcement are essential parts of CBSE and enable separation of the
component production and component assembly.
Conclusion
In this section we have examined several persistency strategies. Both the dedicated
persistency layer and the interface based data access allow a clear separation between domain
and persistency responsibilities; operations can be chosen according to architectural constraints
and requirements. CBD goes an important step beyond the capabilities of the layered approach; it
allows a strict separation and provides the means to enforce this separation. Additionally,
depending on the component model, data components can be reused regardless of the
programming language in which the components are developed, and may be distributed over
various nodes in the network.
DESIGN
Section Component Based Development and Quality presents the main arguments that
underpin our choice for a CBSE approach to persistency. In section Architectural Design Model
the focus is on architectural and component design and design trade-offs. Finally, section Logical
Design Model expresses the internals of the framework design.
CHAPTER 8
Component Based Development and Quality
Introduction
In the previous chapter CBD was compared to alternative persistency strategies. We
found that CBD was particularly effective when reusability and maintainability are major
constraints for the design. In this section we argue the case for using a CBD approach to
achieving persistency, based on an analysis of the underlying quality attributes. It is important to
realize that the various quality attributes constrain the design in often conflicting ways [Boehm
1978, cited in Barbacci et al 1995]. An appropriate system design therefore takes into account
all of the relevant quality attributes and tries to balance the design to guarantee adequate quality
in all required dimensions. We start by enumerating the QAs for our architecture and their
respective ratings. We also present the non-functional requirements in relation to the identified
quality attributes. Finally we show why the CBD based approach was the most appropriate for
our project.
Quality Attributes
Quality Attributes are used to motivate design decisions. The following attributes were
identified and weighed (1 low, 2 modest, 3 average, 4 important, 5 must have):
Ref.  Quality Attribute (QA)                                       Importance
1     Extensibility (quickly implement new services)               5
2     Affordability (affordable overall solution)                  2
3     Portability (between OS)                                     3
4     Interoperability (between programming languages)             4
5     Manageability (prevent unanticipated data usage)             2
6     Performance                                                  3
7     Replace-ability (replace implementation)                     4
8     Scalability                                                  3
9     Ease of integration (with existing software components)      4
10    Maintainability (ease with which fixes are applied)          4
11    Dependability [Sommerville, 2004]                            5
      (fault avoidance, tolerance and detection)
We did not do a survey amongst workers in the field; the ratings in the above table were
drawn from our experience in the field. To determine the correct weights in any particular case,
meetings with user representatives, possibly supplemented with surveys, will be required.
Non Functional Requirements
Ref.  Non Functional Requirement (NFR)                                        QA
1     Incorporate additional data services quickly and at low cost            1, 2, 11
      whilst minimizing the risk of introducing new errors
2     Standard solution available at low cost using existing                  2
      knowledge where possible
3     Need to serve customers on both Linux and Windows NT                    3
4     Guarantee data consistency between applications                         5, 11
      (enforcement of consistency constraints)
5     Process a large study (ca. 100 DICOM images) within 1 minute            6
6     Replace or upgrade implementation without affecting clients             7
7     Ease of integration with existing software assets                       4, 9
8     Low maintenance cost                                                    10
When examining the Quality Attributes it is clear that some QAs may conflict with others.
For instance, QA-1 and QA-2 may be conflicting: when developing software in an extensible
manner, initial development cost is generally increased. However, in our project we needed to
develop a series of isomorphic components, so the common foundations for our components
could be reused amongst the various components. Consequently the negative impact on
affordability (higher initial design cost for creating a common foundation) was outweighed by the
positive impact on extensibility (building additional components on a common foundation).
Conclusion
We think extensibility and dependability are of crucial importance to the community that
this project was targeted at. Further, we indicated that replace-ability, ease of integration and
maintainability are all important. In the health-care industry, where safety and accuracy are
paramount, affordability plays a modest role. In conclusion, we selected CBD because we believe
a component based approach to be very well suited to our project.
CHAPTER 9
Architectural Design Model
Introduction
In this section we present an overview of the subsystems that comprise the prototype and
introduce the high-level design concepts.
Overview
In Figure 11 - Overview we give an overview of the main components and subsystems
that comprise the system. Note that only the main components that are part of the prototype are
shown. In the remainder of this section we briefly introduce the components and explain the role
they play in the design.
Figure 11 - Overview
Database The database represents the DICOM data; it contains the Patient, Study, Series and
DICOM Image objects. The DataManager populates the database which is queried by the
DICOM Server CORBA components. PostgreSQL is the RDBMS managing the database. As
indicated the database may be deployed on a separate node. In the prototype we combined the
DB Server and DICOM Server nodes.
DataManager DataManager is the server side executable component responsible for processing
incoming DICOM data. It has the following responsibilities:
• It processes incoming DICOM data
• It stores the DICOM meta-data in the database
• It archives the DICOM images (DICOM Part-10 files)
For each DICOM series, the DataManager compresses and archives the DICOM images
in a specific subdirectory. Compression/decompression is performed by the Packer subsystem
(not shown in the diagram).
CORBA Server Components
DICOMDSArchive The DICOMDSArchive subsystem hosts the Archive component which provides
services to the client applications involving the archive. In the prototype only services related to
extraction of image data (the actual pictures) are provided. The Archive component only
requires access to the archive.
DICOMDSServer This subsystem hosts the Patient, Study, Series and Image components. The
interfaces of these components provide services for querying the DICOM-data. These
components only require access to the RDBMS.
WEB Application Components To illustrate interoperability, the primary client of the CORBA Server Components was
developed in Java. The web application was hosted by the Jetty (COTS) component, which is
a combined HTTP/Application Server. The web application consists of several JSPs and Servlets.
WEB Client Components The WEB client is a browser application which consists of HTML pages and client side
JavaScript. We followed the AJAX pattern [McCarthy, 2005] to increase responsiveness of the
WEB client.
StoreSCP To allow the DataManager to process DICOM studies sent by a DICOM modality (e.g.
MRI scanner, CT scanner, etc.) we used a DICOM StoreSCP COTS component (Figure 12).
This component was configured to store incoming DICOM studies in a directory in the local file
system.
Figure 12 - DataManager details
Monitor The monitor polls the directory in which StoreSCP stores incoming DICOM studies. We
used a polling interface to make it possible to feed DICOM studies to the DataManager
component either by manually copying DICOM files or by using the StoreSCP service configured
to store incoming studies into the common directory.
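The polling logic can be sketched as follows. This is a simplified illustration in the spirit of the Monitor: the directory listing is abstracted behind a callback, and the "study completely arrived" heuristic (a non-empty file set that is unchanged between two polls) is an assumption made for the example, not the prototype's actual rule.

```cpp
#include <functional>
#include <set>
#include <string>
#include <utility>

// Illustrative sketch of a directory-polling monitor (names are our own).
class Monitor {
public:
    using Listing = std::function<std::set<std::string>()>;
    explicit Monitor(Listing list) : list_(std::move(list)) {}

    // One poll cycle; returns true when a complete study is detected,
    // i.e. the incoming file set is non-empty and stable since the last poll.
    bool poll() {
        std::set<std::string> current = list_();
        bool complete = !current.empty() && current == previous_;
        previous_ = std::move(current);
        return complete;
    }

private:
    Listing list_;
    std::set<std::string> previous_;
};
```

In the prototype the poll would run on a timer and, on detection, publish an event via the NotificationManager rather than return a flag.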
NotificationManager A NotificationManager was used to decouple the DataManager and Monitor. It
implements the observer pattern [Gamma et al, 1995], enabling components to exchange
messages. There are no dependencies between the publishers and subscribers other than the
events that flow between them. The Monitor publishes the STUDY_RECEIVING_END event
(sending it to the event queue of the NotificationManager, which notifies its subscribers); the
DataManager receives the event and starts processing the study in response.
Figure 13 - Notification Manager
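The publish/subscribe mechanism behind the NotificationManager can be sketched as follows. This is a minimal, single-threaded illustration of the observer pattern; the prototype's actual event queue, threading and CORBA details are omitted, and the handler signature is our own assumption.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal sketch of the observer pattern behind the NotificationManager.
class NotificationManager {
public:
    using Handler = std::function<void(const std::string&)>;

    void subscribe(const std::string& event, Handler h) {
        handlers_[event].push_back(std::move(h));
    }

    // Publishers and subscribers share nothing but the event that flows
    // between them; the manager dispatches to all subscribers of the event.
    void publish(const std::string& event, const std::string& payload) {
        for (auto& h : handlers_[event]) h(payload);
    }

private:
    std::map<std::string, std::vector<Handler>> handlers_;
};
```

In the prototype the Monitor would publish STUDY_RECEIVING_END and the DataManager would be registered as a subscriber.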
Component View
The DICOMDSServer CORBA server application (Figure 11) is the home of the Data
Components (e.g. Patient, Study, Series and Image). The Data Components (CORBA)
were implemented (realized) by the generic DataServices implementation package (Figure 14 -
Component View).
The same pattern was followed for the Archive component (which is instantiated by the
DICOMDSArchive CORBA server application): The ArchivingServices package implements
(realizes) the DICOM Archive component.
Figure 14 - Component View
In our design the Data Components are used to access the data in a platform neutral
manner. Typically the Data Components may be invoked by a client application or by WEB
Application components such as a servlet or JSP. In our prototype we chose to implement a web
browser based application with only JavaScript and XHTML on the client. The WEB Application
Components use the CORBA interfaces to access the Data and Archiving services. The
DataManager component directly uses the classes provided by the framework to access the
archive and DICOM data.
DICOMServices and DataServices DICOMServices are the counterpart of DataServices; where data services provide
access to the relational data, DICOMServices parse and provide access to the equivalent DICOM
data.
DICOM Import DICOM Import implements the collaboration between DICOMServices and DataServices,
providing import functionality for DICOM data (see Figure 15 - Import Process). The activity
diagram summarizes how incoming studies are processed (not including the activities involving
StoreSCP).
Figure 15 - Import Process
The monitor continually waits for incoming Studies and once a study has completely
arrived it generates a notification which is consumed by the DataManager (see:
NotificationManager). The DataManager in turn handles the import of the study. The activity
Process DICOM data is in fact a recurring pattern: each DICOM entity (e.g. Patient, Study,
Series, Image) is read, transformed and stored in the database. Additionally the Process DICOM
data activity sets the required references between the entities.
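The recurring read/transform/store pattern can be sketched as follows. The types here are illustrative placeholders of ours, not the dissertation's classes; the real activity parses DICOM attributes and writes to the RDBMS, whereas this sketch only shows the shape of the per-entity step and the setting of parent references.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Placeholder for any DICOM entity (Patient, Study, Series, Image).
struct Entity {
    std::string uid;
    std::string parent_uid;  // reference set during import
};

// Stand-in for the database-backed store.
class Store {
public:
    void save(const Entity& e) { saved_.push_back(e); }
    std::size_t count() const { return saved_.size(); }

private:
    std::vector<Entity> saved_;
};

// One import step, reused for Patient, Study, Series and Image alike:
// read, transform (elided here), store, and set the parent reference.
Entity import_entity(Store& store, std::string uid, const std::string& parent) {
    Entity e{std::move(uid), parent};
    store.save(e);
    return e;
}
```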
CHAPTER 10
Logical Design Model
Introduction
In this section we explain the design of the foundation framework. To optimize reusability
of design the following design cycle was used:
• Implement foundation and first component
• Redesign (refactor) foundation to optimize reuse
Overview
In this overview we present the main components of the foundation framework.
Figure 16 - Design Model Overview (truncated)
As explained in section Component View the Data Components were implemented
(realized) by the DataServices implementation package. In this section we zoom in on the parts of
the framework relevant to the Data Components.
Figure 17 - Abstraction Layers
Figure 17 - Abstraction Layers more clearly emphasizes the hierarchical relationships
between the DataServices, Transaction and SQL packages and assigns the responsibilities to
architectural layers. PQXX is a third-party COTS component (library) which abstracts the
PostgreSQL database. A more detailed view on the framework is presented in Figure 18 -
Detailed Design Model where we show the Generic Services, CRUD Services implementation
packages. Transaction, Generic Services and CRUD Services form the foundation on which Data
Services are implemented. The Data Services (implementation) package above is not to be
confused with the interfaces that are exposed; the interfaces of the data access components
were defined using IDL and the implementation of these interfaces was realized by using the
generic foundation classes presented in the above design.
We briefly mention the remaining architectural layer here: the database itself. As
motivated in Design Decision – Data consistency, we enforced additional data consistency
constraints using stored procedures in the database (see: Appendices D and O).
Figure 18 - Detailed Design Model
The data package provides the generic data representation. When data is
communicated between foundation classes (implementation classes), this is done in the form of a
specific Data Transfer Object (DTO) which extends the generic data_record that is part of this
package (Figure 19 - Data Transfer Object). In contrast, the interfaces of the Data Access
Components (exposed interfaces) were represented by specific structures, which allow the data
to be represented in a straightforward and type safe way in the client applications.
Figure 19 - Data Transfer Object
Essential to the design is the reliance on generics (C++ templates), which makes the operations
both generic and type safe. Specific entities have their own specific set of attributes; using a
common <key, value> type container to implement a record could have easily led to mistakes
in using attributes, thus seriously degrading the usefulness of the common foundation. In our
design each specific component (e.g. Patient, Study…) has its own specific nested record type
which extends a basic <key, value> pair structure. The use of generics allows the generic
primitives to access the nested specific types.
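A minimal sketch of this idea, a deliberate simplification of the framework's actual record design, shows how a field enumeration specific to each entity makes attribute access type safe: fields of different entities cannot be confused, because indexing is only possible with the entity's own enumeration.

```cpp
#include <array>
#include <cstddef>
#include <string>

// Generic record indexed by an entity-specific field enumeration.
// Traits supplies the nested types; the generic primitive accesses them.
template <typename Traits>
struct record {
    std::array<std::string, Traits::field_count> values;

    std::string& operator[](typename Traits::field f) {
        return values[static_cast<std::size_t>(f)];
    }
};

// Illustrative entity-specific traits (not the dissertation's definitions).
struct patient_traits {
    enum class field { id, name };
    static constexpr std::size_t field_count = 2;
};

using patient_record = record<patient_traits>;
```

Passing, say, a study field to a patient_record would fail to compile, which is exactly the class of mistake a shared <key, value> container would let through.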
Data Primitives
The data package defines the data::record type which was used to represent record
based data. The top two levels of the hierarchy represent the elements provided by the
framework, whereas the bottom level represents the extensions that implement the domain
specific types.
Figure 20 - Data Record Family
The sql::data_record is used by both the SQL primitives package and the Transaction
primitives package. The bottom level defines specific default initialization of fields and checks
consistency between the number of specific enumeration elements and the number of column
names.
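The consistency check mentioned above can be expressed at compile time. The following sketch (with illustrative names of our own) rejects a mismatch between the number of enumeration elements and the number of column names when the code is built, so the error never reaches a running system.

```cpp
#include <array>
#include <cstddef>
#include <string_view>

// Illustrative traits for a Study entity: the trailing count_ enumerator
// tracks the number of fields, and a static_assert ties it to the column
// list, so adding a field without a column name (or vice versa) breaks
// the build rather than the database access.
struct study_traits {
    enum class field { uid, patient_uid, description, count_ };

    static constexpr std::array<std::string_view, 3> columns = {
        "study_uid", "patient_uid", "description"};

    static_assert(static_cast<std::size_t>(field::count_) == columns.size(),
                  "field enumeration and column list out of sync");
};
```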
SQL Primitives
The SQL family of classes encapsulates the generation of SQL statements. Each class
below encapsulates the code required to generate the respective SQL statement or clause. The
data_record is a structure which contains all information relevant to a record in the database.
In essence it is a collection of key - value pairs, which is used by the SQL primitives to generate
the content of the corresponding SQL statements.
Each primitive was designed as a function object [SGI]. On construction the table name is
passed to the sql::statement, so it knows what table name to use when it is executed. When
it is used, it is called with the relevant data needed to populate the query. The data
structure used for this purpose is a generic data_record structure; typically each clause
in the final statement has its own corresponding parameter in the function call operator. To
illustrate the approach we explore the sql::statement and sql::insert classes in detail in
Appendix K.
Figure 21 - SQL generation classes
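A simplified function-object version of an insert primitive might look as follows. This is a sketch in the spirit of the design, not the framework's actual sql::insert: proper value escaping is omitted, and a plain map stands in for the real data_record.

```cpp
#include <map>
#include <sstream>
#include <string>
#include <utility>

// Function object: the table name is fixed at construction, and the call
// operator receives the record data and produces the statement text.
class insert_statement {
public:
    explicit insert_statement(std::string table) : table_(std::move(table)) {}

    std::string operator()(const std::map<std::string, std::string>& rec) const {
        std::ostringstream cols, vals;
        bool first = true;
        for (const auto& [key, value] : rec) {
            if (!first) { cols << ", "; vals << ", "; }
            cols << key;
            vals << "'" << value << "'";  // NOTE: no escaping; sketch only
            first = false;
        }
        return "INSERT INTO " + table_ + " (" + cols.str() + ") VALUES (" +
               vals.str() + ")";
    }

private:
    std::string table_;
};
```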
Transaction Primitives
Basic transactions like insert, update, remove and select are provided by the transaction
layer and can be reused as building blocks for data services. The transaction layer relies on the
SQL package for generating SQL statements and builds on a set of transaction foundation
classes provided by the PQXX framework, which allows transactors (executors of transactions
that follow the command pattern) to be plugged in. The transaction layer implements this model
by providing specialized transactors to encapsulate either basic or composite transactions.
Figure 22 - Transaction CRUD primitives
Each transaction primitive (insert, update…) extends the pqxx::transactor (via
base_transactor) that is called by the PQXX framework. The transaction primitive knows what
table and data to operate on (set at construction time), and the PQXX framework calls the
function call operator, providing the database transaction as the input parameter. The transaction
primitive delegates SQL statement generation to the corresponding SQL primitive (for details
see: Appendix K).
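The collaboration can be sketched as follows. Note that the pqxx types are replaced here by simple stand-ins, so the sketch only illustrates the structure (command pattern, construction-time data, framework-invoked call operator), not the real libpqxx API.

```cpp
#include <string>
#include <utility>
#include <vector>

// Stand-in for the framework's database transaction type.
struct transaction {
    std::vector<std::string> executed;
    void exec(const std::string& sql) { executed.push_back(sql); }
};

// Stand-in for pqxx::transactor: the framework calls the call operator
// with the open transaction.
class base_transactor {
public:
    virtual ~base_transactor() = default;
    virtual void operator()(transaction& t) = 0;
};

// A transaction primitive knows its table and data at construction time;
// in the real design the SQL text would come from an SQL primitive.
class insert_transactor : public base_transactor {
public:
    insert_transactor(std::string table, std::string sql)
        : table_(std::move(table)), sql_(std::move(sql)) {}

    void operator()(transaction& t) override { t.exec(sql_); }

private:
    std::string table_;
    std::string sql_;
};

// Stand-in for the framework executing a transactor (cf. connection.perform).
void perform(transaction& t, base_transactor& op) { op(t); }
```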
Service primitives
This section presents two types of service primitive: generic and CRUD services. The
justification for these primitives was to prevent code duplication between the implementations of
the various data access components. Each of the primitives executes a query wrapped in a
database transaction.
Generic Services Overview Figure 23 - Generic Data Services shows which classes comprise the generic
dataservices. These classes are contained in the namespace dataservice.
Figure 23 - Generic Data Services
CRUD Services CRUD primitives are operations that handle create, retrieve, update and delete actions
on a relation. The CRUD family of classes provides type-safe access by using the
data::record<type> class to represent the data.
Figure 24 - CRUD Services
Generic Services Generic primitives are operations that are commonly used when accessing a database,
such as selection of all data that meets a particular criterion (implemented using a select query)
or updating or selecting a record using the primary key of the entity.
Figure 25 - Generic Services
Selective inclusion of operations To be able to selectively include common operations in our domain objects, we created
specific base classes that provide these operations. In this section we present the classes that
deal with behaviour common to a subset of the domain classes.
Figure 26 - Selective Inclusion
The interface of dataservice::object_with_id<T> provides access to objects that
have a unique OID (Figure 27).
Figure 27 - Operations for Objects with an ID
In Figure 28 we present the interface common to domain objects that carry a UID
(Patient, Series, Study):
Figure 28 - Operations for Objects with a UID
Finally class dataservice::dicom_object<T> (Figure 29) provides both of the
above interfaces and additionally conversion operations between UID and OID.
Figure 29 - DICOM Object
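A sketch of the selective-inclusion idea follows. It is a simplification of the framework's base class templates: the database-backed operations are stubbed out as simple member checks, and the domain class names only echo the ones above.

```cpp
#include <string>

// Each role is a small base class template; a domain class opts in to a
// common operation by inheriting the role that provides it.
template <typename T>
struct object_with_id {
    long oid = -1;
    bool has_oid() const { return oid >= 0; }  // real version queries the DB
};

template <typename T>
struct object_with_uid {
    std::string uid;
    bool has_uid() const { return !uid.empty(); }
};

// DICOM objects carry both identifiers and so gain both interfaces;
// the real dicom_object also adds UID/OID conversion operations.
template <typename T>
struct dicom_object : object_with_id<T>, object_with_uid<T> {};

struct Patient : dicom_object<Patient> {};
struct Image : object_with_id<Image> {};  // only plays the OID role here
```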
Summary
• Separation of responsibilities in architectural layers promotes reusability and extensibility
(e.g. Data primitives, SQL primitives, Transaction primitives).
• Data Transfer Object (DTO) is consistently used to communicate information between
components.
• Operations that apply to a subset of the classes are assigned to framework elements that
encapsulate the different roles (e.g. object_with_id, object_with_uid).
EVALUATION
Whereas section Evaluation Plan puts the evaluation into perspective and specifies what
we evaluated, section Evaluation Specification and Execution defines our evaluation approach; it
defines the data sets that we used for validation and the detailed specification of requirements
that our components were to meet. Section Conclusions and Recommendations summarizes the
findings and suggests possible areas of improvement, and section Future Work points out
directions for future work based on the findings.
CHAPTER 11
Evaluation Plan
Introduction
In our project DICOM Data Abstraction Interfaces – A CBSE Based Approach, we
applied Component Based Software Engineering to encapsulate part of the DICOM-RT domain.
We studied the specification of components and based our approach on design by contract
[Meyer, 1992] and the notion of weak and strong contracts [Nordby, 2002]. To clearly document
our specification we annotated the OMG-IDL (syntactical specification) with descriptions for each
respective input and output parameter using structured semantics (semantical specification). We
specified the contract by providing precise information on the conditions under which a service is
assumed to operate normally, and extended the specification to provide accurate information on
the cause of an exception in case of contract breach (based on Nordby's discussion of strong vs.
weak contracts in relation to internal and external errors). In Component Based Development
and Quality we provided an overview of the quality attributes of our system in relation to the
system's non-functional requirements and weighed their relative importance to our design. In the
design we have gone to considerable lengths to provide the best possible quality with respect to
consistency (5), extensibility (5), dependability (5), maintainability (4) and ease of integration (4)
(relative weight between parentheses). We also carefully considered portability (3) and
performance (3). We enumerated and highlighted the benefits of our approach in relation to the
aforementioned quality attributes. In order to decrease time-to-market (extensibility), promote
consistency and increase dependability, we based our internal design on a common
framework.
Section Evaluation Plan explains the evaluation plan of the components in relation to
the quality attributes that we identified and analyzed. It specifies what we evaluated and why;
how we executed the evaluation is covered in section Evaluation Specification and Execution. In
the evaluation we showed how our approach has benefited the quality attributes that we
identified. At the highest level of abstraction the following questions summarize what we
evaluated:
Q1. Are the components dependable?
Q2. Does the interface layer provide a sufficient level of abstraction?
Q3. Is the definition of interfaces precise, robust and consistent?
Q4. Does the framework allow easy extension and maintenance?
Q5. Is design and implementation consistent between interfaces?
Q6. Can the implementation be replaced without affecting the applications?
Q7. Can components be easily integrated?
Q8. Is the implementation portable?
To demonstrate extensibility and dependability we developed an isomorphic family of
components (Patient, Study, Series, Image) based on a common framework. We developed a
web-application prototype and a Java console test application which demonstrate ease of
integration and interoperability (Java/C++).
Qualitative Analysis versus Quantitative Analysis
Performance is relatively easily quantifiable. We did not rate performance relative to
other approaches or architectures, since this was not the focus of our evaluation. Instead we
verified satisfactory performance given the performance requirements that we defined.
Consistency of Interfaces
As we consistently applied common design patterns to all of our interfaces, consistency
of interfaces can be easily demonstrated by examining the IDL interface specification.
Ease of integration
To demonstrate ease of integration we developed a Java web application, a Java console
application and a C++ console application. We demonstrated integration and explained how it
was achieved. We did not quantify ease of integration, as it would require a comparison between
CORBA and other component frameworks.
Substitutability and Replace-ability
We covered substitutability in section Levels of Component Specification. Liskov’s
principle defines the rules (based on interface specification) that need to be followed for one
component to replace another. Replace-ability of a component’s implementation does not impact
the component’s interface and follows from abstraction (the implementation is totally hidden, only
the interface is exposed). Since following the rules for substitution and abstraction implies
substitutability and replace-ability respectively, we examined neither of these in our evaluation.
Dependability, extensibility and maintainability
Regarding dependability (except data consistency), extensibility and maintainability, we
discuss the strategies that we followed to the benefit of each of these. We explain our approach
by examining the code and highlighting how the choices we made relate to the aforementioned
quality attributes. To quantify these quality attributes we would need to:
• Select representative software projects
• Involve representative software professionals and have each of them rate the different
projects with regards to the relevant QAs
• Analyze and present the results
Since this is a project in its own right, it is clearly beyond the scope of our study. Due to time
constraints we limit ourselves to a qualitative analysis.
Dependability
Dependability is one of the most important quality attributes for our design. We have
pursued dependability using the following design strategies:
• Fault avoidance [Sommerville, 2004]:
- Internally: designing a common framework using type specific interfaces,
field identifiers and consistency checking to minimize programming faults. This is
important when additional services are developed (extensibility).
- Externally: creating a precise interface specification using easy to apply, consistent
patterns for exchanging data (DTO). This is important for clients of our
components.
• Fault tolerance and detection [Sommerville, 2004]: using weak contracts and
precise interface definitions, allowing the caller to detect misuse and possibly recover
from an error.
• Data consistency: we supplemented our data model with stored procedures
enforcing additional data consistency constraints (see: Appendices D and O).
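The fault tolerance and detection strategy can be illustrated with a hypothetical service. The contract_breach exception type, the service name and the bounds below are all examples of our own, not the components' actual interface; the point is that a precondition violation is reported with enough detail for the caller to detect the misuse and possibly recover.

```cpp
#include <stdexcept>
#include <string>

// Illustrative contract-breach exception carrying the cause of the error.
struct contract_breach : std::invalid_argument {
    using std::invalid_argument::invalid_argument;
};

// Hypothetical preview service: the UID must be non-empty (pre-condition 1)
// and the requested width must lie in [16, 1024] (pre-condition 2).
std::string make_preview(const std::string& image_uid, int width) {
    if (image_uid.empty())
        throw contract_breach("image_uid: must not be empty");
    if (width < 16 || width > 1024)
        throw contract_breach("width: expected 16..1024, got " +
                              std::to_string(width));
    return "preview(" + image_uid + "," + std::to_string(width) + ")";
}
```

A client that respects the pre-conditions gets the expected result; a client that violates them receives a descriptive exception instead of silent misbehaviour.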
Additionally we tailored our software development process:
• Iterative development and white-box testing: to make sure the main quality
requirements – dependability and extensibility – were met, the common framework
was developed, tested, corrected and tuned iteratively while developing the Patient
component. All identified defects were immediately fixed; fine tuning and optimization
of the design were done during subsequent development of the Study, Series and
Image components.
• Component (black-box) testing: for completeness we mention component testing.
Before integrating components into applications each component must be black-box
tested by members of a separate test team. The test team receives the components
together with their interface specification. Although we only performed limited
component testing, it is a mandatory step in a production environment.
Essential to the usefulness of a component is the quality of the interface specification. In
Levels of Component Specification we explained our choice to specify components at both the
syntactical and semantical levels. We carefully designed our interface specification to be both
unambiguous and forgiving. The approach we followed is inspired by design by contract. With our
interface specification we aim to increase software dependability and testability. In the
evaluation section we assess fault avoidance, fault tolerance and detection.
Extensibility and Maintainability
Both maintainability and extensibility are closely related to fault avoidance. Extensibility is
one of the key requirements for our design; using the framework it should be possible to
efficiently provide additional high-quality services and the framework should protect the
component programmer against common mistakes (fault avoidance). We explain the steps
required to implement a component and the creation of additional services. The evaluation
section is limited to a qualitative analysis.
Ease of Integration and Portability
As we found in sections CBD and Interface based data access and Component Based
Development and Quality, ease of integration has a direct relationship to Component Based
Development.
We developed a prototype in which we demonstrate both ease of integration and
interoperability between data components (supplier) written in C++ and a web application written
in Java using various data and archiving services (including image retrieval). Additionally we
wrote two console clients (one in Java, one in C++) for performance testing. We tested these
components both distributed and on the same node.
We are aware of the fact that this special case of interoperability may not be
representative for other cases. However, the ease with which we could achieve integration and
the fact that CORBA is a very mature cross-platform, language independent component
middleware at least indicate that interoperability and integration between other supported
programming languages as well as operating systems are possible.
Performance and Concurrency
Any component, in order to be of use, must meet the performance requirements of the
application that it is used in. However, since performance is not a key concern in our design, we
chose not to cover it here. We refer the interested reader to Appendix P – Performance and
Concurrency.
Assessment
Fault avoidance, maintainability and extensibility To assess fault avoidance, maintainability and extensibility we had to look at the
component's internals. Since fault avoidance, maintainability and extensibility are not
independent and relate to the same design, we examine the design and explain where it impacts
each of these quality attributes. In the next section we will focus on the Patient component and
discuss the various detailed design choices made. We will also highlight the flexibility of the
design to cater for additional services (extensibility) that are not common to other components.
Fault tolerance (external) and detection To assess our approach we validated a service using input somewhere in the middle of
its domain, at its domain boundaries, and outside its domain boundaries and examined the
response of the system in each case. As a direct consequence of applying design by contract
there are basically two distinct cases: either the programmer respects the pre-conditions of the
interface or violates them. The first case leads to the expected outcome (if not, the component
fails; this is known as failure [Meyer, 1992]); in the second case (client failure or misuse) the
component throws an exception (contract breach) indicating the nature of the error. In the event
of component failure the component also exits with an exception, possibly indicating the nature of
the error (by external cause). A third case of failure is misconfiguration. In this case a CORBA
exception is thrown (e.g. COMM_FAILURE).
We answered the following questions:
• Does the component behave as specified? (correctness)
• Does the component provide us with sufficient feedback to find out what went wrong
in case of misuse? (tolerance and detection)
• Does the component report an error in case of failure by external causes (e.g. an
archived object was deleted, a network connection fails, et cetera)? (detection)
We are aware of the fact that we cannot misinterpret our own specification; however, we
simulated misuse by providing invalid input to the component.
Assessing ease of integration and portability We demonstrated ease of integration of our components by examining the steps
required to use our data components from a Java servlet.
Due to the time constraints of the project we did not evaluate portability. We selected
C++ as the implementation language for our components based on the wide-spread use of the
language in the domain and the availability of proven COTS components for accessing DICOM
objects. We pursued portability by selecting COTS components which are available on both Linux
and Microsoft Windows™ platforms. Although our components are based on portable COTS
components, we know from experience – due to different levels of C++ standards compliance of
compilers – that C++ portability is best achieved by making it an integral part of the development
cycle.
CHAPTER 12
Evaluation Specification and Execution
Interface Specification
In this section we present the evaluation of one of the more complex services that we
defined (see Figure 7 – Extended Service Specification): the generation of a preview of a DICOM
image.
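To make the discussion concrete, the following sketch shows the general shape of such an extended (annotated) service specification. The parameter and exception names follow the evaluation results later in this chapter; the exact pre- and post-condition wording is an assumption, not the project's actual IDL:

```idl
// Sketch of an extended service specification (annotated CORBA IDL).
ImagePixelDTO get_preview( in long seriesid, in long instancenumber,
                           in unsigned long width, in unsigned long Q )
    raises( ObjectNotFound, InvalidInput, DataError );
// pre 1: (seriesid, instancenumber) identifies an archived image;
//        violation raises ObjectNotFound.
// pre 2: width and Q lie within the ranges of their typedefs;
//        violation raises InvalidInput, naming the offending attribute.
// post:  the returned ImagePixelDTO contains a preview of the image,
//        scaled to the requested width, with JPEG quality Q.
```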
Level of Abstraction
Technical
IDL is implementation technology independent, which guarantees a strict separation
between interface and implementation technology.
Semantical
The inputs were all defined in terms of attributes that are meaningful within the domain
and do not expose nor assume any implementation related information. The return type
ImagePixelDTO follows the DTO (Data Transfer Object) pattern that is consistently used
throughout the interface specification. The ImagePixelDTO was specified in the IDL
specification, and PixelData was specified as part of the typedefs (see Appendix E – CORBA IDL).
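As an illustration of the DTO pattern, the C++ mapping of such a type might look roughly like the following; the field names are assumptions, not the project's actual IDL:

```cpp
#include <vector>

// Illustrative sketch only: the actual ImagePixelDTO and PixelData are
// defined in the project's IDL; the fields below are assumptions.
typedef std::vector<unsigned char> PixelData;  // raw pixel buffer (typedefs)

// A DTO is a plain data structure: it carries domain attributes across the
// interface and exposes no behaviour or implementation details.
struct ImagePixelDTO {
    unsigned short rows;     // image height in pixels
    unsigned short columns;  // image width in pixels
    PixelData      pixels;   // pixel data for the image
};
```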
Precise and Robust specification
Whether or not an interface can be said to be precise and robust is a matter of definition.
We use the following definition: an interface is precise and robust if, for each possible input,
there exists one and only one possible output. The relationship between input and output is
precisely defined in the specification. As we explained we extended the interface to communicate
accurate error information where possible. We will illustrate what this implies for the above
service:
1. The input parameters satisfy both pre-condition 1 and pre-condition 2
2. The input parameters do not satisfy pre-condition 1
3. The input parameters do not satisfy pre-condition 2
4. The component fails
Cases 2 and 3 clearly indicate misuse, whereas case 4 indicates component failure. Case 1 is
defined as normal execution.
We used the Java console application to validate the interface.
Results:
Case 1
  Input:  seriesid = 5433, instancenumber = 4, size = 1024, Q = 100
  Output: image is saved into the local preview directory: /5433/4.jpg;
          image properties (1024x1024); size: 189 kB

Case 2
  Input:  seriesid = 5433, instancenumber = 3, size = 1024, Q = 100
  Output: ObjectNotFound exception: The requested object could not be located in the archive.
          Server message: get_preview( int seriesid, int instancenumber, unsigned width, unsigned Q )

Case 3
  Input:  seriesid = 5433, instancenumber = 4, size = 1025, Q = 100
  Output: InvalidInput: dicom::SIZE sz 1025

Case 3
  Input:  seriesid = 5433, instancenumber = 4, size = 1024, Q = 9
  Output: InvalidInput: dicom::Q quality 9

Case 4
  Input:  as in case 1, but object 1 was removed from the archive
  Output: ObjectNotFound exception: The requested object could not be located in the archive.
          Server message: get_preview( int seriesid, int instancenumber, unsigned width, unsigned Q )
Other cases:
Additional errors may occur due to misconfiguration or unavailability of services, for
instance when no connection can be made to the component (CORBA::COMM_FAILURE). This
is clearly not component failure nor is it misuse. The middleware is responsible for reporting these
types of errors.
Implementation Note:
All internal errors are guaranteed to be reported by one of the specified exceptions due to
a final catch clause in which unanticipated errors are caught and propagated over the interface
using the DataError exception:
catch( exception& e ) {
    throw dicom::DataError( instanceNum, get_message(DATA_ERROR, e) );
}
FINAL_CATCH( instanceNum )
First, unanticipated errors extending std::exception are converted; finally, all remaining
exceptions are caught by the FINAL_CATCH macro and converted to a dicom::DataError
reporting an error by unknown cause (e.g. "Unknown error in ARCHIVE component"). We have
never encountered this message, but it guarantees that no unexpected errors leak through the
interface.
Note: Doing this is a design choice. Alternatively we could have opted to leave the final
clauses out. Reasoning: If the caller does not misuse the component, the component must deliver
results as specified in the interface. If the component fails to do so, the component is in failure
and violates the contract anyway, so there is no point in trying to fix this. The CORBA
middleware will then catch the unexpected error and convert it into a CORBA exception. We
chose to separate CORBA related exceptions from exceptions caused by component misbehaviour.
Consistency of Interface specification
In this section we provide an overview of the set of rules that we used to consistently
design our interfaces. Note that we did not rate consistency (quantify it). However, we believe a
consistent interface definition follows from the set of rules that we consistently applied.
• Every basic DICOM type is mapped onto an IDL typedef following the DICOM standard (see: Typedefs).
• Each DICOM module (component) is mapped onto an IDL module.
• Separate interfaces are used for services returning a multitude of objects (mapped onto a sequence of DTOs) and for services operating on a single instance.
• Data transfer is always done through a DTO (Data Transfer Object).
• Each DTO is part of the module in which it is used, or part of the most specific common module if it is shared (e.g. ImagePixelDTO is part of module dicom, which is the parent module of all other modules; modules image and archive share ImagePixelDTO).
• We use a sequence<DTO> to return a collection of DTO objects.
• Input parameters are used to pass information into an object.
• Output parameters are used to get information from an object. If only one object is returned by a service, it is returned as the return value.
• Inout parameters are not used.
• All exceptions are part of the dicom module.
• The IDicom interface is the point of access to all dicom related components. This interface is registered in the naming service.
• The IArchiving interface is the point of access to all archiving related components. This interface is registered in the naming service.
• The related Patient, Study, Series and Image interfaces follow a common design philosophy: components deal with only a single concept (see: Design Decision – Inter-component interactions).
Conclusions
Although we did not attempt to quantify interface specification quality, we believe we
have sufficiently explained the choices that we made and how these contribute to precise,
robust and consistent interface specifications at a level of abstraction meaningful to the
component user.
Fault avoidance, Extensibility and Maintainability
Introduction
The framework was designed with fault avoidance, extensibility and maintainability in
mind. We have designed the framework to help the component programmer quickly create
additional reliable component implementations. Additional functionality can simply be added by
reusing generic services provided by the framework (illustrated by the operation
patient::getName). Each interaction with the database is wrapped inside a database
transaction, which is the best guarantee of a consistent state of the database. The framework
automatically provides commonly used functions (CRUD services, generic services, as well as
common operations based on OID and UID) for classes that extend it, and encourages reuse;
however, low-level access is still available should the need arise (illustrated by the operation
patient::getPNsForWorker).
Analysis
In this section we examine the framework at the code level. We look at the definition of
the patient module, consistency, defining additional services and finally using low-level data
access.
Defining a data module
Table 1 shows the definition of the Patient module and Table 2 the implementation; since all
modules follow the same simple and consistent pattern, we could equally well have used one of
the other modules.
Each of the modules (Patient, Study...) extends the framework and provides specific
initializers (record) and additional operations that are only relevant for that type.
Table 1 - definition of a new module (Patient)
1 #include "record.h"
2
3 namespace data
4 {
5 class EXTERN_DATASERVICES patient {
6 public:
7 static const char* table_name;
8 enum type {
9 id,
10 birthdate,
11 dicompatientid,
12 creationtime,
13 info,
14 givenname,
15 familyname,
16 middlename,
17 nameprefix,
18 namesuffix,
19 end
20 };
21 /// patient record
22 class EXTERN_DATASERVICES record : public ::data::record<type> {
23 typedef ::data::record<type> base;
24
25 public:
26 record( bool setDefaults = false );
27 record( const std::vector<patient::type>& types
28 , bool setDefaults = false );
29 record& operator += ( const type& t );
30 };
31
32 static std::vector<patient::type> get_columns();
33
34 static void getName( id_type patientid, patient::record& data );
35 static id_buffer getIDs();
36 static id_buffer getIDsForWorker( id_type workerid );
37 static managed_buffer<data::patient::record>
38 getPNsForWorker( id_type workerid );
39 static managed_buffer<data::patient::record>
40 getPatientsForWorker( id_type workerid );
41 };
42 }
Comments
7 Each of the objects is stored in a table in the database identified by table_name.
The framework uses it in the transaction primitives and SQL primitives.
8-20 An enumeration defines safe access to the columns. This type is used by the
framework to access the column names (generation of SQL)
22-30 A specific record type extends the generic record<type>. This nested type must
be named record as it is used by the framework. Its constructors allow specifying
default values for columns. The nested record is used by generic data services
and by CRUD services in the form T::record as well as by specific operations for
the module.
32 get_columns is used by generic services (e.g. to get all columns of a type in a
select) and in checking::consistency<T>::check()
34- Additional specific services may be defined here.
Note that the operations of the patient, study, series, etc. modules are static: all of the
functions are accessible without the need for an instance. This makes sense because the data
implementation class does not carry state.
Table 2 - implementing a new module (Patient)
1 #include <strstream>
2 #include "patient.h"
3
4 using namespace data; // from dataaccess
5 using namespace std; // from STL
6
7 /**********************************************************************
8 PATIENT RECORD: DEFINITION OF COLUMNS
9 **********************************************************************/
10
11 const char*
12 record<patient::type>::column_name[] = {
13 "patientid"
14 , "birthdate"
15 , "dicompatientid"
16 , "creationtime"
17 , "info"
18 , "givenname"
19 , "familyname"
20 , "middlename"
21 , "nameprefix"
22 , "namesuffix"
23 , 0 /* 0 is used as terminator */
24 };
25
26
27 //. define record and set key
28 DEFINE_RECORD( patient )
29 DEFINE_AUTO_INCR( patient, "patientid" )
30
/************************************************************************
PATIENT TABLE: DEFINITION TABLENAME AND COLUMNS
************************************************************************/
//. name of table
31 DEFINE_TABLE( patient )
//. all columns of this table
32 std::vector<patient::type> patient::get_columns() {
33 static vector<type> columns;
34 static MUTEX mtx;
35
36 if( columns.size() == 0 ) {
37 SCOPED_LOCK scoped_lock(mtx);
38 if( columns.size() == 0 ) {
39 columns.push_back( id );
40 columns.push_back( birthdate );
41 columns.push_back( dicompatientid );
42 columns.push_back( creationtime );
43 columns.push_back( info );
44 columns.push_back( givenname );
45 columns.push_back( familyname );
46 columns.push_back( middlename );
47 columns.push_back( nameprefix );
48 columns.push_back( namesuffix );
49 }
50 }
51 return columns;
52 }
/************************************************************************
PATIENT RECORD: IMPLEMENTATION (DEFAULTS)
************************************************************************/
53 patient::record::record( bool setDefaults /*= false*/ )
54 {
55 CHECK_CONSISTENCY( patient )
56
57 if( setDefaults ) { /* setting default values */
58 (*this)[creationtime] = now;
59 }
60 }
61 patient::record::record( const std::vector<patient::type>& types, bool setDefaults /*= false*/ )
62 {
63 CHECK_CONSISTENCY( patient )
64
65 for( vector<patient::type>::const_iterator i = types.begin(); i != types.end(); ++i ) {
66 (*this)[*i] = "";
67
68 if( setDefaults ) {
69 switch( *i ) {
70 case creationtime:
71 (*this)[*i] = now;
72 break;
73 }
74 }
75 }
76 }
Comments
To implement the patient module (cpp source) there are only a few things that need doing:
• Define the column names (lines 11-24)
• Define the table name (line 31)
• Define which column (if any) is auto-incremented by the database (line 29)
• Initialize the base record (line 28)
• Implement get_columns (lines 32-52)
• Implement the constructors of patient::record (lines 53-60 and 61-76)
Consistency
The CHECK_CONSISTENCY macro (lines 55 and 63) was provided to detect mismatches
between the defined column names and column ids (the enumeration).
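A plausible sketch of what such a check does (the real macro is part of the framework; this standalone version is an assumption) is to compare the 0-terminated column_name array against the enumeration, whose final member end counts the columns:

```cpp
#include <cstddef>
#include <stdexcept>

// Sketch (assumed implementation) of the consistency check performed by
// CHECK_CONSISTENCY: the 0-terminated column_name array must contain
// exactly as many entries as the enumeration, whose last member is `end`.
template <typename T>
void check_consistency(const char* const* column_name) {
    std::size_t names = 0;
    while (column_name[names] != 0)  // count entries until the 0 terminator
        ++names;
    if (names != static_cast<std::size_t>(T::end))
        throw std::logic_error("column names and column ids do not match");
}
```

A module whose enumeration gains a column without a matching name entry (or vice versa) then fails at the first record construction instead of generating malformed SQL later.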
Defining additional services
We now show how to extend the source code module to implement additional services.
The patient::getName operation retrieves the DICOM Patient Name object for a specific
patient identified by its patient id. Generally it is straightforward to define additional services. In
Table 3 we show the patient::getName service.
Table 3 - Implementing a specific service using the framework
1 void
2 patient::getName( id_type patientid, patient::record& data ) {
3
4 data::patient::record select;
5 select += data::patient::givenname;
6 select += data::patient::middlename;
7 select += data::patient::familyname;
8 select += data::patient::nameprefix;
9 select += data::patient::namesuffix;
10
11 data::patient::record where;
12 where[ data::patient::id ] = patientid;
13
14 managed_buffer<data::patient::record> output =
15 dataservice::generic< data::patient >::get_select_where(
16 select, where, data::patient::end );
17
18 if( output.size() == 0 ) {
19 strstream s;
20 s << "patient not found. patientid = "
21 << patientid << ends;
22 throw not_found( s.str() );
23 }
24
25 data = *output.begin();
26 }
2 id_type defines the type of a key. This is an elementary type (int)
4 Declaration of the result type for the query
5-9 Selection of the columns
11-12 Setting the selection criteria (specifying the where clause)
14-16 Executing the query (from generic services). The 3rd parameter specifies the order by
clause.
25 Copying the first row of the result set (containing only 1 row) to the output structure
(data)
Providing low level access
In this section we show how the component developer can create complex services for
which SQL generation cannot be delegated to the framework. Note that the transaction
framework is used as usual.
Table 4 – using low level data access
1 managed_buffer<data::patient::record>
2 patient::getPNsForWorker( id_type workerid ) {
3 pqxx::result r;
4
5 strstream s;
6 s << "SELECT P.givenname, P.middlename, P.familyname, P.nameprefix, P.namesuffix FROM patient P, "
7 << "(SELECT DISTINCT P.patientid FROM study S, patient P WHERE S.workerid = " << workerid << ") AS S1 "
8 << "WHERE S1.patientid = P.patientid ORDER BY P.patientid" << ends;
9
10 tx::exec_transaction(
11 tx::execute( s.str(), r )
12 );
13
14 managed_buffer<data::patient::record>
buffer( r.size(), r.size() );
15 managed_buffer<data::patient::record>::iterator data = buffer.begin();
16
17 for( size_t i=0; i< r.size(); i++ ) {
18 *data++ << r[i];
19 }
20 return buffer;
21 }
6-8 Here we specify a nested query, which the framework cannot generate.
10 Having specified the query, we execute it using the framework transaction
primitives.
17-19 Finally we collect the results in a managed_buffer of patient records.
Complex transactions
To define more complex transactions, a specialized combined transaction must be
defined directly extending the PQXX (COTS) framework. The combined transaction can then be
used in any place where simple transactions can be used (using the transaction primitives
provided by our framework). The following code fragment shows how the insertion of image data
and the thumbnail (BLOB) are combined.
Table 5 - Executing a combined transaction (dc_import.cpp)
/* execute combined transaction (image data + thumbnail) */
tx::exec_transaction(
image::insert( /* image::insert transaction */
dcm_to_data( data, record ), /* pour data into record */
thumbnail, /* thumbnail image data */
thumbnail_id, /* output the id of image */
r ) /* output the result */
);
Component Services
Finally we briefly look at the component service implementations. In our component
development approach the component service implementations are almost empty: they provide
only the glue between the implementation and the CORBA services provided by the
component; the service delegates the implementation to the corresponding data module (such as
patient above) or directly to the framework (Table 6: lines 5, 6).
Table 6 - Direct delegation to the framework
1 PatientDTO*
2 implementation::IPatient_i::getDTOByOID( OID poid ){
3 record_type data;
4 try {
5 dataservice::dicom_object< data::patient >::get_data(
6 poid, data );
7 return createPatientDTO( data );
8 }
9 catch( data::not_found& e ) {
10 throw dicom::ObjectNotFound( poid,
11 get_message(OBJECT_NOT_FOUND, e) );
12 }
13 catch( exception& e ) {
14 throw dicom::DataError( poid, get_message(DATA_ERROR, e) );
15 }
16 FINAL_CATCH( poid )
17 }
Conclusions and suggestions
Using the framework has the following consequences:
1. Component implementations follow a consistent pattern. As a consequence
extensibility and maintainability are increased.
2. Since the framework was designed to work with the nested record type,
common operations become automatically available through the framework
(CRUD and generic services). As a consequence, less code needs to be developed,
thus increasing maintainability and enabling rapid development of new data
components (extensibility).
3. Since the framework encapsulates SQL generation and provides a safe way of
accessing individual columns, once we have defined a module, implementing
additional services for that module becomes very easy to do and much less error
prone compared to directly coding SQL (extensibility, fault avoidance).
4. Additionally the framework checks consistency between the defined column names
and column ids (enumeration). (fault avoidance)
5. The framework encapsulates database interactions in transactions by extending
the PQXX transaction framework. (robustness, data consistency)
Fault avoidance is further improved by:
• The use of named table columns
CHAPTER 13
Conclusions and Recommendations
Introduction
In our project we applied Component Based Development to the DICOM-RT domain. In
Chapter 1 we talked about the promises of CBD. In chapter 8 we discussed the quality attributes
of our project and found CBD to be especially well suited to our project. During the project we
have visited the major aspects of CBD and how these contribute to software quality, and in
particular to the quality attributes of our design (dependability, extensibility and maintainability).
We explained
our framework approach and its central role to improving quality with regards to these quality
attributes. In the evaluation we assessed our project based on the following questions:
Q1. Are the components dependable?
Q2. Does the interface layer provide a sufficient level of abstraction?
Q3. Is the definition of interfaces precise, robust and consistent?
Q4. Does the framework allow easy extension and maintenance?
Q5. Is design and implementation consistent between interfaces?
Q6. Can the implementation be replaced without affecting the applications?
Q7. Can components be easily integrated?
Q8. Is the implementation portable?
Additionally we assessed data consistency, maintainability of stored procedures and
performance. In this final chapter we will give an overview of the results of our evaluation and
make recommendations for future work based on our approach.
Evaluation Results
In the previous chapter we showed how we used quality attributes (and related questions) to
drive the evaluation. We have seen how the different aspects of our approach impacted the
quality of the final product. In this section we shift perspective; we will summarize how our design
decisions influenced the quality of the software.
Common Framework
We showed how the common framework that we designed contributes to extensibility,
maintainability and dependability, all key concerns to our design. Here we summarize the most
important facts.
• Reuse and quality require consistent effort and an iterative design process: throughout the development cycle of our components (Patient, Study, Series, Image) we reused, refactored and fine-tuned our framework. Every cycle contributed to the usefulness of the framework.
• We separated different types of responsibilities into different architectural layers in our framework. As a result reuse of code increased and thus quality improved (reuse of tested software elements decreases the number of defects).
• We improved dependability and extensibility by using fault avoidance strategies (such as the use of typed identifiers to identify data fields, as opposed to strings).
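The effect of the typed-identifier strategy can be illustrated with a minimal sketch (not the framework's actual record class): with an enumerated column id, a misspelled field is a compile-time error, whereas a misspelled string key would fail only at run time.

```cpp
#include <map>
#include <string>

// Minimal illustration (not the framework's record class) of typed column
// identifiers: the compiler rejects a misspelled enumerator outright,
// while a misspelled string key would silently miss at run time.
enum class column { givenname, familyname };

typedef std::map<column, std::string> record;

// Look up a column value; an absent column yields the empty string.
std::string get(const record& r, column c) {
    record::const_iterator it = r.find(c);
    return it == r.end() ? std::string() : it->second;
}
```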
Interface Specification
We used a simple syntax (structured semantics) which allowed us to annotate the IDL
specification with semantics and error conditions (using pre- and post- conditions). We
demonstrated how the extended service specification provides detailed information to assess
the cause of an error (either due to component failure or misuse).
Conclusions
• CBD is very well suited for encapsulating (part of) a domain.
• Using a consistent set of design rules as the basis for interface design promotes consistency of interface specifications, resulting in increased usability and learnability.
• We demonstrated the extended service specification that we defined to be helpful in dealing with exceptional results, since it clearly relates exceptions to causes.
• We designed our components around the notion of reusable concepts and disallowed sharing of concepts amongst components. As a result our design possesses low coupling between the constituent components and excellent maintainability and reusability.
• Reliance on a component model such as the CORBA specification enforces strict separation between interface and implementation, which is of crucial importance in component based software engineering. Additionally, being implementation technology independent, it enables seamless integration between different programming languages, as we demonstrated in our proof of concept.
• The design of a common foundation framework as the basis for a collection of components benefits the quality, consistency and time-to-market of the components.
• Separating the logic (service realization) from the service implementation (glue) increases design consistency and helps avoid faults.
• Using an iterative design methodology benefits the usability and quality of the framework.
� The design of a common foundation framework as the basis for a collection of
components benefits both the quality, consistency and time-to-market of the components.
� Separating the logic (service realization) from the service implementation (glue)
increases design consistency and helps avoiding faults.
� Using an iterative design methodology benefits the usability and quality of the
framework.
Recommendations
• In a layered architecture: assign clear responsibilities to architectural layers and use the result to constrain the design (e.g. transaction layer, SQL generation layer…). Use dependencies only from high to low level of abstraction.
• To promote reuse, place common design elements in the lowest possible architectural layer (layered architecture type).
• Prevent complex interactions between components and prevent components from sharing concepts, since this increases the coupling between components. If components within the same architectural layer depend on the same concept, move the dependencies to the component interface (Chapter 5).
• Use clear, consistent patterns to design component interfaces (e.g. DTO, consistent rules for input and output parameters, exceptions).
• Design for quality: use fault avoidance and detection strategies and prefer type specific to type ignorant (e.g. Chapter 10: nested record class).
• Avoid inconsistencies: keep the interface and the interface specification together (e.g. annotated IDL specification).
• Always design interfaces with the user in mind. Design by contract. Do not assume any implementation related knowledge, since that breaks the black-box principle; the only knowledge you may assume is what you specify in the contract.
• Design interfaces with the user in mind: make sure the interface specification is sufficient; always fully specify both syntactical and semantical aspects of the interface, and specify exceptional behaviour using exceptions to enable the client of the component to detect what went wrong (Chapter 4: extended service specification).
• Include non-functional attributes like performance, concurrency, et cetera in the specification (only) when needed to assess a component's fitness for use in a client application.
• Encapsulate platform specifics when designing with portability in mind (e.g. SQL primitives).
• Separate CORBA related exceptions from exceptions caused by component misbehaviour.
CHAPTER 14
Future Work
Future Framework Extensions
Concurrent transactions
As we have seen, concurrency testing revealed an issue that relates to our
implementation; since we use multiple threads of execution in our application, we should upgrade
the implementation to use a connection pool. In fact, concurrent database operations should
each be executed using a different connection.
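A minimal sketch of such a pool follows; the Connection placeholder stands in for pqxx::connection, and the interface is an assumption, not the project's implementation:

```cpp
#include <deque>
#include <memory>
#include <mutex>
#include <string>
#include <utility>

// Placeholder for pqxx::connection; holds only the connection string here.
struct Connection { std::string dsn; };

class ConnectionPool {
    std::deque<std::unique_ptr<Connection> > idle_;
    std::mutex mtx_;
    std::string dsn_;
public:
    ConnectionPool(const std::string& dsn, std::size_t size) : dsn_(dsn) {
        for (std::size_t i = 0; i < size; ++i)
            idle_.push_back(std::unique_ptr<Connection>(new Connection{dsn_}));
    }
    // Each concurrent transaction borrows its own connection...
    std::unique_ptr<Connection> acquire() {
        std::lock_guard<std::mutex> lock(mtx_);
        if (idle_.empty())  // grow on demand rather than block (design choice)
            return std::unique_ptr<Connection>(new Connection{dsn_});
        std::unique_ptr<Connection> c = std::move(idle_.front());
        idle_.pop_front();
        return c;
    }
    // ...and hands it back when the transaction completes.
    void release(std::unique_ptr<Connection> c) {
        std::lock_guard<std::mutex> lock(mtx_);
        idle_.push_back(std::move(c));
    }
    std::size_t idle_count() {
        std::lock_guard<std::mutex> lock(mtx_);
        return idle_.size();
    }
};
```

Growing on demand keeps acquire non-blocking; a bounded pool that waits on a condition variable would be the stricter alternative.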
Portability
Although our implementation is based on portable COTS components, it must still be
ported to Linux (the only alternative target platform).
SQL Primitives
• Currently the framework only allows definition of simple primary keys. The framework should be amended to work with compound primary keys.
• Generation of SQL is limited: complex and nested queries cannot be coded using the framework. As a consequence, in some cases SQL coding is needed in the component implementation module. This is clearly inconsistent with the idea of having the framework encapsulate SQL generation, and it tightly couples the implementation to the SQL syntax.
Exception Handling
• The error reporting functionality is somewhat limited. Error messages should be configurable instead of hard-coded. The exceptions are not flexible (they only deal with OID, whether or not this is a relevant attribute).
Extension of Interfaces
• Currently we only provide DICOM data services to read data from the database. The interfaces should be extended with common CRUD operations. However, one should be careful not to violate the DICOM rules (existing DICOM objects may not be changed). As a consequence, updating DICOM IODs (Study, Series, Image) effectively creates new copies of the existing objects.
• Our current process (DICOM import) is characterized by reading DICOM Part-10 files, then transforming and storing DICOM objects in the database. Once we have services that alter (actually copy) DICOM objects in the database, the reverse process should also be supported. As a consequence, additional DICOM services should be provided that are capable of creating DICOM Part-10 files (DICOM export).
Applying Component Based Development to other subsystems
Our project comprises many subsystems. In the project we focussed on providing DICOM
data services and archiving services. We think the project would benefit from creating additional
components. Notably:
• DICOM manipulation Components (implemented using DCMTK)
• Notification Component
• Monitor Component
• Configuration Component
• Possibly also a Compression/Decompression Component
Scalability
We provided only very basic CORBA server implementations. Although we do not think
performance limits will be reached easily, it would be interesting to experiment with several
distributed scenarios and server implementations (POA configurations, et cetera).
The Framework and Beyond
Although we have shown the framework approach to be very beneficial to the project,
given the commonality of our data components (Patient, Study, Series, Image) we think
generation of the data implementation modules (at least the basic parts) should be possible
largely automatically. Additionally, the service implementations that rely on these modules all
follow the same pattern. Hence it should be possible to generate major parts of the code
interactively once the IDL specification is complete. It would be interesting to see how such a
project could evolve.
References Cited
i. Beugnard et al. (1999) “Making Components Contract Aware”, Computer, Volume 32, Issue 7, IEEE Periodicals, pp.
38-45
ii. Boehm, B. et al. (1978), “Characteristics of Software Quality”, American Elsevier, New York
iii. Blom, M. et al. (2002), “Semantic Integrity in Component Based Development”, Karlstad University, Department of
Computer Science
iv. Buschmann, F. et al. (1996), “Pattern-Oriented Software Architecture, Vol. I”, “A System of Patterns”, John Wiley &
Sons
v. Crnkovic, I. et al. (2002), “Basic Concepts in CBSE”, Ivica Crnkovic, Magnus Larsson, Building reliable component
based Software Systems, Artech House Publishers, Norwood MA USA, p. 3-22
vi. Fowler, M. (2002), “Patterns of Enterprise Application Architecture”, Addison-Wesley Professional
vii. Joon-Sang Lee, et al. (2002), “An aspect-oriented framework for developing component-based software with the
collaboration-based architectural style”
viii. Findler, R.B. (2001), “Behavioral Contracts and Behavioral Subtypes”, Foundations of Software Engineering
ix. Gamma, E. et al. (1995), “Design Patterns”, Addison-Wesley Professional
x. Lüders, F. et al. (2002) “Specification of Software Components”, Ivica Crnkovic, Magnus Larsson, Building reliable
component based Software Systems, Artech House Publishers, Norwood MA USA, p. 23-40
xi. Meyer, B. (Oct. 1992) “Applying “Design by Contract””, Computer, Volume 25, Issue 10, IEEE Periodicals
xii. Nordby, E.J. et al. (2002), “On the Relation between Design Contracts and Errors: A Software Development
Strategy”, Ninth Annual IEEE International Conference and Workshop on the Engineering of Computer-Based
Systems, IEEE
xiii. Sommerville, I. (2004), “Software Engineering”, 7th edition (first published 1982), Addison-Wesley Professional
WEB References
STL Hewlett-Packard, “Standard Template Library Programmer’s Guide” (1994). Function Objects [Internet].
Available from: http://www.sgi.com/tech/stl/functors.html (Accessed: 18 February 2006)
PQXX Vermeulen, J.T., “libpqxx” (2003?). [Internet]. Available from: http://pqxx.org (Accessed: 18 February 2006)
Concluding Remarks
Due to the limitations imposed on the overall size of the dissertation document, the
complexity of the prototype, and for completeness of purpose, the whole set of results is
presented on the accompanying CD. For a complete list of appendices please see
APPENDICES section Overview.
APPENDICES
Overview
Appendix Contents Location
A SCREENSHOTS dissertation.pdf
B DATABASE DESIGN dissertation.pdf
C DATA DICTIONARY dissertation.pdf
D STORED PROCEDURES dissertation.pdf
E CORBA IDL dissertation.pdf
F CLIENT DESIGN (AJAX) dissertation.pdf
G CLIENT HTML CODE dissertation.pdf
H CLIENT JAVASCRIPT dissertation.pdf
I CLIENT CSS STYLESHEETS dissertation.pdf
J WEB APPLICATION – JSP dissertation.pdf
K FRAMEWORK DETAILS dissertation.pdf
L C++ DATA SERVICES SUBSYSTEMS dissertation.pdf
M WEB APPLICATION – JAVA CD: /Appendices/Appendix M.pdf
N C++ CODE CD: /Appendices/Appendix N.pdf
O DATA CONSISTENCY CD: Appendices/Appendix O.pdf
P PERFORMANCE & CONCURRENCY CD: Appendices/Appendix P.pdf
Q INSTALLATION & SETUP CD: Setup/setup.pdf
R IMPLEMENTATION (BUILD) CD: Setup/implementation.pdf
Appendix A - SCREENSHOTS
Name service
DICOM archive
DICOM data server
DICOM web application
Browser window
Selection panel
Selection of image
Preview of referenced image of selected series
Descriptive information
Appendix B - DATABASE DESIGN
Appendix C - DATA DICTIONARY
Data dictionary
- Primary keys underlined, additional indices in italics.
- Column N: { N=1 NULLABLE, N=0 NOT NULLABLE }
Patient - The Patient subject to a study

Column          Datatype        N  Comment                   DICOM Attribute      Type
patientID       identity        0  Unique Identifier         -                    -
birthdate       DATE            0  Birthdate of Patient      Patient's Birthdate  DA
dicomPatientID  varchar(64)     1  DICOM PatientID           Patient's ID         LO
creationTime    DATETIME        0  Date of Patient creation  -                    -
info            varchar(10240)  1  Additional Information    Patient's Comments   LT
personName      5x varchar(64)  0  DICOM PatientName         Patient Name         PN
size            double          1  Patient's Size            Patient's Size       DS
weight          double          1  Patient's Weight          Patient's Weight     DS
Study - The DICOM Patient Study

Column               Datatype        N  Comment                                DICOM Attribute       Type
studyID              identity        0  Unique Identifier                      -                     -
instanceUID          varchar(64)     0  Unique index                           Study Instance UID    UI
accessionNum         varchar(16)     1                                         Accession Number      SH
creationDate         DATE            0  Date of creation                       Study Date            DA
creationTime         TIME            0  Time of creation                       Study Time            TM
lastChanged          DATETIME        0  Date of latest change                  -                     -
reason               varchar(64)     1  Reason for Study                       Reason for Study      LO
requestingPhysician  5x varchar(64)  1  Name of physician who requested study  Requesting Physician  PN
performingPhysician  5x varchar(64)  1  Name of physician administering study  Performing Physician  PN
info                 varchar(64)     1  Additional information                 Study Description     LO
Series - The DICOM Series

Column             Datatype        N  Comment                                  DICOM Attribute            Type
seriesID           identity        0  Unique Identifier; also used to          -                          -
                                      generate the name of the compressed
                                      file
instanceUID        char(64)        0  DICOM Series UID                         Series Instance UID        UI
serieType          varchar(16)     0                                           Series Type                CS
modificationTime   DATETIME        1  Datetime of latest change                Series Date / Series Time  DA/TM
numberOfImages     integer         0  The number of images in this series      Images in Acquisition      IS
approvedBy         integer         1  ID of Worker that approved series        -                          -
approvedTime       integer         1  Time of approval                         -                          -
modalityType       varchar(16)     0  DICOM Modality type                      Modality                   CS
modalityModelName  varchar(64)     1  DICOM Modality model                     Manufacturer's Model Name  LO
patientPosition    char(18)        0  DICOM Patient Position                   Patient Position           CS
referenceImage     integer         1  The instanceNum of the reference image   -                          -
info               varchar(10240)  1  Free text                                Series Description         LT
Image - Image contained in Series

Column          Datatype     N  Comment                           DICOM Attribute  Type
instanceNumber  integer      0  The image position in the series  Instance Number  IS
                                (1st, 2nd, ...)
seriesID        FK              Foreign key to Series             -                -
label           varchar[16]  1  Label of image                    RT Image Label   SH
name            varchar[64]  1  Name of image                     RT Image Name    LO
sliceLocation   real/double  0  The Z value of the image          Slice Location   DS
preview         blob         1  Preview of image                  -                -
Worker - Clinical worker (e.g. Doctor)

Column      Datatype      N  Comment
workerID    identity      0  Unique Identifier
password    char(50)      0  Password for logon
email       varchar(100)  0  Email address
address     varchar(255)  0  DICOM address structure?
gender      char(100)     0  Gender
firstName   varchar(100)  0  Firstname
lastName    varchar(100)  0  Lastname
middleName  varchar(100)  0  Middlename
prefix      varchar(100)  0  Prefix for name
postfix     varchar(100)  0  Postfix for name
phone       char(20)      1  Phone number
fax         char(20)      1  Fax number
mobile      char(20)      1  Mobile tel. number
city        varchar(100)  0  City
postCode    char(10)      0  Postal code
login       char(10)      0  Login name
UserRole - Role of user in the system (may be used for authorization)

Column  Datatype  N  Comment
roleID  identity  0  Unique Identifier
Data mapping

This table contains the mapping of the types used in the above design.

DICOM Type  Description        SQL Type
CS          Code String        varchar[16]
DA          Date               DATE
DS          Decimal String     double or char[16]
IS          Integer String     integer or char[12]
LO          Long String        varchar[64]
LT          Long Text          varchar[10240]
PN          Person Name        5 x varchar[64] (decomposed into its components)
SH          Short String       varchar[16]
TM          Time               TIME
UI          Unique Identifier  varchar[64]
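The PN row above maps a DICOM Person Name onto five varchar[64] columns. In DICOM, the five PN components (family, given, middle, prefix, suffix) are separated by '^' delimiters; the following C++ sketch shows how such a value can be decomposed. The function name splitPN is illustrative and not taken from the prototype code.

```cpp
#include <array>
#include <cstddef>
#include <string>

// Split a DICOM PN value (family^given^middle^prefix^suffix) into its five
// components, mirroring the "5 x varchar[64]" decomposition in the mapping
// above. Missing trailing components are left empty, as DICOM allows.
std::array<std::string, 5> splitPN(const std::string& pn) {
    std::array<std::string, 5> parts;  // default-constructed empty strings
    std::size_t start = 0;
    for (std::size_t i = 0; i < 5; ++i) {
        std::size_t caret = pn.find('^', start);
        if (caret == std::string::npos) {
            parts[i] = pn.substr(start);  // last (or only) component
            break;
        }
        parts[i] = pn.substr(start, caret - start);
        start = caret + 1;
    }
    return parts;
}
```

A value such as "Dijkstra^Lolke^B" yields familyname "Dijkstra", givenname "Lolke", middlename "B", with prefix and suffix empty.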
Appendix D – STORED PROCEDURES

--#####################################################################
-- FUNCTIONS
--#####################################################################

--=====================================================================
-- MODULE PATIENT
--=====================================================================

--=====================================================================
-- Function: ex_find_full_match(p patient)
-- DROP FUNCTION ex_find_full_match(p patient);
--=====================================================================
CREATE OR REPLACE FUNCTION ex_find_full_match(p patient)
  RETURNS int4 AS
$BODY$
declare
  patient_id integer;
begin
  patient_id := 0;
  select into patient_id patientid
    from patient
   where (lower(familyname) = lower(p.familyname))
     and (lower(givenname)  = lower(p.givenname))
     and (lower(middlename) = lower(p.middlename))
     and (lower(nameprefix) = lower(p.nameprefix))
     and (lower(namesuffix) = lower(p.namesuffix))
     and (birthdate = p.birthdate)
     and lower(dicompatientid) = lower(p.dicompatientid)
     and (patientid <> p.patientid);
  return patient_id;
end
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

--=====================================================================
-- Function: ex_merge_patients(p1 int4, p2 int4)
-- DROP FUNCTION ex_merge_patients(p1 int4, p2 int4);
--=====================================================================
CREATE OR REPLACE FUNCTION ex_merge_patients(p1 int4, p2 int4)
  RETURNS void AS
$BODY$
declare
  patient1 patient;
begin
  if( p1 = p2 ) then
    return;
  end if;

  -- update dependencies
  update study set patientid = p2 where patientid = p1;

  -- update tmp_matching
  update tmp_matching set patientid1 = p2 where patientid1 = p1;
  update tmp_matching set patientid2 = p2 where patientid2 = p1;

  -- delete patient p1
  delete from patient where patientid = p1;
  return;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

--=====================================================================
-- Function: ex_resolve_partial_match(p1 patient)
--=====================================================================
CREATE OR REPLACE FUNCTION ex_resolve_partial_match(p1 patient)
  RETURNS void AS
$BODY$
declare
  p2 patient;
begin
  -- remove all old references to this patientid in matching table
  delete from tmp_matching
   where patientid1 = p1.patientid
      or patientid2 = p1.patientid;

  -- resolve partial match type 1 (eq. of dicompatientid)
  for p2 in select p.patientid
              from patient p
             where p.patientid <> p1.patientid
               and lower(p.dicompatientid) = lower(p1.dicompatientid)
             order by p.patientid
  loop
    insert into tmp_matching ( patientid1, patientid2, status, reason )
    values ( p1.patientid, p2.patientid, 1, 1 );
  end loop;

  -- resolve partial match type 2 (eq. of patientname and birthdate)
  for p2 in select p.patientid
              from patient p
             where p.patientid <> p1.patientid
               and lower(p.familyname) = lower(p1.familyname)
               and lower(p.givenname)  = lower(p1.givenname)
               and lower(p.middlename) = lower(p1.middlename)
               and lower(p.nameprefix) = lower(p1.nameprefix)
               and lower(p.namesuffix) = lower(p1.namesuffix)
               and p.birthdate = p1.birthdate
             order by p.patientid
  loop
    insert into tmp_matching ( patientid1, patientid2, status, reason )
    values ( p1.patientid, p2.patientid, 1, 2 );
  end loop;
  return;
end
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

--=====================================================================
-- Function: ex_get_last_full_match()
--=====================================================================
CREATE OR REPLACE FUNCTION ex_get_last_full_match()
  RETURNS int4 AS
$BODY$
declare
  res integer;
begin
  res := 0;
  SELECT INTO res patientid FROM last_full_match;
  return res;
end;
$BODY$
LANGUAGE 'plpgsql' STABLE;

--=====================================================================
-- Function: ex_get_thumbnail(int4, int4)
--=====================================================================
CREATE OR REPLACE FUNCTION ex_get_thumbnail(int4, int4)
  RETURNS int4 AS
$BODY$
  SELECT thumbnail FROM image WHERE seriesid = $1 AND instancenumber = $2;
$BODY$
LANGUAGE 'sql' STABLE;
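The patient triggers that follow all apply the same matching rule: a full match when the DICOM patient ID, name and birthdate all agree, a partial match when either the ID or the name-plus-birthdate pair agrees, and no match otherwise. The C++ sketch below encodes only this decision rule; the type and function names are illustrative and not taken from the prototype.

```cpp
#include <string>

enum class Match { None, Partial, Full };

struct PatientKey {
    std::string dicomPatientID;
    std::string name;       // full PN, already normalised to lower case
    std::string birthdate;  // DICOM DA, e.g. "19561224"
};

// Decision rule mirrored from the comments in ex_trg_insert_patient:
//   all three fields equal            -> full match (merge/update)
//   id equal OR (name and birthdate)  -> partial match (flag in tmp_matching)
//   otherwise                         -> no match (plain insert)
Match classify(const PatientKey& a, const PatientKey& b) {
    const bool idEq   = a.dicomPatientID == b.dicomPatientID;
    const bool nameEq = a.name == b.name && a.birthdate == b.birthdate;
    if (idEq && nameEq) return Match::Full;
    if (idEq || nameEq) return Match::Partial;
    return Match::None;
}
```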
--#####################################################################
-- TRIGGERS
--#####################################################################

--=====================================================================
-- MODULE PATIENT
--=====================================================================

--=====================================================================
-- TRG_DELETE_PATIENT
-- PURPOSE : Handle delete of corresponding tmp_matching entries
--=====================================================================
CREATE OR REPLACE FUNCTION ex_trg_delete_patient()
  RETURNS "trigger" AS
$BODY$
declare
begin
  delete from tmp_matching
   where patientid1 = OLD.patientid
      or patientid2 = OLD.patientid;
  return OLD;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

--=====================================================================
-- TRG_INSERT_PATIENT
-- PURPOSE : Handle duplications on import of patients
--=====================================================================
CREATE OR REPLACE FUNCTION ex_trg_insert_patient()
  RETURNS "trigger" AS
$BODY$
declare
  patient_id integer;
begin
  -- ------------------------------------------------------------------
  -- 1. --> EXAMINE DATA
  -- ------------------------------------------------------------------
  -- if dicompatientid and patientname and birthday are equal
  --   -> full match
  --
  -- if dicompatientid OR (name and birthday) are equal
  --   -> partial match
  --
  -- the other cases: we found no match
  --   -> no match
  -- ------------------------------------------------------------------
  -- 1.1 --> examine full match possibility
  -- ------------------------------------------------------------------
  patient_id = ex_find_full_match( NEW );

  -- ------------------------------------------------------------------
  -- 2. --> HANDLE DATA
  -- ------------------------------------------------------------------
  if( patient_id > 0 ) then
    -- ----------------------------------------------------------------
    -- 2.1 --> FULL MATCH => just update existing record and exit
    -- ----------------------------------------------------------------
    update patient set info = NEW.info where patientid = patient_id;
    delete from last_full_match where patientid > 0;
    insert into last_full_match (patientid) values (patient_id);
    -- compensate auto_increment PK
    perform setval('seq_patient_patientid'::regclass, lastval()-1 );
    return NULL;  -- skip insert
  else
    -- ----------------------------------------------------------------
    -- 2.2 --> PARTIAL or NO MATCH
    --         => insert a new record and, if applicable,
    --         => matching patientid's in matching table
    -- ----------------------------------------------------------------
    -- 2.2.1 --> get the next value to be inserted and assign it to NEW;
    --           it is assigned to NEW because the resolve functions
    --           require it
    NEW.patientid = currval('seq_Patient_patientID');
    -- 2.2.2 --> handle partial match
    --           --+ if no matches are found nothing happens
    perform ex_resolve_partial_match(NEW);
    -- 2.2.3 --> insert the record (partial or no match)
    return NEW;  -- insert
  end if;
  return NULL;  -- do nothing
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

-- ====================================================================
-- TRG_UPDATE_PATIENT
-- PURPOSE : Handle changes to patient, update matching and
--           dependencies.
--           The matching table has the structure (p1, p2, status, reason)
-- ====================================================================
CREATE OR REPLACE FUNCTION ex_trg_update_patient()
  RETURNS "trigger" AS
$BODY$
declare
  patient_old integer;
begin
  -- ---> block changes to patientid (currently not supported)
  if( NEW.patientid <> OLD.patientid ) then
    RAISE EXCEPTION 'Update of patientid not supported: see ex_trg_update_patient';
  end if;

  -- ------------------------------------------------------------------
  -- 1 --> handle full match
  -- ------------------------------------------------------------------
  patient_old = ex_find_full_match( NEW );
  if ( patient_old <> 0 ) then
    perform ex_merge_patients( patient_old, NEW.patientid );
    return NEW;
  end if;

  -- ------------------------------------------------------------------
  -- 2 --> recalculate the matching for the updated record
  -- ------------------------------------------------------------------
  perform ex_resolve_partial_match( NEW );
  return NEW;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

-- ====================================================================
-- CREATE PATIENT TRIGGERS
-- ====================================================================
CREATE TRIGGER insert_patient BEFORE INSERT ON patient
  FOR EACH ROW EXECUTE PROCEDURE ex_trg_insert_patient();

CREATE TRIGGER delete_patient BEFORE DELETE ON patient
  FOR EACH ROW EXECUTE PROCEDURE ex_trg_delete_patient();

CREATE TRIGGER update_patient BEFORE UPDATE ON patient
  FOR EACH ROW EXECUTE PROCEDURE ex_trg_update_patient();

-- ====================================================================
-- MODULE SERIES
-- ====================================================================

-- ====================================================================
-- TRG_INSERT_SERIES
-- PURPOSE : Handle duplications on import of SERIES.
-- SOLVES  : Multiple series may be attached to the same study
-- REMEDY  : The duplicate is ignored
-- ====================================================================
CREATE OR REPLACE FUNCTION ex_trg_insert_series()
  RETURNS "trigger" AS
$BODY$
declare
  series_id integer;
begin
  -- if instanceUID is known, ignore this series: we have it already
  series_id := 0;
  select into series_id S.seriesid
    from series S
   where NEW.instanceuid = S.instanceuid;
  if( series_id <> 0 ) then
    perform setval('seq_series_seriesid'::regclass, lastval()-1 );
    return NULL;  -- do nothing
  end if;
  return NEW;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

-- CREATE TRIGGER
CREATE TRIGGER insert_series BEFORE INSERT ON series
  FOR EACH ROW EXECUTE PROCEDURE ex_trg_insert_series();

-- ====================================================================
-- TRG_INSERT_STUDY
-- PURPOSE : Handle duplications on import of STUDY.
-- SOLVES  : Multiple series may be attached to the same study
-- REMEDY  : The duplicate is ignored
-- ====================================================================
CREATE OR REPLACE FUNCTION ex_trg_insert_study()
  RETURNS "trigger" AS
$BODY$
declare
  study_id integer;
begin
  -- if instanceUID is known, ignore this study: we have it already
  study_id := 0;
  select into study_id S.studyid
    from study S
   where NEW.instanceuid = S.instanceuid;
  if( study_id <> 0 ) then
    perform setval('seq_study_studyid'::regclass, lastval()-1 );
    return NULL;  -- do nothing
  end if;
  return NEW;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

-- CREATE TRIGGERS
CREATE TRIGGER insert_study BEFORE INSERT ON study
  FOR EACH ROW EXECUTE PROCEDURE ex_trg_insert_study();

-- ====================================================================
-- MODULE IMAGE
-- ====================================================================

-- ====================================================================
-- get thumbnail
-- ====================================================================
CREATE OR REPLACE FUNCTION ex_get_thumbnail(int4, int4)
  RETURNS int4 AS
$$
  SELECT thumbnail FROM image WHERE seriesid = $1 AND instancenumber = $2;
$$
LANGUAGE 'sql' STABLE;

-- ====================================================================
-- DELETE_IMAGE
-- ====================================================================
CREATE OR REPLACE FUNCTION ex_trg_delete_image()
  RETURNS "trigger" AS
$BODY$
declare
begin
  perform lo_unlink(OLD.thumbnail);
  return OLD;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

-- ====================================================================
-- MODULE  : INSERT_IMAGE
-- PURPOSE : Handle duplications on import of IMAGE.
-- SOLVES  : Makes it possible to re-import a study should something
--           have gone wrong.
-- REMEDY  : The duplicate is ignored
-- DATE    : 18.10.2005
-- AUTHOR  : LD
-- ====================================================================
CREATE OR REPLACE FUNCTION ex_trg_insert_image()
  RETURNS "trigger" AS
$BODY$
declare
  series_id integer;
  instance_num integer;
begin
  -- if (seriesid, instancenumber) is known, ignore this image:
  -- we have it already
  series_id := 0;
  instance_num := 0;
  select into series_id, instance_num IMG.seriesid, IMG.instancenumber
    from image IMG
   where NEW.seriesid = IMG.seriesid
     and NEW.instancenumber = IMG.instancenumber;
  if( series_id <> 0 ) then
    return NULL;  -- do nothing
  end if;
  return NEW;
end;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

-- ====================================================================
-- CREATE TRIGGERS
-- ====================================================================
CREATE TRIGGER insert_image BEFORE INSERT ON image
  FOR EACH ROW EXECUTE PROCEDURE ex_trg_insert_image();

CREATE TRIGGER delete_image BEFORE DELETE ON image
  FOR EACH ROW EXECUTE PROCEDURE ex_trg_delete_image();
Appendix E - IDL
#pragma prefix "dijkstra-ict.com"

/** \brief DICOM Module
 *
 * Because this module interfaces to objects in the DICOM domain, as much as
 * possible data types have been borrowed from this domain.
 */
module dicom
{
  //
  // Common DICOM types: http://medical.nema.org/dicom/2003/03_06PU.PDF
  //
  typedef string DICOM_PID;  ///< DICOM PatientID
  typedef string TM;         ///< DICOM TM TIME
  typedef string DA;         ///< DICOM DA DATE
  typedef string LO;         ///< DICOM LO Long String [0..64]
  typedef string LT;         ///< DICOM LT Long Text [0..10240]
  typedef string SH;         ///< DICOM SH Short String [0..16]
  typedef string UI;         ///< DICOM UI Globally Unique Identifier [64]
  typedef long   IS;         ///< DICOM IS Integer String
  typedef string CS;         ///< DICOM CS Code String [0..16]
  typedef double DS;         ///< DICOM DS Decimal String

  //
  // Additional types
  //
  typedef long   OID;        ///< Unique identifier in DB
  typedef string TIMESTAMP;  ///< Time Stamp
  typedef long   SIZE;       ///< size type
  typedef long   Q;          ///< quality of object
  typedef sequence<OID>   OIDSequence;  ///< OID collection
  typedef sequence<octet> PixelData;    ///< Collection of pixels (a raw image)

  /** \brief Raw Image Data Transfer Object
   *
   * This struct is used to transfer image pixel data over the interface.
   */
  struct ImagePixelDTO
  {
    PixelData data;           ///< Raw image data
    IS        instancenumber; ///< Instance number of image
  };
  typedef sequence<ImagePixelDTO> ImagePixelDTOSequence; ///< Collection of ImagePixelDTO

  /** \brief ObjectNotFound exception thrown when a specific requested object
   *  could <em><b>not</b></em> be found. */
  exception ObjectNotFound
  {
    OID    objectID; ///< objectID of object
    string message;  ///< associated message
  };

  /** \brief DataError exception thrown when the server fails to serve the request. */
  exception DataError
  {
    OID    objectID; ///< ObjectID identifies object associated with the error (if applicable)
    string message;  ///< Associated message
  };

  /** \brief Describes the cause of a range error. */
  struct error_info
  {
    string parameter; ///< Name of the parameter causing the error
    string value;     ///< Value of the parameter
    string message;   ///< Message indicating the error (e.g. x > 10: valid range x in [0..10])
  };

  /** \brief InvalidInput exception thrown when input parameters are invalid. */
  exception InvalidInput
  {
    error_info error; ///< Detailed error information
  };

  /* forward declarations */
  module patient { interface IPatient; interface IPatientS; };
  module study   { interface IStudy;   interface IStudyS;  };
  module series  { interface ISeries;  interface ISeriesS; };
  module image   { interface IImage;   interface IImageS;  };
  module archive { interface IArchive; };

  /** \brief IDicom root interface for DICOM data interfaces
   *
   * This interface is the gateway to the data interfaces that are used in the
   * client application.
   */
  interface IDicom
  {
    patient::IPatient  getIPatient();  ///< Entry point to patient::IPatient
    patient::IPatientS getIPatientS(); ///< Entry point to patient::IPatientS
    study::IStudy      getIStudy();    ///< Entry point to study::IStudy
    study::IStudyS     getIStudyS();   ///< Entry point to study::IStudyS
    series::ISeries    getISeries();   ///< Entry point to series::ISeries
    series::ISeriesS   getISeriesS();  ///< Entry point to series::ISeriesS
    image::IImage      getIImage();    ///< Entry point to image::IImage
    image::IImageS     getIImageS();   ///< Entry point to image::IImageS
  };

  /** \brief IArchiving root interface for DICOM archiving interfaces
   *
   * This interface is the gateway to the archiving interfaces that are used in
   * the client application.
   */
  interface IArchiving
  {
    archive::IArchive getIArchive(); ///< Entry point to archive::IArchive
  };

  /** \brief Patient module
   *
   * The Patient module groups the interfaces and structures related to the
   * Patient object.
   */
  module patient
  {
    /** \brief DICOM PersonName PN [5x64]
     *
     * The PN struct corresponds to the DICOM PN (0040,A123) object.
     * The PN struct is used as part of the PatientDTO struct.
     */
    struct PN
    {
      string givenname;  ///< givenname (firstname) [64]
      string middlename; ///< middlename [64]
      string familyname; ///< familyname [64]
      string prefix;     ///< nameprefix [64]
      string suffix;     ///< namesuffix [64]
    };
    typedef sequence<PN> PNSequence; ///< PN collection

    /** \brief Patient Data Transfer Object
     *
     * This struct is used to transfer Patient data over the interface.
     */
    struct PatientDTO
    {
      PN        name;           ///< patient PersonName PN
      OID       id;             ///< unique oid in the database
      DA        birthdate;      ///< patient birthdate
      DICOM_PID dicompatientid; ///< DICOM PatientID (not guaranteed to be unique)
      TM        creationtime;   ///< creation time of patient object TM
      LT        info;           ///< info field LT
    };
    typedef sequence<PatientDTO> PatientDTOSequence; ///< PatientDTO collection

    /** \brief IPatient interface for operations on a single patient
     *
     * This interface groups all operations available to single patient objects.
     */
    interface IPatient
    {
      /** \brief get PN for patient identified by OID poid
       *
       * \param[in] poid OID of patient
       * \pre  1. patient with OID poid exists
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. PN of patient is returned
       * <br>  2. ObjectNotFound exception
       */
      PN getPNByOID( in OID poid ) raises(DataError, ObjectNotFound);

      /** \brief PatientDTO for patient with OID
       *
       * \param[in] poid OID of patient
       * \pre  1. patient with OID poid exists
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. PatientDTO of patient is returned
       * <br>  2. ObjectNotFound exception
       */
      PatientDTO getDTOByOID( in OID poid ) raises(DataError, ObjectNotFound);
    };

    /** \brief IPatientS interface for operations that span multiple patients
     *
     * This interface groups all operations relating to multiple patients.
     */
    interface IPatientS
    {
      OIDSequence getOIDs() raises(DataError); ///< get OIDs for all patients in database

      /** \brief Get OIDs for all patients that received treatment from physicianID
       *
       * \param[in] physicianID OID of physician dealing with patient
       * \pre  1. physicianID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. OIDSequence of patients that relate to physicianID is returned
       * <br>  2. resultset is empty
       */
      OIDSequence getOIDsForPhysicianByOID( in OID physicianID ) raises(DataError);

      /** \brief Get PNs for all patients that received treatment from physicianID
       *
       * \param[in] physicianID OID of physician dealing with patient
       * \pre  1. physicianID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. PNSequence of patients that relate to physicianID is returned
       * <br>  2. resultset is empty
       */
      PNSequence getPNsForPhysicianByOID( in OID physicianID ) raises(DataError);

      /** \brief Get DTOs for all patients that received treatment from physicianID
       *
       * \param[in] physicianID OID of physician dealing with patient
       * \pre  1. physicianID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. PatientDTOSequence of patients that relate to physicianID is returned
       * <br>  2. resultset is empty
       */
      PatientDTOSequence getDTOsForPhysicianByOID( in OID physicianID ) raises(DataError);
    };
  };
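The specifications above use two distinct failure modes: single-object getters raise ObjectNotFound when the OID is unknown, while set-valued queries simply return an empty sequence. The following C++ sketch is a hypothetical in-memory stand-in for this convention, not the CORBA implementation; all names in it are illustrative.

```cpp
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative stand-in for the IPatient / IPatientS contract:
// single-object lookups throw when the OID is unknown,
// set-valued queries return an empty result instead.
struct ObjectNotFound : std::runtime_error {
    explicit ObjectNotFound(long oid)
        : std::runtime_error("no patient with OID " + std::to_string(oid)) {}
};

class PatientStore {
    std::map<long, std::string> pnByOid_;            // OID -> PN value
    std::multimap<long, long> patientsByPhysician_;  // physician OID -> patient OID
public:
    void add(long oid, const std::string& pn, long physician) {
        pnByOid_[oid] = pn;
        patientsByPhysician_.emplace(physician, oid);
    }
    // IPatient::getPNByOID analogue: post-condition 2 -> ObjectNotFound
    std::string getPNByOID(long oid) const {
        auto it = pnByOid_.find(oid);
        if (it == pnByOid_.end()) throw ObjectNotFound(oid);
        return it->second;
    }
    // IPatientS::getOIDsForPhysicianByOID analogue: post-condition 2 -> empty resultset
    std::vector<long> getOIDsForPhysicianByOID(long physician) const {
        std::vector<long> result;
        auto range = patientsByPhysician_.equal_range(physician);
        for (auto it = range.first; it != range.second; ++it)
            result.push_back(it->second);
        return result;
    }
};
```

The distinction keeps collection-valued queries total (an unknown physician is indistinguishable from one without patients), while lookups of a specific object fail loudly.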
  /** \brief Study module */
  module study
  {
    typedef long TYPE; ///< study type

    /** \brief Study Data Transfer Object
     *
     * This struct is used to transfer Study data over the interface.
     */
    struct StudyDTO
    {
      OID       id;              ///< oid
      UI        instanceuid;     ///< DICOM uid
      LO        info;            ///< additional info
      SH        accessionNumber; ///< DICOM accession number
      DA        creationdate;    ///< creation date
      TM        creationtime;    ///< creation time
      TIMESTAMP lastchanged;     ///< date of latest change
      TYPE      studytype;       ///< DICOM study type
      OID       workerid;        ///< owner oid of study
      OID       patientid;       ///< oid of related patient
    };
    typedef sequence<StudyDTO> StudyDTOSequence; ///< StudyDTO collection

    /** \brief IStudy interface for operations on a single study */
    interface IStudy
    {
      /** \brief Convert from OID to UI
       *
       * \param[in] poid OID identifying the study
       * \pre  1. poid refers to an existing study
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. result is UI corresponding to poid
       * <br>  2. ObjectNotFound exception
       * \retval UI corresponding to poid
       */
      UI toUID( in OID poid ) raises(DataError, ObjectNotFound);

      /** \brief Convert from UI to OID
       *
       * \param[in] puid the UI identifying the study
       * \pre  1. puid refers to an existing study
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. result is OID corresponding to puid
       * <br>  2. ObjectNotFound exception
       * \retval OID corresponding to puid
       */
      OID toOID( in UI puid ) raises(DataError, ObjectNotFound);

      /** \brief Get StudyDTO with OID
       *
       * \param[in] poid the OID identifying the study
       * \pre  1. poid refers to an existing study
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. returns StudyDTO corresponding to poid
       * <br>  2. ObjectNotFound exception
       * \retval StudyDTO of the study identified by poid
       */
      StudyDTO getDTOByOID( in OID poid ) raises(DataError, ObjectNotFound);

      /** \brief Get StudyDTO with UI
       *
       * \param[in] puid the UI identifying the study
       * \pre  1. puid refers to an existing study
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. returns StudyDTO
       * <br>  2. ObjectNotFound exception
       * \retval StudyDTO of the study identified by puid
       */
      StudyDTO getDTOByUID( in UI puid ) raises(DataError, ObjectNotFound);
    };

    /** \brief IStudyS interface for operations that span multiple studies
     *
     * This interface groups all operations relating to multiple studies.
     */
    interface IStudyS
    {
      /** \brief Get OIDs for all studies relating to patient
       *
       * \param[in] patientID the OID identifying the patient
       * \pre  1. patientID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the OIDs of the studies related to the patient
       * <br>  2. resultset is empty
       * \retval OIDs of the studies that relate to patientID
       */
      OIDSequence getOIDsForPatientByOID( in OID patientID ) raises(DataError);

      /** \brief Get OIDs for all studies done/owned by physician
       *
       * \param[in] physicianID the OID identifying the physician
       * \pre  1. physicianID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the OIDs of the studies related to the physician
       * <br>  2. resultset is empty
       * \retval OIDs of the studies that relate to physician
       */
      OIDSequence getOIDsForPhysicianByOID( in OID physicianID ) raises(DataError);

      /** \brief Get StudyDTOs for all studies relating to patient
       *
       * \param[in] patientID the OID identifying the patient
       * \pre  1. patientID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the DTOs of the studies related to the patient
       * <br>  2. resultset is empty
       * \retval StudyDTOs of the studies that relate to patientID
       */
      StudyDTOSequence getDTOsForPatientByOID( in OID patientID ) raises(DataError);

      /** \brief Get StudyDTOs for all studies that relate to physicianID
       *
       * \param[in] physicianID the OID identifying the physician
       * \pre  1. physicianID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the DTOs of the studies related to the physician
       * <br>  2. resultset is empty
       * \retval StudyDTOs of the studies that relate to physician
       */
      StudyDTOSequence getDTOsForPhysicianByOID( in OID physicianID ) raises(DataError);
    };
  };

  /** \brief Series module */
  module series
  {
    /** \brief Series Data Transfer Object
     *
     * This struct is used to transfer Series data over the interface.
     */
    struct SeriesDTO
    {
      OID       id;                ///< oid of series
      UI        instanceUID;       ///< DICOM instance UI
      CS        serietype;         ///< DICOM series type
      TIMESTAMP modificationtime;  ///< DICOM modification time
      OID       approvedby;        ///< approved by ID
      TIMESTAMP approvedtime;      ///< approved time
      CS        modalitytype;      ///< DICOM modality type
      LO        modalitymodelname; ///< DICOM modality model
      CS        patientposition;   ///< DICOM patient position
      string    technicalcomment;  ///< technical comment
      OID       referenceimage;    ///< id of image that is used to 'preview' series
      LT        info;              ///< additional comment
      OID       diskid;            ///< id of disk where series is stored
      OID       workerid;          ///< id of worker (physician) that owns this series
      OID       studyid;           ///< id of study that this series is part of
    };
    typedef sequence<SeriesDTO> SeriesDTOSequence;

    /** \brief ISeries interface for operations on a single series */
    interface ISeries
    {
      /** \brief Convert from OID to UI
       *
       * \param[in] poid OID identifying the series
       * \pre  1. poid refers to an existing series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. result is UI corresponding to poid
       * <br>  2. ObjectNotFound exception
       * \retval UI corresponding to poid
       */
      UI toUID( in OID poid ) raises(DataError, ObjectNotFound);

      /** \brief Convert from UI to OID
       *
       * \param[in] puid the UI identifying the series
       * \pre  1. puid refers to an existing series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. result is OID corresponding to puid
       * <br>  2. ObjectNotFound exception
       * \retval OID corresponding to puid
       */
      OID toOID( in UI puid ) raises(DataError, ObjectNotFound);

      /** \brief Get SeriesDTO with OID
       *
       * \param[in] poid the OID identifying the series
       * \pre  1. poid refers to an existing series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. returns SeriesDTO corresponding to poid
       * <br>  2. ObjectNotFound exception
       * \retval SeriesDTO of the series identified by poid
       */
      SeriesDTO getDTOByOID( in OID poid ) raises(DataError, ObjectNotFound);

      /** \brief Get SeriesDTO with UI
       *
       * \param[in] puid the UI identifying the series
       * \pre  1. puid refers to an existing series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. returns SeriesDTO
       * <br>  2. ObjectNotFound exception
       * \retval SeriesDTO of the series identified by puid
       */
      SeriesDTO getDTOByUID( in UI puid ) raises(DataError, ObjectNotFound);
    };

    /** \brief ISeriesS interface for operations that span multiple series
     *
     * This interface groups all operations relating to multiple series.
     */
    interface ISeriesS
    {
      /** \brief Get OIDs for all series relating to study
       *
       * \param[in] studyOID the OID identifying the study
       * \pre  1. studyOID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the OIDs of the series related to the study
       * <br>  2. resultset is empty
       * \retval OIDs of the series that relate to studyOID
       */
      OIDSequence getOIDsForStudyByOID( in OID studyOID ) raises(DataError);

      /** \brief Get DTOs for all series relating to study
       *
       * \param[in] studyOID the OID identifying the study
       * \pre  1. studyOID is known
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the DTOs of the series related to the study
       * <br>  2. resultset is empty
       * \retval SeriesDTOs of the series that relate to studyOID
       */
      SeriesDTOSequence getDTOsForStudyByOID( in OID studyOID ) raises(DataError);
    };
  };

  /** \brief Image module */
  module image
  {
    /** \brief Image Data Transfer Object
     *
     * This struct is used to transfer Image data over the interface.
     */
    struct ImageDTO
    {
      OID seriesid;       ///< series id identifying the series the image is part of
      IS  instancenumber; ///< instance number within the series
      SH  label;          ///< label of the slice
      LO  name;           ///< name of the slice
      DS  slicelocation;  ///< location of this slice in the series
      OID thumbnail;      ///< id of the thumbnail of the image
    };
    typedef sequence<ImageDTO> ImageDTOSequence;     ///< collection of ImageDTO
    typedef sequence<IS> InstanceNumberSequence;     ///< collection of InstanceNumbers

    /** \brief Image (single) interface
     *
     * This interface groups all operations available to single Image objects.
     */
    interface IImage
    {
      /** \brief Get ImageDTO for image
       *
       * \param[in] seriesid the series OID to which the image belongs
       * \param[in] instanceNum the instance number IS identifying the image within the series
       * \pre  1. seriesid identifies an existing series <em><b>and</b></em> instanceNum
       *          identifies an existing image within the series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. ImageDTO of the image containing the image data
       * <br>  2. ObjectNotFound exception
       * \retval ImageDTO of the image
       */
      ImageDTO getDTO( in OID seriesid, in IS instanceNum )
        raises(DataError, ObjectNotFound);

      /** \brief Get ImagePixelDTO for image
       *
       * \param[in] seriesid the series OID to which the image belongs
       * \param[in] instanceNum the instance number IS identifying the image within the series
       * \pre  1. seriesid identifies an existing series <em><b>and</b></em> instanceNum
       *          identifies an existing image within the series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. ImagePixelDTO of the image containing the image thumbnail (JPEG 100x100)
       * <br>  2. ObjectNotFound exception
       * \retval ImagePixelDTO of the image
       */
      ImagePixelDTO getThumbnail( in OID seriesid, in IS instanceNum )
        raises(DataError, ObjectNotFound);
    };

    /** \brief Image (multiple) interface
     *
     * This interface groups all operations relating to multiple image objects.
     */
    interface IImageS
    {
      /** \brief Get InstanceNumbers for all images relating to series
       *
       * \param[in] seriesOID the OID identifying the series
       * \pre  1. seriesOID references an existing series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the related instance numbers for the series
       * <br>  2. resultset is empty
       * \retval InstanceNumbers of the images that relate to seriesOID
       */
      InstanceNumberSequence getInstanceNumbersForSeriesByOID( in OID seriesOID )
        raises(DataError);

      /** \brief Get ImageDTOs for all images relating to series
       *
       * \param[in] seriesOID the OID identifying the series
       * \pre  1. seriesOID references an existing series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the related ImageDTOs for the series
       * <br>  2. resultset is empty
       * \retval ImageDTOs of the images that relate to seriesOID
       */
      ImageDTOSequence getDTOsForSeriesByOID( in OID seriesOID ) raises(DataError);

      /** \brief Get thumbnails for all images relating to series
       *
       * \param[in] seriesID the OID identifying the series
       * \pre  1. seriesID references an existing series
       * <br>  2. <em><b>not</b></em> pre 1.
       * \post 1. sequence contains the thumbnails (100x100) for the series
       * <br>  2. resultset is empty
       * \retval ImagePixelDTOs of the images that relate to seriesID
       */
      ImagePixelDTOSequence getThumbnailsBySeriesID( in OID seriesID ) raises(DataError);
    };
  };
113
/** \brief Archive module (hosts interfaces that require accessing the archive) * * The archive may be hosted on a separate machine or the archiving module may be * made available only to customers that require it. */ module archive { /** \brief Archive (single) interface */ interface IArchive { /** \brief Get preview (JPEG 512x512 max) for image * * Get preview of image.
* The size will be equal to the size of the original with a maximum of 512x512. * * \param[in] seriesid the series OID to which the image belongs * \param[in] instanceNum the instance number IS identifying the image within the series * \pre 1. seriesid identifies existing series <em><b>and</b></em> instanceNum identifies
* existing image within the series * <br>2. <em><b>not</b></em> pre 1. * \post 1. ImagePixelDTO of the image containing preview (JPEG 512x512 max) of the image * <br>2. ObjectNotFound exception * \retval ImagePixelDTO of the image */ ImagePixelDTO getDefaultPreview( in OID seriesid, in IS instanceNum )
raises(DataError, ObjectNotFound); /** \brief Get preview (JPEG sz x sz max) for image * * \param[in] seriesid the series OID to which the image belongs * \param[in] instanceNum the instance number IS identifying the image within the series * \param[in] sz the maximum size for the image (sz x sz pixels) * \param[in] Q quality of the image [10..100], 10 = min quality, 100 = max quality * \pre 1. seriesid identifies existing series <em><b>and</b></em> instanceNum identifies existing
image within the series * <br>2. sz in [10..1024] <em><b>and</b></em> quality in [1..100] * \post 1. pre 1. <em><b>and</b></em> pre 2. => ImagePixelDTO of the image containing preview (JPEG)
114
* <br>2. <em><b>not</b></em> pre 1. => ObjectNotFound exception * <br>3. <em><b>not</b></em> pre 2. => InvalidInput exception * \retval ImagePixelDTO of the image */ ImagePixelDTO getPreview( in OID seriesid, in IS instanceNum, in SIZE sz, in Q quality )
raises(DataError, ObjectNotFound, InvalidInput); }; }; };
Appendix F - CLIENT DESIGN (AJAX)
Interaction Model – Using XMLHTTP (AJAX)
We use remote scripting (Asynchronous JavaScript and XML) to incrementally update the
page. This approach allows us to send specific requests to the back-end and use the response to
update parts of the web page using JavaScript and DOM. In addition to the AJAX pattern we use
iframes and forms to transmit the image data between server and client on request.
Philip McCarthy [McCarthy, 2005] gives a good explanation of a typical AJAX interaction
pattern:

[Figure: sequence diagram of a typical AJAX interaction pattern, after McCarthy (2005)]
XML Response Messages
The interface between the web application and the web client consists of XML response
messages. In this section we describe the layout of these messages.
Patient collection

On startup the back-end is queried for patients in the database that have a study. The
back-end responds by sending an XML response object containing the patients:

<patients>
  <patient id='1'>
    <label>Somebody's Name</label>
  </patient>
</patients>

The lists for studies and series follow the same pattern.

Selection Event (Patient, Study, Series)

When a patient is selected the study list is refreshed and the patient's details are
displayed. Two separate requests (one for the details and one for the related studies) are sent to
the web server, each having its own response. When a study is selected the study details and
series list are retrieved. Finally, when a series is selected the series details are updated,
thumbnails are generated at the server, and the list of thumbnails is retrieved from the server.

Patient Details

<patient id='1'>
  <name>Somebody's Name</name>
  <patientid>...</patientid>
  <info>...</info>
</patient>

Study Details and Series Details follow the same pattern.

Thumbnails and Preview

Thumbnails

Images cannot be returned in an XML response message. This is how thumbnails are retrieved
from the server:

1. The client sends an HTTP request to the server using a hidden form: previewform
   (allowing redirection of the response using the target attribute).
2. The server generates the thumbnails (Images.processRequest) in a server directory
   and generates an HTML page containing the images (using thumbnails.jsp), which is
   sent back to the client's hidden iframe (filmstrip).
3. The client loads the page containing the images.
Preview

Generally DICOM images are relatively large. The DICOM images are stored at the back-end
in archives. When a full-size preview of a DICOM image is requested the back-end extracts
the image from the archive, converts it to a JPEG, and sends it back to the client. This is how it
works:
1. The client sends an HTTP request to the server using a hidden form: previewform
2. The server generates the image (Images.processRequest) in a local server directory and
the HTML page containing the image (using a JSP: preview.jsp). The HTML page is sent
back to the client’s hidden iframe (preview).
3. The client loads the page containing the image.
Web References:

McCarthy, P., "AJAX for Java developers: Build dynamic Java applications" (2005). [Internet]. Available from:
http://www-128.ibm.com/developerworks/library/j-ajax1/ (Accessed: 18 February 2006)
Appendix G - CLIENT HTML
index.html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <title>DBBrowser Prototype</title> <link rel="stylesheet" type="text/css" href="style/style.css" media="screen" /> <script type="text/javascript" src="script/prototype.js"></script> </head> <body onload="onLoad()"> <form id="previewform" method="get" target="preview" action=""> <input type="hidden" id="seriesid" name="seriesid" /> <input type="hidden" id="imageid" name="imageid" /> <input type="hidden" id="operation" name="operation" /> </form> <!-- page --> <div class="page"> <!-- navigationmenus --> <div class="navigation"> <form action=""> <!-- PATIENT menu --> <div class="navigationmenu"> <div class="header patienth">Patient</div> <select
class="menui patienti" id="patient"
onchange="onPatientSelect(event)" size="8">
</select> </div> <!-- STUDY menu --> <div class="navigationmenu"> <div class="header studyh">Study</div> <select
class="menui studyi" id="study" onchange="onStudySelect(event)" size="8">
</select> </div> <!-- SERIES menu --> <div class="navigationmenu"> <div class="header seriesh">Series</div> <select
class="menui seriesi" id="series" onchange="onSeriesSelect(event)" size="8">
</select> </div> </form> </div> <!-- navigationmenus --> <!-- thumbnails --> <iframe class="filmstrip" id="filmstrip" name="filmstrip" src="thumbnails/blank.html"></iframe> <!-- preview --> <iframe class="preview" id="preview" name="preview" src="preview/blank.html"></iframe> <div > <!-- patient details --> <div class="entity"> <div class="header patienth">Patient Info</div>
<div class="info patient_info"> <form class="noedit" id="patientdetails" action=""> <table> <tbody> <tr><td><b>name:</b> </td>
<td> <input
type="text" readonly id="patientname" name="patientname" value=""
/> </td></tr>
<tr><td><b>patient id:</b> </td> <td>
<input type="text" readonly id="patientid" name="patientid" value="" />
</td></tr> <tr><td><b>medical info:</b> </td>
<td> <textarea
style="width:90%;" readonly id="patientinfo" name="patientinfo">
</textarea> </td></tr>
</tbody> </table> </form> </div> </div> <!-- study details --> <div class="entity"> <div class="header studyh">Study Info</div> <div class="info study_info"> <br/>
<form class="noedit" id="studydetails" action=""> <table> <tbody> <tr><td><b>instance UID:</b> </td>
<td> <input
type="text" readonly id="studyuid" name="studyuid" value="" />
</td></tr> <tr><td><b>last changed:</b> </td><td>
<input type="text" readonly id="lastchanged" name="lastchanged" value="" />
</td></tr> <tr><td><b>created:</b> </td><td>
<input type="text" readonly id="created" name="created" value="" />
</td></tr> <tr><td><b>info:</b> </td>
<td> <textarea
style="width:90%;" readonly id="studyinfo" name="studyinfo">
</textarea> </td></tr>
</tbody> </table> </form> </div> </div>
<!-- series details --> <div class="entity"> <div class="header seriesh">Series Info</div> <div class="info series_info"> <br/> <form class="noedit" id="seriesdetails" action=""> <table> <tbody> <tr><td><b>instance UID:</b> </td>
<td> <input
type="text" readonly id="seriesuid" name="seriesuid" value="" />
</td></tr>
<tr><td><b>modality model:</b> </td> <td> <input
type="text" readonly id="modalitymodel" name="modalitymodel" value="" />
</td></tr>
<tr><td><b>modality type:</b> </td> <td> <input
type="text" readonly id="modalitytype" name="modalitytype" value="" />
</td></tr>
<tr><td><b>patient position:</b> </td> <td> <input
type="text" readonly id="patientposition" name="patientposition" value="" />
</td></tr>
<tr><td><b>approved:</b> </td> <td> <input
type="text" readonly id="approvedtime" name="approvedtime" value="" />
</td></tr> <tr>
<td><b>info:</b> </td> <td> <textarea
style="width:90%;" readonly id="seriesinfo" name="seriesinfo">
</textarea> </td></tr>
</tbody> </table> </form> </div> </div> </div> <!-- page --> </body> </html>
APPENDIX H - JAVA SCRIPT
prototype.js
///////////////////////////////////////////////////// // global request and XML document objects var req; // global type of request var g_evt = null; ///////////////////////////////////////////////////// ///////////////////////////////////////////////////// // get XMLHTTP request object (IE and Mozilla) ///////////////////////////////////////////////////// function getXMLHTTPObject() { var req = null; // branch for native XMLHttpRequest object if (window.XMLHttpRequest) { req = new XMLHttpRequest(); } // branch for IE/Windows ActiveX version else if (window.ActiveXObject) { try { req = new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { try { req = new ActiveXObject("Microsoft.XMLHTTP");
} catch(e) { req = false; } } } return req; } ///////////////////////////////////////////////////// // handle onreadystatechange event of req object ///////////////////////////////////////////////////// function getEvent() { // ready.. if (req.readyState == 4 && req.status == 200) { return g_evt.type; } return null; } ///////////////////////////////////////////////////// // handle selection in patient/study/series list ///////////////////////////////////////////////////// function handle_selection() { // ready.. var evtType = getEvent(); if( !evtType ) { return; } if( evtType == "patient" ) { buildList("study"); appendToList("series", "0", "(not applicable)");
// next event.. g_evt.type = "patient_details"; g_evt.handler = handle_details; g_evt.destination = "patient.do"; g_evt.parList = "operation=getPatientDetails&id="
+ g_evt.id; postEvent( g_evt ); } else if( evtType == "study" ) { buildList("series"); //alert( evtType ); // next event.. g_evt.type = "study_details"; g_evt.handler = handle_details; g_evt.destination = "study.do"; g_evt.parList = "operation=getStudyDetails&id=" + g_evt.id; //alert( g_evt.parList ); postEvent( g_evt ); } } function handle_onload() { // ready.. var evtType = getEvent(); if( !evtType ) { return; } // start up if( evtType == "patientlist" ) { buildList("patient"); clearPreview();
appendToList("study", "0", "(not applicable)"); appendToList("series", "0", "(not applicable)"); } } ///////////////////////////////////////////////////// // handle details event ///////////////////////////////////////////////////// function handle_details() { // ready.. var evtType = getEvent(); if( !evtType ) { return; } //alert( evtType ); setDetails(evtType); } ///////////////////////////////////////////////////// // retrieve text of an XML document element, // including elements using namespaces // taken from: // http://developer.apple.com/internet/webcontent/xmlhttpreq.html ///////////////////////////////////////////////////// function getElementTextNS(prefix, local, parentElem, index) { var element = ""; if (prefix && window.ActiveXObject ) { // IE/Windows way of handling namespaces element = parentElem.getElementsByTagName(prefix + ":"
+ local)[index]; } else {
// the namespace versions of this method // (getElementsByTagNameNS()) operate // differently in Safari and Mozilla, but both // return value with just local name, provided // there aren't conflicts with non-namespace element // names element = parentElem.getElementsByTagName(local)[index]; } // not found if( !element ) return "(null)"; // get text, accounting for possible // whitespace (carriage return) text nodes if (element.childNodes.length > 1) { return element.childNodes[1].nodeValue; } else { return element.firstChild.nodeValue; } } ///////////////////////////////////////////////////// // empty preview content ///////////////////////////////////////////////////// function clearPreview() { var form = document.getElementById( "previewform" ); form.action = "preview/blank.html"; form.target = "preview"; form.submit(); } ///////////////////////////////////////////////////// // fill filmstrip content /////////////////////////////////////////////////////
function buildFilmstrip( seriesid ) { // hidden form.. var form = document.getElementById( "previewform" ); // set form parameters.. var series = document.getElementById( "seriesid" ); series.value = seriesid; //alert( seriesid ); var operation = document.getElementById( "operation" ); operation.value = 'getThumbnails'; // submit the form.. form.action = 'images.do'; form.target = "filmstrip"; form.submit(); } ///////////////////////////////////////////////////// // empty filmstrip content ///////////////////////////////////////////////////// function clearFilmstrip() { var form = document.getElementById( "previewform" ); form.action = "thumbnails/blank.html"; form.target = "filmstrip"; form.submit(); } ///////////////////////////////////////////////////// // add item to select element the less // elegant, but compatible way. ///////////////////////////////////////////////////// function appendToList( listid, id, content ) { var list = document.getElementById( listid );
var opt = document.createElement("option"); opt.id = id; opt.appendChild( document.createTextNode( content ) ); list.appendChild( opt ); } ///////////////////////////////////////////////////// // fill select list with items from // the current XML document // input: // - listid = id of list (e.g. 'patient') ///////////////////////////////////////////////////// function buildList( listid ) { var items = req.responseXML.getElementsByTagName( listid ); // loop through elements, and add each nested // <label> element to select element for (var i = 0; i < items.length; i++) { appendToList( listid, items[i].getAttribute("id"), getElementTextNS("", "label", items[i], 0) ); } } ///////////////////////////////////////////////////// // empty select list content ///////////////////////////////////////////////////// function clearList( id ) { if( id != null ) { var select = document.getElementById( id ); while (select.length > 0) { select.remove(0); } } }
///////////////////////////////////////////////////// // details: initialization ///////////////////////////////////////////////////// // [left = XML tag, right = HTML id] function tagMapNode( pleft, pright ) { this.left = pleft; this.right = pright; } // patient map var patientMap = new Array(); patientMap[0] = new tagMapNode("name", "patientname"); patientMap[1] = new tagMapNode("patientid", "patientid"); patientMap[2] = new tagMapNode("info", "patientinfo"); patientMap[3] = null; // study map var studyMap = new Array(); studyMap[0] = new tagMapNode("instanceuid", "studyuid"); studyMap[1] = new tagMapNode("lastchanged", "lastchanged"); studyMap[2] = new tagMapNode("creationdate", "created"); studyMap[3] = new tagMapNode("info", "studyinfo"); studyMap[4] = null; // series map var seriesMap = new Array(); seriesMap[0] = new tagMapNode("instanceuid", "seriesuid" ); seriesMap[1] = new tagMapNode("modalitymodelname", "modalitymodel" ); seriesMap[2] = new tagMapNode("modalitytype", "modalitytype" ); seriesMap[3] = new tagMapNode("patientposition", "patientposition" ); seriesMap[5] = new tagMapNode("approvedtime", "approvedtime" ); seriesMap[4] = new tagMapNode("seriesinfo", "seriesinfo" ); seriesMap[6] = null; // image map var imageMap = new Array();
// to do.. imageMap[0] = null; var details = new Array(); details['patient_details'] = patientMap; details['study_details'] = studyMap; details['series_details'] = seriesMap; details['image_details'] = imageMap; ///////////////////////////////////////////////////// // set details ///////////////////////////////////////////////////// function setDetails( id ) { var ldetails = details[id]; if( !ldetails ) return; for( var i = 0; ldetails[i] != null; i++ ) { var items = req.responseXML.getElementsByTagName( ldetails[i].left ); var formItem = document.getElementById( ldetails[i].right ); formItem.value = ""; formItem.value = items[0].firstChild.nodeValue; } } ///////////////////////////////////////////////////// // clear details ///////////////////////////////////////////////////// function clearDetails( id ) { var ldetails = details[id]; if( !ldetails ) return; for( var i = 0; ldetails[i] != null; i++ ) { var formItem = document.getElementById( ldetails[i].right ); formItem.value = "";
} } ///////////////////////////////////////////////////// // EVENT HANDLING ///////////////////////////////////////////////////// ///////////////////////////////////////////////////// // Event (constructor) // input: // - pType = type of event (e.g. 'patient') // - pID = id of the selected object // - pDestination = target URL // - parList = parameter list for target URL // - pHandler = function that handles response ///////////////////////////////////////////////////// function Event( pType, pID, pDestination, pParList, pHandler ) { // METHODS this.handler = pHandler; // DATA this.type = pType; this.id = pID; this.destination = pDestination; this.parList = pParList; } ///////////////////////////////////////////////////// // send XML request to server ///////////////////////////////////////////////////// function sendMessage( dest, parList, handler ) { req = getXMLHTTPObject();
if (req) { req.onreadystatechange = handler; var url = parList ? (dest + '?' + parList) : dest; req.open("GET", url, true); req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded"); req.send(null); } else { alert("getXMLHTTPObject() failed."); } } function postMessage( dest, parlist, handler ) { req = getXMLHTTPObject(); if (req) { req.onreadystatechange = handler; req.open("POST", dest, true); req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded"); req.send(parlist); } else { alert("getXMLHTTPObject() failed."); } } function sendEvent( evt ) { try { sendMessage( evt.destination, evt.parList, evt.handler ); } catch( e ) { var msg = (typeof e == "string") ? e : ((e.message) ? e.message : "Unknown Error"); alert("Unable to process request:\n" + msg); return; } } function postEvent( evt ) {
try { //alert( evt.destination + ':' + evt.parList ); postMessage( evt.destination, evt.parList, evt.handler ); } catch( e ) { var msg = (typeof e == "string") ? e : ((e.message) ? e.message : "Unknown Error"); alert("Unable to process request:\n" + msg); return; } } ///////////////////////////////////////////////////// // startup function ///////////////////////////////////////////////////// function onLoad() { // id is physician id (hardcoded 1) should realistically depend on logon g_evt = new Event( "patientlist", 0, "patient.do", "operation=getPatients&physicianid=1", handle_onload ); postEvent( g_evt ); clearDetails('patient_details'); clearDetails('study_details'); clearDetails('series_details'); clearDetails('image_details'); } ///////////////////////////////////////////////////// // extract event ID from selected item (=id) ///////////////////////////////////////////////////// function getEventID( evt ) { // equalize W3C/IE event models to get event object evt = evt || window.event; if (evt) { // equalize W3C/IE models to get event target reference
var elem = evt.target || evt.srcElement; if (elem ) { //alert( "getEventID: " + elem.options[elem.selectedIndex].id ); return elem.options[elem.selectedIndex].id; } } return 0; } ///////////////////////////////////////////////////// // user interaction handlers (see: main jsp/html) ///////////////////////////////////////////////////// function onPatientSelect(evt) { clearList("study"); clearList("series"); clearDetails("study_details"); clearDetails("series_details"); clearPreview(); clearFilmstrip(); // get studies for patient var id = getEventID( evt ); var parList = "operation=getStudies&patientid=" + id; g_evt = new Event( "patient", id, "study.do", parList, handle_selection ); postEvent( g_evt ); } function onStudySelect(evt) { clearList("series"); clearDetails("series_details"); clearPreview(); clearFilmstrip(); // get series for study var id = getEventID( evt ); var parList = "operation=getSeries&studyid=" + id;
//alert( parList ); g_evt = new Event( "study", id, "series.do", parList, handle_selection ); postEvent( g_evt ); } function onSeriesSelect(evt) { clearPreview(); var id = getEventID( evt ); // get series details var parList = "operation=getSeriesDetails&seriesid=" + id; //alert( parList ); g_evt = new Event( "series_details", id, "series.do", parList
, handle_details );
postEvent( g_evt ); // get thumbnails for series buildFilmstrip( id ); }
image.js
///////////////////////////////////////////////////////////////////////// // get preview image from server ///////////////////////////////////////////////////////////////////////// function setPreview( series_id, image_id ) { if( series_id != null && image_id != null ) { var form =
window.parent.document.getElementById("previewform"); var series =
window.parent.document.getElementById( "seriesid" );
var image = window.parent.document.getElementById( "imageid" );
var operation = window.parent.document.getElementById( "operation" );
operation.value = "getPreview"; series.value = series_id; image.value = image_id; form.method = "POST"; form.action = "images.do"; form.target = "preview"; form.submit(); } }
APPENDIX I – CLIENT CASCADING STYLESHEETS
IMAGESTYLE.CSS
.preview { float: left; position: relative; height: 512px; width: 512px; margin: 0px 0px 0px 0px; border: 3px double teal; } .preview img { width: 512px; height: 512px; }
STYLE.CSS
html { margin: 0px; padding: 0px; } body { font: 9pt/17pt Verdana, Geneva, Arial, Helvetica, sans-serif; color: #555753; background: silver; margin: 0px; padding: 0px;
} p, div { font: 9pt/12pt Verdana, Geneva, Arial, Helvetica, sans-serif; margin-top: 0px; text-align: left; } table { font-size: 100%; } a:link { font-weight: bold; text-decoration: none; color: #4169E1; font-size: 12px; } a:visited { font-weight: bold; text-decoration: none; color: #4169E1; font-size: 12px; } a:hover, a:active { text-decoration: underline; color: #4169E1; font-size: 12px; } a img { border: none; } .page { height: 650px; width: 1200px; border: 10px outset #33B3CC; background-color: darkgray; } .navigation {
position: relative; margin: 1px 1px 0px 1px; font-family:sans-serif; background-color: transparent; font-variant: small-caps; font-weight: bold; float: left; width: 200px; height: 100%; } .navigationmenu { margin: 0px 0px 2px 0px; font-family:sans-serif; color: #FFFFFF; background-color: transparent; font-variant: small-caps; font-weight: bold; float: left; width: 100%; } .header { padding: 0.5ex 0ex 0.5ex 0ex; font-variant: small-caps; text-align: center; font-weight: bold; } .header.patienth { background-color: Teal; width: 100%; } .header.studyh { background-color: #33B3CC; } .header.imageh, .header.seriesh { background-color: #CC3333;
} .menui { padding: 0ex 0ex 0ex 0ex; background: #FFFFCC; color: maroon; width: 100%; font-size: 110%; font-weight: bold; } .menui.patienti { border: 2px solid teal; } .menui.studyi { border: 2px solid #33B3CC; } .menui.seriesi { border: 2px solid #CC3333; } .filmstrip { float: left; position: relative; height: 645px; width: 145px; margin-top: 0px; margin-bottom: 0px; padding: 0px 0px 0px 0px; background-color: black; border: 3px solid teal; overflow: auto; } .filmstrip img {
width: 100px; height: 100px; margin-bottom: 1px; margin-left: 10px; border: none; } .filmstrip a img { border: none; } .preview { float:left; position: relative; height: 512px; width: 512px; margin: 0px 0px 0px 0px; padding: 0px 0px 0px 0px; border: 2px solid teal; background-color: black; } .preview img { width: 512px; height: 512px; } .entity { position: relative; margin: 1px 0px 1px 1px; font-family:sans-serif; color: #FFFFFF; background-color: transparent; font-variant: small-caps; font-weight: bold; float: left; width: 300px; }
.info { padding: 1ex 1ex 1ex 1ex; background: #FFFFCC; color: black; height: 120px; overflow: auto; } .info.patient_info { border: 2px solid teal; } .info.study_info { border: 2px solid #33B3CC; height: 140px; } .info.image_info, .info.series_info { border: 2px solid #CC3333; } textarea { font: 9pt Verdana, Geneva, Arial, Helvetica, sans-serif; }
APPENDIX J – WEB APPLICATION JSP
patient_list.jsp
<%-- <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> --%> <?xml version="1.0"?> <patients> <patient id="1"> <label>lolke</label> </patient> </patients>
thumbnails.jsp
<%@ page language="java" %> <%@ page import="dicom.ImagePixelDTO" %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <script type="text/javascript" src="script/image.js"></script> </head> <body style="background-color: black"> <% java.lang.String seriesid = request.getParameter("seriesid"); ImagePixelDTO dto[] = (ImagePixelDTO[])request.getAttribute("THUMBNAILS"); for( int i=0; i!=dto.length; i++ ) { %>
<a href="javascript:setPreview('<%=seriesid%>','<%=dto[i].instancenumber%>');" id='<%=dto[i].instancenumber%>'> <img src="thumbnails/<%=seriesid+"/"+ dto[i].instancenumber+".jpg"%>"
title="instance number = <%=dto[i].instancenumber%>" /> </a> <% } %> </body> </html>
preview.jsp
<%@ page language="java" %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> </head> <body style="margin: 0px 0px 0px 0px; background-color:silver;"> <div> <% // // this view component generates the html for the preview pane // it is called from images.do // %> <img
src="preview/<%=request.getParameter("seriesid")%>/<%=request.getParameter("imageid")%>.jpg" title="instance number = <%=request.getParameter("imageid")%>" style="padding: 0px 0px 0px 0px;border-style:none;"
/> </div> </body> </html>
APPENDIX K – FRAMEWORK DETAILS
SQL Primitives
SQL Statement

As shown in Logical Design Model – section SQL Primitives, the sql::statement class is the base class for all of the
sql family members.
namespace sql {

class statement {
public:
    /// constructs a statement referring to table ptableName
    statement( const char* ptableName );
    virtual ~statement() = 0 {}

    /// returns the SQL statement
    operator const char *();

protected:
    std::string tableName;     ///< data table
    std::strstream stream;     ///< stream (used internally)
};

}
The base class attribute tableName represents the name of the table in the database. During execution of the derived class’s member
function insert& operator () ( const data_record& data ) the stream is populated with the proper SQL statement. The member
function operator const char *() makes the stream’s contents available to the transaction classes (which will be explained later). Since
each of the SQL family of classes needs an internal stream and the tableName, these attributes are provided in the abstract base class.
SQL Insert Statement

The SQL insert statement is encapsulated by class sql::insert:
namespace sql {

class insert : public statement {
    typedef statement base;
public:
    insert( const char* ptableName )
        : base( ptableName )
        , write_key( stream )
        , write_val( stream )
    {}

    insert& operator () ( const data_record& data );

private:
    write_key_t write_key;  ///< writes keys
    write_val_t write_val;  ///< writes values
};

}

The important part is the member function operator():
insert& operator () ( const data_record& data );
sql::insert& sql::insert::operator () ( const data_record& data )
{
    // definition of static pre- and post-fix and separators...
    stream.clear();

    // write the non-empty keys...
    stream << "INSERT INTO " << tableName << ' ';
    write_items( data.begin(), data.end(), write_key, stream,
                 fprefix, fseparator, fpostfix, notEmpty );

    // write the corresponding values...
    stream << " VALUES ";
    write_items( data.begin(), data.end(), write_val, stream,
                 vprefix, vseparator, vpostfix, notEmpty );
    stream << ends;
    return *this;
}

The SQL Insert statement has the following syntax:
INSERT INTO tableName (fieldName1, fieldName2...) VALUES (value1, value2...);
The INSERT INTO part is represented by:

stream << "INSERT INTO " << tableName << ' ';
write_items( data.begin(), data.end(), write_key, stream,
             fprefix, fseparator, fpostfix, notEmpty );
The write_items helper function loops over all the fields in the data_record and writes the key of each field to the stream. Essentially,
it iterates over the collection and invokes the writeEl command (a formal parameter) to write each item to the stream.
insert::operator () passes the appropriate element writer to select which part of each item is written to the stream (write_key and write_val, respectively).
The parameters pre, sep and suf are used to write the correct prefixes, separators and suffixes to the stream. Finally, the isValid predicate allows
elements to be written to the stream selectively (based on certain conditions).
To make the picture complete we have included the code here:
template <class I, class F, class Fc>
void write_items(
    I begin, I end, F writeEl, std::ostream& out,
    const char *pre, const char* sep, const char* suf, Fc isValid )
{
    // check precondition
    if( ! (begin != end) )
        return;

    out << pre;
    I pos = begin;
    bool prevValid = false;
    while( pos != end )
    {
        if( isValid( *pos ) )
        {
            if( prevValid )
                out << sep;
            else
                prevValid = true;
            writeEl( *pos );
        }
        ++pos;
    }
    out << suf;
}
template <class Key, class Val>
struct write_first
{
    write_first( std::ostream& pout ) : out( pout ) {}

    std::ostream& operator()( const std::pair< Key, Val > &p )
    {
        out << p.first;
        return out;
    }
private:
    std::ostream &out;
};

template <class Key, class Val>
struct write_second
{
    write_second( std::ostream& pout ) : out( pout ) {}

    std::ostream& operator()( const std::pair< Key, Val > &p )
    {
        out << p.second;
        return out;
    }
private:
    std::ostream &out;
};

static bool notEmpty( const pair_type & val )
{
    return ! ( val.second.empty() );
}
Transaction Primitives
As explained in the Logical Design Model – section Transaction Primitives, the transaction family of classes delegates SQL generation to
the SQL family of classes. This section explains the details.
We illustrate the collaboration by examining the tx::update transactor class and the implementation of void operator()(argument_type &tx):
namespace tx {

class update : public internal::txtor
{
    typedef internal::txtor base;
public:
    update( const std::string& table
          , const sql::data_record& data
          , const sql::data_record& where
          , pqxx::result &result );

    /// executes the transaction
    void operator()(argument_type &tx);

protected:
    const sql::data_record &_data;
    const sql::data_record &_where;
};

}
void tx::update::operator()(argument_type &tx)
{
    sql::update command( _table.c_str() );          // 1.
    _result = tx.exec( command( _data, _where ) );  // 2.
}
It is interesting to see what happens in the above member function operator():

1. An SQL update command is instantiated.
2. The transaction is executed with the result of the SQL command:
   a. first sql::update::operator() is called, followed by
   b. the conversion operator const char*().
APPENDIX L – C++ DATA SERVICES SUBSYSTEMS
archiving Implementation of archiving functions
dataaccess Core data access
dataservices Core data services
dbconfig Basic settings for dbmanager and DICOM servers
dbmanager Executable component for importing DICOM data in the database
dc_util DICOM utility (common wrappers for DCMTK)
dicom_import The interface to the DICOM objects
dicomdsarchive CORBA Archive server (core functionality)
dicomdsclient CORBA test client
dicomdccommon Common CORBA tools (library)
dicomdsserver CORBA data access server (core functionality)
monitor Monitors input directory
notification Notification engine
packer Bzip2 functionality
utility Common utility functions