

Software Components:

Components are deployable units that provide a business service to their clients.

• Each component provides an interface in the form of operations, properties and events.
• Components can be developed in any language, such as Java, C++, or VB.
• Components are frequently organized into application frameworks for vertical domains.
• Component models such as ActiveX and EJB standardize communication and allow for prebuilt, purchased components.

Component Characteristics:

• Properties
• Operations
• Events
• Reusability
• Shareable
• Distributable
• Deployable
• Self Containment
• Self Description

Components operate independently of:

• Hardware
• The underlying operating system
• Their application server
• The network protocol they use

Component services can be used as part of the business logic of other components. If multiple clients are using a component simultaneously, the component will provide the same quality of service to all the clients. Components should provide services to clients running locally or remotely.

Components should only contain the code necessary to implement their services. Infrastructure services should be injected by the execution environment. Standard infrastructure services include:

• Directory services
• Distributed transaction management
• Security management
• Concurrent access management
• Persistence management
• Resource pooling (e.g. DB connections)
• Administration interface
• Load balancing
• Fault tolerance

An application server provides the infrastructure and services to run components/applications.
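
To make the "operations, properties and events" idea concrete, here is a minimal Java sketch of what a component's interface might look like. The names (OrderService, OrderListener, and so on) are illustrative, not taken from any particular component model:

interface OrderListener {
    void orderPlaced(String orderId);                 // event callback
}

interface OrderService {
    void placeOrder(String orderId);                  // operation
    String getStatus();                               // property accessor
    void addOrderListener(OrderListener listener);    // event registration
}

class SimpleOrderService implements OrderService {
    private final java.util.List<OrderListener> listeners = new java.util.ArrayList<>();
    private String status = "idle";

    public void placeOrder(String orderId) {
        status = "processing " + orderId;
        for (OrderListener l : listeners) {
            l.orderPlaced(orderId);                   // notify registered clients
        }
    }

    public String getStatus() { return status; }

    public void addOrderListener(OrderListener l) { listeners.add(l); }
}

A container (application server) would supply the infrastructure services around such a component, so the class itself stays limited to its business logic.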

Modules:

The module will provide students with a knowledge and understanding of current and emerging component technologies. The module is focused on two major themes: Object-Oriented Middleware (OOM) and Message-Oriented Middleware (MOM). In the first theme we examine the evolution of object-oriented programming into (distributed) component models such as the Common Object Request Broker Architecture (CORBA), the Component Object Model (COM), Java Remote Method Invocation (RMI), and JavaBeans. The common underlying requirements of these systems will be studied in detail, such as naming and binding issues and the marshalling and un-marshalling of data. The second theme will explore the emerging field of Message-Oriented Middleware with an in-depth study of current MOM technologies such as the Java Message Service (JMS).

In their work on Modular Smalltalk [5], Allen Wirfs-Brock and Brian Wilkerson describe the essential features of modules:

Modules are program units that manage the visibility and accessibility of names... A module typically groups a set of class definitions and objects to implement some service or abstraction. A module will frequently be the unit of division of responsibility within a programming team.... A module provides an independent naming environment that is separate from other modules within the program.... Modules support team engineering by providing isolated name spaces...

While providing many potential improvements to Smalltalk, the Modular Smalltalk system does not implement modules as first-class objects. Like many other programming systems, the Modular Smalltalk system uses modules only for organizational purposes. This article proposes a different view of modules as a special kind of Smalltalk class.

Modules for Smalltalk

The definition of a normal Smalltalk class includes a reference to a superclass, the name of the new subclass, and the names of any new instance and class variables added by the new subclass. Class variables are shared by all the instances of a class, and are visible to all the methods of the class and its subclasses, if any.

In addition, the new subclass can provide its methods with access to named objects that are shared on a subscription basis. Certain names in the Smalltalk system dictionary are bound to global pool dictionaries that contain these sharable named objects. The new subclass can subscribe to these objects by including selected global names in its list of pool dictionaries. For example, a File class might be defined using the following message:

Object subclass: #File
    instanceVariableNames: 'directory fileId name'
    classVariableNames: 'PageSize'
    poolDictionaries: 'CharacterConstants'!

Modules may be added to Smalltalk in a relatively straightforward manner. The details of how this can be done are presented in a later section. For now, we can say that each module is a class that contains a name space, called its domain, instead of simply a pool of class variables.

There are several new messages for defining modules and the private classes contained in their domains. The definition of a module for managing an inventory might use the following message:

Object moduleSubclass: #InventoryManager
    instanceVariableNames: ''
    classVariableNames: ''
    poolDictionaries: ''!

A new private class can be added to the domain of the InventoryManager class using the message:

Object subclass: #InventoryItem in: InventoryManager
    instanceVariableNames: 'partNumber partName quantity'
    classVariableNames: ''
    poolDictionaries: ''!

In order to add a new private subclass of InventoryItem, we send the name of the private class (#InventoryItem) as a message to the InventoryManager module:

InventoryManager InventoryItem subclass: #FloorItem
    instanceVariableNames: 'storeLocation'
    classVariableNames: ''
    poolDictionaries: ''!

The issues involved in this breaking of the module encapsulation will be considered further in a later section.

Modules can be used to create nested subsystems. The following message creates a nested module for managing the accounts in the InventoryManager module class.

Object moduleSubclass: #AccountManager in: InventoryManager
    instanceVariableNames: ''
    classVariableNames: ''
    poolDictionaries: ''!

Figure 1 depicts the structural relationships between the classes in the InventoryManager module. Note that the graphic design notation of OMT [3] has been extended slightly to show what classes are encapsulated inside a module class. The rounded rectangles represent module domains. Note that the Smalltalk system dictionary is also considered to be the system domain.

Encapsulating Private Behavior

Modules provide three ways of encapsulating private behavior, all of which are based on their ability to encapsulate private classes:

• Class Groups (Systems)
• Baseline Class Extensions
• Private Methods

Ada encourages the division of code into separate modules called packages. Each package can contain any combination of items.

Some of the benefits of using packages are:

• package contents are placed in a separate namespace, preventing naming collisions,
• implementation details of the package can be hidden from the programmer (information hiding),
• object orientation requires defining a type and its primitive subprograms within a package, and
• packages can be separately compiled.

Some of the more common package usages are:

• a group of related subprograms along with their shared data, with the data not visible outside the package,
• one or more data types along with subprograms for manipulating those data types, and
• a generic package that can be instantiated under varying conditions.
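
Java packages offer a rough analogue of the first two benefits — a separate namespace plus information hiding — as this minimal sketch shows (the package and class names are hypothetical):

package inventory;                       // separate namespace for the package

public class Counter {
    private int count;                   // implementation detail, hidden from clients

    public void increment() { count++; } // only the public API is visible outside
    public int value() { return count; }
}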

Interfaces:

• a unit is a source code file
• a module (UML and Java package) is a directory of source files, probably with its own build script
• a component is a run-time, or at least install-time, thing, generated by a build process

An individual component is a software package or a module that encapsulates a set of related functions (or data).

All system processes are placed into separate components so that all of the data and functions inside each component are semantically related (just as with the contents of classes). Because of this principle, it is often said that components are modular and cohesive.

With regard to system-wide co-ordination, components communicate with each other via interfaces. When a component offers services to the rest of the system, it adopts a provided interface which specifies the services that can be utilized by other components and how. This interface can be seen as a signature of the component: the client does not need to know about the inner workings of the component (its implementation) in order to make use of it. This principle results in components being referred to as encapsulated. The UML illustrations within this article represent provided interfaces by a lollipop symbol attached to the outer edge of the component.

However, when a component needs to use another component in order to function, it adopts a used interface which specifies the services that it needs. In the UML illustrations in this article, used interfaces are represented by an open socket symbol attached to the outer edge of the component.

An interface is hence a type definition; anywhere an object can be exchanged (in a function or method call), the type of the object to be exchanged can be defined in terms of an interface instead of a specific class.
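
In Java terms, a provided interface is one the component implements, and a used (required) interface is one it depends on. A minimal sketch, with all names hypothetical:

// Provided interface: services this component offers to the system.
interface PaymentService {
    boolean charge(String account, long amountCents);
}

// Used (required) interface: services this component needs from others.
interface AuditLog {
    void record(String message);
}

class SimplePaymentService implements PaymentService {
    private final AuditLog log;          // dependency on the used interface

    SimplePaymentService(AuditLog log) { // the environment supplies the dependency
        this.log = log;
    }

    public boolean charge(String account, long amountCents) {
        log.record("charged " + account + " " + amountCents);
        return true;
    }
}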

Networked computers are finding their way into a broader range of environments, from corporate offices to schools, homes, and shirt pockets. This new computing model fosters the development of distributed software components that communicate with one another across the underlying networked infrastructure. A distributed software component can be plugged into distributed applications that may not have existed when it was created. The intention is that many developers will reuse distributed software components to build new systems.

An interface definition language (IDL) is usually used to describe a distributed software component's interface. However, a notable limitation of current IDLs is that they generally only describe the names and type signatures of the component's attributes and operations. Current IDLs don't formally specify the behavior of the software component's operations. To help solve these problems, the authors have developed Biscotti (behavioral specification of distributed software component interfaces), a Java extension that enhances Java remote method invocation interfaces with Eiffel-style preconditions, postconditions, and invariants.

Software components can represent standard parts designed particularly with regard to their reuse, with their closed functionality published through their contractually specified interfaces.

Contractually specified interfaces can be clear descriptions of the behavior required of software components, on the one hand, as well as the behavior they offer, on the other. Contractually specified interfaces can therefore reduce the interconnection of the system. Thus, software components can be developed, maintained, installed, and combined independently from one another.

Callbacks:

Modern software systems are often developed via composition of independent components with well-defined interfaces and (formal) behavior specification of some sort. When reliability of a software system built from components is a critical issue, formal verification such as program model checking becomes a necessity. Since model checking of the whole complex ("real-life") system at a time is prone to state explosion, compositional methods have to be used. A basic idea of compositional model checking is the checking of (local) properties of isolated components and inferring (global) properties of the whole system from the local properties. This way, state explosion is partially addressed, since a single isolated component typically triggers a smaller state space compared to the whole system.

A popular approach to compositional model checking of component applications is based on the assume-guarantee paradigm: for each component subject to checking, an assumption is stated on the behavior of the component's environment (e.g. the rest of a particular component application); similarly, the "guarantee" is the set of properties that must hold if the component works properly in the assumed environment (e.g. absence of concurrency errors and compliance with the behavior specification). Thus, a successful model check of the component against the properties under the specific assumption guarantees that the component works properly when put into an environment modeled by the assumption.

Specific to program model checkers such as Java PathFinder (JPF) is that they check only complete programs (featuring main()). Thus checking of an isolated component (its implementation, i.e. for instance its Java code) is not directly possible, since its environment also has to be provided in the form of a program (code). Program model checking of a primitive component is therefore associated with the problem of the missing environment [14]. A typical solution to it in the case of JPF is to construct an "artificial" environment (Java code) from an assumption formed as a behavior model, where the behavior model is based on an LTS defined either directly or in the formalism of behavior protocols. Then, JPF is applied to the complete program composed of the component and the environment.

Directory Services

A directory service is the software system that stores, organizes and provides access to information in a directory. In software engineering, a directory is a map between names and values. It allows the lookup of values given a name, similar to a dictionary. As a word in a dictionary may have multiple definitions, in a directory a name may be associated with multiple, different pieces of information. Likewise, as a word may have different parts of speech and different definitions, a name in a directory may have many different types of data.

Directories may be very narrow in scope, supporting only a small set of node types and data types, or they may be very broad, supporting an arbitrary or extensible set of types. In a telephone directory, the nodes are names and the data items are telephone numbers. In the DNS the nodes are domain names and the data items are IP addresses (and aliases, mail server names, etc.). In a directory used by a network operating system, the nodes represent resources that are managed by the OS, including users, computers, printers and other shared resources. Many different directory services have been used since the advent of the Internet, but this article focuses mainly on those that have descended from the X.500 directory service.
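
In Java, name-to-value lookups of this kind are typically done through JNDI. A minimal sketch, assuming a hypothetical LDAP server URL and entry name:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class DirectoryLookup {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389"); // hypothetical server

        DirContext ctx = new InitialDirContext(env);
        // Look up the (possibly many) attributes stored under one name.
        System.out.println(ctx.getAttributes("cn=John Doe,dc=example,dc=com"));
        ctx.close();
    }
}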

In some cases, an existing software component version is a prerequisite for another software component version because, for example, you need to reuse part of the software. In this case, the software component version upon which the other software component version depends is referred to as the underlying software component version. The software component version is only complete with the underlying software component version upon which it is based.

You define the corresponding software dependency (“based-on” relationship) in the System Landscape Directory (see: Software Dependencies). Objects of an underlying software component version are displayed in two places in the navigation area of the Enterprise Services Repository (Objects tab page): ...

1. In the Basis Objects subtree of the software component version that is based on the underlying software component version.
2. In the subtree of the underlying software component version (at the same level as all other software component versions).

This relationship is shown in the figure. In the case illustrated, the software component version Based-On Component 1.0 is based on the software component version Basis 1.0.

Component Architecture:

What is a component framework?

• A component framework defines a set of abstract interactions that define protocols by which components communicate.
• It also defines the packaging for components so that they can be instantiated and composed into legal configurations by third-party binding.
• The three most used types of component frameworks on the market today are:
  o Second-tier component frameworks
  o Contextual component frameworks
  o Visual component frameworks

Tiered component architectures

• Hierarchical decomposition
• Layering
• Key terms:
  – Component
  – Atomic component
  – Module
  – Resource
• Importance of layers and hierarchical decomposition in today’s component systems:
  o Layers can contain the components needed to perform a certain service.
  o All the layers stacked on top of each other form an architecture.
  o The complexity of a layered architecture can be mastered by adding the layered architecture on top of another layered architecture.

• Meta-architecture
  o A meta-architecture forms a layer beneath the component architecture.
  o In this case the layers are called tiers.
  o The first tier is the component architecture.
  o The second tier is the meta-architecture (component framework).
• Definition of a component framework
  o A component framework is a dedicated and focused architecture, usually built around a few key mechanisms, and a fixed set of policies for mechanisms at the component level.
• The need for a third tier (another meta-architecture)
  o A component framework can become an island (able to communicate only with instances of its specified components).
  o By modeling a component framework as a component, it can be plugged into another architecture.
  o The third tier can enable these component frameworks to communicate.

Components and Middleware

Middleware is computer software that connects software components or applications. The software consists of a set of services that allows multiple processes running on one or more machines to interact. This technology evolved to provide for interoperability in support of the move to coherent distributed architectures, which are most often used to support and simplify complex, distributed applications. It includes web servers, application servers, and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.

Middleware sits "in the middle" between application software that may be working on different operating systems. It is similar to the middle layer of a three-tier single system architecture, except that it is stretched across multiple systems or applications. Examples include EAI software, telecommunications software, transaction monitors, and messaging-and-queuing software.

The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core kernel functionality can only be provided by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack for telecommunications, nowadays included in virtually every operating system.

Types of middleware

• Remote Procedure Call — the client makes calls to procedures running on remote systems. Can be asynchronous or synchronous.
• Message-Oriented Middleware — messages sent to the client are collected and stored until they are acted upon, while the client continues with other processing.
• Object Request Broker — this type of middleware makes it possible for applications to send objects and request services in an object-oriented system.
• SQL-oriented Data Access — middleware between applications and database servers.
• Embedded Middleware — communication services and integration interface software/firmware that operates between embedded applications and the real-time operating system.
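
As a toy illustration of the message-oriented style — a sender enqueues messages that the receiver acts on later — here is an in-process Java sketch using a queue. Real MOM products (JMS brokers, for example) provide this across processes and machines; the queue here merely stands in for the middleware's message store:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyMom {
    public static void main(String[] args) throws InterruptedException {
        // The queue stands in for the middleware's message store.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread receiver = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    // Messages wait in the queue until the receiver acts on them.
                    System.out.println("received: " + queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        // The sender continues with other processing after each send.
        queue.put("order-1");
        queue.put("order-2");
        queue.put("order-3");
        receiver.join();
    }
}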

Functions of Middleware

In all of the above situations, applications use intermediate software that resides on top of the operating systems and communication protocols to perform the following functions:

• Hiding distribution, i.e. the fact that an application is usually made up of many interconnected parts running in distributed locations;
• Hiding the heterogeneity of the various hardware components, operating systems and communication protocols;
• Providing uniform, standard, high-level interfaces to the application developers and integrators, so that applications can be easily composed, reused, ported, and made to interoperate;
• Supplying a set of common services to perform various general purpose functions, in order to avoid duplicating efforts and to facilitate collaboration between applications.

THREADS

Introduction:

Threading is a facility to allow multiple activities to coexist within a single process. Most modern operating systems support threads, and Java is the first mainstream programming language to explicitly include threading within the language itself, rather than treating threading as a facility of the underlying operating system. Threads are sometimes referred to as lightweight processes. Like processes, threads are independent, concurrent paths of execution through a program, and each thread has its own stack, its own program counter, and its own local variables. A process can support multiple threads, which appear to execute simultaneously and asynchronously to each other.

Every Java program uses threads

Every Java program has at least one thread -- the main thread. When a Java program starts, the JVM creates the main thread and calls the program's main() method within that thread. The JVM also creates other threads that are mostly invisible to you -- for example, threads associated with garbage collection, object finalization, and other JVM housekeeping tasks. Other facilities create threads too, such as the AWT (Abstract Windowing Toolkit) or Swing UI toolkits, servlet containers, application servers, and RMI (Remote Method Invocation).

Why use threads?

Some of the reasons for using threads are that they can help to:

• Make the UI more responsive
• Take advantage of multiprocessor systems
• Simplify program logic when there are multiple independent entities
• Perform asynchronous or background processing (e.g. perform blocking I/O without blocking the entire program)

Creating threads

In Java, an object of the Thread class can represent a thread. A thread can be implemented in either of two ways:

• Extending the java.lang.Thread class
• Implementing the java.lang.Runnable interface

I. Extending the java.lang.Thread Class

To create a thread this way, a class has to extend the Thread class, following these steps:

1. Extend the java.lang.Thread class.
2. Override the run() method in the subclass to define the code executed by the thread.
3. Create an instance of this subclass. This subclass may call a Thread class constructor from the subclass constructor.
4. Invoke the start() method on the instance of the class to make the thread eligible for running.

The following program demonstrates a single thread creation extending the Thread class:

class MyThread extends Thread {
    String s = null;

    MyThread(String s1) {
        s = s1;
        start();
    }

    public void run() {
        System.out.println(s);
    }
}

public class RunThread {
    public static void main(String args[]) {
        MyThread m1 = new MyThread("Thread started....");
    }
}

Output:

C:\j2se6\thread>javac RunThread.java
C:\j2se6\thread>java RunThread
Thread started....

II. Implementing the java.lang.Runnable Interface

The procedure for creating threads by implementing the Runnable interface is as follows:

1. A class implements the Runnable interface and overrides the run() method to define the code executed by the thread. An object of this class is a Runnable object.
2. Create an object of the Thread class by passing a Runnable object as an argument.
3. Invoke the start() method on the instance of the Thread class.

The following program demonstrates thread creation implementing the Runnable interface:

class MyThread1 implements Runnable {
    Thread t;
    String s = null;

    MyThread1(String s1) {
        s = s1;
        t = new Thread(this);
        t.start();
    }

    public void run() {
        System.out.println(s);
    }
}

public class RunableThread {
    public static void main(String args[]) {
        MyThread1 m1 = new MyThread1("Thread started....");
    }
}

There are two reasons why implementing the Runnable interface is preferable to extending the Thread class:

1. If you extend the Thread class, the subclass cannot extend any other class, but if you implement the Runnable interface you still can.
2. A class implementing the Runnable interface avoids the full overhead of the Thread class, which can be excessive.

Thread Priorities:

In Java, the thread scheduler can use thread priorities, in the form of an integer value assigned to each thread, to determine the execution schedule of threads. Threads get the ready-to-run state according to their priorities. The thread scheduler provides CPU time to the thread of highest priority during the ready-to-run state. Priorities are integer values from 1 (lowest priority, given by the constant Thread.MIN_PRIORITY) to 10 (highest priority, given by the constant Thread.MAX_PRIORITY). The default priority is 5 (Thread.NORM_PRIORITY).

Constant               Description
Thread.MIN_PRIORITY    The minimum priority of any thread (an int value of 1)
Thread.MAX_PRIORITY    The maximum priority of any thread (an int value of 10)
Thread.NORM_PRIORITY   The normal priority of any thread (an int value of 5)

The methods that are used to get and set the priority of a thread are:

setPriority() – This method is used to set the priority of a thread.
getPriority() – This method is used to get the priority of a thread.
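
A minimal sketch of these two methods in use (the Runnable body is arbitrary):

Thread worker = new Thread(new Runnable() {
    public void run() {
        System.out.println("working at priority " + Thread.currentThread().getPriority());
    }
});
worker.setPriority(Thread.MAX_PRIORITY);  // request the highest priority (10)
System.out.println(worker.getPriority()); // prints 10
worker.start();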

When a Java thread is created, it inherits its priority from the thread that created it. At any given time, when multiple threads are ready to be executed, the runtime system chooses the runnable thread with the highest priority for execution. The Java runtime system applies a preemptive scheduling algorithm: if a thread with a higher priority becomes runnable while other threads are running, the runtime system chooses the new higher-priority thread for execution. On the other hand, if two threads of the same priority are waiting to be executed by the CPU, then a round-robin algorithm is applied, in which the scheduler chooses one of them to run according to their round of time-slice.

Thread Scheduler

In implementing threading, the scheduler usually applies one of the two following strategies:

• Preemptive scheduling – If a new thread has a higher priority than the currently running thread, the current thread leaves the running state and the higher-priority thread enters it.
• Time-Sliced (Round-Robin) Scheduling – A running thread is allowed to execute for a fixed time; when that time has elapsed, it yields so that another thread can enter the running state.

States and State Transitions of Java Threads:

[Figure: thread state diagram. States: new, runnable, running, suspended, wait for lock, wait for notify, terminated. Transitions include: start, schedule, pre-empt, suspend, resume, synchronized (wait for lock), wait, notified, and stop/destroy.]

When a thread is in the "New Thread" state, it is merely an empty Thread object. No system resources have been allocated for it yet. Thus when a thread is in this state, you can only start the thread or stop it. Calling any method besides start() or stop() when a thread is in this state makes no sense and causes an IllegalThreadStateException.

Runnable

The start() method creates the system resources necessary to run the thread, schedules the thread to run, and calls the thread's run() method. At this point the thread is in the "Runnable" state. This state is called "Runnable" rather than "Running" because the thread might not actually be running when it is in this state. Many computers have a single processor, making it impossible to run all "Runnable" threads at the same time. So, the Java runtime system must implement a scheduling scheme that shares the processor between all "Runnable" threads. (See Thread Priority for more information about scheduling.) However, for most purposes you can think of the "Runnable" state as simply "Running". When a thread is running--it's "Runnable" and is the current thread--the instructions in its run() method are executing sequentially.

Not Runnable

A thread enters the "Not Runnable" state when one of these four events occurs:

• Someone invokes its sleep() method.
• Someone invokes its suspend() method.
• The thread uses its wait() method to wait on a condition variable.
• The thread is blocking on I/O.

For each of these entrances into the "Not Runnable" state, there is a corresponding escape route back to "Runnable":

• If a thread has been put to sleep, then the specified number of milliseconds must elapse.
• If a thread has been suspended, then someone must call its resume() method.
• If a thread is waiting on a condition variable, whatever object owns the variable must relinquish it by calling either notify() or notifyAll().
• If a thread is blocked on I/O, then the I/O must complete.

Dead

A thread can die in two ways: either from natural causes, or by being killed (stopped). A thread dies naturally when its run() method exits normally. You can also kill a thread at any time by calling its stop() method. The code sketch after this paragraph creates and starts myThread and then puts the current thread to sleep for 10 seconds. The stop() method throws a ThreadDeath object at the target thread to kill it. Thus when a thread is killed in this manner it dies asynchronously; the thread will die when it actually receives the ThreadDeath exception. The stop() method causes a sudden termination of a Thread's run() method. If the run() method performs critical or sensitive calculations, stop() may leave the program in an inconsistent or awkward state. Normally, you should not call Thread's stop() method but arrange for a gentler termination, such as setting a flag to indicate to the run() method that it should exit.
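
The snippet itself is not reproduced in these notes; a minimal reconstruction, assuming the MyThread class from the earlier example:

Thread myThread = new MyThread("Thread started....");
// MyThread's constructor already calls start(); see the earlier example.
try {
    Thread.sleep(10000);   // put the current thread to sleep for 10 seconds
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
myThread.stop();           // deprecated: throws ThreadDeath at myThread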

The isAlive() Method

A final word about thread state: the programming interface for the Thread class includes a method called isAlive(). The isAlive() method returns true if the thread has been started and not stopped. Thus, if the isAlive() method returns false you know that the thread is either a "New Thread" or "Dead". If the isAlive() method returns true, you know that the thread is either "Runnable" or "Not Runnable". You cannot differentiate between a "New Thread" and a "Dead" thread; nor can you differentiate between a "Runnable" thread and a "Not Runnable" thread.

Joining with threads

The Thread API contains a method for waiting for another thread to complete: the join() method. When you call Thread.join(), the calling thread will block until the target thread completes. Thread.join() is generally used by programs that use threads to partition large problems into smaller ones, giving each thread a piece of the problem. The example below creates ten threads, starts them, then uses Thread.join() to wait for them all to complete.
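
The original example is not reproduced in these notes; a minimal sketch of the pattern it describes:

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[10];
        for (int i = 0; i < workers.length; i++) {
            final int piece = i;   // each thread gets one piece of the problem
            workers[i] = new Thread(() -> System.out.println("piece " + piece + " done"));
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join();              // block until this worker completes
        }
        System.out.println("all pieces done");
    }
}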

Sleeping

The Thread API includes a sleep() method, which will cause the current thread to go into a wait state until the specified amount of time has elapsed or until the thread is interrupted by another thread calling Thread.interrupt() on the current thread's Thread object. When the specified time elapses, the thread again becomes runnable and goes back onto the scheduler's queue of runnable threads.

If a thread is interrupted by a call to Thread.interrupt(), the sleeping thread will throw an InterruptedException so that the thread will know that it was awakened by an interrupt and won't have to check to see if the timer expired. The Thread.yield() method is like Thread.sleep(), but instead of sleeping, it simply pauses the current thread momentarily so that other threads can run. In most implementations, threads with lower priority will not run when a thread of higher priority calls Thread.yield().

Threads - notify() and notifyAll() methods

• the notify() and notifyAll() methods are defined in the Object class
• they can only be used within synchronized code
• notify() wakes up a single thread which is waiting on the object's lock
• if there is more than one thread waiting, the choice is arbitrary, i.e. there is no way to specify which waiting thread should be re-awakened
• notifyAll() wakes up all waiting threads; the scheduler decides which one will run
• if there are no waiting threads, the notifications are forgotten
• only notifications that occur after a thread has moved to the wait state will affect it; earlier notifications are irrelevant

Daemon threads

We mentioned that a Java program exits when all of its threads have completed, but this is not exactly correct. What about the hidden system threads, such as the garbage collection thread and others created by the JVM? We have no way of stopping these. If those threads are running, how does any Java program ever exit? These system threads are called daemon threads. A Java program actually exits when all its non-daemon threads have completed.

Any thread can become a daemon thread. You can indicate a thread is a daemon thread by calling the Thread.setDaemon() method. You might want to use daemon threads for background threads that you create in your programs, such as timer threads or other deferred event threads, which are only useful while there are other non-daemon threads running.

Daemon threads are like service providers for other threads or objects running in the same process as the daemon thread. Daemon threads are used for background supporting tasks and are only needed while normal threads are executing. If normal threads are not running and the remaining threads are daemon threads, then the interpreter exits.

setDaemon(true/false) – This method is used to specify whether a thread is a daemon thread.

Creating threads and starting threads are not the same

A thread doesn't actually begin to execute until another thread calls the start() method on the Thread object for the new thread. The Thread object exists before its thread actually starts, and it continues to exist after its thread exits. This allows you to control or obtain information about a thread you've created, even if the thread hasn't started yet or has already completed.

It's generally a bad idea to start() threads from within a constructor. Doing so could expose partially constructed objects to the new thread. If an object owns a thread, then it should provide a start() or init() method that will start the thread, rather than starting it from the constructor. (Note that, for brevity, the earlier MyThread examples do start their threads from the constructor.)

Ending threads

A thread will end in one of three ways:

• The thread comes to the end of its run() method.
• The thread throws an Exception or Error that is not caught.
• Another thread calls one of the deprecated stop() methods. Deprecated means they still exist, but you shouldn't use them in new code and should strive to eliminate them in existing code.

When all the threads within a Java program complete, the program exits.

Thread Synchronization

The monitor

Java uses the monitor concept as the basis of its thread synchronization. A monitor is an object that can block and revive threads.

Java provides a way to lock the code for the thread which is currently executing it, making other threads that wish to use it wait until the first thread is finished. These other threads are placed in the waiting state. Java is not fair about this, because there is no queue: any one of the waiting threads may get the monitor next, regardless of the order in which they asked for it.

The synchronized keyword

A block of code can be turned into a monitor using the keyword synchronized. There are two ways to use synchronized:

1. Synchronizing a whole method by adding the word synchronized as a modifier in the method's header. This is the most common way of doing it. Many of Java's API methods are synchronized this way, for example public synchronized void setLabel(String label) from the Button class. The object referenced in the method call is the object holding the lock, in this case the Button object.

2. Synchronizing only a block of code within a method using the synchronized keyword. The block of code is preceded by the keyword synchronized and an expression which references the object holding the lock. For example, here is a code fragment which changes all values in an array to positive and prevents any other thread from changing any elements before the code has finished making all the changes.
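
The fragment itself is missing from these notes; a minimal reconstruction of what it describes:

void makeAllPositive(int[] values) {
    synchronized (values) {              // the array object holds the lock
        for (int i = 0; i < values.length; i++) {
            if (values[i] < 0) {
                values[i] = -values[i];  // flip negative values to positive
            }
        }
    }
}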

wait(), notify() and notifyAll()

The methods wait(), notify() and notifyAll() service this need. Note that all three can only be used within synchronized code; if you try to use them outside of synchronization, the call throws an IllegalMonitorStateException at run time. Note that these methods belong to the Object class, not the Thread class.

public final void wait() throws InterruptedException
public final native void wait(long timeout) throws InterruptedException

wait() puts the current thread into the waiting state.

public final native void notify()

notify() looks at the threads in the waiting state, picks one (which one is unpredictable), and moves it to the ready state where the scheduler will decide what to do with it. Any other threads in the waiting state remain there until notify() or notifyAll() are called again.

public final native void notifyAll()

notifyAll() moves all the threads in the waiting state into the ready state, where the scheduler will decide when they will move to the running state. notifyAll() is much more commonly used than notify(), because the latter could conceivably leave a thread waiting for a very long time if it were unlucky.
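
A minimal sketch of the wait/notify handshake (the Dropbox class and its names are hypothetical):

class Dropbox {
    private Object item;                 // shared condition variable

    public synchronized void put(Object o) {
        item = o;
        notifyAll();                     // wake every thread waiting on this lock
    }

    public synchronized Object take() throws InterruptedException {
        while (item == null) {
            wait();                      // release the lock and wait for a notify
        }
        Object o = item;
        item = null;
        return o;
    }
}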

Some important thread problems

• race conditions
• starvation
• deadlock

Java Beans:

• JavaBeans components are Java classes that can be easily reused and composed together into applications.
• JavaBeans™ is a portable, platform-independent component model written in the Java programming language. The JavaBeans architecture was built through a collaborative industry effort and enables developers to write reusable components in the Java programming language.
• Using JavaBeans-compliant application builder tools, you can combine these components into applets, applications, or composite components.
• A Java Bean is a reusable software component (actually, a Java class) that can be manipulated visually in a builder tool.
• Examples of "builder tools":
  o BeanBox (part of Sun's basic Beans Development Kit (BDK), available free from java.sun.com)
  o Sun Java Workshop
  o IBM VisualAge for Java
  o Symantec Visual Cafe
  o Borland JBuilder

JavaBean components are known as beans. Beans are dynamic in that they can be changed or customized. Through the design mode of a builder tool, you use the property sheet or bean customizer to customize the bean and then save (persist) your customized beans.

JavaBeans Concepts

The JavaBeans™ architecture is based on a component model which enables developers to create software units called components. Components are self-contained, reusable software units that can be visually assembled into composite components, applets, applications, and servlets using visual application builder tools. JavaBean components are known as beans.

A set of APIs describes a component model for a particular language. The JavaBeans API specification describes the core of the JavaBeans component architecture in detail.

Beans are dynamic in that they can be changed or customized. Through the design mode of a builder tool you can use the Properties window of the bean to customize the bean and then save (persist) your beans using visual manipulation. You can select a bean from the toolbox, drop it into a form, modify its appearance and behavior, define its interaction with other beans, and combine it and other beans into an applet, application, or a new bean.

The following list briefly describes key bean concepts:

• Builder tools discover a bean's features (that is, its properties, methods, and events) by a process known as introspection. Beans support introspection in two ways:
  o By adhering to specific rules, known as design patterns, when naming bean features. The Introspector class examines beans for these design patterns to discover bean features. The Introspector class relies on the core reflection API; the Reflection API trail is an excellent place to learn about reflection.
  o By explicitly providing property, method, and event information with a related bean information class. A bean information class implements the BeanInfo interface. A BeanInfo class explicitly lists those bean features that are to be exposed to application builder tools.
• Properties are the appearance and behavior characteristics of a bean that can be changed at design time. Builder tools introspect on a bean to discover its properties and expose those properties for manipulation.
• Beans expose properties so they can be customized at design time. Customization is supported in two ways: by using property editors, or by using more sophisticated bean customizers.
• Beans use events to communicate with other beans. A bean that is to receive events (a listener bean) registers with the bean that fires the event (a source bean). Builder tools can examine a bean and determine which events that bean can fire (send) and which it can handle (receive).
• Persistence enables beans to save and restore their state. After changing a bean's properties, you can save the state of the bean and restore that bean at a later time with the property changes intact. The JavaBeans architecture uses Java Object Serialization to support persistence.
• A bean's methods are no different from Java methods, and can be called from other beans or a scripting environment. By default all public methods are exported.

Beans vary in functionality and purpose. You have probably met some of the following beans in your programming practice:

• GUI (graphical user interface) components
• Non-visual beans, such as a spelling checker
• Animation applets
• Spreadsheet applications

Events and Connections

Manipulating Events

Event passing is the means by which components communicate with each other. Components broadcast events, and the underlying framework delivers the events to the components that are to be notified. The notified components usually perform some action based on the event that took place.

The event model was designed to accommodate the JavaBeans™ architecture. To understand how events and event handling work in the JavaBeans component model, you must understand the concepts of events, listeners, and sources. To refresh your knowledge in these areas, read the Writing Event Listeners lesson of the Swing tutorial.

The event model that is used by the JavaBeans architecture is a delegation model. This model is composed of three main parts: sources, events, and listeners. The source of an event is the object that originates or fires the event. The source must define the events it will fire, as well as the methods for registering listeners of those events. A listener is an object that indicates that it is to be notified of events of a particular type. Listeners register for events using the methods defined by the sources of those events.

From the Properties lesson you discovered two event listeners. The PropertyChangeListener (in the API reference documentation) interface provides a notification whenever a bound property value is changed, and the VetoableChangeListener (in the API reference documentation) creates a notification whenever a bean changes a constrained property value.

Simple Event Example

This example represents an application that performs an action when a button is clicked. Button components are defined as sources of an event type called ActionEvent. Listeners of events of this type must register for these events using the addActionListener method. Therefore, the addActionListener method is used to register the ButtonHandler object as a listener of the ActionEvent event that is fired by the button. In addition, according to the requirements of the ActionListener interface, you must define an actionPerformed method, which is the method that is called when the button is clicked.

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JTextArea;
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.WindowConstants;

public class ButtonHandler implements ActionListener {

    /** Component that will contain messages about events generated. */
    private JTextArea output;

    /**
     * Creates an ActionListener that will put a message in the
     * JTextArea every time an event is received.
     */
    public ButtonHandler( JTextArea output ) {
        this.output = output;
    }

    /**
     * When it receives an action event notification, appends a
     * message to the JTextArea passed into the constructor.
     */
    public void actionPerformed( ActionEvent event ) {
        this.output.append( "Action occurred: " + event + '\n' );
    }
}

class ActionTester {
    public static void main(String args[]) {
        JFrame frame = new JFrame( "Button Handler" );
        JTextArea area = new JTextArea( 6, 80 );
        JButton button = new JButton( "Fire Event" );
        button.addActionListener( new ButtonHandler( area ) );
        frame.add( button, BorderLayout.NORTH );
        frame.add( area, BorderLayout.CENTER );
        frame.pack();
        frame.setDefaultCloseOperation( WindowConstants.DISPOSE_ON_CLOSE );
        frame.setLocationRelativeTo( null );
        frame.setVisible( true );
    }
}

Bean Properties

A bean property is a named attribute of a bean that can affect its behavior or appearance. Examples of bean properties include color, label, font, font size, and display size. The JavaBeans specification defines the following types of bean properties:

Simple – A bean property with a single value whose changes are independent of changes in any other property. To add simple properties to a bean, add appropriate getXXX and setXXX methods (or isXXX and setXXX methods for a boolean property). The names of these methods follow specific rules called design patterns. These design pattern-based method names allow builder tools to:

• Discover a bean's properties
• Determine the properties' read/write attributes
• Determine the properties' types
• Locate the appropriate property editor for each property type
• Display the properties (usually in the Properties window)
• Alter the properties (at design time)
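
A minimal simple-property bean following these design patterns (the class and property names are hypothetical):

import java.io.Serializable;

public class LabelBean implements Serializable {
    private String label = "default";   // backing field for the simple property
    private boolean visible = true;

    public LabelBean() { }              // public no-args constructor

    public String getLabel() { return label; }
    public void setLabel(String label) { this.label = label; }

    public boolean isVisible() { return visible; }           // boolean getter pattern
    public void setVisible(boolean visible) { this.visible = visible; }
}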

Indexed – A bean property that supports a range of values instead of a single value. An indexed property is an array of properties or objects that supports a range of values and enables the accessor to specify an element of a property to read or write. Indexed properties are specified by the following methods:

// Methods to access individual values
public PropertyElement getPropertyName(int index)
public void setPropertyName(int index, PropertyElement element)

// Methods to access the entire indexed property array
public PropertyElement[] getPropertyName()
public void setPropertyName(PropertyElement[] element)

Note that the distinction between the get and set methods for indexed properties is subtle. The get method either has an argument that is the array index of the property, or returns an array. The set method either has two arguments, namely an integer array index and the property element object that is being set, or has the entire array as an argument.

Bound – A bean property for which a change to the property results in a notification being sent to some other bean. Bound properties support the PropertyChangeListener interface.

Sometimes when a bean property changes, another object might need to be notified of the change, and react to it. Whenever a bound property changes, notification of the change is sent to interested listeners.

The accessor methods for a bound property are defined in the same way as those for simple properties. However, you also need to provide the event listener registration methods for PropertyChangeListener objects and fire a PropertyChangeEvent event to the PropertyChangeListener objects by calling their propertyChange methods.

The convenience PropertyChangeSupport class enables your bean to implement these methods. Your bean can inherit them from the PropertyChangeSupport class, or use it as an inner class.

In order to listen for property changes, an object must be able to add and remove itself from the listener list on the bean containing the bound property. It must also be able to respond to the event notification method that signals a property change.

The PropertyChangeEvent class encapsulates property change information, and is sent from the property change event source to each object in the property change listener list with the propertyChange method.

Implementing Bound Property Support Within a Bean

To implement a bound property in your application, follow these steps (a sketch appears after this list):

• Import the java.beans package. This gives you access to the PropertyChangeSupport class.
• Instantiate a PropertyChangeSupport object. This object maintains the property change listener list and fires property change events. You can also make your class a PropertyChangeSupport subclass.
• Implement methods to maintain the property change listener list. Since a PropertyChangeSupport subclass implements these methods, you merely wrap calls to the property-change support object's methods.
• Modify a property's set method to fire a property change event when the property is changed.
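
A minimal sketch of these steps (TitleBean and its title property are hypothetical):

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class TitleBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String title = "";

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);        // maintain the listener list
    }

    public void removePropertyChangeListener(PropertyChangeListener l) {
        pcs.removePropertyChangeListener(l);
    }

    public String getTitle() { return title; }

    public void setTitle(String newTitle) {
        String old = this.title;
        this.title = newTitle;
        pcs.firePropertyChange("title", old, newTitle);  // notify listeners
    }
}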

Constrained – A bean property for which a change to the property results in validation by another bean. The other bean may reject the change if it is not appropriate. A bean property is constrained if the bean supports the VetoableChangeListener and PropertyChangeEvent classes, and if the set method for this property throws a PropertyVetoException.

Constrained properties are more complicated than bound properties because they also support property change listeners which happen to be vetoers. The following operations in the setXXX method for the constrained property must be implemented in this order:

1. Save the old value in case the change is vetoed.
2. Notify listeners of the new proposed value, allowing them to veto the change.
3. If no listener vetoes the change (no exception is thrown), set the property to the new value.

The accessor methods for a constrained property are defined in the same way as those for simple properties, with the addition that the setXXX method throws a PropertyVetoException exception. The syntax is as follows:

public void setPropertyName(PropertyType pt)
    throws PropertyVetoException { code }
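
A minimal sketch of the three ordered operations above, using the convenience VetoableChangeSupport class (PricedBean and its price property are hypothetical):

import java.beans.PropertyVetoException;
import java.beans.VetoableChangeListener;
import java.beans.VetoableChangeSupport;

public class PricedBean {
    private final VetoableChangeSupport vcs = new VetoableChangeSupport(this);
    private int price;

    public void addVetoableChangeListener(VetoableChangeListener l) {
        vcs.addVetoableChangeListener(l);
    }

    public void setPrice(int newPrice) throws PropertyVetoException {
        int old = this.price;                            // 1. save the old value
        vcs.fireVetoableChange("price", old, newPrice);  // 2. listeners may veto
        this.price = newPrice;                           // 3. no veto: commit
    }
}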

Handling Vetoes

If a registered listener vetoes a proposed property change by throwing a PropertyVetoException exception, the source bean with the constrained property is responsible for the following actions:

1. Catching exceptions.
2. Reverting to the old value for the property.
3. Issuing a new VetoableChangeListener.vetoableChange call to all listeners to report the reversion.

The VetoableChangeListener class throws a PropertyVetoException and handles the PropertyChangeEvent event fired by the bean with the constrained property. The VetoableChangeSupport class provides the following operations:

1. Keeping track of VetoableChangeListener objects.
2. Issuing the vetoableChange method on all registered listeners.
3. Catching any vetoes (exceptions) thrown by listeners.
4. Informing all listeners of a veto by calling vetoableChange again, but with the old property value as the proposed "new" value.

Bean properties can also be classified as follows:

• Writable – A bean property that can be changed:
  o Standard
  o Expert
  o Preferred
• Read Only – A bean property that cannot be changed.
• Hidden – A bean property that can be changed. However, these properties are not disclosed with the BeanInfo class.

A JavaBean is just a Java class with the following requirements:

i) It has a public no-args constructor.
ii) It has 'set' and 'get' methods for its properties.
iii) It may have any general functions.
iv) If required, it must be Serializable.

Introspection

Introspection is the automatic process of analyzing a bean's design patterns to reveal the bean's properties, events, and methods. This process controls the publishing and discovery of bean operations and properties. This lesson explains the purpose of introspection, introduces the Introspection API, and gives an example of introspection code.

Purpose of Introspection

The way in which introspection is implemented provides great advantages, including:

Portability – Everything is done in the Java platform, so you can write components once and reuse them everywhere. There are no extra specification files that need to be maintained independently from your component code. There are no platform-specific issues to contend with. Your component is not tied to one component model or one proprietary platform. You get all the advantages of the evolving Java APIs, while maintaining the portability of your components.

Reuse – By following the JavaBeans design conventions, implementing the appropriate interfaces, and extending the appropriate classes, you provide your component with reuse potential that possibly exceeds your expectations.

Introspection API

The JavaBeans API architecture supplies a set of classes and interfaces to provide introspection.

The BeanInfo interface of the java.beans package defines a set of methods that allow bean implementors to provide explicit information about their beans. By specifying BeanInfo for a bean component, a developer can hide methods, specify an icon for the toolbox, provide descriptive names for properties, define which properties are bound properties, and much more.

The getBeanInfo method of the Introspector class can be used by builder tools and other automated environments to provide detailed information about a bean. The getBeanInfo method relies on the naming conventions for the bean's properties, events, and methods. A call to getBeanInfo results in the introspection process analyzing the bean's classes and superclasses.
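
A minimal sketch of calling the Introspector directly (JButton is used merely as a convenient bean class to inspect):

import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class IntrospectDemo {
    public static void main(String[] args) throws Exception {
        // Analyze the class and its superclasses for design-pattern features.
        BeanInfo info = Introspector.getBeanInfo(javax.swing.JButton.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName() + " : " + pd.getPropertyType());
        }
    }
}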

The Introspector class provides descriptor classes with information about the properties, events, and methods of a bean. Methods of this class locate any descriptor information that has been explicitly supplied by the developer through BeanInfo classes. Then the Introspector class applies the naming conventions to determine what properties the bean has, the events to which it can listen, and those which it can send.

The FeatureDescriptor classes form a hierarchy (shown in a figure in the original lesson). Each class represented in this group describes a particular attribute of the bean. For example, the isBound method of the PropertyDescriptor class indicates whether a PropertyChangeEvent event is fired when the value of this property changes.

JAR Files

The JavaTM Archive (JAR) file format enables you to bundle multiple files into a single

archive file. Typically a JAR file contains the class files and auxiliary resources

associated with applets and applications. The JAR file format provides many benefits:

• Security: You can digitally sign the contents of a JAR file. Users who recognize your

signature can then optionally grant your software security privileges it wouldn't

otherwise have.

• Decreased download time: If your applet is bundled in a JAR file, the applet's class

files and associated resources can be downloaded to a browser in a single HTTP

transaction without the need for opening a new connection for each file.

• Compression: The JAR format allows you to compress your files for efficient

storage.

• Packaging for extensions: The extensions framework provides a means by which

you can add functionality to the Java core platform, and the JAR file format defines

the packaging for extensions. Java 3D and JavaMail are examples of extensions developed by Sun. By using the JAR file format, you can turn your software into

extensions as well.

• Package Sealing: Packages stored in JAR files can be optionally sealed so that the

package can enforce version consistency. Sealing a package within a JAR file means

that all classes defined in that package must be found in the same JAR file.

• Package Versioning: A JAR file can hold data about the files it contains, such as

vendor and version information.

• Portability: The mechanism for handling JAR files is a standard part of the Java

platform's core API.

Using JAR Files: The Basics

JAR files are packaged with the ZIP file format, so you can use them for tasks such as

lossless data compression, archiving, decompression, and archive unpacking. These tasks

are among the most common uses of JAR files, and you can realize many JAR file

benefits using only these basic features.

To perform basic tasks with JAR files, you use the Java Archive Tool provided as part

of the Java Development Kit. Because the Java Archive tool is invoked by using the jar

command, this tutorial refers to it as 'the Jar tool'. As a synopsis and preview of some of

the topics to be covered in this section, the following table summarizes common JAR file operations:

Operation                                   Command
Create a JAR file                           jar cf jar-file input-file(s)
View the contents of a JAR file             jar tf jar-file
Extract the contents of a JAR file          jar xf jar-file [archived-file(s)]
Update an existing JAR file                 jar uf jar-file input-file(s)
Create a JAR file with manifest additions   jar cfm jar-file manifest-addition input-file(s)

Creating a JAR File

The basic format of the command for creating a JAR file is:

jar cf jar-file input-file(s)

The options and arguments used in this command are:

• The c option indicates that you want to create a JAR file.

• The f option indicates that you want the output to go to a file rather than to stdout.

• jar-file is the name that you want the resulting JAR file to have. You can use any

filename for a JAR file. By convention, JAR filenames are given a .jar extension,

though this is not required.

• The input-file(s) argument is a space-separated list of one or more files that you

want to include in your JAR file. The input-file(s) argument can contain the

wildcard * symbol. If any of the "input-files" are directories, the contents of those

directories are added to the JAR archive recursively.

This command will generate a compressed JAR file and place it in the current directory.

The command will also generate a default manifest file for the JAR archive. The

metadata in the JAR file, such as the entry names, comments, and contents of the manifest, must be encoded in UTF-8.
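For example, a command like the following (the file names are hypothetical) bundles two class files into an archive named PersonBean.jar in the current directory:

jar cf PersonBean.jar PersonBean.class ShowProperties.class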

Viewing the Contents of a JAR File

The basic format of the command for viewing the contents of a JAR file is:

jar tf jar-file

Let's look at the options and argument used in this command:

• The t option indicates that you want to view the table of contents of the JAR file.

• The f option indicates that the JAR file whose contents are to be viewed is specified

on the command line.

• The jar-file argument is the path and name of the JAR file whose contents you

want to view.

This command will display the JAR file's table of contents to stdout.

Extracting the Contents of a JAR File

The basic command to use for extracting the contents of a JAR file is:

jar xf jar-file [archived-file(s)]

Let's look at the options and arguments in this command:

• The x option indicates that you want to extract files from the JAR archive.

• The f option indicates that the JAR file from which files are to be extracted is specified on the command line, rather than through stdin.

• The jar-file argument is the filename (or path and filename) of the JAR file from

which to extract files.

• archived-file(s) is an optional argument consisting of a space-separated list of

the files to be extracted from the archive. If this argument is not present, the Jar tool

will extract all the files in the archive.

When extracting files, the Jar tool makes copies of the desired files and writes them to the

current directory, reproducing the directory structure that the files have in the archive.

The original JAR file remains unchanged.
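For example, assuming the archive created earlier:

jar xf PersonBean.jar

extracts every file in the archive, while

jar xf PersonBean.jar PersonBean.class

extracts only the named entry.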

Updating a JAR File

The Jar tool provides a u option which you can use to update the contents of an existing

JAR file by modifying its manifest or by adding files. The basic command for adding

files has this format:

jar uf jar-file input-file(s)

In this command:

• The u option indicates that you want to update an existing JAR file.

• The f option indicates that the JAR file to update is specified on the command line.

• jar-file is the existing JAR file that's to be updated.

• input-file(s) is a space-delimited list of one or more files that you want to add to the JAR file.

Any files already in the archive having the same pathname as a file being added will be

overwritten.

Working with Manifest Files: The Basics

JAR files support a wide range of functionality, including electronic signing, version

control, package sealing, and others. What gives a JAR file this versatility? The answer is

the JAR file's manifest. The manifest is a special file that can contain information about

the files packaged in a JAR file. By tailoring this "meta" information that the manifest

contains, you enable the JAR file to serve a variety of purposes.

Understanding the Default Manifest

When we create a JAR file, it automatically receives a default manifest file. There can be

only one manifest file in an archive, and it always has the pathname

META-INF/MANIFEST.MF

When we create a JAR file, the default manifest file simply contains the following:

Manifest-Version: 1.0

Created-By: 1.6.0 (Sun Microsystems Inc.)

These lines show that a manifest's entries take the form of "header: value" pairs. The

name of a header is separated from its value by a colon. The default manifest conforms

to version 1.0 of the manifest specification and was created by the 1.6.0 version of the

JDK.

The manifest can also contain information about the other files that are packaged in the

archive. Exactly what file information should be recorded in the manifest depends on

how you intend to use the JAR file. The default manifest makes no assumptions about

what information it should record about other files. Digest information is not included in

the default manifest.

Modifying a Manifest File

We use the m command-line option to add custom information to the manifest during

creation of a JAR file. This section describes the m option.

The Jar tool automatically puts a default manifest with the pathname META-

INF/MANIFEST.MF into any JAR file we create. We can enable special JAR file

functionality, such as package sealing, by modifying the default manifest. Typically,

modifying the default manifest involves adding special-purpose headers to the manifest

that allow the JAR file to perform a particular desired function.

To modify the manifest, we must first prepare a text file containing the information we wish to add to the manifest. We then use the Jar tool's m option to add the information in our file to the manifest.

The text file from which you are creating the manifest must end with a new line or

carriage return. The last line will not be parsed properly if it does not end with a new

line or carriage return. The basic command has this format:

jar cfm jar-file manifest-addition input-file(s)

Let's look at the options and arguments used in this command:

• The c option indicates that you want to create a JAR file.

• The m option indicates that you want to merge information from an existing file into

the manifest file of the JAR file you're creating.

• The f option indicates that you want the output to go to a file (the JAR file you're

creating) rather than to standard output.

• manifest-addition is the name (or path and name) of the existing text file whose

contents you want to add to the contents of JAR file's manifest.

• jar-file is the name that you want the resulting JAR file to have.

• The input-file(s) argument is a space-separated list of one or more files that you

want to be placed in your JAR file.

The m and f options must be in the same order as the corresponding arguments. The contents of the manifest must be encoded in UTF-8.
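For example, to make an archive runnable with java -jar, we could prepare a one-line text file named manifest.txt (a hypothetical name) containing the standard Main-Class header, not forgetting the trailing newline:

Main-Class: ShowProperties

and then merge it into the manifest while creating the archive:

jar cfm app.jar manifest.txt PersonBean.class ShowProperties.class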

The Reflection API

Uses of Reflection

Reflection is commonly used by programs which require the ability to examine or

modify the runtime behavior of applications running in the Java virtual machine. This

is a relatively advanced feature and should be used only by developers who have a

strong grasp of the fundamentals of the language. With that caveat in mind, reflection

is a powerful technique and can enable applications to perform operations which would

otherwise be impossible.

Java's Reflection API makes it possible to inspect classes, interfaces, fields and

methods at runtime, without knowing the names of the classes, methods etc. at compile

time. It is also possible to instantiate new objects, invoke methods and get/set field

values using reflection.

Java Reflection is quite powerful and can be very useful. For instance, it can be used when mapping objects to tables in a database at runtime, as Butterfly Persistence does, or when mapping the statements in a script language to method calls on real objects at runtime, as Butterfly Container does when parsing its configuration scripts.

Reflection is a feature in the Java programming language. It allows an executing Java

program to examine or "introspect" upon itself, and manipulate internal properties of the

program. For example, it's possible for a Java class to obtain the names of all its members

and display them.

The ability to examine and manipulate a Java class from within itself may not sound like

very much, but in other programming languages this feature simply doesn't exist. For

example, there is no way in a Pascal, C, or C++ program to obtain information about the

functions defined within that program.

One tangible use of reflection is in JavaBeans, where software components can be

manipulated visually via a builder tool. The tool uses reflection to obtain the properties of

Java components (classes) as they are dynamically loaded.

Extensibility Features

An application may make use of external, user-defined classes by creating instances of

extensibility objects using their fully-qualified names.

Class Browsers and Visual Development Environments

A class browser needs to be able to enumerate the members of classes. Visual

development environments can benefit from making use of type information available

in reflection to aid the developer in writing correct code.

Debuggers and Test Tools

Debuggers need to be able to examine private members on classes. Test harnesses can make use of reflection to systematically call a discoverable set of APIs defined on a class, to ensure a high level of code coverage in a test suite.

Drawbacks of Reflection

Reflection is powerful, but should not be used indiscriminately. If it is possible to

perform an operation without using reflection, then it is preferable to avoid using it. The

following concerns should be kept in mind when accessing code via reflection.

Performance Overhead

Because reflection involves types that are dynamically resolved, certain Java virtual

machine optimizations can not be performed. Consequently, reflective operations have

slower performance than their non-reflective counterparts, and should be avoided in

sections of code which are called frequently in performance-sensitive applications.

Security Restrictions

Reflection requires a runtime permission which may not be present when running under

a security manager. This is an important consideration for code which has to run in a restricted security context, such as in an applet.

Exposure of Internals

Since reflection allows code to perform operations that would be illegal in non-

reflective code, such as accessing private fields and methods, the use of reflection can

result in unexpected side-effects, which may render code dysfunctional and may

destroy portability. Reflective code breaks abstractions and therefore may change

behavior with upgrades of the platform.

Java Reflection Classes

Every type is either a reference type or a primitive type. Reference types all inherit from java.lang.Object. Classes, enums, arrays, and interfaces are all reference types. There is a fixed set of primitive types: boolean, byte, short, int, long, char, float, and double.

Examples of reference types include java.lang.String, all of the wrapper classes for

primitive types such as java.lang.Double, the interface java.io.Serializable, and the enum

javax.swing.SortOrder.

For every type of object, the Java virtual machine instantiates an immutable instance of

java.lang.Class which provides methods to examine the runtime properties of the object

including its members and type information. Class also provides the ability to create

new classes and objects. Most importantly, it is the entry point for all of the Reflection

APIs. This lesson covers the most commonly used reflection operations involving

classes:

• Retrieving Class Objects describes the ways to get a Class

• Examining Class Modifiers and Types shows how to access the class declaration

information

• Discovering Class Members illustrates how to list the constructors, fields, methods,

and nested classes in a class

• Troubleshooting describes common errors encountered when using Class

Retrieving Class Objects

The entry point for all reflection operations is java.lang.Class. With the exception of

java.lang.reflect.ReflectPermission, none of the classes in java.lang.reflect have public

constructors. To get to these classes, it is necessary to invoke appropriate methods on

Class. There are several ways to get a Class depending on whether the code has access to

an object, the name of class, a type, or an existing Class.

Object.getClass()

If an instance of an object is available, then the simplest way to get its Class is to invoke

Object.getClass(). Of course, this only works for reference types which all inherit from

Object. Some examples follow:

Class c = "foo".getClass();

Returns the Class for String

Class c = System.console().getClass();

There is a unique console associated with the virtual machine which is returned by the

static method System.console(). The value returned by getClass() is the Class

corresponding to java.io.Console.

enum E { A, B }

Class c = A.getClass();

A is an instance of the enum E; thus getClass() returns the Class corresponding to the

enumeration type E.

byte[] bytes = new byte[1024];

Class c = bytes.getClass();

Since arrays are Objects, it is also possible to invoke getClass() on an instance of an

array. The returned Class corresponds to an array with component type byte.

import java.util.HashSet;

import java.util.Set;

Set<String> s = new HashSet<String>();

Class c = s.getClass();

In this case, java.util.Set is an interface to an object of type java.util.HashSet. The value

returned by getClass() is the class corresponding to java.util.HashSet.

The .class Syntax

If the type is available but there is no instance then it is possible to obtain a Class by

appending ".class" to the name of the type. This is also the easiest way to obtain the Class

for a primitive type.

boolean b;

Class c = b.getClass(); // compile-time error

Class c = boolean.class; // correct

Note that the statement boolean.getClass() would produce a compile-time error because a

boolean is a primitive type and cannot be dereferenced. The .class syntax returns the

Class corresponding to the type boolean.

Class c = java.io.PrintStream.class;

The variable c will be the Class corresponding to the type java.io.PrintStream.

Class c = int[][][].class;

The .class syntax may be used to retrieve a Class corresponding to a multi-dimensional

array of a given type.

Class.forName()

If the fully-qualified name of a class is available, it is possible to get the corresponding

Class using the static method Class.forName(). This cannot be used for primitive types.

The syntax for names of array classes is described by Class.getName(). This syntax is

applicable to references and primitive types.

Class c = Class.forName("com.duke.MyLocaleServiceProvider");

This statement will create a class from the given fully-qualified name.

Class cDoubleArray = Class.forName("[D");

Class cStringArray = Class.forName("[[Ljava.lang.String;");

The variable cDoubleArray will contain the Class corresponding to an array of primitive

type double (i.e. the same as double[].class). The cStringArray variable will contain the

Class corresponding to a two-dimensional array of String (i.e. identical to

String[][].class).

TYPE Field for Primitive Type Wrappers

The .class syntax is the more convenient and preferred way to obtain the Class for a primitive type; however, there is another way to acquire the Class. Each of the primitive

types and void has a wrapper class in java.lang that is used for boxing of primitive types

to reference types. Each wrapper class contains a field named TYPE which is equal to the

Class for the primitive type being wrapped.

Class c = Double.TYPE;

There is a class java.lang.Double which is used to wrap the primitive type double

whenever an Object is required. The value of Double.TYPE is identical to that of

double.class.

Class c = Void.TYPE;

Void.TYPE is identical to void.class.

Methods that Return Classes

There are several Reflection APIs which return classes but these may only be accessed if

a Class has already been obtained either directly or indirectly.

Class.getSuperclass()

Returns the super class for the given class.

Class c = javax.swing.JButton.class.getSuperclass();

The super class of javax.swing.JButton is javax.swing.AbstractButton.

Class.getClasses()

Returns all the public classes, interfaces, and enums that are members of the class

including inherited members.

Class<?>[] c = Character.class.getClasses();

Character contains two member classes Character.Subset and

Character.UnicodeBlock.

Class.getDeclaredClasses()

Returns all of the classes, interfaces, and enums that are explicitly declared in this class.

Class<?>[] c = Character.class.getDeclaredClasses();

Character contains two public member classes Character.Subset and

Character.UnicodeBlock and one private class Character.CharacterCache.

getDeclaringClass() (defined on Class and on java.lang.reflect.Field, Method, and Constructor)

Returns the Class in which these members were declared. Anonymous classes will not

have a declaring class but will have an enclosing class.

import java.lang.reflect.Field;

Field f = System.class.getField("out");

Class c = f.getDeclaringClass();

The field out is declared in System.

public class MyClass {
    static Object o = new Object() { public void m() {} };
    static Class<?> c = o.getClass().getDeclaringClass();
}

The declaring class of the anonymous class defined by o is null.

Class.getEnclosingClass()

Returns the immediately enclosing class of the class.

Class c = Thread.State.class.getEnclosingClass();

The enclosing class of the enum Thread.State is Thread.

public class MyClass {
    static Object o = new Object() { public void m() {} };
    static Class<?> c = o.getClass().getEnclosingClass();
}

The anonymous class defined by o is enclosed by MyClass.

Examining Class Modifiers and Types

A class may be declared with one or more modifiers which affect its runtime behavior:

• Access modifiers: public, protected, and private

• Modifier requiring override: abstract

• Modifier restricting to one instance: static

• Modifier prohibiting value modification: final

• Modifier forcing strict floating point behavior: strictfp

• Annotations

Not all modifiers are allowed on all classes, for example an interface cannot be final

and an enum cannot be abstract. java.lang.reflect.Modifier contains declarations

for all possible modifiers. It also contains methods which may be used to decode the set

of modifiers returned by Class.getModifiers().
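A small sketch of decoding modifiers; java.lang.String is used here only as a convenient, well-known class:

import java.lang.reflect.Modifier;

public class ShowModifiers {
    public static void main(String[] args) {
        int mods = String.class.getModifiers();
        System.out.println(Modifier.toString(mods)); // prints "public final"
        System.out.println(Modifier.isFinal(mods));  // prints "true"
    }
}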

Discovering Class Members

There are two categories of methods provided in Class for accessing fields, methods, and

constructors: methods which enumerate these members and methods which search for

particular members. Also there are distinct methods for accessing members declared

directly on the class versus methods which search the superinterfaces and superclasses for

inherited members. The following table provides a summary of all the member-locating

methods and their characteristics.

Member        Class API
Field         getDeclaredField(), getField(), getDeclaredFields(), getFields()
Method        getDeclaredMethod(), getMethod(), getDeclaredMethods(), getMethods()
Constructor   getDeclaredConstructor(), getConstructor(), getDeclaredConstructors(), getConstructors()

A Simple Example

To see how reflection works, consider this simple example:

import java.lang.reflect.*;

public class DumpMethods {
    public static void main(String args[]) {
        try {
            Class c = Class.forName(args[0]);
            Method m[] = c.getDeclaredMethods();
            for (int i = 0; i < m.length; i++)
                System.out.println(m[i].toString());
        }
        catch (Throwable e) {
            System.err.println(e);
        }
    }
}

For an invocation of:

java DumpMethods java.util.Stack

the output is:

public java.lang.Object java.util.Stack.push(java.lang.Object)
public synchronized java.lang.Object java.util.Stack.pop()
public synchronized java.lang.Object java.util.Stack.peek()
public boolean java.util.Stack.empty()
public synchronized int java.util.Stack.search(java.lang.Object)

This program loads the specified class using Class.forName, and then calls getDeclaredMethods to retrieve the list of methods defined in the class. java.lang.reflect.Method is a class representing a single class method.

Setting Up to Use Reflection

The reflection classes, such as Method, are found in java.lang.reflect. There are three

steps that must be followed to use these classes. The first step is to obtain a

java.lang.Class object for the class that you want to manipulate. java.lang.Class

is used to represent classes and interfaces in a running Java program.

One way of obtaining a Class object is to say:

Class c = Class.forName("java.lang.String");

to get the Class object for String. Another approach is to use:

Class c = int.class;

or

Class c = Integer.TYPE;

to obtain Class information on fundamental types. The latter approach accesses the

predefined TYPE field of the wrapper (such as Integer) for the fundamental type. The

second step is to call a method such as getDeclaredMethods, to get a list of all the

methods declared by the class. Once this information is in hand, then the third step is to

use the reflection API to manipulate the information. For example, the sequence:

Class c = Class.forName("java.lang.String");

Method m[] = c.getDeclaredMethods();

System.out.println(m[0].toString());

will display a textual representation of the first method declared in String.

Summary

Java reflection is useful because it supports dynamic retrieval of information about

classes and data structures by name, and allows for their manipulation within an

executing Java program. This feature is extremely powerful and has no equivalent in

other conventional languages such as C, C++, Fortran, or Pascal.

Object Serialization

We all know the Java platform allows us to create reusable objects in memory. However,

all of those objects exist only as long as the Java virtual machine remains running. It

would be nice if the objects we create could exist beyond the lifetime of the virtual

machine, wouldn't it? Well, with object serialization, you can flatten your objects and

reuse them in powerful ways.

Object serialization is the process of saving an object's state to a sequence of bytes, as

well as the process of rebuilding those bytes into a live object at some future time. The

Java Serialization API provides a standard mechanism for developers to handle object

serialization. The API is small and easy to use, provided the classes and methods are

understood.

The Default Mechanism

Let's start with the basics. To persist an object in Java, we must have a persistent class.

An object is marked serializable by implementing the java.io.Serializable

interface, which signifies to the underlying API that the object can be flattened into

bytes and subsequently inflated in the future.

Let's look at a persistent class we'll use to demonstrate the serialization mechanism:

import java.io.Serializable;

import java.util.Date;

import java.util.Calendar;

public class PersistentTime implements Serializable {
    private Date time;

    public PersistentTime() {
        time = Calendar.getInstance().getTime();
    }

    public Date getTime() {
        return time;
    }
}

As you can see, the only thing we had to do differently from creating a normal class is

implement the java.io.Serializable interface. The completely empty

Serializable is only a marker interface -- it simply allows the serialization

mechanism to verify that the class is able to be persisted. Thus, we turn to the first rule

of serialization:

Rule #1: The object to be persisted must implement the Serializable interface or

inherit that implementation from its object hierarchy.

The next step is to actually persist the object. That is done with the

java.io.ObjectOutputStream class. That class is a filter stream--it is wrapped

around a lower-level byte stream (called a node stream) to handle the serialization

protocol for us. Node streams can be used to write to file systems or even across

sockets. That means we could easily transfer a flattened object across a network wire

and have it be rebuilt on the other side!

Take a look at the code used to save the PersistentTime object:

import java.io.ObjectOutputStream;

import java.io.FileOutputStream;

import java.io.IOException;

public class FlattenTime {
    public static void main(String[] args) {
        String filename = "time.ser";
        if (args.length > 0) {
            filename = args[0];
        }
        PersistentTime time = new PersistentTime();
        FileOutputStream fos = null;
        ObjectOutputStream out = null;
        try {
            fos = new FileOutputStream(filename);
            out = new ObjectOutputStream(fos);
            out.writeObject(time);
            out.close();
        }
        catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}

Calling the ObjectOutputStream.writeObject() method kicks off the serialization mechanism, and the object is flattened (in this case to a file). To restore the object, we can employ the following code:

import java.io.ObjectInputStream;

import java.io.FileInputStream;

import java.io.IOException;

import java.util.Calendar;

public class InflateTime {
    public static void main(String[] args) {
        String filename = "time.ser";
        if (args.length > 0) {
            filename = args[0];
        }
        PersistentTime time = null;
        FileInputStream fis = null;
        ObjectInputStream in = null;
        try {
            fis = new FileInputStream(filename);
            in = new ObjectInputStream(fis);
            time = (PersistentTime) in.readObject();
            in.close();
        }
        catch (IOException ex) {
            ex.printStackTrace();
        }
        catch (ClassNotFoundException ex) {
            ex.printStackTrace();
        }
        // print out restored time
        System.out.println("Flattened time: " + time.getTime());
        System.out.println();
        // print out the current time
        System.out.println("Current time: " + Calendar.getInstance().getTime());
    }
}

The object's restoration occurs with the ObjectInputStream.readObject() method

call. The method call reads in the raw bytes that we previously persisted and creates a

live object that is an exact replica of the original. Because readObject() can read any

serializable object, a cast to the correct type is required. With that in mind, the class

file must be accessible from the system in which the restoration occurs. In other words,

the object's class file and methods are not saved; only the object's state is saved.

Later, we simply call the getTime() method to retrieve the time at which the original object was flattened. The flattened time is compared to the current time to demonstrate that the mechanism indeed worked as expected.

Create Your Own Protocol: the Externalizable Interface

Our discussion would be incomplete without mentioning the third option for serialization: creating your own protocol with the Externalizable interface. Instead of implementing the

Serializable interface, you can implement Externalizable, which contains two methods:

public void writeExternal(ObjectOutput out) throws IOException;

public void readExternal(ObjectInput in) throws IOException,

ClassNotFoundException;

Just override those methods to provide your own protocol. Unlike the previous two

serialization variations, nothing is provided for free here, though. That is, the protocol is

entirely in your hands. Although it's the more difficult scenario, it's also the most

controllable. An example situation for that alternate type of serialization: read and write

PDF files with a Java application. If you know how to write and read PDF (the sequence

of bytes required), you could provide the PDF-specific protocol in the writeExternal and

readExternal methods.

Just as before, though, there is no difference in how a class that implements Externalizable is used. Just call writeObject() or readObject() and, voila, those externalizable methods will be called automatically.
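As a minimal sketch, consider a hypothetical Point class that controls its own wire format:

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Point implements Externalizable {
    private int x, y;

    public Point() {}               // a public no-args constructor is required

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x);            // we decide exactly what goes on the wire
        out.writeInt(y);
    }

    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        x = in.readInt();           // read the fields back in the same order
        y = in.readInt();
    }
}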

Caching Objects in the Stream

First, consider the situation in which an object is written to a stream and then written

again later. By default, an ObjectOutputStream will maintain a reference to an object

written to it. That means that if the state of the written object changes and the object is written again, the new state will not be saved! Here is a code snippet that shows that

problem in action:

ObjectOutputStream out = new ObjectOutputStream(...);

MyObject obj = new MyObject(); // must be Serializable

obj.setState(100);

out.writeObject(obj); // saves object with state = 100

obj.setState(200);

out.writeObject(obj); // does not save new object state

There are two ways to control that situation. First, you could make sure to always close

the stream after a write call, ensuring the new object is written out each time. Second,

you could call the ObjectOutputStream.reset() method, which would tell the stream to

release the cache of references it is holding so all new write calls will actually be

written. Be careful, though -- the reset flushes the entire object cache, so all objects

that have been written could be rewritten.

Enterprise Java Bean (EJB)

Enterprise Java Bean architecture is the component architecture for the development and deployment of robust, world-class, component-based distributed applications using the Java language. One of Java's most important features is platform independence. Since its arrival, Java has been depicted as "write once, run anywhere". EJBs are not only platform independent but also implementation independent; that is, EJBs can run in any application server that implements the EJB specification.

Enterprise JavaBeans - An Introduction

Enterprise JavaBeans (EJB) is a comprehensive technology that provides the

infrastructure for building enterprise-level server-side distributed Java components. The

EJB technology provides a distributed component architecture that integrates several

enterprise-level requirements such as distribution, transactions, security, messaging,

persistence, and connectivity to mainframes and Enterprise Resource Planning (ERP)

systems. When compared with other distributed component technologies such as Java

RMI and CORBA, the EJB architecture hides most of the underlying system-level semantics

that are typical of distributed component applications, such as instance management,

object pooling, multiple threading, and connection pooling. Secondly, unlike other

component models, EJB technology provides us with different types of components for

business logic, persistence, and enterprise messages.

Thus, an Enterprise Java Bean is a remote object with semantics specified for creation, invocation, and deletion. The EJB container is assigned the system-level tasks mentioned above. What a web container does for Java servlets and JSPs in a web server, the EJB container does for EJBs.

EJB Architecture

Any distributed component technology should meet the following requirements:

1. There should be a mechanism to create the client-side and server-side proxy objects. A

client-side proxy represents the server-side object on the client-side. As far as the client is

concerned, the client-side proxy is equivalent to the server-side object. On the other hand,

the purpose of the server-side proxy is to provide the basic infrastructure to receive client requests and delegate these requests to the actual implementation object.

2. We need to obtain a reference to the client-side proxy object. In order to communicate with the server-side object, the client needs to obtain a reference to the proxy.

3. There should be a way to inform the distributed component system that a specific

component is no longer in use by the client.

In order to meet these requirements, the EJB architecture specifies two kinds of interfaces

for each bean. They are home interface and remote interface. These interfaces specify the

bean contract to the clients. However, a bean developer need not provide implementation

for these interfaces. The home interface will contain methods to be used for creating

remote objects. The remote interface should include business methods that a bean is able

to serve to clients. One can consider using the home interface to specify a remote object

capable of creating objects conforming to the remote interface. That is, a home interface

is analogous to a factory of remote objects. These are regular Java interfaces extending

the javax.ejb.EJBHome and javax.ejb.EJBObject interfaces respectively.
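A minimal sketch of such a pair of interfaces, each in its own source file; the bean name Hello and its single business method are hypothetical:

import java.rmi.RemoteException;
import javax.ejb.EJBObject;

// Remote interface: the business methods a client can invoke.
public interface Hello extends EJBObject {
    String sayHello(String name) throws RemoteException;
}

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;

// Home interface: a factory for remote objects of type Hello.
public interface HelloHome extends EJBHome {
    Hello create() throws CreateException, RemoteException;
}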

As discussed below, the EJB architecture specifies three types of beans - session beans, entity beans, and message-driven beans. A bean developer has to specify the home and remote interfaces and also implement one of the bean interfaces, depending upon the type of the bean. For instance, for session beans, the developer has to implement the javax.ejb.SessionBean interface. The EJB architecture expects the developer to implement the methods specified in the bean interface and the methods specified in the home and remote interfaces. At deployment time, the developer should specify the home and remote interfaces and the bean implementation class to define a bean. The EJB container relies on specific method names and uses delegation for invoking methods on bean instances.

Regarding the first requirement, the EJB container generates the proxy objects for all beans. For the second, the EJB container implements, for each bean, a proxy object for the home interface and publishes it in the JNDI implementation of the J2EE platform. A client can use JNDI to look this object up and obtain a reference. As this object implements the home interface, the client can use one of the creation methods of the home object to get a proxy to the remote interface of the bean. When the client invokes a creation method on the home proxy object, the container makes sure that a bean instance is created in the EJB container runtime and that its proxy is returned to the client. Once the client gets hold of the proxy for the remote interface, it can directly access the services of the bean.

Finally, once the client decides to stop accessing the services of the bean, it can inform

the EJB container by calling a remote method on the bean. This signals the EJB container

to disassociate the bean instance from the proxy and that bean instance is ready to service

any other clients.

Types of EJBs

The EJB architecture is based on the concept that in an enterprise computing system,

database persistence-related logic should be independent of the business logic that relies

on the data. This happens to be a very useful technique for separating business logic

concerns from database concerns. This means that business logic can deal with the

business data without worrying about how the data is stored in a relational database.

Enterprise JavaBeans server-side components come in two fundamentally different types:

entity beans and session beans.

Basically entity beans model business concepts that can be expressed as nouns. For

example, an entity bean might represent a customer, a piece of equipment, an item in

inventory. Thus entity beans model real-world objects. These objects are usually

persistent records in some kind of database.

Session beans are for managing processes or tasks. A session bean is mainly for

coordinating particular kinds of activities. That is, session beans are plain remote objects

meant for abstracting business logic. The activity that a session bean represents is

fundamentally transient. A session bean does not represent anything in a database, but it

can access the database.

Thus an entity bean has persistent state whereas a session bean models interactions but

does not have persistent state.

Session beans are transaction-aware. In a distributed component environment, managing

transactions across several components mandates distributed transaction processing. The

EJB architecture allows the container to manage transactions declaratively. This

mechanism lets a bean developer specify transactions across bean methods. Session

beans are client-specific. That is, session bean instances on the server side are specific to

the client that created them on the client side. This eliminates the need for the developer

to deal with multiple threading and concurrency.

Unlike session beans, entity beans have a client-independent identity. This is because an

entity bean encapsulates persistent data. The EJB architecture lets a developer register

a primary key class to encapsulate the minimal set of attributes required to represent the

identity of an entity bean. Clients can use these primary key objects to accomplish the

database operations, such as create, locate, or delete entity beans. Since entity beans

represent persistent state, entity beans can be shared across different clients. Similar to

session beans, entity beans are also transactional, except for the fact that bean instances

are not allowed to programmatically control transactions.

These two types of beans are meant for synchronous invocation. That is, when a client

invokes a method on one of the above types, the client thread will be blocked till the EJB

container completes executing the method on the bean instance. Also these beans are

unable to service messages that come asynchronously over a messaging service

such as JMS. To overcome this deficiency, the EJB architecture has introduced a third

type of bean called message-driven bean. A message-driven bean is a bean instance that

can listen to messages from the JMS.

Unlike other types of beans, a message-driven bean is a local object without home and

remote interfaces. In a J2EE platform, message-driven beans are registered against JMS

destinations. When a JMS message arrives at a destination, the EJB container invokes the

associated message-driven bean. Thus message-driven beans do not require home and

remote interfaces as instances of these beans are created based on receipt of JMS

messages. This is an asynchronous activity and does not involve clients directly. The

main purpose of message-driven beans is to implement business logic in response to JMS

messages. For instance, take a B2B e-commerce application receiving a purchase order

via a JMS message as an XML document. To persist this data and perform any business logic on receipt of such a message, one can implement a message-driven bean and associate it with the corresponding JMS destination. Also, these beans are

completely decoupled from the clients that send messages.

Session Beans: Stateful and Stateless

Session beans can be either stateful or stateless. Stateful session beans maintain

conversational state when used by a client. Conversational state is not written to a database; instead, the bean can store some state in private variables during one method call, and a subsequent method call can rely on this state. Maintaining a conversational state allows a

client to carry on a conversation with a bean. As each method on the bean is invoked, the

state of the session bean may change and that change can affect subsequent method calls.

Stateless session beans do not maintain any conversational state. Each method is

completely independent and uses only data passed in its parameters. One can specify

whether a bean is stateful or not in the bean's deployment descriptor.
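For orientation, here is a minimal sketch of a stateless session bean class matching the hypothetical Hello interfaces above (EJB 2.x style):

import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

public class HelloBean implements SessionBean {
    // Business method declared in the remote interface.
    public String sayHello(String name) {
        return "Hello, " + name;
    }

    public void ejbCreate() {}      // matches create() on the home interface

    // Container callback methods required by SessionBean.
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
    public void setSessionContext(SessionContext ctx) {}
}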

Entity Beans: Container and Bean Managed Persistence

An example entity bean in a B2B application is given as follows. A purchase order is a business entity and requires a persistent store such as a relational database. The various

purchase order attributes can be defined as the attributes of an entity bean. Since database

operations involve create, update, load, delete, and find operations, the EJB architecture

requires entity beans to implement these operations. Entity beans should implement the

javax.ejb.EntityBean interface that specifies the load and delete operations among others.

In addition, the bean developer should specify the appropriate create and find methods on

the home interface, and provide their implementation in an entity bean.
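A sketch of what such a home interface might look like for a hypothetical PurchaseOrder entity bean (PurchaseOrder being its remote interface and a String order id its primary key):

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.FinderException;

public interface PurchaseOrderHome extends EJBHome {
    PurchaseOrder create(String orderId) throws CreateException, RemoteException;
    PurchaseOrder findByPrimaryKey(String orderId) throws FinderException, RemoteException;
}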

There are two types of entity beans and they are distinguished by how they manage

persistence. Container-managed beans have their persistence automatically managed by

the EJB container. This is a more sophisticated approach and here the bean developer

does not implement the persistence logic. The developer relies on the deployment

descriptor to specify attributes whose persistence should be managed by the container.

The container knows how a bean instance's fields map to the database and automatically

takes care of inserting, updating, and deleting the data associated with entities in the

database.

Beans using bean-managed persistence do all this work explicitly: the bean developer has

to write the code to manipulate the database. The EJB container tells the bean instance

when it is safe to insert, update, and delete its data from the database, but it provides no

other help. The bean instance has to do the persistence work itself.

EJB Container: The environment that surrounds the beans on the EJB server is often

referred to as the container. The container acts as an intermediary between the bean class

and the EJB server. The container manages the EJB objects and EJB homes for a

particular type of bean and helps these constructs to manage bean resources and apply the

primary services relevant to distributed systems to bean instances at run time. An EJB

server can have more than one container and each container in turn can accommodate

more than one enterprise bean. A container is pre-developed software; EJBs cannot survive outside containers.

Remote Interface: This interface for an enterprise bean defines the enterprise bean's

business methods that clients for this bean can access. The remote interface extends

javax.ejb.EJBObject, which in turn extends java.rmi.Remote.

Home interface: This interface defines the bean's life cycle methods such as creation of

new beans, removal of beans, and locating beans. The home interface extends

javax.ejb.EJBHome, which in turn extends java.rmi.Remote.

Bean Class: This class has to implement the bean's business methods in the remote

interface apart from some other callback methods. An entity bean must implement

javax.ejb.EntityBean and a session bean must implement javax.ejb.SessionBean. Both

EntityBean and SessionBean extend javax.ejb.EnterpriseBean.

Primary Key: This is a very simple class that provides a reference into the database.

This class has to implement java.io.Serializable. Only entity beans need a primary key.

Deployment Descriptors: Much of the information about how beans are managed at

runtime is not supplied in the interfaces and classes mentioned above. Primary services common to distributed systems, along with specific services such as security, transactions, and naming, are handled automatically by the EJB server. Still, the EJB server needs to know beforehand how to apply the primary services to each bean class at runtime. Deployment descriptors do exactly this all-important task.

JAR Files: Jar files are ZIP files that are used specifically for packaging Java classes that

are ready to be used in some type of application. A Jar file containing one or more

enterprise beans includes the bean classes, remote interfaces, home interfaces, and

primary keys for each bean. It also contains one deployment descriptor.

Deployment is the process of reading the bean's JAR file, changing or adding properties

to the deployment descriptor, mapping the bean to the database, defining access control

in the security domain, and generating vendor-specific classes needed to support the bean

in the EJB environment. Every EJB server product comes with its own deployment tools

containing a graphical user interface and a set of command-line programs.

For clients (another enterprise bean, a Java RMI client, or a CORBA client) to locate enterprise beans on the network, the EJB specification requires clients to use the Java Naming and Directory Interface (JNDI). JNDI is a standard Java extension that provides a uniform Application Programming Interface (API) for accessing a wide range of naming and directory services. The communication protocol may be Java RMI-IIOP or CORBA's IIOP.

There are special integrated application development tools on the market, such as Inprise's JBuilder, Sun's Forte, and IBM's VisualAge, for designing EJBs.

Why EJB (Enterprise Java Beans)?

Enterprise Java Beans or EJB for short is the server-side component architecture for

the Java 2 Platform, Enterprise Edition (J2EE) platform. EJB technology enables rapid

and simplified development of distributed, transactional, secure and portable applications

based on Java technology.

Sun Microsystems in the beginning put forward Java Remote Method Invocation

(RMI) API as a distributed object computing technology. RMI specifies how to write

objects so that they can talk to each other no matter where on the network they are found.

At its core, however, RMI is nothing more than an API to which our distributed objects

must conform. RMI says nothing about other characteristics normally required of

an enterprise-class distributed environment. For example, it does not say anything about

how a client might perform a search for RMI objects matching some criteria. It also does

not specify how those distributed objects work together to construct a single transaction.

Thus the need for a distributed component model was realized.

A component model is a standard that defines how components are written so that

systems can be built from components by different developers with little or no

customization. There is already a component model in Java, called JavaBeans. It is a

component model that defines how we write user interface components so that they may

be plugged into third-party applications. The magic thing about JavaBeans is that there is

very little API behind the specification; we neither implement nor extend any special

classes and we need not call any special methods.

Enterprise JavaBeans is a more complex extension of this concept. While there are API

elements behind Enterprise JavaBeans, it is more than an API. It is a standard way of

writing distributed components so that the written components can be used with the

components we write in someone else's system. RMI does not support this ability for

several reasons, listed below. The following features are not available with RMI.

1. Security - RMI does not concern itself with security; RMI alone basically leaves our system wide open. Anyone who has access to our RMI interfaces can forge access to the underlying objects. If we do not impose complex security restrictions to authenticate clients and verify access by writing extra code, we will have no security at all. Our components are therefore unlikely to interoperate with others' components unless we all agree on some sort of security model.

2. Searching - RMI provides the ability to do a lookup only for a specific, registry-bound

object. It specifies nothing about how we find unbound objects or perform searches for a

group of objects meeting certain requirements. For example, writing a banking

application, we might want to support the ability to find all accounts with negative

balances. In order to do this in an RMI environment, we would have to write our own

search methods in bound objects. Our custom approach to handling searches will not

work with someone else's custom approach to searching without forcing clients to deal

with both search models.

3. Transactions - The most important feature for a distributed component model is

transactions. RMI does not support transactions. If we develop an RMI-based application,

we need to address how we will support transactions. That is, we need to keep track of

when a client begins a transaction, what RMI objects that client changes, and commit and

roll back those changes when the client is done. This is further compounded by the fact

that most distributed object systems support more than one client at a time. Different

transaction models are much more incompatible than different searching or security

models. While client coders can get around differences in search and security models by

being aware of those differences, transaction models can almost never be made to work

together.

4. Persistence - RMI does not care about how RMI objects persist across time. There is a

persistence utility that supports saving RMI objects to a database using JDBC. But it is

very difficult to integrate with RMI objects designed to use some other persistence model

because the other persistence model may have different persistence requirements.

Enterprise JavaBeans addresses all of these points so that we can literally pick and choose

the best designed business components from different vendors and make them work and

play well with one another in the same environment. EJB is now the standard component

model for capturing distributed business components. It hides from us the details we

might have to worry about ourselves if we were writing an RMI application.

Application Servers

1. BEA WebLogic Server

2. iPlanet

3. Oracle

4. Orion Server

5. WebSphere

6. NetDynamics

7. JRun Server

8. Tomcat

9. JOnAS

10. Pramati Server

11. Power Tier for J2EE

BEA WebLogic Server

• provides the environment in which a bean executes

• generates Home Object

• generates EJB Object

• manages individual bean instances

Enterprise JavaBeans Model

EJB is Sun’s J2EE transactional, vendor-neutral, enterprise component architecture

providing:

• Modelling of business entities as well as synchronous and asynchronous processes

• Persistence via explicit code (bean-managed) or via services of the EJB server

(container-managed)

• Vendor neutrality and interoperability

• XML driven deployment and configuration

Difference between EJBs and Java Beans:

EJB                                                  Java Beans
EJBs need a container                                JBs do not need a container
EJBs are deployable components                       JBs are development components
EJBs are assembled to form a complete application    JBs are classes with a no-argument constructor
EJBs are based on RMI-IIOP and JNDI technologies     JBs have get and set methods on them

EJB Overview

EJB simplifies distributed development:

• Develop EJB implementation logic

• Define Home/LocalHome and Remote/Local interfaces

• Container delegates client calls

• Container manages resources/lifecycle/callbacks

Java Naming and Directory Interface (JNDI)

Provides a standardized way of accessing resources in a distributed environment

Protocol and naming service agnostic

• DNS

• NIS

• LDAP

• X.500

Implemented by the javax.naming package and three other packages below it

javax.naming.InitialContext is the entry point to the EJB Server

• bind – associates a name with an object

• lookup – finds an object given the name
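A sketch of a lookup, reusing the hypothetical HelloHome interface from earlier; the JNDI name "ejb/HelloHome" is an assumption, as the actual name depends on how the bean is deployed:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

public class HelloClient {
    public static void main(String[] args) throws NamingException {
        Context ctx = new InitialContext();        // entry point to the naming service
        Object ref = ctx.lookup("ejb/HelloHome");  // find the home proxy by name
        HelloHome home = (HelloHome)
                PortableRemoteObject.narrow(ref, HelloHome.class); // RMI-IIOP-safe cast
        // home.create() would now return a proxy to the bean's remote interface
    }
}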

EJB Specification

The EJB specification defines interfaces between

• the EJB and its container

• the container and the application server

• the container and the client

EJB Roles

• Service & Tool Provider provides Server, Container and integrates with

distributed facilities

• EJB Provider creates EJB components

• Application Assembler assembles apps from pre-built EJB components

• Deployment Specialist deploys apps and understands architecture issues

EJB Design Approaches

EJB model is based on three basic design approaches for building distributed component

systems:

1. Stateless server approach

2. Session-oriented approach

3. Persistent Object approach

The EJB specification provides these as

• Stateless session Beans

• Stateful session Beans

• Entity Beans

• Message driven beans

RMI

Java Remote Method Invocation

Java Remote Method Invocation (Java RMI) enables you to create distributed Java

technology-based applications that can communicate with other such applications.

Methods of remote Java objects can be invoked from other Java virtual machines,

possibly on different hosts.

With RMI we can write distributed programs in the Java programming language. RMI is

easy to use: you don't need to learn a separate interface definition language (IDL), and

you get Java's inherent "write once, run anywhere" benefit. Clients, remote interfaces,

and servers are written entirely in Java. RMI uses the Java Remote Method Protocol

(JRMP) for remote Java object communication. RMI lacks interoperability with other

languages because it does not use CORBA-IIOP as the communication protocol.

RMI uses object serialization to marshal and unmarshal parameters and does not truncate

types, supporting object-oriented polymorphism. The RMI registry is a simple naming service that lets clients obtain references to remote objects by name.

The RMI implementation

The RMI implementation consists of three abstraction layers.

These abstraction layers are:

1. The Stub and Skeleton layer, which intercepts method calls made by the client to

the interface reference variable and redirects these calls to a remote RMI service.

2. The Remote Reference layer understands how to interpret and manage references

made from clients to the remote service objects.

3. The bottom layer is the Transport layer, which is based on TCP/IP connections

between machines in a network. It provides basic connectivity, as well as some

firewall penetration strategies.

On top of the TCP/IP layer, RMI uses a wire-level protocol called Java Remote Method

Protocol (JRMP), which works like this:

1. Objects that require remote behavior should extend the RemoteObject class,

typically through the UnicastRemoteObject subclass.

a. The UnicastRemoteObject subclass exports the remote object to make it

available for servicing incoming RMI calls.

b. Exporting the remote object creates a new server socket, which is bound to

a port number.

c. A thread is also created that listens for connections on that socket. The

server is registered with a registry.

d. A client obtains details of connecting to the server from the registry.

e. Using the information from the registry, which includes the hostname and

the port details of the server's listening socket, the client connects to the

server.

2. When the client issues a remote method invocation to the server, it creates a

TCPConnection object, which opens a socket to the server on the port specified

and sends the RMI header information and the marshalled arguments through this

connection using the StreamRemoteCall class.

3. On the server side:

a. When a client connects to the server socket, a new thread is assigned to

deal with the incoming call. The original thread can continue listening to

the original socket so that additional calls from other clients can be made.

b. The server reads the header information and creates a RemoteCall object

of its own to deal with unmarshalling the RMI arguments from the socket.

c. The serviceCall() method of the Transport class services the incoming call

by dispatching it to the target object.

d. The dispatch() method calls the appropriate method on the object and

pushes the result back down the wire.

e. If the server object throws an exception, the server catches it and marshals

it down the wire instead of the return value.

4. Back on the client side:

a. The return value of the RMI is unmarshalled and returned from the stub

back to the client code itself.

b. If an exception is thrown from the server, that is unmarshalled and thrown

from the stub.

Understanding distributed garbage collection (DGC)

The RMI subsystem implements reference counting based distributed garbage collection

(DGC) to provide automatic memory management facilities for remote server objects.

When the client creates (unmarshalls) a remote reference, it calls dirty() on the server

side Distributed Garbage Collector. After it has finished with the remote reference, it

calls the corresponding clean() method.

A reference to a remote object is leased for a period of time by the client holding the

reference. The lease period starts when the dirty call is received. The client has to renew

the leases, by making additional dirty calls, on the remote references it holds before such

leases expire. If the client does not renew the lease before it expires, the distributed

garbage collector assumes that the remote object is no longer referenced by that client.
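The lease duration is tunable. For example, assuming Sun's JDK implementation, the system property java.rmi.dgc.leaseValue (in milliseconds, default 600000) sets the lease the server grants; a shorter lease lets the server reclaim objects abandoned by crashed clients sooner, at the cost of more frequent dirty calls:

java -Djava.rmi.dgc.leaseValue=120000 RmiServer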

DGCClient implements the client side of the RMI distributed garbage collection system.

The external interface to DGCClient is the registerRefs() method. When a LiveRef to a

remote object enters the JVM, it must be registered with the DGCClient to participate in

distributed garbage collection. When the first LiveRef to a particular remote object is

registered, a dirty call is made to the server-side distributed garbage collector for the

remote object, which returns a lease guaranteeing that the server-side DGC will not

collect the remote object for a certain period of time. While LiveRef instances to remote

objects on a particular server exist, the DGCClient periodically sends more dirty calls to

renew its lease. The DGCClient tracks the local availability of registered LiveRef

instances using phantom references. When the LiveRef instance for a particular remote

object is garbage collected locally, a clean() call is made to the server-side distributed

garbage collector, indicating that the server no longer needs to keep the remote object

alive for this client. The RenewCleanThread handles the asynchronous client-side DGC

activity by renewing the leases and making clean calls: the thread sleeps until the next

lease renewal is due, or until a phantom reference is queued, and then issues the dirty or

clean calls as necessary.

A Simple Java RMI example

The client program (RmiClient.class) sends a message to the server

program (RmiServer.class). The server program prints the message on the console.

This example consists of the following files:

• ReceiveMessageInterface.java

o This file defines the RMI interface. The server class implements this interface, and its method, receiveMessage, is called by the remote client. In the client program, the remote server object is referenced through this interface type.

• RmiServer.java

o This is the server program (class). It defines the method “receiveMessage”, which is called by the remote client; this class is the implementation of the RMI interface.

• RmiClient.java

o This is the client program (class). The remote method is called from this class.

• Execution outline

o RmiServer creates the “registry”. This is a kind of dictionary: its key is a name (the ID of a remote object) and its value is an object. A remote program looks up the object by this name, and reaches the registry itself via the server's IP address (or host name) and port number.

o RmiServer binds the name “rmiServer” to itself (RmiServer.class) in the registry.

o RmiClient looks up the remote object (RmiServer) by the name “rmiServer”.

o RmiClient calls the method “receiveMessage” of the RmiServer class.

o The method “receiveMessage” of the RmiServer class prints the message.

(Figure: RmiClient obtains the “rmiServer” reference from the registry, then invokes receiveMessage(…) on RmiServer.)

• Compile

o javac RmiServer.java

o rmic RmiServer

o javac RmiClient.java

• Execution

o (at one host) java RmiServer

o (at another host) java RmiClient <server’s address> 3232 <message text>

• The source code

ReceiveMessageInterface.java

import java.rmi.*;

public interface ReceiveMessageInterface extends Remote

{

void receiveMessage(String x) throws RemoteException;

}

RmiServer.java

import java.rmi.*;

import java.rmi.registry.*;

import java.rmi.server.*;

import java.net.*;

public class RmiServer extends java.rmi.server.UnicastRemoteObject

implements ReceiveMessageInterface

{

int thisPort;

String thisAddress;

Registry registry; // RMI registry for looking up remote objects


// This method is called from the remote client by the RMI.

// This is the implementation of the “ReceiveMessageInterface”.

public void receiveMessage(String x) throws RemoteException

{

System.out.println(x);

}

public RmiServer() throws RemoteException

{

try{

// get the address of this host.

thisAddress= (InetAddress.getLocalHost()).toString();

}

catch(Exception e){

throw new RemoteException("can't get inet address.");

}

thisPort=3232; // this port(registry’s port)

System.out.println("this address="+thisAddress+",port="+thisPort);

try{

// create the registry and bind the name and object.

registry = LocateRegistry.createRegistry( thisPort );

registry.rebind("rmiServer", this);

}

catch(RemoteException e){
throw e;
}
}

static public void main(String args[])

{

try{

RmiServer s = new RmiServer();
}

catch (Exception e) {

e.printStackTrace();

System.exit(1);
}
}

RmiClient.java

import java.rmi.*;

import java.rmi.registry.*;

import java.net.*;

public class RmiClient

{

static public void main(String args[])

{

ReceiveMessageInterface rmiServer;

Registry registry;

String serverAddress=args[0];

String serverPort=args[1];

String text=args[2];

System.out.println("sending "+text+" to "+ serverAddress + ":" +

serverPort);

try{

// get the “registry”

registry = LocateRegistry.getRegistry(

serverAddress, Integer.parseInt(serverPort));

// look up the remote object

rmiServer=

(ReceiveMessageInterface) (registry.lookup("rmiServer"));

// call the remote method

rmiServer.receiveMessage(text);

}

catch(RemoteException e){

e.printStackTrace();

}

catch(NotBoundException e){

e.printStackTrace();
}
}
}

RMI-IIOP

RMI-IIOP adds CORBA (Common Object Request Broker Architecture) capability to

Java RMI, providing standards-based interoperability and connectivity to many other

programming languages and platforms. RMI-IIOP enables distributed Web-enabled Java

applications to transparently invoke operations on remote network services using the

industry standard IIOP defined by the Object Management Group. Runtime components

include a Java ORB for distributed computing using IIOP communication.

RMI-IIOP is for Java programmers who want to program to the RMI interfaces, but use

IIOP as the underlying transport. RMI-IIOP provides interoperability with other

CORBA objects implemented in various languages - but only if all the remote interfaces

are originally defined as Java RMI interfaces. It is of particular interest to programmers

using Enterprise JavaBeans (EJB), since the remote object model for EJBs is RMI-

based.

Other options for creating distributed applications are:

• Java Interface Definition Language (IDL)

Java IDL is for CORBA programmers who want to program in the Java

programming language based on interfaces defined in CORBA Interface

Definition Language (IDL).

• Java™ Remote Method Invocation (RMI)

The Java RMI system allows an object running in one Java Virtual Machine

(VM) to invoke methods on an object running in another Java VM. RMI

provides for remote communication between programs written in the Java

programming language via the Java Remote Method Protocol (JRMP).

The Hello World Application

The distributed Hello World example uses a client application to make a remote method

call via IIOP to a server running on the host from which the client was downloaded.

When the client runs, "Hello from MARS!" is displayed.

Writing the Source Files

1. Define the functions of the remote class as an interface written in the Java

programming language

2. Write the implementation class

3. Write the server class

4. Write a client program that uses the remote service

The source files are:

• HelloInterface.java - a remote interface

• HelloImpl.java - a remote object implementation that implements HelloInterface

• HelloServer.java - an RMI server that creates an instance of the remote object

implementation and binds that instance to a name in the Naming Service

• HelloClient.java - a client application that invokes the remote method, sayHello()

Remote interfaces have the following characteristics:

• The remote interface must be declared public. Otherwise, a client will get an error

when attempting to load a remote object that implements the remote interface,

unless that client is in the same package as the remote interface.

• The remote interface extends the java.rmi.Remote interface.

• Each method must declare java.rmi.RemoteException (or a superclass of

RemoteException) in its throws clause, in addition to any application-specific

exceptions.

• The data type of any remote object that is passed as an argument or return value

(either directly or embedded within a local object) must be declared as the remote

interface type (for example, HelloInterface) not the implementation class

(HelloImpl).

Create the file HelloInterface.java. The following code is the interface definition

for the remote interface, HelloInterface, which contains just one method, sayHello:

//HelloInterface.java

import java.rmi.Remote;

public interface HelloInterface extends java.rmi.Remote {

public void sayHello( String from ) throws java.rmi.RemoteException;
}

Because remote method invocations can fail in different ways from local method

invocations (due to network-related communication problems and server problems),

remote methods will report communication failures by throwing a

java.rmi.RemoteException.

The Implementation Class

At a minimum, a remote object implementation class, HelloImpl.java, must:

• Declare that it implements at least one remote interface

• Define the constructor for the remote object

• Provide implementations for the methods that can be invoked remotely

//HelloImpl.java

import javax.rmi.PortableRemoteObject;

public class HelloImpl extends PortableRemoteObject implements HelloInterface

{

public HelloImpl() throws java.rmi.RemoteException {

super(); // invoke rmi linking and remote object initialization
}

public void sayHello( String from ) throws java.rmi.RemoteException {

System.out.println( "Hello from " + from + "!!" );

System.out.flush();
}
}

Implement a remote interface

In the Java programming language, when a class declares that it implements an

interface, a contract is formed between the class and the compiler. By entering into this

contract, the class is promising that it will provide method bodies, or definitions, for

each of the method signatures declared in that interface. Interface methods are implicitly

public and abstract, so if the implementation class doesn't fulfill its contract, it becomes

by definition an abstract class, and the compiler will point out this fact if the class was

not declared abstract.

The implementation class in this example is HelloImpl. The implementation class

declares which remote interface(s) it is implementing. Here is the HelloImpl class

declaration:

public class HelloImpl extends PortableRemoteObject

implements HelloInterface{

As a convenience, the implementation class can extend a remote class, which in this

example is javax.rmi.PortableRemoteObject. By extending PortableRemoteObject, the

HelloImpl class can be used to create a remote object that uses IIOP-based transport for

communication.

Define the constructor for the remote object

The constructor for a remote class provides the same functionality as the constructor for a

non-remote class: it initializes the variables of each newly created instance of the class,

and returns an instance of the class to the program which called the constructor.

In addition, the remote object instance will need to be "exported". Exporting a remote

object makes it available to accept incoming remote method requests, by listening for

incoming calls to the remote object on an anonymous port. When you extend

javax.rmi.PortableRemoteObject, your class will be exported automatically upon

creation.

Because the object export could potentially throw a java.rmi.RemoteException, you

must define a constructor that throws a RemoteException, even if the constructor does

nothing else. If you forget the constructor, javac will produce the following error

message:

HelloImpl.java:3: unreported exception java.rmi.RemoteException;

must be caught or declared to be thrown.

public class HelloImpl extends PortableRemoteObject implements HelloInterface{

^

1 error

To review: The implementation class for a remote object needs to:

• Implement a remote interface

• Export the object so that it can accept incoming remote method calls

• Declare its constructor(s) to throw at least a java.rmi.RemoteException

Here is the constructor for the HelloImpl class:

public HelloImpl() throws java.rmi.RemoteException {

super();

}

Note the following:

• The super method call invokes the no-argument constructor of

javax.rmi.PortableRemoteObject, which exports the remote object.

• The constructor must throw java.rmi.RemoteException, because RMI's attempt to

export a remote object during construction might fail if communication resources

are not available.

Provide an implementation for each remote method

The implementation class for a remote object contains the code that implements each of

the remote methods specified in the remote interface. For example, here is the

implementation for the sayHello() method, which prints the string "Hello from

MARS!!" on the server console:

public void sayHello( String from ) throws java.rmi.RemoteException {

System.out.println( "Hello from " + from + "!!");

System.out.flush();
}

Arguments to, or return values from, remote methods can be any data type for the Java

platform, including objects, as long as those objects implement the interface

java.io.Serializable. Most of the core classes in java.lang and java.util implement the

Serializable interface. In RMI:

• By default, local objects are passed by copy, which means that all data members

(or fields) of an object are copied, except those marked as static or transient (see the sketch after this list).

Please refer to the Java Object Serialization Specification for information on how

to alter the default serialization behavior.

• Remote objects are passed by reference. A reference to a remote object is actually

a reference to a stub, which is a client-side proxy for the remote object. Stubs are

described fully in the Java Remote Method Invocation Specification. We'll create

them later in this tutorial in the section: Use rmic to generate stubs and skeletons.
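To make a local class eligible for pass-by-copy, it only has to implement java.io.Serializable. A minimal sketch (the Money class and its fields are illustrative, not part of the tutorial):

import java.io.Serializable;

// Instances of this class can be passed by copy in RMI calls.
public class Money implements Serializable {
    private final String currency;
    private final long amount;        // copied to the receiver
    private transient Object cache;   // transient fields are skipped

    public Money(String currency, long amount) {
        this.currency = currency;
        this.amount = amount;
    }
}

A local argument that is neither serializable nor a remote object cannot be marshalled, and the call will fail at runtime.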

Write The Server Class

A server class is the class which has a main method that creates an instance of the

remote object implementation, and binds that instance to a name in the Naming Service.

The class that contains this main method could be the implementation class itself, or

another class entirely. In this example, the main method is part of HelloServer.java,

which does the following:

• Creates an instance of the servant

• Publishes the object reference

Create the file HelloServer.java. The source code for this file follows. An

explanation of each of the preceding steps follows the source code:

//HelloServer.java

import javax.naming.InitialContext;

import javax.naming.Context;

public class HelloServer {

public static void main(String[] args) {

try {

// Step 1: Instantiate the Hello servant

HelloImpl helloRef = new HelloImpl();

// Step 2: Publish the reference in the Naming Service using JNDI API

Context initialNamingContext = new InitialContext();

initialNamingContext.rebind("HelloService", helloRef );

System.out.println("Hello Server: Ready..."); }

catch (Exception e) {

System.out.println("Trouble: " + e);

e.printStackTrace();
}
}
}

Create an instance of a remote object

The main method of the server needs to create an instance of the remote object

implementation, or Servant. For example:

HelloImpl helloRef = new HelloImpl();

The constructor exports the remote object, which means that once created, the remote

object is ready to accept incoming calls.

Publish the object reference

For a caller (client, peer, or client application) to be able to invoke a method on a remote

object, that caller must first obtain a reference to the remote object. Once a remote object

is registered on the server, callers can look up the object by name (using a naming

service), obtain a remote object reference, and then remotely invoke methods on the

object. In this example, we use the Naming Service that is part of the Object Request

Broker Daemon (orbd). For example, the following code binds the name "HelloService"

to a reference for the remote object:

// Step 2: Publish the reference in the Naming Service using JNDI API

Context initialNamingContext = new InitialContext();

initialNamingContext.rebind("HelloService", helloRef );

Note the following about the arguments to the rebind method call:

• The first argument, "HelloService", is a java.lang.String, representing the name of

the remote object to bind

• The second argument, helloRef, is a reference to the remote object to bind

Write a client program that uses the remote service

The client application in this example remotely invokes the sayHello method in order

to get the string "Hello from MARS!!" to display when the client application runs.

Create the file HelloClient.java. Here is the source code for the client application:

//HelloClient.java

import java.rmi.RemoteException;

import java.net.MalformedURLException;

import java.rmi.NotBoundException;

import javax.rmi.*;

import java.util.Vector;

import javax.naming.NamingException;

import javax.naming.InitialContext;

import javax.naming.Context;

public class HelloClient {

public static void main( String args[] ) {

Context ic;

Object objref;

HelloInterface hi;

try {

ic = new InitialContext();

// STEP 1: Get the Object reference from the Name Service

// using JNDI call.

objref = ic.lookup("HelloService");

System.out.println("Client: Obtained a ref. to Hello server.");

// STEP 2: Narrow the object reference to the concrete type and

// invoke the method.

hi = (HelloInterface) PortableRemoteObject.narrow(

objref, HelloInterface.class);

hi.sayHello( " MARS " );

} catch( Exception e ) {

System.err.println( "Exception " + e + "Caught" );

e.printStackTrace( );

return;
}
}
}

First, the client application gets a reference to the remote object implementation

(advertised as "HelloService") from the Naming Service using Java Naming and

Directory Interface™ (JNDI) calls. Like the rebind method, the lookup method takes a

java.lang.String value representing the name of the object to look up. You supply

lookup() the name of the object you want to look up, and it returns the object bound to

that name: the stub for the remote implementation of the Hello interface.

The client application invokes the remote sayHello() method on the server's remote

object, causing the string "Hello from MARS!!" to be displayed on the command line.

Compile the Example

The source code for this example is now complete and the directory contains four files:

• HelloInterface.java contains the source code for the remote interface

• HelloImpl.java contains the source code for the remote object implementation

• HelloServer.java contains the source code for the server

• HelloClient.java contains the source code for the client application

In this section, you compile the remote object implementation file, HelloImpl.java, in

order to create the .class files needed to run rmic. You then run the rmic compiler to

create stubs and skeletons. A stub is a client-side proxy for a remote object which

forwards RMI-IIOP calls to the server-side dispatcher, which in turn forwards the call to

the actual remote object implementation. The last task is to compile the remaining .java

source files to create .class files. The following tasks will be completed in this section:

1. Compile the remote object implementation

2. Use rmic to generate stubs and skeletons

3. Compile the source files

Compile the remote object implementation

To create stub and skeleton files, the rmic compiler must be run on the fully-qualified

package names of compiled class files that contain remote object implementations. In

this example, the file that contains the remote object implementations is HelloImpl.java.

To generate the stubs and skeletons:

Compile HelloImpl.java, as follows:

javac -d . -classpath . HelloImpl.java

The "-d ." option indicates that the generated files should be placed in the directory from

which you are running the compiler. The "-classpath ." option indicates that files on

which HelloImpl.java is dependent can be found in this directory.

Use rmic to generate skeletons and stubs

To create CORBA-compatible stub and skeleton files, run the rmic compiler with the

-iiop option. The rmic -iiop command takes one or more class names as an argument and

produces class files of the form _HelloImpl_Tie.class and _HelloInterface_Stub.class.

The remote implementation file, HelloImpl.class, is the class name to pass in this

example. For an explanation of rmic options, you can refer to the Solaris rmic manual

page or the Windows rmic manual page. To create the stub and skeleton for the

HelloImpl remote object implementation, run rmic like this:

rmic -iiop HelloImpl

The preceding command creates the following files:

• _HelloInterface_Stub.class - the client stub

• _HelloImpl_Tie.class - the server skeleton

Compile the source files

Compile the source files as follows:

javac -d . -classpath . HelloInterface.java HelloServer.java HelloClient.java

This command creates the class files HelloInterface.class, HelloServer.class, and

HelloClient.class. These are the remote interface, the server, and the client application

respectively.

Run the Example

The following tasks will be completed in this section:

1. Start the Naming Service

2. Start the server

3. Run the client application

Start the Naming Service

For this example, we will use the Object Request Broker Daemon, orbd, which includes

both a Transient and a Persistent Naming Service, and is available with every download

of J2SE 1.4 and higher. For a caller (client, peer, or client application) to be able to

invoke a method on a remote object, that caller must first obtain a reference to the remote

object. Once a remote object is registered on the server, callers can look up the object by

name, obtain a remote object reference, and then remotely invoke methods on the object.

Start the Naming Service by running orbd from the command line. For this example, on

the Solaris operating system:

orbd -ORBInitialPort 1050&

or, on the Windows operating system:

start orbd -ORBInitialPort 1050

You must specify a port on which to run orbd. For this example the port of 1050 is

chosen because in the Solaris operating environment, a user must become root to start a

process on a port under 1024.

You must stop and restart the server any time you modify a remote interface or use

modified/additional remote interfaces in a remote object implementation. Otherwise,

the type of the object reference bound in the Naming Service will not match the

modified class.

Start the server

Open another terminal window and change to the directory containing the source files

for this example. The command for running the server has been spread out below to

make it easier to read, but should be typed without returns between the lines. The

following command shows how to start the HelloServer server. If you used a port other

than 1050 or a host other than localhost when starting the orbd tool, replace those values

in the command below with the actual values used to start orbd. Start the Hello server,

as follows:

java -classpath .

-Djava.naming.factory.initial=com.sun.jndi.cosnaming.CNCtxFactory

-Djava.naming.provider.url=iiop://localhost:1050

HelloServer

The output should look like this:

Hello Server: Ready ...

Run the client application

Once the Naming Service and server are running, the client application can be run. From

a new terminal window, go to the source code directory, and run the client application

from the command line, as shown below. The command for running the client has been

spread out below to make it easier to read, but should be typed without returns between

the lines. If you used a port other than 1050 or a host other than localhost when starting

the orbd tool, replace those values in the command below with the actual values used to

start orbd. Start the client application, as follows:

java -classpath .
-Djava.naming.factory.initial=com.sun.jndi.cosnaming.CNCtxFactory
-Djava.naming.provider.url=iiop://localhost:1050 HelloClient

After running the client application, you will see output similar to the following in the

client window:

Client: Obtained a ref. to Hello server.

The greeting itself ("Hello from MARS!!") is printed in the server window.

ORBD and the Hello server will continue to run until they are explicitly stopped. On

Solaris, you can stop these processes using the pkill orbd and pkill HelloServer

commands from a terminal window. On Windows, you can type Ctrl+C in a prompt

window to kill the process.

CORBA

CORBA, or Common Object Request Broker Architecture, is a standard architecture for

distributed object systems. It allows a distributed, heterogeneous collection of objects to

interoperate. CORBA is a specification that defines how distributed objects can

interoperate. Until the explosion in popularity of the World Wide Web and in particular,

the Java programming language, CORBA was basically a high-end, distributed-object

solution primarily used by C++ developers.

The Common Object Request Broker Architecture (CORBA) is an emerging open

distributed object computing infrastructure being standardized by the Object

Management Group (OMG). CORBA automates many common network programming

tasks such as object registration, location, and activation; request demultiplexing; framing

and error-handling; parameter marshalling and demarshalling; and operation dispatching.

Common Object Request Broker Architecture (CORBA) is an open, vendor-independent

specification for distributed computing. It is published by the Object Management Group

(OMG).

Using the Internet Inter-ORB Protocol (IIOP), CORBA allows objects on different

architectures, operating systems, and networks to interoperate. This interoperability is

obtained by the use of the Interface Definition Language (IDL), which specifies the

syntax that is used to invoke operations on objects. IDL is programming-language

independent.

Developers define the hierarchy, attributes, and operations of objects in IDL, then use an

IDL compiler (such as IDLJ for Java™) to map the definition onto an implementation in

a programming language. The implementation of an object is encapsulated. Clients of the

object can see only its external IDL interface.

The OMG has produced specifications for mappings from IDL to many common

programming languages, including C, C++, and Java. Central to the CORBA

specification is the Object Request Broker (ORB). The ORB routes requests from client

to remote object, and responses to their destinations. Java contains an implementation of

the ORB that communicates by using IIOP.
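In the Java mapping, that ORB is obtained through the standard org.omg.CORBA.ORB class bundled with JDKs of this era. A minimal sketch (the OrbDemo class name is illustrative):

import org.omg.CORBA.ORB;

public class OrbDemo {
    public static void main(String[] args) {
        // Initialize the Java ORB; command-line args may carry
        // options such as -ORBInitialPort for the name service.
        ORB orb = ORB.init(args, null);
        System.out.println("ORB initialized: " + orb);
        orb.shutdown(false); // initiate shutdown without waiting
    }
}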

CORBA Architecture

CORBA defines an architecture for distributed objects. The basic CORBA paradigm is

that of a request for services of a distributed object. Everything else defined by the OMG

is in terms of this basic paradigm. The following figure illustrates the primary

components in the OMG Reference Model architecture.

Figure 1. OMG Reference Model Architecture

• Object Services -- These are domain-independent interfaces that are used by

many distributed object programs. For example, a service providing for the

discovery of other available services is almost always necessary regardless of the

application domain. Two examples of Object Services that fulfill this role are:

o The Naming Service -- which allows clients to find objects based on

names;

o The Trading Service -- which allows clients to find objects based on their

properties.

There are also Object Service specifications for lifecycle management, security,

transactions, and event notification, as well as many others.

• Common Facilities -- Like Object Service interfaces, these interfaces are also

horizontally-oriented, but unlike Object Services they are oriented towards end-

user applications. An example of such a facility is the Distributed Document

Component Facility (DDCF), a compound document Common Facility based on

OpenDoc. DDCF allows for the presentation and interchange of objects based on

a document model, for example, facilitating the linking of a spreadsheet object

into a report document.

• Domain Interfaces -- These interfaces fill roles similar to Object Services and

Common Facilities but are oriented towards specific application domains. For

example, one of the first OMG RFPs issued for Domain Interfaces is for Product

Data Management (PDM) Enablers for the manufacturing domain. Other OMG

RFPs will soon be issued in the telecommunications, medical, and financial

domains.

• Application Interfaces - These are interfaces developed specifically for a given

application. Because they are application-specific, and because the OMG does not

develop applications (only specifications), these interfaces are not standardized.

However, if over time it appears that certain broadly useful services emerge out of

a particular application domain, they might become candidates for future OMG

standardization.

CORBA ORB Architecture

The following figure illustrates the primary components in the CORBA ORB

architecture. Descriptions of these components are available below the figure.

Figure 2. CORBA ORB Architecture

• Object -- This is a CORBA programming entity that consists of an identity, an

interface, and an implementation, which is known as a Servant.

• Servant -- This is an implementation programming language entity that defines

the operations that support a CORBA IDL interface. Servants can be written in a

variety of languages, including C, C++, Java, Smalltalk, and Ada.

• Client -- This is the program entity that invokes an operation on an object

implementation. Accessing the services of a remote object should be transparent

to the caller. Ideally, it should be as simple as calling a method on an object, i.e.,

obj->op(args). The remaining components in Figure 2 help to support this level

of transparency.

• Object Request Broker (ORB) -- The ORB provides a mechanism for

transparently communicating client requests to target object implementations. The

ORB simplifies distributed programming by decoupling the client from the details

of the method invocations. This makes client requests appear to be local

procedure calls. When a client invokes an operation, the ORB is responsible for

finding the object implementation, transparently activating it if necessary,

delivering the request to the object, and returning any response to the caller.

• ORB Interface -- An ORB is a logical entity that may be implemented in various

ways (such as one or more processes or a set of libraries). To decouple

applications from implementation details, the CORBA specification defines an

abstract interface for an ORB. This interface provides various helper functions

such as converting object references to strings and vice versa, and creating

argument lists for requests made through the dynamic invocation interface

described below.

• CORBA IDL stubs and skeletons -- CORBA IDL stubs and skeletons serve as

the “glue” between the client and server applications, respectively, and the ORB.

The transformation between CORBA IDL definitions and the target programming

language is automated by a CORBA IDL compiler. The use of a compiler reduces

the potential for inconsistencies between client stubs and server skeletons and

increases opportunities for automated compiler optimizations.

• Dynamic Invocation Interface (DII) -- This interface allows a client to directly

access the underlying request mechanisms provided by an ORB. Applications use

the DII to dynamically issue requests to objects without requiring IDL interface-

specific stubs to be linked in. Unlike IDL stubs (which only allow RPC-style

requests), the DII also allows clients to make non-blocking deferred synchronous

(separate send and receive operations) and oneway (send-only) calls.

• Dynamic Skeleton Interface (DSI) -- This is the server side's analogue to the

client side's DII. The DSI allows an ORB to deliver requests to an object

implementation that does not have compile-time knowledge of the type of the

object it is implementing. The client making the request has no idea whether the

implementation is using the type-specific IDL skeletons or is using the dynamic

skeletons.

• Object Adapter -- This assists the ORB with delivering requests to the object and

with activating the object. More importantly, an object adapter associates object

implementations with the ORB. Object adapters can be specialized to provide

support for certain object implementation styles (such as OODB object adapters

for persistence and library object adapters for non-remote objects).

Interface Definition Language (IDL)

The services that an object provides are given by its interface. Interfaces are defined in

OMG's Interface Definition Language (IDL). Distributed objects are identified by object

references, which are typed by IDL interfaces.

Developers use the Interface Definition Language (IDL) to describe the interface to a

CORBA object. An IDL schema can then be used to generate Java code for the client and

server that will use the object. The same IDL schema could be used to generate either a

client or server in C++, Ada, or any other language that supports CORBA. You don't

write your implementation of a CORBA service in IDL - so you can continue to write in

pure Java code if you so wish.

The figure below graphically depicts a request. A client holds an object reference to a

distributed object. The object reference is typed by an interface. In the figure below the

object reference is typed by the Rabbit interface. The Object Request Broker, or ORB,

delivers the request to the object and returns any results to the client. In the figure, a jump

request returns an object reference typed by the AnotherObject interface.

The Common Object Request Broker Architecture (CORBA) from the Object

Management Group (OMG) provides a platform-independent, language-independent

architecture for writing distributed, object-oriented applications.

CORBA objects can reside in the same process, on the same machine, down the hall, or

across the planet. The Java language is an excellent language for writing CORBA

programs. Some of the features that account for this popularity include the clear mapping

from OMG IDL to the Java programming language, and the Java runtime environment's

built-in garbage collection.

The ORB

The ORB is the distributed service that implements the request to the remote object. It

locates the remote object on the network, communicates the request to the object, waits

for the results and when available communicates those results back to the client.

The ORB implements location transparency. Exactly the same request mechanism is used

by the client and the CORBA object regardless of where the object is located. It might be

in the same process with the client, down the hall or across the planet. The client cannot

tell the difference.

The ORB implements programming language independence for the request. The client

issuing the request can be written in a different programming language from the

implementation of the CORBA object. The ORB does the necessary translation between

programming languages. Language bindings are defined for all popular programming

languages.

Introduction to CORBA IDL

The first step in developing a CORBA application is to define the interfaces to the objects

required in your distributed system. To define these interfaces, we use CORBA IDL. IDL

allows us to define interfaces to objects without specifying the implementation of those

interfaces. To implement an IDL interface, we define a C++ class that can be accessed

through that interface and then we create objects of that class within an Orbix server

application.

In fact, we can implement IDL interfaces using any programming language for which an

IDL mapping is available. An IDL mapping specifies how an interface defined in IDL

corresponds to an implementation defined in a programming language. CORBA

applications written in different programming languages are fully interoperable. CORBA

defines standard mappings from IDL to several programming languages, including C++,

Java, and Smalltalk. The Orbix IDL compiler converts IDL definitions to corresponding

C++ definitions, in accordance with the standard IDL to C++ mapping.

IDL Modules and Scoping

An IDL module defines a naming scope for a set of IDL definitions. Modules allow you

to group interface and other IDL type definitions in logical name spaces. When writing

IDL definitions, always use modules to avoid possible name clashes. The following

example illustrates the use of modules in IDL:

// IDL
module BankSimple {
    interface Bank {
        ...
    };
    interface Account {
        ...
    };
};

The interfaces Bank and Account are scoped within the module BankSimple. IDL

definitions are available directly within the scope in which you define them. In other

naming scopes, you must use the scoping operator (::) to access these definitions. For

example, the fully scoped name of interfaces Bank and Account are BankSimple::Bank

and BankSimple::Account respectively.

IDL modules can be reopened. For example, a module declaration can appear several

times in a single IDL specification if each declaration contains different data types. In

most IDL specifications, this feature of modules is not required.

Defining IDL Interfaces

An IDL interface describes the functions that an object supports in a distributed

application. Interface definitions provide all of the information that clients need to access

the object across a network. Consider the example of an interface that describes objects

which implement bank accounts in a distributed application. The IDL interface definition

is as follows:

// IDL
module BankSimple {
    // Define a named type to represent money.
    typedef float CashAmount;

    // Forward declaration of interface Account.
    interface Account;

    interface Bank {
        ...
    };

    interface Account {
        // The account owner and balance.
        readonly attribute string name;
        readonly attribute CashAmount balance;

        // Operations available on the account.
        void deposit (in CashAmount amount);
        void withdraw (in CashAmount amount);
    };
};

The definition of interface Account includes both attributes and operations. These are

the main elements of any IDL interface definition.

Attributes in IDL Interface Definitions

Conceptually, attributes correspond to variables that an object implements. Attributes

indicate that these variables are available in an object and that clients can read or write

their values.

In general, attributes map to a pair of functions in the programming language used to

implement the object. These functions allow client applications to read or write the

attribute values. However, if an attribute is preceded by the keyword readonly, then

clients can only read the attribute value.

For example, the Account interface defines the attributes name and balance. These

attributes represent information about the account which the object implementation can

set, but which client applications can only read.
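Under the standard IDL-to-Java mapping, for instance, each attribute becomes accessor methods of the same name on the generated Java interface, with readonly attributes receiving only the getter. A sketch of what an IDL compiler might emit for Account (simplified; the CORBA base interfaces that generated code actually extends are omitted):

public interface Account {
    String name();                 // readonly attribute: getter only
    float balance();               // CashAmount is a typedef for float
    void deposit(float amount);
    void withdraw(float amount);
}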

Operations in IDL Interface Definitions

IDL operations define the format of functions, methods, or operations that clients use to

access the functionality of an object. An IDL operation can take parameters and return a

value, using any of the available IDL data types. For example, the Account interface

defines the operations deposit() and withdraw() as follows:

// IDL
module BankSimple {
    typedef float CashAmount;
    ...
    interface Account {
        // Operations available on the account.
        void deposit(in CashAmount amount);
        void withdraw(in CashAmount amount);
        ...
    };
};

Each operation takes a parameter and has a void return type. Each parameter definition

must specify the direction in which the parameter value is passed. The possible parameter

passing modes are as follows:

in      The parameter is passed from the caller of the operation to the object.
out     The parameter is passed from the object to the caller.
inout   The parameter is passed in both directions.

Parameter passing modes clarify operation definitions and allow an IDL compiler to map

operations accurately to a target programming language.
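In the Java mapping, for example, in parameters are passed as ordinary arguments, while out and inout parameters are passed through generated Holder classes whose public value field the server fills in. A sketch using the standard org.omg.CORBA.StringHolder (the getOwner operation is hypothetical):

org.omg.CORBA.StringHolder owner = new org.omg.CORBA.StringHolder();
account.getOwner(owner);          // hypothetical operation with an out parameter
System.out.println(owner.value);  // value was set by the server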

Raising Exceptions in IDL Operations

IDL operations can raise exceptions to indicate the occurrence of an error. CORBA

defines two types of exceptions:

• System exceptions are a set of standard exceptions defined by CORBA.

• User-defined exceptions are exceptions that you define in your IDL specification.

Implicitly, all IDL operations can raise any of the CORBA system exceptions. No

reference to system exceptions appears in an IDL specification. To specify that an

operation can raise a user-defined exception, first define the exception structure and then

add an IDL raises clause to the operation definition. For example, the operation

withdraw() in interface Account could raise an exception to indicate that the withdrawal

has failed, as follows:

// IDL
module BankExceptions {
    typedef float CashAmount;
    ...
    interface Account {
        exception InsufficientFunds {
            string reason;
        };
        void withdraw(in CashAmount amount)
            raises(InsufficientFunds);
        ...
    };
};

An IDL exception is a data structure that contains member fields. In the preceding

example, the exception InsufficientFunds includes a single member of type string.

The raises clause follows the definition of operation withdraw() to indicate that this

operation can raise exception InsufficientFunds. If an operation can raise more than

one type of user-defined exception, include each exception identifier in the raises clause

and separate the identifiers using commas.

Invocation Semantics for IDL Operations

By default, IDL operation calls are synchronous; that is, a client calls an operation and

blocks until the object has processed the operation call and returned a value. The IDL

keyword oneway allows you to modify these invocation semantics.

If you precede an operation definition with the keyword oneway, a client that calls the

operation will not block while the object processes the call. For example, you could add a

oneway operation to interface Account that sends a notice to an Account object, as

follows:

// IDL
module BankSimple {
    ...
    interface Account {
        oneway void notice(in string text);
        ...
    };
};

Orbix does not guarantee that a oneway operation call will succeed; so if a oneway

operation fails, a client may never know. There is only one circumstance in which Orbix

indicates failure of a oneway operation. If a oneway operation call fails before Orbix

transmits the call from the client address space, then Orbix raises a system exception. A

oneway operation cannot have any out or inout parameters and cannot return a value.

In addition, a oneway operation cannot have an associated raises clause.

Passing Context Information to IDL Operations

CORBA context objects allow a client to map a set of identifiers to a set of string values.

When defining an IDL operation, you can specify that the operation should receive the

client mapping for particular identifiers as an implicit part of the operation call. To do

this, add a context clause to the operation definition.

Consider the example of an Account object, where each client maintains a set of

identifiers, such as sys_time and sys_location that map to information that the

operation deposit() logs for each deposit received. To ensure that this information is

passed with every operation call, extend the definition of deposit() as follows:

// IDL
module BankSimple {
    typedef float CashAmount;
    ...
    interface Account {
        void deposit(in CashAmount amount)
            context("sys_time", "sys_location");
        ...
    };
};

A context clause includes the identifiers for which the operation expects to receive

mappings.

Inheritance of IDL Interfaces

IDL supports inheritance of interfaces. An IDL interface can inherit all the elements of

one or more other interfaces. For example, the following IDL definition illustrates two

interfaces, called CheckingAccount and SavingsAccount, that inherit from interface

Account:

// IDL
module BankSimple {
    interface Account {
        ...
    };
    interface CheckingAccount : Account {
        readonly attribute float overdraftLimit;
        boolean orderChequeBook ();
    };
    interface SavingsAccount : Account {
        float calculateInterest ();
    };
};

Interfaces CheckingAccount and SavingsAccount implicitly include all elements of

interface Account. An object that implements CheckingAccount can accept invocations

on any of the attributes and operations of this interface, and on any of the elements of

interface Account. However, a CheckingAccount object may provide different

implementations of the elements of interface Account than an object that implements

Account only. The following IDL definition shows how to define an interface that

inherits both CheckingAccount and SavingsAccount:

// IDL
module BankSimple {
    interface Account {
        ...
    };
    interface CheckingAccount : Account {
        ...
    };
    interface SavingsAccount : Account {
        ...
    };
    interface PremiumAccount : CheckingAccount, SavingsAccount {
        ...
    };
};

Interface PremiumAccount is an example of multiple inheritance in IDL. Figure 3.1

illustrates the inheritance hierarchy for this interface. If you define an interface that

inherits from two interfaces which contain a constant, type, or exception definition of the

same name, you must fully scope that name when using that constant, type, or exception.

An interface cannot inherit from two interfaces that include operations or attributes that

have the same name.

Figure 3.1: Multiple Inheritance of IDL Interfaces

The Object Interface Type

IDL includes the pre-defined interface Object, which all user-defined interfaces inherit

implicitly. While interface Object is never defined explicitly in your IDL specification,

the operations of this interface are available through all your interface types. In addition,

you can use Object as an attribute or operation parameter type to indicate that the

attribute or operation accepts any interface type, for example:

// IDL
interface ObjectLocator {
    void getAnyObject (out Object obj);
};

Note that it is not legal IDL syntax to inherit interface Object explicitly.

Forward Declaration of IDL Interfaces

In an IDL definition, you must declare an IDL interface before you reference it. A

forward declaration declares the name of an interface without defining it. This feature of

IDL allows you to define interfaces that mutually reference each other. For example, IDL

interface Bank includes an operation of IDL interface type Account, to indicate that Bank

stores a reference to an Account object. If the definition of interface Account follows the

definition of interface Bank, you must forward declare Account as follows:

// IDL
module BankSimple {
    // Forward declaration of Account.
    interface Account;

    interface Bank {
        Account create_account (in string name);
        Account find_account (in string name);
    };

    // Full definition of Account.
    interface Account {
        ...
    };
};

The syntax for a forward declaration is the keyword interface followed by the interface

identifier.

Overview of the IDL Data Types

In addition to IDL module, interface, and exception types, there are three general

categories of data type in IDL:

• Basic types.

• Complex types.

• Pseudo object types

IDL Basic Types

The following table lists the basic types supported in IDL.

IDL Type             Range of Values
short                -2^15 ... 2^15 - 1 (16-bit)
unsigned short       0 ... 2^16 - 1 (16-bit)
long                 -2^31 ... 2^31 - 1 (32-bit)
unsigned long        0 ... 2^32 - 1 (32-bit)
long long            -2^63 ... 2^63 - 1 (64-bit)
unsigned long long   0 ... 2^64 - 1 (64-bit)
float                IEEE single-precision floating point numbers.
double               IEEE double-precision floating point numbers.
char                 An 8-bit value.
boolean              TRUE or FALSE.
octet                An 8-bit value that is guaranteed not to undergo any conversion during transmission.
any                  A value that can express an arbitrary IDL type.

The any data type allows you to specify that an attribute value, an operation parameter, or

an operation return value can contain an arbitrary type of value to be determined at

runtime.

IDL Complex Types

This section describes the IDL data types enum, struct, union, string, sequence, array, and

fixed.

Enum

An enumerated type allows you to assign identifiers to the members of a set of values, for

example:

// IDL
module BankSimple {
    enum Currency {pound, dollar, yen, franc};
    interface Account {
        readonly attribute CashAmount balance;
        readonly attribute Currency balanceCurrency;
        ...
    };
};

In this example, attribute balanceCurrency in interface Account can take any one of the

values pound, dollar, yen, or franc.

Struct

A struct data type allows you to package a set of named members of various types, for

example:

// IDL
module BankSimple {
    struct CustomerDetails {
        string name;
        short age;
    };
    interface Bank {
        CustomerDetails getCustomerDetails (in string name);
        ...
    };
};

In this example, the struct CustomerDetails has two members. The operation

getCustomerDetails() returns a struct of type CustomerDetails that includes values

for the customer name and age.

Union

A union data type allows you to define a structure that can contain only one of several

alternative members at any given time. A union saves space in memory, as the amount of

storage required for a union is the amount necessary to store its largest member. All IDL

unions are discriminated. A discriminated union associates a label value with each

member. The value of the label indicates which member of the union currently stores a

value. For example, consider the following IDL union definition:

// IDL
struct DateStructure {
    short Day;
    short Month;
    short Year;
};

union Date switch (short) {
    case 1: string stringFormat;
    case 2: long digitalFormat;
    default: DateStructure structFormat;
};

The union type Date is discriminated by a short value. For example, if this short value is

1, then the union member stringFormat stores a date value as an IDL string. The default

label associated with the member structFormat indicates that if the short value is not 1

or 2, then the structFormat member stores a date value as an IDL struct.

Note that the type specified in parentheses after the switch keyword must be an integer,

char, boolean or enum type and the value of each case label must be compatible with this

type.

String

An IDL string represents a character string, where each character can take any value of

the char basic type. If the maximum length of an IDL string is specified in the string

declaration, then the string is bounded. Otherwise the string is unbounded. The following

example shows how to declare bounded and unbounded strings:

// IDL
module BankSimple {
    interface Account {
        // A bounded string with maximum length 10.
        attribute string<10> sortCode;

        // An unbounded string.
        readonly attribute string name;
        ...
    };
};

Sequence

In IDL, you can declare a sequence of any IDL data type. An IDL sequence is similar to a

one-dimensional array of elements. An IDL sequence does not have a fixed length. If the

sequence has a fixed maximum length, then the sequence is bounded. Otherwise, the

sequence is unbounded. For example, the following code shows how to declare bounded

and unbounded sequences as members of an IDL struct:

// IDL
module BankSimple {
    interface Account {
        ...
    };
    struct LimitedAccounts {
        string bankSortCode<10>;
        // Maximum length of sequence is 50.
        sequence<Account, 50> accounts;
    };
    struct UnlimitedAccounts {
        string bankSortCode<10>;
        // No maximum length of sequence.
        sequence<Account> accounts;
    };
};

A sequence must be named by an IDL typedef declaration before it can be used as the

type of an IDL attribute or operation parameter. The following code illustrates this:

// IDL
module BankSimple {
    typedef sequence<string> CustomerSeq;
    interface Account {
        void getCustomerList(out CustomerSeq names);
        ...
    };
};

Arrays

In IDL, you can declare an array of any IDL data type. IDL arrays can be multi-

dimensional and always have a fixed size. For example, you can define an IDL struct

with an array member as follows:

// IDL
module BankSimple {
    ...
    interface Account {
        ...
    };
    struct CustomerAccountInfo {
        string name;
        Account accounts[3];
    };
    interface Bank {
        void getCustomerAccountInfo (in string name,
            out CustomerAccountInfo accounts);
        ...
    };
};

In this example, struct CustomerAccountInfo provides access to an array of Account

objects for a bank customer, where each customer can have a maximum of three

accounts. An array must be named by an IDL typedef declaration before it can be used

as the type of an IDL attribute or operation parameter. The IDL typedef declaration

allows you to define an alias for a data type. The following code illustrates this:

// IDL
module BankSimple {
    interface Account {
        ...
    };
    typedef Account AccountArray[100];
    interface Bank {
        readonly attribute AccountArray accounts;
        ...
    };
};

Note that an array is a less flexible data type than an IDL sequence, because an array

always has a fixed length. An IDL sequence always has a variable length, although it may

have an associated maximum length value.

Fixed

The fixed data type allows you to represent a number in two parts: digits and a scale. The

digits value represents the length of the number, and the scale is a non-negative integer that

represents the position of the decimal point in the number, relative to the rightmost digit.

// IDL
module BankSimple {
    typedef fixed<10,4> ExchangeRate;
    struct Rates {
        ExchangeRate USRate;
        ExchangeRate UKRate;
        ExchangeRate IRRate;
    };
};

In this case, the ExchangeRate type has a digits value of 10 and a scale of 4. This means

that it can represent numbers up to (+/-)999999.9999. The maximum value for the digits

is 31, and scale cannot be greater than digits. The maximum value that a fixed type can

hold is equal to the maximum value of a double. Scale can also be a negative number.

This means that the decimal point is moved scale digits in a rightward direction, causing

trailing zeros to be added to the value of the fixed. For example, fixed <3,-4> with a

numeric value of 123 actually represents the number 1230000. This provides a

mechanism for storing numbers with trailing zeros in an efficient manner. Note: Fixed

<3, -4> can also be represented as fixed <7, 0>. Constant fixed types can also be

declared in IDL. The digits and scale are automatically calculated from the constant

value. For example:

// IDL
module Circle {
    const fixed pi = 3.142857;
};

This yields a fixed type with a digits value of 7, and a scale value of 6.

IDL Pseudo Object Types

CORBA defines a set of pseudo object types that ORB implementations use when

mapping IDL to some programming languages. These object types have interfaces

defined in IDL but do not have to follow the normal IDL mapping for interfaces and are

not generally available in your IDL specifications. You can use only the following

pseudo object types as attribute or operation parameter types in an IDL specification:

• CORBA::NamedValue
• CORBA::Principal
• CORBA::TypeCode

To use any of these three types in an IDL specification, include the file orb.idl in the

IDL file as follows:

// IDL
#include <orb.idl>
...

This statement indicates to the IDL compiler that types NamedValue, Principal, and

TypeCode may be used. The file orb.idl does not need to exist in your system; the IDL compiler recognizes this file name and treats it specially. Do not name any of your IDL files orb.idl.

Defining Data Type Names and Constants

IDL allows you to define new data type names and constants. This section describes how

to use each of these features of IDL.

Data Type Names

The typedef keyword allows you to define a meaningful or simpler name for an IDL

type. The following IDL provides a simple example of using this keyword:

// IDL
module BankSimple {
    interface Account {
        ...
    };
    typedef Account StandardAccount;
};

The identifier StandardAccount can act as an alias for type Account in subsequent IDL

definitions. Note that CORBA does not specify whether the identifiers Account and

StandardAccount represent distinct IDL data types in this example.

Constants

IDL allows you to specify constant data values using one of several basic data types. To

declare a constant, use the IDL keyword const, for example:

// IDL
module BankSimple {
    interface Bank {
        const long MaxAccounts = 10000;
        const float Factor = (10.0 - 6.5) * 3.91;
        ...
    };
};

The value of an IDL constant cannot change. You can define a constant at any level of

scope in your IDL specification.

System Object Model

System Object Model (SOM) is an object-oriented shared library system developed by

IBM. DSOM, a distributed version based on CORBA, allowed objects on different

computers to communicate.

SOM (System Object Model) is a library packaging technology that enables class libraries to be shared across languages, regardless of the language in which they were written. This ability to share

class libraries between various object oriented languages solves many interoperability

and re-use problems between object oriented and non object oriented languages as well.

Key characteristics of SOM in support of these key commercial requirements include:

• the ability to create portable shrink wrapped binaries

• the ability to create class libraries in one language that can be accessed and used by other languages

• the ability to subclass from binaries even if they were written in a different language

• the ability to add new methods and relocate existing methods without re-compilation of the application

• the ability to insert new classes into the inheritance hierarchy without recompiling the application.

Objects

SOM objects are derived from a root object which defines the essential behavior common

to all SOM objects. Factory methods are used to create SOM objects at run time. These

factory methods are invoked on a class object, in the SOM run-time.

Operations

The interface to a SOM object is defined by permitting the specification of operation

signatures which consist of an operation name and the argument and result types.

Operations are implemented by methods, which realize an object's behavior.

Requests

Client requests are made on objects by specifying the name of the object and the name of the operation, along with the parameters to pass. An object can support multiple

operations.

Messages

Messages are not explicitly identified in SOM.

Specification of behavioral semantics

SOM defines object behavior by permitting the specification of operation signatures

which consist of an operation name, and the argument and result types. The semantics of

the operations on an object are defined by the methods that implement these operations.

Methods

Methods are invoked on SOM objects. Methods can be relocated upward in the class

hierarchy without requiring the client to be re-compiled. SOM supports three different method dispatching mechanisms: offset resolution, name resolution, and dispatch function resolution.

The "offset resolution" mechanism implies a static scheme for typing objects and is

roughly equivalent to the C++ virtual function concept. It offers the best performance

characteristics for SOM method resolution at a cost of some loss in flexibility.

This form of method resolution supports polymorphism that is based on the derivation of

the object's class. Name resolution supports access to objects whose class is not known at

compile time, and permits polymorphism based on the protocols that an object supports,

rather than its derivation.

The "dispatch function" resolution is a feature of SOM that permits method resolution to

be based on arbitrary rules known only in the domain of the receiving object. Dispatch

function resolution is a completely dynamic mechanism that permits run time type

checking, and open-ended forms of polymorphism. A distinguishing feature of SOM is

that all 3 forms of method resolution are complementary and can be intermixed within

client programs.

State

The state of SOM objects is accessed through published interfaces to an object. Invoking

operations on objects may cause state changes.

Object lifetime

SOM objects are created by invoking a create operation on a factory object in the SOM

run time. Once created, the object will exist until explicitly deleted or until the process

that created it no longer exists. A SOM object would need to make use of a persistence

mechanism in order to exist beyond the life of the process that created it. A persistence

mechanism is beyond the scope of this object model discussion, however, SOM could be

and has been used as the basis for building a variety of persistence frameworks.

Behavior/state grouping

SOM uses the classical object model in that a target object must be specified for each

operation.

Communication model

Since SOM is a basic mechanism, its run-time model is one where an operation occurs on

a single thread within a single process. However, SOM code permits concurrent

execution by multiple threads on systems where SOM supports the underlying threads

model; therefore, multi-threaded programs can use mutual exclusion mechanisms to

serialize updates to SOM objects with confidence that critical sections in SOM are thread

safe.

Complex object interactions that need to span process boundaries can be constructed on

SOM using standard inter-process communication facilities provided by the underlying

system. No serialization code is necessary if programming in a single thread, single

process model. A class library based on SOM is used to provide SOM with distributed

access to remote SOM objects.

Binding

SOM provides support for both early and late binding. These binding choices are on a per

method basis.

Polymorphism

The polymorphism provided by SOM depends on the method dispatching scheme

selected. If "offset resolution" is used then a static scheme for typing objects is used and

polymorphism is based strictly on class inheritance. If the "name resolution" method

dispatching is used then methods are located dynamically and polymorphism is based on

the actual protocols that objects honor.

Encapsulation

Access to the state of SOM objects is through the operations that make up the object's

interface. Invoking operations on SOM objects can have side effects. SOM objects do

have private data that is not accessible by invoking external operations. In addition, it is

possible to define class attributes on SOM objects. These attributes are accessed via set

and get functions implemented for each attribute and are invoked in the same way as

methods.

Identity, Equality, Copy

When a SOM object is created the SOM run-time returns a pointer to the object. It is left

to higher level abstractions which build on SOM to define object identity, equality and

copy operations.

Types and Classes

The SOM description for "types and classes" is essentially the same as that described in

the OODBTG Reference Model entry in this section in that a "type" defines a protocol

shared by a group of objects, called "instances" of the type, and a class defines an implementation shared by a group of objects.

In SOM, all objects are derived from a SOM root object which defines the essential

behavior common to all SOM objects. In addition, SOM has a root class for all SOM

meta classes which defines the essential behavior common to all SOM classes. The SOM

meta classes define factory methods that manufacture objects of any class for which they

are the meta class.

Inheritance and Delegation

A class defines both an interface and an implementation for objects. The interface defines the

signature of the methods supported by objects of the class, and the implementation defines what

instance variables implement the object's state and what procedures implement its methods. New

classes are derived by sub-classing from previously existing classes through inheritance and

specialization. Subclasses inherit their interfaces and implementations from their parent classes

unless they are overridden. SOM supports multiple inheritance. That is, a class may be derived

from multiple parent classes.

Noteworthy Objects

SOM has three important objects which exist as part of the run time environment. The first is a

root object from which all SOM objects are derived. This root object defines the essential

behavior common to all SOM objects.

The second object is the root class object which defines the essential behavior common to all

SOM classes. All SOM classes are expected to have this class object or some class derived from

it as their meta class. This object carries the methods that serve as object factories. The third

object is a class manager. It serves as a run time registry for all SOM class objects that have been

created or dynamically loaded by the current process.

Attributes

SOM classes support attributes. An attribute can be thought of as an instance variable that

has accompanying "get" and "set" methods. The get and set methods are invoked the

same way as other methods.

Literals

SOM literal types are characters and integers.

Containment

SOM is a basic technology and therefore does not have a notion of one object containing

another. However, since SOM is a library packaging technology it supports the

construction of SOM objects which can contain other SOM objects.

Aggregates

Aggregation is used in SOM to represent collections of basic data types. The aggregation

types as expressed in C are struct, union and enum.

Extensibility

In SOM, all new classes are defined in terms of a previously existing class. New methods

can be added and old methods can be overridden. Methods can also be relocated upward

in the hierarchy without requiring re-compilation of the application. Source code is not

required for sub-classing since the binaries can be used for this purpose. This sub-

classing from binaries even extends to languages other than the one the binaries were

written in.

Dynamic

SOM is dynamic in that the class binaries can be replaced without having to re-compile the application.

Metaclasses/metaobject protocol

In SOM all classes are real objects. SOM supports a class object which represents the

meta class for the creation of all SOM classes. The SOM meta class defines the behavior

common to all class objects. Since it inherits from the root SOM object it exists at run

time and contains the methods for manufacturing object instances. It also has the methods

used to dynamically obtain information about a class and its methods at run time.

Object Languages

SOM is designed to work with a variety of programming languages. SOM supports an

interface definition language to define interfaces and data. The IDL is then compiled

(pre-compiler) and linked with the application. This does not preclude the use of a

languages' object model in the same application. The use of SOM with a procedural

language provides that language with object oriented capabilities.

Portable Object Adapter (POA)

The POA is the CORBA object responsible for splitting the server-side remote invocation handler

into the remote Object and its Servant. The object is exposed for the remote invocations,

while the servant contains the methods that are actually handling the requests. The

Servant for each object can be chosen either statically (once) or dynamically (for each

remote invocation), in both cases allowing calls to be forwarded to another server.

On the server side, the POAs form a tree-like structure, where each POA is responsible

for one or more objects being served. The branches of this tree can be independently activated or deactivated, and can use different code for servant location or activation and different request-handling policies.

An object adapter is the mechanism that connects a request using an object reference

with the proper code to service that request. The Portable Object Adapter, or POA, is a

particular type of object adapter that is defined by the CORBA specification. The POA is

designed to meet the following goals:

• Allow programmers to construct object implementations that are portable between

different ORB products.

• Provide support for objects with persistent identities.

• Provide support for transparent activation of objects.

• Allow a single servant to support multiple object identities simultaneously.

Creating and Using the POA

The steps for creating and using a POA will vary according to the specific application

being developed. The following steps generally occur during the POA life cycle:

1. Get the root POA

2. Define the POA's policies

3. Create the POA

4. Activate the POAManager

5. Activate the servants, which may include activating the Tie

6. Create the object reference from the POA

Step 1: Get the root POA

The first step is to get the first POA, which is called the rootPOA. The root POA is

managed by the ORB and provided to the application using the ORB initialization

interface under the initial object name "RootPOA". An example of code that will get the

root POA object and cast it to a POA is:

ORB orb = ORB.init( args, null );
POA rootPOA = POAHelper.narrow(orb.resolve_initial_references("RootPOA"));

Step 2: Define the POA's Policies

The Portable Object Adapter (POA) is designed to provide an object adapter that can be

used with multiple ORB implementations with no rewriting needed to deal with different

vendors' implementations.

The POA is also intended to allow persistent objects -- at least, from the client's

perspective. That is, as far as the client is concerned, these objects are always alive, and

maintain data values stored in them, even though physically, the server may have been

restarted many times. The POA allows the object implementer a lot more control over the

object's identity, state, storage, and life cycle. You can create a POA without defining any

policies and the default values will be used. The root POA has the following policies by

default:

• Thread Policy: ORB_CTRL_MODEL

• Lifespan Policy: TRANSIENT

• Object Id Uniqueness Policy: UNIQUE_ID

• Id Assignment Policy: SYSTEM_ID

• Servant Retention Policy: RETAIN

• Request Processing Policy: USE_ACTIVE_OBJECT_MAP_ONLY

• Implicit Activation Policy: IMPLICIT_ACTIVATION

The following code snippet shows how policies are set in the RMI-IIOP (with POA)

example:

Policy[] tpolicy = new Policy[3];
tpolicy[0] = rootPOA.create_lifespan_policy(
    LifespanPolicyValue.TRANSIENT );
tpolicy[1] = rootPOA.create_request_processing_policy(
    RequestProcessingPolicyValue.USE_ACTIVE_OBJECT_MAP_ONLY );
tpolicy[2] = rootPOA.create_servant_retention_policy(
    ServantRetentionPolicyValue.RETAIN );

Step 3: Create the POA

Creating a new POA allows the application developer to declare specific policy choices

for the new POA and to provide a different adapter activator and servant manager (these

are callback objects used by the POA to activate POAs on demand and activate servants).

Creating new POAs also allows the application developer to partition the name space of

objects, as Object Ids are interpreted relative to a POA. Finally, by creating new POAs,

the developer can independently control request processing for multiple sets of objects. A

POA is created as a child of an existing POA using the create_POA operation on the

parent POA. To create a new POA, pass in the following information:

• Name of the POA - The POA is given a name that must be unique with respect to

all other POAs with the same parent. In the following example, the POA is named

childPOA.

• POA Manager - Specify the POA Manager to be associated with the new POA.

If, as is shown in the following example, null is passed for this parameter, a new

POA Manager will be created. The user can also choose to pass the POA Manager

of another POA.

• Policy List - Specify the policy list to be associated with the POA to control its

behavior. In the following example, a persistent lifespan policy has already been

defined for this POA.

The following code snippet shows how the POA is created:

// Create a POA by passing the Persistent Policy

POA persistentPOA = rootPOA.create_POA("childPOA", null,

persistentPolicy );
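The persistentPolicy array used above is not shown in the original snippet; a minimal sketch of how it could be built with the standard POA policy factories is:

// Sketch: build the policy list referenced as persistentPolicy above.
Policy[] persistentPolicy = new Policy[1];
persistentPolicy[0] = rootPOA.create_lifespan_policy(
    LifespanPolicyValue.PERSISTENT );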

Step 4: Activate the POAManager

Each POA object has an associated POAManager object that controls the processing state

of the POAs with which it is associated, such as whether requests to the POA are queued

or discarded. The POAManager can also deactivate the POA. A POA Manager may be

associated with one or more POA objects. The POAManager can have the following states:

• Holding - In this state, associated POAs will queue incoming requests.

• Active - In this state, associated POAs will start processing requests.

• Discarding - In this state, associated POAs will discard incoming requests.

• Inactive - In this state, associated POAs will reject the requests that have not

begun executing, as well as any new requests.

POA Managers are not automatically activated when they are created. The following

code snippet shows how the POAManager is activated. If the POA Manager is not

activated in this way, all calls to the Servant will hang because, by default, the POA

Manager is in the HOLD state.

// Activate PersistentPOA's POAManager. Without this step,
// all calls to Persistent Server will hang because POAManager
// will be in the 'HOLD' state.
persistentPOA.the_POAManager().activate( );

Step 5: Activate the servants

At any point in time, a CORBA object may or may not be associated with an active

servant. If the POA has the RETAIN policy, the servant and its associated Object Id are

entered into the Active Object Map of the appropriate POA. This type of activation can

be accomplished in one of the following ways.

• The server application itself explicitly activates individual objects (via the

activate_object or activate_object_with_id operations).

• The server application instructs the POA to activate objects on demand by having the

POA invoke a user-supplied servant manager. The server application registers this

servant manager with set_servant_manager.

• Under some circumstances (when the IMPLICIT_ACTIVATION policy is also in effect

and the language binding allows such an operation), the POA may implicitly activate

an object when the server application attempts to obtain a reference for a servant that

is not already active (that is, not associated with an Object Id).

If the USE_DEFAULT_SERVANT policy is also in effect, the server application instructs the

POA to activate unknown objects by having the POA invoke a single servant no matter

what the Object Id is. The server application registers this servant with set_servant.

If the POA has the NON_RETAIN policy, for every request, the POA may use either a

default servant or a servant manager to locate an active servant. From the POA's point of

view, the servant is active only for the duration of that one request. The POA does not

enter the servant-object association into the Active Object Map.

When using RMI-IIOP technology, your implementations use delegation (known as the

Tie model) to associate your implementation with the interface. When you create an

instance of your implementation, you also need to create a Tie object to associate it with

a CORBA interface. The following code snippet shows how to activate the Tie, if the

POA policy is USE_ACTIVE_OBJECT_MAP_ONLY. This sample code is from the RMI-IIOP

with POA example.

_HelloImpl_Tie tie = (_HelloImpl_Tie)Util.getTie( helloImpl );
String helloId = "hello";
byte[] id = helloId.getBytes();
tPOA.activate_object_with_id( id, tie );

Step 6: Create the object reference

Object references are created in servers. Once created, they may be exported to clients.

Object references encapsulate object identity information and information required by the

ORB to identify and locate the server and the POA with which the object is associated.

References are created in the following ways:

• Explicitly activate a servant and associate it with an object reference.

The following example is from Hello World: Persistent Server. This example uses

the servant_to_reference operation to map an activated servant to its

corresponding object reference.

// Resolve Root Naming context and bind a name for the servant.
org.omg.CORBA.Object obj = orb.resolve_initial_references( "NameService" );
NamingContextExt rootContext = NamingContextExtHelper.narrow( obj );
NameComponent[] nc = rootContext.to_name( "PersistentServerTutorial" );
rootContext.rebind( nc, persistentPOA.servant_to_reference( servant ) );

• Server application directly creates a reference.

The following example is from the RMI-IIOP with POA example. The code directly creates a reference; in doing so, it brings the abstract object into existence, but does not associate it with an active servant.

// Publish the object reference using the same object id
// used to activate the Tie object.
Context initialNamingContext = new InitialContext();
initialNamingContext.rebind("HelloService",
    tPOA.create_reference_with_id(id, tie._all_interfaces(tPOA,id)[0]) );

• Server application causes a servant to implicitly activate itself.

The behavior can occur only if the POA has been created with the

IMPLICIT_ACTIVATION policy, which is the default behavior. Once a reference is

created in the server, it can be made available to clients.

The ORB

The IBM ORB ships with the JVM and is used by the IBM WebSphere Application

Server. It is one of the enterprise features of the Java 2 Standard Edition. The ORB is a

tool and runtime component that provides distributed computing through the OMG-

defined CORBA IIOP communication protocol. The ORB runtime consists of a Java

implementation of a CORBA ORB. The ORB toolkit provides APIs and tools for both

the RMI programming model and the IDL programming model.

This separation of interface from implementation, enabled by OMG IDL, is the essence

of CORBA - how it enables interoperability, with all of the transparencies we've claimed.

The interface to each object is defined very strictly. In contrast, the implementation of an

object - its running code, and its data - is hidden from the rest of the system (that is,

encapsulated) behind a boundary that the client may not cross. Clients access objects

only through their advertised interface, invoking only those operations that the object exposes through its IDL interface, with only those parameters (input and output) that are included in the invocation.

Figure 1 shows how everything fits together, at least within a single process: You

compile your IDL into client stubs and object skeletons, and write your object (shown on

the right) and a client for it (on the left). Stubs and skeletons serve as proxies for clients

and servers, respectively. Because IDL defines interfaces so strictly, the stub on the

client side has no trouble meshing perfectly with the skeleton on the server side, even if

the two are compiled into different programming languages, or even running on different

ORBs from different vendors.

In CORBA, every object instance has its own unique object reference, an identifying

electronic token. Clients use the object references to direct their invocations, identifying

to the ORB the exact instance they want to invoke (ensuring, for example, that the books

you select go into your own shopping cart, and not into your neighbor's.) The client acts

as if it's invoking an operation on the object instance, but it's actually invoking on the

IDL stub which acts as a proxy. Passing through the stub on the client side, the invocation

continues through the ORB (Object Request Broker), and the skeleton on the

implementation side, to get to the object where it is executed. The CORBAservices

provide standard ways of passing object references around your network of CORBA

objects. Location Transparency keeps your applications flexible.

How do remote invocations work?

Figure 2 diagrams a remote invocation. In order to invoke the remote object instance, the

client first obtains its object reference. (There are many ways to do this, but we won't

detail any of them here. Easy ways include the Naming Service and the Trader Service.)

To make the remote invocation, the client uses the same code that it used in the local

invocation we just described, substituting the object reference for the remote instance.

When the ORB examines the object reference and discovers that the target object is

remote, it routes the invocation out over the network to the remote object's ORB. (Again

we point out: for load balanced servers, this is an oversimplification.)

How does this work?

OMG has standardized this process at two key levels: First, the client knows the type of

object it's invoking (that it's a shopping cart object, for instance), and the client stub and

object skeleton are generated from the same IDL. This means that the client knows

exactly which operations it may invoke, what the input parameters are, and where they

have to go in the invocation; when the invocation reaches the target, everything is there

and in the right place. We've already seen how OMG IDL accomplishes this. Second, the

client's ORB and object's ORB must agree on a common protocol - that is, a

representation to specify the target object, operation, all parameters (input and output) of

every type that they may use, and how all of this is represented over the wire. OMG has

defined this also - it's the standard protocol IIOP. (ORBs may use other protocols besides

IIOP, and many do for various reasons. But virtually all speak the standard protocol IIOP

for reasons of interoperability, and because it's required by OMG for compliance.)

Although the ORB can tell from the object reference that the target object is remote, the

client can not. (The user may know this anyway, because of other knowledge - for

instance, that all accounting objects run on the mainframe at the main office in Tulsa.)

There is nothing in the object reference token that the client holds and uses at invocation

time that identifies the location of the target object. This ensures location transparency -

the CORBA principle that simplifies the design of distributed object computing

applications.

Using the ORB

To use the ORB, you need to understand the properties that the ORB contains. These

properties change the behavior of the ORB.

How the ORB works

This description explains the client side, and describes what the ORB does under the

covers and transparently to the client. Then, the important role of the ORB on the server

side is explained. This section describes a basic, typical RMI-IIOP session in which a

client accesses a remote object on a server implementing an interface named Sample. The

client then invokes a simple method called message(). The method returns a "Hello

World" string.

The client side

The client side operation of the ORB is described.

1. Stub creation:

In a simple distributed application, the client needs to know (in almost all the

cases) what object it is going to contact, and which method of this object it needs

to call. Because the ORB is a general framework, you must give it general

information about the method that you want to call.

2. ORB initialization:

In a stand-alone Java application, the client has to create an instance of the ORB.

3. Obtaining the remote object

Several methods exist by which the client can get a reference for the remote

object.

4. Remote method invocation:

The client holds a reference to the remote object that is an instance of the stub

class. The next step is to call the method on that reference. The stub implements

the Sample interface and therefore contains the message() method that the client

has called.

Stub creation

In a simple distributed application, the client needs to know (in almost all the cases) what

object it is going to contact, and which method of this object it needs to call. Because the

ORB is a general framework, you must give it general information about the method that

you want to call. For this reason, you implement a Java interface, Sample, which contains

the signatures of the methods that can be called in the remote object.

The client relies on the existence of a server that contains an object that implements the Sample interface. You must, therefore, create a proxy. This proxy is an object, called a stub, that acts as an interface between the client application and the ORB. To create the stub, run the rmic compiler on the Java interface:

rmic -iiop Sample

This command generates a class file named _Sample_Stub.class.

The presence of a stub is not always mandatory for a client application to operate. When

you use particular CORBA features such as the DII (Dynamic Invocation Interface), you

do not require a stub. The reason is that the proxy code is implemented directly by the

client application. You can also upload a stub from the server to which you are trying to

connect. See the CORBA specification for further details.

ORB initialization

In a stand-alone Java application, the client has to create an instance of the ORB. This

instance is created by calling the static method init(...); for example:

ORB orb = ORB.init(args,props);

The parameters that are passed to the method are:

• A string array that contains property-value pairs

• A Java Properties object

For an applet, a similar method is used in which a Java Applet is passed instead of the

string array. The first step of the ORB initialization is the processing of the ORB

properties. The properties are processed in the following sequence:

1. Check in the applet parameter or application string array

2. Check in the properties parameter (if the parameter exists)

3. Check in the system properties

4. Check in the orb.properties file that is in the <user-home> directory (if the file

exists)

5. Check in the orb.properties file that is in the <java-home>/lib directory (if the file

exists)

6. Fall back on a hardcoded default behavior

The two properties ORBClass and ORBSingletonClass determine which ORB class has

to be instantiated. After this, the ORB starts and initializes the TCP transport layer. If the

ListenerPort property was set, the ORB also opens a server socket that is listening for

incoming requests, as a server-side ORB usually does. At the end of the init() method, the

ORB is fully functional and ready to support the client application.
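For illustration, the ORBClass and ORBSingletonClass properties can be supplied programmatically. The property keys below are the standard org.omg.CORBA ones; the IBM class names are given only as plausible examples, and args is the application's command-line array:

// Sketch only: the value strings are illustrative, not prescribed by this document.
java.util.Properties props = new java.util.Properties();
props.put("org.omg.CORBA.ORBClass", "com.ibm.CORBA.iiop.ORB");
props.put("org.omg.CORBA.ORBSingletonClass", "com.ibm.rmi.corba.ORBSingleton");
ORB orb = ORB.init(args, props);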

Obtaining the remote object

Several methods exist by which the client can get a reference for the remote object.

Typically, this reference is a string, called an IOR (Interoperable Object Reference). For

example:

IOR:000000000000001d524d493a5......

This reference contains all the information needed to find the remote object. It also

contains some details of the settings of the server to which the object belongs. The client

ORB is not supposed to understand the details of the IOR, but to use it as a key. In other

words, the IOR is a reference to the remote object. However, when client and server are

both using an IBM ORB, extra features are coded in the IOR. For example, the IBM

ORB adds a proprietary field into the IOR, called IBM_PARTNER_VERSION. This

field holds a value like the following example:

49424d0a 00000008 00000000 1400 0005

where:

• The three initial bytes (from left to right) are the ASCII code for IBM, followed

by 0x0A, which specifies that the following bytes handle the partner version.

• The next 4 bytes encode the length of the remaining data (in this case 8 bytes)

• The next 4 null bytes are for future use.

• The 2 bytes for the Partner Version Major field (0x1400 in this example) define

the release of the ORB that is being used (1.4.0 in this case).

• The Minor field (0x0005) distinguishes, within the same release, service refreshes that contain changes affecting compatibility with earlier versions.

Because the IOR is not visible to application-level ORB programmers and the client ORB

does not know where to look for it, there is one more step, called the bootstrap process.

Basically, the client application needs to tell the ORB where the remote object reference

is located.

A typical example of bootstrapping is if you use a naming service: the client calls the

ORB method resolve_initial_references("NameService"), which returns (after narrowing)

a reference to the name server in the form of a NamingContext object. The ORB looks

for a name server on the local machine at port 2809 (by default). If no name server

exists, or the name server is listening on another port, the ORB returns an exception. The

client application can specify a different host, port, or both by using the -ORBInitRef

and -ORBInitPort options.

Using the NamingContext and the name with which the Remote Object has been bound

in the name service, the client can retrieve a reference to the remote object. The reference

to the remote object that the client holds is always an instance of a Stub object; for

example _Sample_Stub.
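Putting these steps together, the client-side lookup might be sketched as follows; the binding name "Sample" is an assumption, and exception handling is omitted:

// Sketch: obtain the remote object via the naming service and narrow it.
org.omg.CORBA.Object objRef = orb.resolve_initial_references("NameService");
NamingContextExt nc = NamingContextExtHelper.narrow(objRef);
org.omg.CORBA.Object obj = nc.resolve_str("Sample");   // assumed binding name
Sample sample = (Sample) javax.rmi.PortableRemoteObject.narrow(obj, Sample.class);
String reply = sample.message();   // the invocation goes through _Sample_Stub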

ORB.resolve_initial_references() causes much system activity. The ORB starts by

creating a remote communication with the name server. This communication might

include several requests and replies. Typically, the client ORB first checks whether a

name server is listening, then asks for the specified remote reference. In an application

where performance is considered important, caching the remote reference is a better

alternative to repetitive use of the naming service. However, because the naming service

implementation is a transient type, the validity of the cached reference is tied to the time

in which the naming service is running. The IBM ORB implements an Interoperable

Naming Service as described in the CORBA 2.3 specification. This service includes a

new string format that can be passed as a parameter to the ORB methods

string_to_object() and resolve_initial_references(). By calling the previous two methods

where the string parameter has a corbaloc (or corbaname) format as, for example:

corbaloc:iiop:1.0@aserver.aworld.aorg:1050/AService

the client ORB uses GIOP 1.0 to send a request with a simple object key of AService to

port 1050 at host aserver.aworld.aorg. There, the client ORB expects to find a server for

the Aservice that is requested, and returns a reference to itself. You can then use this

reference to look for the remote object. This naming service is transient. It means that the

validity of the contained references expires when the name service or the server for the

remote object is stopped.

Remote method invocation

The client holds a reference to the remote object that is an instance of the stub class. The

next step is to call the method on that reference. The stub implements the Sample

interface and therefore contains the message() method that the client has called.

First, the stub code determines whether the implementation of the remote object is

located on the same ORB instance. If so, the object can be accessed without using the

Internet.

If the implementation of the remote object is located on the same ORB instance, the

performance improvement can be significant because a direct call to the object

implementation is done. If no local servant can be found, the stub first asks the ORB to

create a request by calling the _request() method, specifying the name of the method to

call and whether a reply is expected or not.

The CORBA specification imposes an extra layer of indirection between the ORB code

and the stub. This layer is commonly known as delegation. CORBA imposes the layer

using an interface named Delegate. This interface specifies a portable API for ORB-

vendor-specific implementation of the org.omg.CORBA.Object methods. Each stub

contains a delegate object, to which all org.omg.CORBA.Object method invocations are

forwarded. The delegate object allows a stub that is generated by the ORB from one

vendor to work with the delegate from the ORB of another vendor.

When creating a request, the ORB first checks whether the enableLocateRequest property

is set to true, in which case, a LocateRequest is created. The steps of creating this request

are like the full Request case.

The ORB obtains the IOR of the remote object (the one that was retrieved by a naming

service, for example) and passes the information that is contained in the IOR (Profile

object) to the transport layer.

The transport layer uses the information that is in the IOR (IP address, port number, and

object key) to create a connection if it does not exist. The ORB TCP/IP transport has an

implementation of a table of cached connections for improving performance, because

the creation of a new connection is a time-consuming process. The connection is not an

open communication channel to the server host. It is only an object that has the potential

to create and deliver a TCP/IP message to a location on the Internet. Typically, that

involves the creation of a Java™ socket and a reader thread that is ready to intercept the

server reply. The ORB.connect() method is called as part of this process.

When the ORB has the connection, it proceeds to create the Request message. The

message contains the header and the body of the request. The CORBA 2.3 specification

specifies the exact format. The header contains these items, for example:

• Local IP address

• Local port

• Remote IP address

• Remote port

• Message size

• Version of the CORBA stream format

• Byte sequence convention

• Request types

• Ids

The body of the request contains several service contexts and the name and parameters of

the method invocation. Parameters are typically serialized. A service context is some

extra information that the ORB includes in the request or reply, to add several other

functions. CORBA defines a few service contexts, such as the codebase and the codeset

service contexts. The first is used for the callback feature which is described in the

CORBA specification, the second is used to specify the encoding of strings.

In the next step, the stub calls _invoke(). Again, it is the delegate invoke() method that is

executed. The ORB in this chain of events calls the send() method on the connection that

writes the request to the socket buffer and then flushes it away. The delegate invoke()

method waits for a reply to arrive. The reader thread that was spawned during the connection

creation gets the reply message, demarshals it, and returns the correct object.

The server side

Typically, a server is an application that makes available one of its implemented objects

through an ORB instance.

Servant implementation

The implementations of the remote object can either inherit from

javax.rmi.PortableRemoteObject, or implement a remote interface and use the

exportObject() method to register themselves as a servant object. In both cases, the

servant has to implement the Sample interface. Here, the first case is described. From

now on, the servant is called SampleImpl.
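A minimal sketch of such a servant, assuming the Sample interface described earlier exposes the single message() method:

import java.rmi.Remote;
import java.rmi.RemoteException;
import javax.rmi.PortableRemoteObject;

// The remote interface; its exact shape is an assumption based on this section.
interface Sample extends Remote {
    String message() throws RemoteException;
}

// The servant: inheriting from PortableRemoteObject exports it on construction.
class SampleImpl extends PortableRemoteObject implements Sample {
    SampleImpl() throws RemoteException { super(); }
    public String message() { return "Hello World"; }
}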

Tie generation

Again, you must put an interfacing layer between the servant and the ORB code. In the

old RMI (JRMP) naming convention, skeleton was the name given to the proxy that was

used on the server side between ORB and the object implementation. In the RMI-IIOP

convention, the proxy is called a Tie.

You generate the RMI-IIOP tie class at the same time as the stub, by calling the rmic

compiler. These classes are generated from the compiled Java™ programming language

classes that contain remote object implementations. For example, the command:

rmic -iiop SampleImpl

Servant binding

The steps required to bind the servant are described. The server implementation is

required to do the following tasks:

1. Create an ORB instance; that is, ORB.init(...)

2. Create a servant instance; that is, new SampleImpl(...)

3. Create a Tie instance from the servant instance; that is, Util.getTie(...)

4. Export the servant by binding it to a naming service

As described for the client side, you must create the ORB instance by calling the ORB

static method init(...). The typical steps for that method are:

1. Retrieve properties

2. Get the system class loader

3. Load and instantiate the ORB class as specified in the ORBClass property

4. Initialize the ORB as determined by the properties

Then, the server needs to create an instance of the servant class, SampleImpl.

Something more than the creation of an instance of a class happens under the cover.

Remember that the servant SampleImpl extends the PortableRemoteObject class, so the

constructor of PortableRemoteObject is executed. This constructor calls the static method

exportObject(...) with the parameter that is the same servant instance that you try to

instantiate. If the servant does not inherit from PortableRemoteObject, the application

must call exportObject() directly.

The exportObject() method first tries to load an RMI-IIOP tie. The ORB implements a

cache of classes of ties for improving performance. If a tie class is not already cached, the

ORB loads a tie class for the servant. If it cannot find one, it goes up the inheritance tree,

trying to load the parent class ties. The ORB stops if it finds a PortableRemoteObject

class or the java.lang.Object, and returns a null value. Otherwise, it returns an instance of

that tie from a hashtable that pairs a tie with its servant. If the ORB cannot get hold of the

tie, it guesses that an RMI (JRMP) skeleton might be present and calls the exportObject()

method of the UnicastRemoteObject class. A null tie is registered in the cache and an

exception is thrown. The servant is now ready to receive remote methods invocations.

However, it is not yet reachable.

In the next step, the server code has to get hold of the tie itself (assuming the ORB has

already got hold of the tie) to be able to export it to a naming service. To do that, the

server passes the newly created instance of the servant into the static method

javax.rmi.CORBA.Util.getTie(). This method, in turn, fetches the tie from the hashtable that the ORB created, which stores the tie-servant pairs.

When in possession of the tie, the server must get hold of a reference for the naming

service and bind the tie to it. As in the client side, the server calls the ORB method

resolve_initial_references("NameService"). The server then creates a NameComponent,

which is a directory tree object identifying the path and the name of the remote object

reference in the naming service. The server binds the NameComponent together with the

tie. The naming service then makes the IOR for the servant available to anyone

requesting it. During this process, the server code sends a LocateRequest to get hold of the

naming server address. It also sends a Request that requires a rebind operation to the

naming server.
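Taken together, the binding tasks above can be sketched as a small server main method. The class name SampleServer, the binding name "Sample", and the SampleImpl servant follow the earlier sketches and are assumptions:

import javax.rmi.CORBA.Tie;
import javax.rmi.CORBA.Util;
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

public class SampleServer {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);           // 1. create the ORB instance
        SampleImpl servant = new SampleImpl();    // 2. create the servant (exports itself)
        Tie tie = Util.getTie(servant);           // 3. fetch the tie the ORB cached
        tie.orb(orb);                             //    associate the tie with this ORB
        org.omg.CORBA.Object o = orb.resolve_initial_references("NameService");
        NamingContextExt root = NamingContextExtHelper.narrow(o);
        NameComponent[] name = root.to_name("Sample");
        root.rebind(name, tie.thisObject());      // 4. bind the tie in the naming service
        orb.run();                                // wait for client invocations
    }
}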

Processing a request

The server ORB uses a single listener thread, and a reader thread for each connection or

client, to process an incoming message. During the ORB initialization, a listener thread

was created. The listener thread is listening on a default port (the next available port at

the time the thread was created). You can specify the listener port by using the

com.ibm.CORBA.ListenerPort property. When a request comes in through that port, the

listener thread first creates a connection with the client side. In this case, it is the TCP

transport layer that takes care of the details of the connection. As seen for the client side,

the ORB caches all the connections that it creates.

By using the connection, the listener thread spawns a reader thread to process the

incoming message. When dealing with multiple clients, the server ORB has a single

listener thread and one reader thread for each connection or client.

The reader thread does not fully read the request message, but instead creates an input

stream for the message to be piped into. Then, the reader thread picks up one of the

worker threads in the implemented pool (or creates one if none is present), and delegates

the reading of the message. The worker threads read all the fields in the message and

dispatch them to the tie, which unmarshals any parameters and calls the remote method.

The service contexts are then created and written to the response output stream with the

return value. The reply is sent back with a similar mechanism, as described in the client

side. After that, the connection is removed from the reader thread which eventually stops.

Additional features of the ORB

Portable object adapter, fragmentation, portable interceptors, and Interoperable Naming

Service are described. This section describes:

• Portable object adapter

• Fragmentation

• Portable interceptors

• Interoperable Naming Service (INS)

Portable object adapter

An object adapter is the primary method for an object to access ORB services such as

object reference generation. An object adapter exports a public interface to the object

implementation and a private interface to the skeleton.

The main responsibilities of an object adapter are:

• Generation and interpretation of object references

• Method invocation

• Object and implementation activation and deactivation

• Mapping object references to the corresponding object implementations

In CORBA 2.1 and below, all ORB vendors had to implement an object adapter, which

was known as the basic object adapter. Because the basic object adapter was never

completely specified with a standard CORBA IDL, vendors implemented it in many

different ways. Therefore, for example, programmers could not write server

implementations that could be truly portable between different ORB products. A first

attempt to define a standard object adapter interface was made in CORBA 2.1. With

CORBA v.2.3, the OMG group released the final corrected version for a standard

interface for the object adapter. This adapter is known as the portable object adapter

(POA). Some of the main features of the POA specification are:

• Allow programmers to construct object and server implementations that are

portable between different ORB products.

• Provide support for persistent objects; that is, objects that have a lifetime span of

multiple server lifetimes.

• Support transparent activation of objects and the ability to associate policy

information to objects.

• Allow multiple distinct instances of the POA to exist in one ORB.

Since IBM® SDK for Java v1.4, the ORB supports both the POA specification and the

proprietary basic object adapter that was already present in previous IBM ORB versions. By default, the rmic compiler, when used with the -iiop option, generates RMI-IIOP ties for

servers. These ties are based on the basic object adapter. When a server implementation

uses the POA interface, you must add the -poa option to the rmic compiler to generate

the relevant ties.

If you want to implement an object that is using the POA, the server application must

obtain a POA object. When the server application calls the ORB method

resolve_initial_reference("RootPOA"), the ORB returns the reference to the main POA

object that contains default policies (see the CORBA specification for a complete list of

all the POA policies). You can create new POAs as child objects of the RootPOA, and

these child objects can contain different policies. This in turn allows you to manage

different sets of objects separately, and to partition the namespace of object IDs.

Ultimately, a POA handles Object IDs and active servants. An active servant is a

programming object that exists in memory and has been registered with the POA by use

of one or more associated object identities. The ORB and POA cooperate to determine on

which servant the client-requested operation should be started. By using the POA APIs,

you can create a reference for the object, associate an object ID, and activate the servant

for that object. A map of object IDs and active servants is stored inside the POA. A POA

also provides a default servant that is used when no active servant has been registered.

You can register a particular implementation of this default servant and also of a servant

manager, which is an object for managing the association of an object ID with a

particular servant.

The POA Manager is an object that encapsulates the processing state of one or more

POAs. You can control and change the state of all POAs by using operations on the POA

manager. The adapter activator is an object that an application developer uses to activate

child POAs.

Fragmentation

The CORBA specification introduced the concept of fragmentation to handle the growing

complexity and size of marshalled objects in GIOP messages. Graphs of objects are

linearized and serialized inside a GIOP message under the IDL specification of

valuetypes. Fragmentation specifies the way a message can be split into several smaller

messages (fragments) and sent over the network.

The system administrator can set the ORB properties FragmentSize and

FragmentTimeout to obtain best performance in the existing net traffic. As a general

rule, the default value of 1024 bytes for the fragment size is a good trade-off in almost all

conditions. The fragment timeout must not be set to too low a value, or time-outs might

occur unnecessarily.
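As an illustration, these properties might be supplied at ORB initialization. The com.ibm.CORBA property prefix is an assumption, by analogy with the ListenerPort property mentioned earlier:

// Sketch only: property names assumed from this section's description.
java.util.Properties props = new java.util.Properties();
props.put("com.ibm.CORBA.FragmentSize", "1024");      // default fragment size, in bytes
props.put("com.ibm.CORBA.FragmentTimeout", "300000"); // avoid setting this too low
ORB orb = ORB.init(args, props);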

Portable interceptors

For some time, CORBA implementations have had non-standard mechanisms that allow

users to insert their own code into the ORB's flow of execution. This code, known as

interceptors, is called at particular stages during the processing of requests. It can directly

inspect and even manipulate requests. Because this message filtering mechanism is

extremely flexible and powerful, the OMG standardized interceptors in the CORBA 2.4.2

specification under the name "portable interceptors".

The idea of a portable interceptor is to define a standard interface to register and execute

application-independent code that, among other things, takes care of passing service

contexts. These interfaces are stored in the package org.omg.PortableInterceptor. The

implementation classes are in the com.ibm.rmi.pi.* package of the IBM® ORB. All the

interceptors implement the Interceptor interface.

Two classes of interceptors are defined: request interceptors and IOR (Interoperable

Object Reference) interceptors. Request interceptors are called during request mediation.

IOR interceptors are called when new object references are created so that service-

specific data can be added to the newly-created IOR in the form of tagged components.

The ORB calls request interceptors on the client and the server side to manipulate service

context information. Interceptors must register with the ORB for those interceptor points

that are to be executed.

Five interception points are on the client side:

• send_request (sending request)

• send_poll (sending request)

• receive_reply (receiving reply)

• receive_exception (receiving reply)

• receive_other (receiving reply)

Five interception points are on the server side:

• receive_request_service_contexts (receiving request)

• receive_request (receiving request)

• send_reply (sending reply)

• send_exception (sending reply)

• send_other (sending reply)

The only interception point for IOR interceptors is establish_component. The ORB calls

this interceptor point on all its registered IOR interceptors when it is assembling the set of

components that is to be included in the IOP profiles for a new object reference.

Registration of interceptors is done using the interface ORBInitializer.

Example:

public class MyInterceptor extends org.omg.CORBA.LocalObject
    implements ClientRequestInterceptor, ServerRequestInterceptor
{
    public String name() { return "MyInterceptor"; }

    public void destroy() {}

    // ClientRequestInterceptor operations
    public void send_request(ClientRequestInfo ri) {
        logger(ri, "send_request");
    }
    public void send_poll(ClientRequestInfo ri) {
        logger(ri, "send_poll");
    }
    public void receive_reply(ClientRequestInfo ri) {
        logger(ri, "receive_reply");
    }
    public void receive_exception(ClientRequestInfo ri) {
        logger(ri, "receive_exception");
    }
    public void receive_other(ClientRequestInfo ri) {
        logger(ri, "receive_other");
    }

    // Server interceptor methods
    public void receive_request_service_contexts(ServerRequestInfo ri) {
        logger(ri, "receive_request_service_contexts");
    }
    public void receive_request(ServerRequestInfo ri) {
        logger(ri, "receive_request");
    }
    public void send_reply(ServerRequestInfo ri) {
        logger(ri, "send_reply");
    }
    public void send_exception(ServerRequestInfo ri) {
        logger(ri, "send_exception");
    }
    public void send_other(ServerRequestInfo ri) {
        logger(ri, "send_other");
    }

    // Trivial Logger
    public void logger(RequestInfo ri, String point) {
        System.out.println("Request ID:" + ri.request_id()
            + " at " + name() + "." + point);
    }
}

The interceptor class extends org.omg.CORBA.LocalObject to ensure that an instance of

this class does not get marshaled, because an interceptor instance is strongly tied to the

ORB with which it is registered. This trivial implementation prints out a message at every

interception point.

You can register the interceptor by using an ORB Initializer. Because interceptors are

intended to be a means by which ORB services access ORB processing, by the time the

init() method call on the ORB class returns an ORB instance, the interceptors have

already been registered. It follows that interceptors cannot be registered with an ORB

instance that is returned from the init() method call.

First, you must create a class that implements the ORBInitializer interface. This class

will be called by the ORB during its initialization:

public class MyInterceptorORBInitializer extends LocalObject

implements ORBInitializer

{

public static MyInterceptor interceptor;

public String name() {

return "";

}

public void pre_init(ORBInitInfo info) {

try {

interceptor = new MyInterceptor();

info.add_client_request_interceptor(interceptor);

info.add_server_request_interceptor(interceptor);

} catch (Exception ex) {}

}

public void post_init(ORBInitInfo info) {}

}

Then, in the server implementation, add the following code:

Properties p = new Properties();

p.put("org.omg.PortableInterceptor.ORBInitializerClass.pi.MyInterceptorORBInitializer", "");

orb = ORB.init((String[])null, p);

During the ORB initialization, the ORB runtime obtains the ORB properties that begin

with org.omg.PortableInterceptor.ORBInitializerClass. The remaining portion is

extracted and the corresponding class is instantiated. Then, the pre_init() and post_init()

methods are called on the initializer object.

Interoperable Naming Service (INS)

The CORBA CosNaming Service observes the OMG Interoperable Naming Service

specification (INS, CORBA 2.3 specification). CosNaming stands for Common Object

Services Naming.

The name service maps names to CORBA object references. Object references are stored

in the namespace by name and each object reference-name pair is called a name binding.

Name bindings can be organized under naming contexts. Naming contexts are themselves

name bindings, and serve the same organizational function as a file system subdirectory

does. All bindings are stored under the initial naming context. The initial naming context

is the only persistent binding in the namespace.

This implementation also supports new string formats that can be passed as parameters to the ORB methods string_to_object() and resolve_initial_references(): the corbaname and corbaloc formats.

Corbaloc URIs allow you to specify object references that can be contacted by IIOP or found through ORB::resolve_initial_references(). This format is easier to manipulate than a stringified IOR. To specify an IIOP object reference, use a URI of the form (see the CORBA 2.4.2 specification for the full syntax):

corbaloc:iiop:<host>:<port>/<object key>

For example, the following corbaloc URI specifies an object with key MyObjectKey that

is in a process that is running on myHost.myOrg.com listening on port 2809.

corbaloc:iiop:myHost.myOrg.com:2809/MyObjectKey

Corbaname URIs (see the CORBA 2.4.2 specification) cause string_to_object() to look

up a name in a CORBA naming service. They are an extension of the corbaloc syntax:

corbaname:<corbaloc location>/<object key>#<stringified name>

For example:

corbaname::myOrg.com:2050#Personal/schedule

where the portion of the reference up to the hash mark (#) is the URL that returns the root

naming context. The second part is the argument that is used to resolve the object on the

NamingContext. The INS specifies two standard command-line arguments that provide a portable way of configuring ORB::resolve_initial_references():

• -ORBInitRef takes an argument of the form <ObjectId>=<ObjectURI>. For

example, you can use the following command-line arguments:

-ORBInitRef NameService=corbaname::myhost.example.com

resolve_initial_references("NameService") returns a reference to the object with

key NameService available on myhost.example.com, port 2809.

• -ORBDefaultInitRef provides a prefix string that is used to resolve otherwise

unknown names. When resolve_initial_references() cannot resolve a name that

has been specifically configured (with -ORBInitRef), it constructs a string that

consists of the default prefix, a `/' character, and the name requested. The string is

then fed to string_to_object(). So, for example, with a command-line of:

-ORBDefaultInitRef corbaloc::myhost.example.com

a call to resolve_initial_references("MyService") returns the object reference that

is denoted by corbaloc::myhost.example.com/MyService.
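
As a hedged illustration of these URI formats in client code, the corbaname example above might be resolved as follows; the Schedule interface and its ScheduleHelper class are hypothetical IDL-generated types assumed for this sketch.

import org.omg.CORBA.ORB;

public class CorbanameClient {
    public static void main(String[] args) {
        ORB orb = ORB.init(args, null);
        // string_to_object() accepts stringified IORs as well as
        // corbaloc and corbaname URIs.
        org.omg.CORBA.Object obj = orb.string_to_object(
                "corbaname::myOrg.com:2050#Personal/schedule");
        // Narrow to the expected interface (Schedule/ScheduleHelper
        // are hypothetical generated types).
        Schedule schedule = ScheduleHelper.narrow(obj);
    }
}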

CORBA Services

CORBA Naming Service

The Naming Service allows you to associate abstract names with CORBA objects and

allows clients to find those objects by looking up the corresponding names. This service

is both very simple and very useful. A server that holds a CORBA object binds a name

to the object by contacting the Naming Service. To obtain a reference to the object, a

client requests the Naming Service to look up the object associated with a specified

name. This is known as resolving the object name. The Naming Service provides

interfaces defined in IDL that allow servers to bind names to objects and clients to

resolve those names.

Most CORBA applications make some use of the Naming Service. Locating a particular

object is a common requirement in distributed systems and the Naming Service provides

a simple, standard way to do this.

The Interface to the Naming Service

The Naming Service maintains a database of names and the objects associated with them.

An association between a name and an object is called a binding. The IDL interfaces to

the Naming Service provide operations to access the database of bindings. For example,

you can create new bindings, resolve names, and delete existing bindings.

Format of Names in the Naming Service

In the CORBA Naming Service, names can be associated with two types of object: a

naming context or an application object. A naming context is an object in the Naming

Service within which you can resolve the names of other objects.

Naming contexts are organized into a naming graph, which may form a naming hierarchy

much like that of a filing system. Using this analogy, a name bound to a naming context

would correspond to a directory and a name bound to an application object would

correspond to a file.

The full name of an object, including all the associated naming contexts, is known as a

compound name. The first component of a compound name gives the name of a naming

context, in which the second component is accessed. This process continues until the last

component of the compound name has been reached.

The notion of a compound name is common in filing systems. For example, in UNIX,

compound names take the form /aaa/bbb/ccc; in Windows they take the form

C:\aaa\bbb\ccc. A compound name in the Naming Service takes a more abstract form:

an IDL sequence of name components. Name components are not simple strings. Instead,

a name component is defined as an IDL structure, of type CosNaming::NameComponent,

that holds two strings:

// IDL
// In module CosNaming.
typedef string Istring;
struct NameComponent {
    Istring id;
    Istring kind;
};

A name is a sequence of these structures:

typedef sequence<NameComponent> Name;

The id member of a NameComponent is a simple identifier for the object; the kind

member is a secondary way to differentiate objects and is intended to be used by the

application layer. For example, you could use the kind member to distinguish the type of

the object being referred to. The semantics you choose for this member are not

interpreted by OrbixNames.

Both the id and kind members of a NameComponent are used in name resolution. Two

names that differ only in the kind member of one NameComponent are considered to be

different names.

IDL Interfaces to the Naming Service

The IDL module CosNaming contains two interfaces that allow your applications to access the Naming Service:

• NamingContext: Provides the operations that allow you to access the main features of the Naming Service, such as binding and resolving names.

• BindingIterator: Allows you to read each element in a list of bindings. Such a list may be returned by operations of the NamingContext interface.
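
As a hedged sketch of these interfaces in use, a server might bind, and a client resolve, the compound name Personal/schedule like this (objRef and error handling are simplified, and the Personal context is assumed to exist already):

import org.omg.CORBA.ORB;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContext;
import org.omg.CosNaming.NamingContextHelper;

public class NamingSketch {
    static void bindAndResolve(ORB orb, org.omg.CORBA.Object objRef)
            throws Exception {
        NamingContext root = NamingContextHelper.narrow(
                orb.resolve_initial_references("NameService"));
        // A compound name is a sequence of (id, kind) components.
        // The "Personal" context is assumed to have been created
        // beforehand with bind_new_context().
        NameComponent[] name = {
            new NameComponent("Personal", ""),
            new NameComponent("schedule", "")
        };
        root.rebind(name, objRef);                       // server side: bind
        org.omg.CORBA.Object found = root.resolve(name); // client side: resolve
    }
}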

CORBA Trader Service

A Trading Object Service allows an object to be registered with a description of its

functionality. This service greatly increases the scalability of distributed systems by

making services easier to locate. An example of a service that a client might search for is

a printer.

How clients and servers use a trader

A trader contains a number of service types that describe a service. For example, a printer

service type might have properties such as pages_per_minute (a long) and location (a

string). Service types are stored in a Service Type Repository. Service offers, or offers,

are instances of these service types.

Figure 1: Typical trading service process

A server can export an offer to the trader, which includes an object reference for one of

its objects and values for properties defined by the service type, for example, "50 pages

per minute, located on the first floor". A client can then query the trading service based

on these properties using a filter called a constraint. For example, a client could search

for a printer where "pages_per_minute > 200". The trader then returns to the client an

offer of a service. The client can then use the object reference in the offer to invoke on

the server.

The Trader Service's Components

The Trader Service functionality is divided into components where each component has

an associated interface as follows:

• Lookup

• Register

• Admin

• Link

• Proxy

The functionality of each kind of trader depends on the interfaces that it supports. The

following is a list of the kinds of traders specified by the OMG:

• The simplest trader is the Query trader, which just supports the Lookup

interface. This could be useful, for example, where a trader is pre-loaded

and optimized for searching.

• The simple trader supports not only the Lookup interface but it also

supports exporting of offers with the Register interface.

• The stand-alone trader supports the interfaces of a simple trader and

additionally supports administration of the trader's configuration settings

using the Admin interface.

• The proxy trader supports the interfaces of a stand-alone trader and

additionally supports the Proxy interface. The proxy trader essentially

exports a lookup interface for delayed evaluation of offers, and can be

used for encapsulating legacy applications, or as a kind of service offer

factory.

• The linked trader supports the interfaces of a stand-alone trader and

additionally supports federation of traders using the Link interface.

• The full-service trader combines the functionality of all component

interfaces. Orbix E2A CORBA Trader Service is a full-service trader.

CORBA Event Service

An event originates at a client supplier and is forwarded through an event channel to any

number of client consumers. Suppliers and consumers are completely decoupled; a

supplier has no knowledge of the number of consumers or their identities, and consumers

have no knowledge of which supplier generated a given event.

Service Capabilities

An event channel provides the following capabilities for forwarding events:

• Enables consumers to subscribe to events of certain types.

• Accepts incoming events from client suppliers.

• Forwards supplier-generated events to all connected consumers.

Connections

Suppliers and consumers connect to an event channel and not directly to each other, as

shown in Figure 1. From a supplier's perspective, the event channel appears as a single

consumer; from a consumer's perspective, the event channel appears as a single supplier.

In this way, the event channel decouples suppliers and consumers.

Figure 1: Suppliers and Consumers Communicating through an Event Channel

How Many Clients?

Any number of suppliers can issue events to any number of consumers using a single

event channel. There is no correlation between the number of suppliers and the number of

consumers. New suppliers and consumers can be easily added to or removed from the

system. Furthermore, any supplier or consumer can connect to more than one event

channel.

Example

Many documents can be linked to a spreadsheet cell, and must be notified when the cell

value changes. However, the spreadsheet software does not need to know about the

documents linked to its cell. When the cell value changes, the spreadsheet software

should be able to issue an event that is automatically forwarded to each connected

document.

Event Delivery

Figure 2 shows a sample implementation of event propagation in a CORBA system. In

this example, suppliers are implemented as CORBA clients; the event channel and

consumers are implemented as CORBA servers. An event occurs when a supplier

invokes a clearly defined IDL operation on an object in the event channel application.

The event channel then propagates the event by invoking a similar operation on objects in

each of the consumer servers.

Figure 2: Event Propagation in a CORBA System

CORBA Notification Service

The notification service extends the concept of event-based messaging with the following

features:

• Quality-of-service: Properties such as event message priority and lifetime can be set on different levels within the event channel.

• Persistence: Quality-of-service parameters control the availability of events and channels beyond the lifetime of the service process, supplier processes, or consumer processes.

• Event filtering and subscription: Filters allow consumers to receive only the events they are interested in, and to tell suppliers which events are in demand.

• Event publication: Suppliers can inform an event channel which events they can supply, so consumers can subscribe to new event types as they become available.

• Structured events: Header information in structured events lets you set properties and filterable data on event messages.

• Multicast event delivery: Groups of consumers can subscribe to events and receive them using the UDP multicast protocol, which keeps network traffic to a minimum.

Event Communication

CORBA specifies two approaches to initiating the transfer of events between suppliers and consumers:

• push model: Suppliers initiate transfer of events by sending those events to the

channel. The channel then forwards them to any consumers connected to it.

• pull model: Consumers initiate the transfer of events by requesting them from the

channel. The channel requests events from the suppliers connected to it.

Push Model

In the push model, suppliers generate events and actively pass them to an event channel.

In this model, consumers wait for events to arrive from the channel. Figure 3 illustrates a

push model architecture in which push suppliers communicate with push consumers

through the event channel.

Figure 3: The Push Model of Event Transfer

Event Propagation

In this architecture, a supplier initiates event transfer by invoking an IDL operation on an

object in the event channel. The event channel then invokes a similar operation on an

object in each consumer that is connected to the channel.
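
The following hedged Java sketch shows the supplier side of this push-style propagation, assuming the standard CosEventChannelAdmin/CosEventComm interfaces and an EventChannel reference obtained elsewhere (for example, from the Naming Service):

import org.omg.CORBA.Any;
import org.omg.CORBA.ORB;
import org.omg.CosEventChannelAdmin.EventChannel;
import org.omg.CosEventChannelAdmin.ProxyPushConsumer;
import org.omg.CosEventChannelAdmin.SupplierAdmin;

public class PushSupplierSketch {
    static void pushOneEvent(ORB orb, EventChannel channel) throws Exception {
        SupplierAdmin admin = channel.for_suppliers();
        ProxyPushConsumer proxy = admin.obtain_push_consumer();
        // A nil supplier reference is allowed if we do not need
        // disconnect notifications.
        proxy.connect_push_supplier(null);
        Any event = orb.create_any();
        event.insert_string("cell value changed");
        proxy.push(event);  // the channel forwards this to all connected consumers
    }
}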

Pull Model

In the pull model, a consumer actively requests events from the channel. The supplier

waits for a pull request to arrive from the channel. When a pull request arrives, event data

is generated and returned to the channel. Figure 4 illustrates a pull model architecture in

which pull consumers communicate with pull suppliers through the event channel.

Figure 4: Pull Model Suppliers and Consumers Communicating through an Event Channel

Event Propagation

In this architecture, the event channel invokes an IDL operation on an object in each

supplier to collect events. When a consumer invokes a similar operation on the event

channel, the channel forwards the events to the consumer that initiated the transfer.

Mixing Push and Pull Models

Because suppliers and consumers are completely decoupled by the event channel, push

and pull models can be mixed in a single system. For example, suppliers can connect to

an event channel using the push model, while consumers connect using the pull model, as

shown in Figure 5.

Figure 5: Push Suppliers and Pull Consumers Communicating through an Event Channel

Event Propagation

In this case, both suppliers and consumers participate in initiating event transfer. A

supplier invokes an operation on an object in the event channel to transfer an event to the

channel. A consumer then invokes another operation on an event channel object to

transfer the event data from the channel.

In the case where push consumers and pull suppliers are mixed, the event channel

actively propagates events by invoking IDL operations in objects in both suppliers and

consumers. The pull supplier would wait for the channel to invoke an event transfer

before sending events. Similarly, the push consumer would wait for the event channel to

invoke event transfer before receiving events.

Event Filtering

Filter objects screen events as they pass through the channel, and process those that meet

the filter constraints. The notification service defines two types of filters:

• Forwarding filters are set in a channel by clients that wish to restrict event delivery to

those events that meet certain constraints. These filters implement interface

CosNotifyFilter::Filter (a construction sketch follows this list).

• Mapping filters are set by consumers to adjust the priority or lifetime settings of those

messages that meet filter constraints. These filters implement interface

CosNotifyFilter::MappingFilter.
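
The construction sketch promised above, assuming the standard CosNotifyFilter interfaces and a FilterFactory obtained from the channel (for example, via its default_filter_factory attribute); the event-type and property names are illustrative:

import org.omg.CosNotification.EventType;
import org.omg.CosNotifyFilter.ConstraintExp;
import org.omg.CosNotifyFilter.Filter;
import org.omg.CosNotifyFilter.FilterFactory;

public class FilterSketch {
    static Filter makeFilter(FilterFactory factory) throws Exception {
        Filter filter = factory.create_filter("EXTENDED_TCL");
        // Match structured events of domain "Telecom", type "Alarm"
        // whose Priority field exceeds 3 (names are illustrative).
        EventType[] types = { new EventType("Telecom", "Alarm") };
        ConstraintExp[] constraints = {
            new ConstraintExp(types, "$Priority > 3")
        };
        filter.add_constraints(constraints);
        return filter;  // attach with add_filter() on an admin or proxy object
    }
}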

CORBA Object Transaction Service

The Object Transaction Service (OTS) is a CORBA service that enables the use of

distributed, two-phase commit transactions in CORBA applications. OTS consists of:

(1) several IDL interfaces (most of which are defined in a module called CosTransactions),

(2) some additional library code that is linked into client and server applications, and

(3) a transaction manager. At first sight, the OTS specification can appear to be overly

complex. There are two reasons for this.

The first reason for the apparent complexity of OTS is because the OTS specification

defines not just the API that is used by “normal” developers; it also defines the lower-

level “plumbing” API that is used by vendors to implement OTS. The reason why the

OTS specification defines the plumbing API is that doing so ensures interoperability

between different implementations of OTS. This means that an OTS client built with one

CORBA product can take part in distributed transactions with OTS servers that are

implemented with different CORBA products.

The second reason for the apparent complexity of OTS is because its API is flexible

enough to allow transactional applications to be written in several different ways. Most

developers use a simple API that allows them to focus on application-level logic, while

leaving OTS to automatically perform several house-keeping tasks. However, OTS does

allow developers to take a more hands-on approach and manually handle the required

house-keeping tasks. The fuller set of “hands-on” APIs makes it possible for developers

to integrate OTS with non-XA-compliant databases or to implement bridges from OTS to

a non-CORBA distributed transactional system.

interface TransactionFactory {
    Control create(in unsigned long time_out);
    ...
};

interface Control {
    Terminator get_terminator();
    Coordinator get_coordinator();
};

interface Terminator {
    void commit(...);
    void rollback();
};

interface Coordinator {
    RecoveryCoordinator register_resource(in Resource r);
    ...
};

interface RecoveryCoordinator {
    Status replay_completion(in Resource r);
};

interface Resource {
    Vote prepare();
    void rollback();
    void commit();
    ...
};

local interface Current : CORBA::Current {
    void begin();
    void commit();
    void rollback();
    void set_timeout(in unsigned long seconds);
    unsigned long get_timeout();
    Control get_control();
    Control suspend();
    void resume(in Control which);
};

Figure 21.3: A subset of the OTS APIs

The Raw API of OTS

The Resource interface is a CORBA “wrapper” around a resource (database). The

operations defined on this interface are similar to the C-based API of the XA standard.

Implementations of OTS provide an implementation of the Resource interface that trivially

delegates to the underlying XA C-based API. This means that a server developer gets

trivial integration between OTS and an XA-compliant database. If server developers are

using a database that is not XA-compliant then they will have to implement the Resource

interface for that database.

An implementation of OTS provides a transaction manager (TM). The specification does

not state if this should be packaged as, say, a server process or as a library that can be

linked into another application. However, it is common for the TM to be a stand-alone

server process. Regardless of how it is packaged, the TM contains pre-written

implementations of several interfaces: TransactionFactory, Control, Terminator, Coordinator and

RecoveryCoordinator.

The CORBA specification does not state how an OTS client connects to the transaction

factory in the TM, so the mechanism varies from one CORBA product to another.

However, the connection is likely to be made by calling resolve_initial_references(). The OTS

client calls TransactionFactory::create() to begin a transaction. This operation returns a

reference to a Control object.

The client must somehow communicate the Control object reference when invoking an

operation on an OTS-aware object. This could be achieved by explicitly passing the

Control reference as a parameter to the operation. However, it is more commonly

achieved by embedding the Control reference (along with other information) in a service

context that is transmitted with the request. The OTS specification defines a service

context structure for this purpose.

When the client wants to terminate a transaction, it calls Control::get_terminator() to

obtain a reference to the Terminator object and then calls commit() or rollback() on this.

If an OTS server accesses an XA-compliant database then the server invokes an OTS

operation that puts a Resource wrapper around the database. If the server uses a non-XA-

compliant database then the server developer must implement the Resource interface so

that its database can take part in two-phase commit transactions.

In the original OTS specification, an object indicated that it could take part in OTS

transactions by implementing an IDL interface that inherited from

CosTransactions::TransactionalObject. The _is_a() operation (which is provided by the

base Object type) was used by a client application to determine whether or not an object

reference was for a transactionally aware object. However, the OMG decided that this

approach was undesirable. In particular, it can result in a dramatic increase in the number

of IDL interface definitions. Eventually, the OMG decided that it would be better if

whether or not an object was transactionally aware could be expressed as a quality of

service. In modern versions of the OTS, this goal is achieved by defining a new POA

Policy type that, if used, indicates that objects in that POA are transactionally aware. An

IOR interceptor detects the presence of this POA policy and embeds an OTS

TaggedComponent into IORs that originate from that POA. A client application can

check for the presence of this TaggedComponent to determine if an object is

transactionally aware.

When an operation in an OTS-enabled server receives a Control object, it can call

get_coordinator() to gain access to the transaction’s Coordinator object. The Coordinator

interface is a “wrapper” around the coordination logic that implements the two-phase

commit protocol. Its purpose is to interact with the Resource objects in OTS servers. The

server calls register_resource() on the Coordinator to register its resource (this

registration occurs only once per transaction). This informs the TM that the server’s

Resource is taking part in the transaction and so should be included in the two-phase

commit protocol when the transaction commits. This operation returns a reference to a

RecoveryCoordinator object for the transaction. The server stores this object reference in

a persistent storage area so that if the server crashes during the two-phase commit

protocol and is restarted then the server can contact the RecoveryCoordinator to

determine if the transaction should commit or roll-back.

During the two-phase commit, the TM invokes the prepare() operation on all Resource

objects that have taken part in the transaction. The return value of this operation is a Vote

that determines if the transaction will be committed or rolled back.

How OTS Builds on Top of Other Parts of CORBA

This section briefly discusses a simple subset of the API provided by OTS. This simple

subset is used by most OTS developers. The focus of this discussion is not to act as a

tutorial for developers, but rather to show how other aspects of CORBA (such as current

objects, portable interceptors and service contexts) are used as building blocks for more

powerful capabilities, such as OTS.

OTS defines a Current object. This object is accessed by calling

resolve_initial_references("TransactionCurrent"). The OTS Current object lets threads in

both client and server applications know with which transaction they are currently

associated.

An OTS client uses the begin(), commit() and rollback() operations on the Current object

to control the lifetime of a transaction. Internally, the Current object delegates to the

corresponding operations defined on the interfaces in the transaction manager. When a

client invokes an operation on an object, a portable request interceptor provided by OTS

embeds transactional context information obtained from the Current object in a service

context that is then transmitted with the request to the target object. A corresponding

portable request interceptor in the server extracts this transactional context information

from the service context and initializes the server’s Current object before dispatching to

the target operation. This means that the body of the operation executes within the

context of a transaction. Because of this, the operation does not need to begin-and-

commit or resume-and-suspend a transaction. Instead, these details are taken care of by

the portable interceptor and so the body of the operation can focus on using, say,

embedded SQL or JDBC to query/update the database.
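
A minimal sketch of this implicit model, assuming an OTS-enabled ORB; updateAccounts() is a hypothetical stand-in for real transactional work:

import org.omg.CORBA.ORB;
import org.omg.CosTransactions.Current;
import org.omg.CosTransactions.CurrentHelper;

public class OtsSketch {
    static void transfer(ORB orb) throws Exception {
        Current current = CurrentHelper.narrow(
                orb.resolve_initial_references("TransactionCurrent"));
        current.begin();            // associate a new transaction with this thread
        try {
            updateAccounts();       // invocations now carry the transaction context
            current.commit(false);  // false: do not report heuristic outcomes
        } catch (Exception e) {
            current.rollback();
            throw e;
        }
    }

    static void updateAccounts() { /* hypothetical transactional work */ }
}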

The mechanism discussed above provides a simple API for developers and it is powerful

enough for the majority of applications. However, developers can, if they so choose,

avoid using the Current object and its associated portable interceptor, and instead

manually execute their own OTS-infrastructure code. Although this is more complex, it

provides a way for developers to integrate a non-XA-compliant database with OTS.

CORBA Security Service

The assets of an enterprise need to be protected against perceived threats. The amount of

protection the enterprise is prepared to pay for depends on the value of the assets, and the

threats that need to be countered. The security policy needed to protect against these

threats may also depend on the environment and how vulnerable the assets are in this

environment. This CORBA Security Service provides a security architecture that can

support a variety of security policies to meet different needs.

The security functionality defined by this specification comprises:

• Identification and authentication of principals (human users and objects that need

to operate under their own rights) to verify they are who they claim to be.

• Authorization and infrastructure based access control - deciding whether a

principal can access an object domain, individual object, or operation on an

object, normally using the identity and/or privilege attributes of the principal

(such as role, groups, security clearance).

• Security auditing to make users accountable for their security related actions. It is

normally the human user who should be accountable. Auditing mechanisms

should be able to identify the user correctly, even after a chain of calls through

many objects.

• Security of communication between objects, which is often over insecure lower

layer communications. This requires trust to be established between the client and

target, which may require authentication of clients to targets and authentication of

targets to clients. It also requires integrity protection and (optionally)

confidentiality protection of messages in transit between objects.

• Non-repudiation provides irrefutable evidence of actions such as proof of origin

of data to the recipient, or proof of receipt of data to the sender to protect against

subsequent attempts to falsely deny the receiving or sending of the data.

The CORBA security model is security technology neutral. For example, interfaces

specified for security of client-target object invocations hide the security mechanisms

used from both the application objects and ORB (except for some security administrative

functions). It is possible to implement CORBA security on a wide variety of existing

systems, reusing the security mechanisms and protocols native to those systems.

The CORBA Security Service can control access to an application object without it being

aware of security, so it can be ported to environments that enforce different security

policies and use different security mechanisms. However, if an object requires

application level security, the security attributes must be delegated and made available to

the application for access control. This specification defines the core security facilities

and interfaces required to ensure a reasonable level of security of a CORBA-compliant

system as a whole.

CORBA Licensing Service

First, the CORBA Licensing Service is used to achieve transparency. The end user should not

be aware of the existence of the licensing service. Whenever a user invokes an

application, the licensing service locates a valid license. If such a license is available and

the end user has permission to use it, the service returns the appropriate access rights

allowing the application to proceed without bothering the end-user. If a valid license is

not available, various policies can be implemented, depending on the agreement

negotiated between the software supplier and the software customer: put the request in a

waiting queue, wait a random amount of time and retry, ask the end user for a course of

action, give access anyway, etc.

Second, the service must permit evolvability in the face of ever-changing licensing policies. The service should be able to adapt to unanticipated licensing practices with little or no change to the core design.

A licensing service must allow client applications to run according to a license. It defines

two interfaces: LicenseServiceManager and ProducerSpecificLicenseService.

These standardized interfaces provide the entry points to the service.

Figure 1: Interfaces Interaction

Figure 1 shows the interactions among the system components. The ProducerClient

component is some piece of software whose use we want to control. In this paper, we

refer also to it as the client application. The LicenceManager component is a black box

embodying specific licensing policies, and is not part of the licensing service specification.

The client application connects to the LicenseServiceManager and requests an instance

of the ProducerSpecificLicenseService (steps 1 and 2), which it gets through steps 3


and 4. The exact means for getting that instance are left to the service designers. After

steps 5 and 6, the client application becomes effectively controlled. Before granting

authorization, ProducerSpecificLicenseService communicates with LicenceManager

which checks the available licenses and applicable rules and policies.

CORBA Life Cycle Service

The Life Cycle Service defines services and conventions for creating, deleting, copying and

moving objects. Because CORBA-based environments support distributed objects, the

Life Cycle Service defines conventions that allow clients to perform life cycle operations

on objects in different locations.

A client is any piece of code that initiates a life cycle operation for some object. A client

has a simple view of the life cycle operations. The client’s model of creation is defined in

terms of factory objects. A factory is an object that creates another object. Factories are

not special objects. As with any object, factories have well-defined IDL interfaces and

implementations in some programming language.

As the figure above shows, to create an object “over there,” a client must possess an object reference

to a factory over there. The client simply issues a request on the factory. There is no

standard interface for a factory. Factories provide the client with specialized operations to

create and initialize new instances in a natural way for the implementation. The following

illustrates a factory for a document.

interface DocFactory {
    Document create();
    Document create_with_title(in string title);
    Document create_for(in natural_language nl);
};
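
A client might use such a factory roughly as follows (a hedged sketch: DocFactoryHelper is the helper class an IDL compiler would generate, and factoryRef is assumed to have been located already, for example through the Naming Service):

// factoryRef obtained beforehand, e.g. from the Naming Service
DocFactory factory = DocFactoryHelper.narrow(factoryRef);
Document doc = factory.create_with_title("Quarterly Report");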

Factories are object implementation dependent. A different implementation of the

document could define a different factory interface. A generic factory is a creation

service. It provides a generic operation for creation.

Instead of invoking an object specific operation on a factory with statically defined

parameters, the client invokes a standard operation whose parameters can include

information about resource filters, state initialization, policy preferences, etc. To create

an object, a client must possess an object reference for a factory, which may be either a

generic factory or an object-specific factory, and issue an appropriate

request on the factory. As a result, a new object is created and typically an object

reference is returned. There is nothing special about this interaction. A factory assembles

the resources necessary for the existence of an object it creates. Therefore, the factory

represents a scope of resource allocation, which is the set of resources available to the

factory. A factory may support an interface that enables its clients to constrain the scope.

Clients find factory objects in the same fashion they find any object. Two common

scenarios for clients to find factories are:

A client that wishes to delete an object issues a remove request on an object supporting the LifeCycleObject interface. The object receiving the request is called the target. To delete an object, a client must possess an object reference supporting the LifeCycleObject interface and issue a remove request on the object.

A client that wishes to move or copy an object issues a move or copy request on an

object supporting the LifeCycleObject interface. The object receiving the request is

called the target. The move and copy operations expect an object reference supporting the

FactoryFinder interface. The implementations of move and copy can use the factory

finder to find appropriate factories “over there.” This is invisible to the client. Client code

would simply issue a copy request on the document and pass it an object supporting the

FactoryFinder interface as an argument. When a client issues a copy request on a

target, it is assumed that the target, the factory finder, and the newly created object can all

communicate via the ORB. With externalization/internalization there is no such

assumption. In the presence of a future externalization service, the externalized form of

the object can exist outside of the ORB for arbitrary amounts of time, be transported by

means outside of the ORB and can be internalized in a different, disconnected ORB.

Factory Finders

Factory finders support an operation that returns one or more factories. Clients pass

factory finders to the move and copy operations, which typically invoke this operation to

find a factory to interact with. The new copy or the migrated object will then be within

the scope of the factory finder. Some examples of locations that a factory finder might

represent are:

• somewhere on a work group’s local area network

• storage device A on machine X

• Susan’s notebook computer

Design Principles

Several principles have driven the design of the Life Cycle Service:

1. A factory object registered at a factory finder represents an implementation at that

location. Thus, a factory finder allows clients to query a location for an implementation.

2. Object implementations can embody knowledge of finding a factory, relative to a

location. Object implementations usually do not embody knowledge of location.

3. The desired result for life cycle operations such as copy and move depends on

relationships between the target object and other objects.

4. The Life Cycle Service is not dependent on any particular model of persistence and is

suitable for distributed, heterogeneous environments.

5. The design does not include an object equivalence service nor rely on global object

identifiers.

CORBA Concurrency Control Service

The Concurrency Control Service enables multiple clients to coordinate their access to

shared resources. Coordinating access to a resource means that when multiple,

concurrent clients access a single resource, any conflicting actions by the clients are

reconciled so that the resource remains in a consistent state.

The Concurrency Control Service does not define what a resource is. It is up to the clients

of the Concurrency Control Service to define resources and to properly identify

potentially conflicting uses of those resources. In a typical use, an object would be a

resource, and the object implementation would use the concurrency control service to

coordinate concurrent access to the object by multiple clients.

The Concurrency Control Service differentiates between two types of client: a

transactional client and a non-transactional client. Conflicting access by clients of

different types is managed by the Concurrency Control Service, thereby ensuring that

clients always see the resource in a consistent state. The Concurrency Control Service

does not define what a transaction is. Transactions are defined by the Transaction

Service. The Concurrency Control Service is designed to be used with the Transaction

Service to coordinate the activities of concurrent transactions.

The Transaction Service supports two modes of operation: implicit and explicit. When

operating in the implicit mode, a transaction is implicitly associated with the current

thread of control. When executing in the explicit mode, a transaction is specified

explicitly by the reference to the coordinator that manages the current transaction. To

simplify the model of locking supported by the Concurrency Control Service when a

transactional client is operating in the implicit transaction mode, transactional clients are

limited to a single thread per transaction (nested transactions can be used when

parallelism is necessary) and that thread can be executing on behalf of at most one

transaction at a time.

Locks

The Concurrency Control Service coordinates concurrent use of a resource using locks. A

lock represents the ability of a specific client to access a specific resource in a particular

way. Each lock is associated with a single resource and a single client. Coordination is

achieved by preventing multiple clients from simultaneously possessing locks for the

same resource if the activities of those clients might conflict. To achieve coordination, a

client must obtain an appropriate lock before accessing a shared resource.

Lock Modes

The Concurrency Control Service defines several lock modes, which correspond to

different categories of access. Having a variety of lock modes allows more flexible

conflict resolution.

Lock Granularity

The Concurrency Control Service does not define the granularity of the resources that are

locked. It defines a lock set, which is a collection of locks associated with a single

resource. It is up to clients of the Concurrency Control Service to associate a lock set

with each resource. Typically, if an object is a resource, the object would internally create

and retain a lock set. However, the mapping between objects and resources (and lock

sets) is up to the object implementation; the mapping could be one to one, but it could

also be one to many, many to many, or many to one.

Read, Write, and Upgrade Locks

The Concurrency Control service defines read (R) and write (W) lock modes that support

the conventional multiple readers, one writer policy. Read locks conflict with write locks,

and write locks conflict with other write locks. In addition, the Concurrency Control

service defines an upgrade (U) mode. An upgrade mode lock is a read lock that conflicts

with itself. It is useful for avoiding a common form of deadlock that occurs when two or

more clients attempt to read and then update the same resource. If more than one client

holds a read lock on the resource, a deadlock will occur as soon as one of the clients

requests a write lock on the resource. If each client requests a single upgrade lock

followed by a write lock, this deadlock will not occur.
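
A hedged sketch of that read-then-update pattern, assuming the standard CosConcurrencyControl interfaces and a lock set already associated with the resource:

import org.omg.CosConcurrencyControl.LockSet;
import org.omg.CosConcurrencyControl.lock_mode;

public class LockSketch {
    static void readThenUpdate(LockSet locks) throws Exception {
        locks.lock(lock_mode.upgrade);  // like read, but conflicts with itself
        // ... read the resource state ...
        locks.change_mode(lock_mode.upgrade, lock_mode.write);
        // ... update the resource ...
        locks.unlock(lock_mode.write);
    }
}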

CORBA Collection Service

Collections support the grouping of objects and support operations for the manipulation

of the objects as a group. Common collection types are queues, sets, bags, maps, etc.

Collection types differ in the “nature of grouping” exposed to the user. “Nature of

grouping” is reflected in the operations supported for the manipulation of objects as

members of a group. Collections, for example, can be ordered and thus support access to

an element at position “i” while other collections may support associative access to

elements via a key. Collections may guarantee the uniqueness of elements while others

allow multiple occurrences of elements. A user chooses a collection type that matches the

application requirements based on manipulation capabilities. Collections are foundation

classes used in a broad range of applications; therefore, they have to meet the general

requirement to be able to collect elements of arbitrary type. On the other hand, a

collection instance usually is a homogeneous collection in the sense that all elements

collected are of the same type, or support the same single interface.

Bag, SortedBag

A Bag is an unordered collection of zero or more elements with no key. Multiple

elements are supported. As element equality is supported, operations which require the

capability “test of element equality” (e.g., test on containment) can be offered. Example:

The implementation of a text file compression algorithm. The algorithm finds the most

frequently occurring words in sample files. During compression, the words with a high

frequency are replaced by a code (for example, an escape character followed by a one

character code). During re-installation of files, codes are replaced by the respective

words. Several types of collections may be used in this context. A Bag can be used during

the analysis of the sample text files to collect isolated words. After the analysis phase you

may ask for the number of occurrences for each word to construct a structure with the

255 words with the highest word counts. A Bag offers an operation for this, so you do not have to “count by hand,” which would be less efficient. To find the 255 words with the highest

word count, a SortedRelation is the appropriate structure.
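
Because Java mappings of the Collection Service vary by vendor, the counting idea can be sketched with a standard Java map standing in for the Bag:

import java.util.HashMap;
import java.util.Map;

public class WordCountSketch {
    static Map<String, Integer> countWords(String[] words) {
        Map<String, Integer> bag = new HashMap<>();
        for (String w : words) {
            bag.merge(w, 1, Integer::sum);  // occurrence count grows by one
        }
        return bag;  // sort entries by count to pick the 255 most frequent words
    }
}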

A SortedBag (as compared to a Bag) exposes and maintains a sorted order of the elements based on a user-defined element comparison. Maintaining the elements in sorted order makes sense when printing or displaying the collection content in sorted order.

EqualitySequence

An EqualitySequence is an ordered collection of elements with no key. There is a first

and a last element. Each element, except the last one, has a next element and each

element, except the first one, has a previous element. As element equality is supported,

all operations that rely on the capability “test on element equality” can be offered, for

example, locating an element or test for containment. Example: An application that

arranges wagons to a train. The order of the wagons is important. The trailcar has to be

the first wagon, the first class wagons are arranged right behind the trailcar, the restaurant

has to be arranged right after the first class and before the second class wagons, and so

on.

Heap

A Heap is an unordered collection of zero or more elements without a key. Multiple

elements are supported. No element equality is supported. Example: A “trash can” on a

desktop which memorizes all objects moved to the trashcan as long as it is not emptied.

Whenever you move an object to the trashcan it is added to the heap. Sometimes you

move an object accidentally to the trashcan. In that case, you iterate in some order

through the trashcan to find the object - not using a test on element equality. When you

find it, you remove it from the trashcan. Sometimes you empty the trashcan and remove

all objects from the trashcan.

KeyBag, KeySortedBag

A KeyBag is an unordered collection of zero or more elements that have a key. Multiple

keys are supported. As no element equality is assumed, operations such as “test on

collection equality” or “set theoretical operation” are not offered. A KeySortedBag is

sorted by key. In addition to the operations supported for a KeyBag, all operations related

to ordering are offered.

KeySet, KeySortedSet

A KeySet is an unordered collection of zero or more elements that have a key. Keys must

be unique. Defined element equality is not assumed; therefore, operations and semantics

which require the capability “element equality test" are not offered. A KeySortedSet is

sorted by key. In addition to the operations supported for a KeySet, all operations related

to ordering are offered. For example, operations exploiting the ordering, such as

“set_to_previous / set_to_next” and “access via position” are supported.

Map, SortedMap

A Map is an unordered collection of zero or more elements that have a key. Keys must be unique. As defined element equality is assumed, access via the element value and all operations which need to test on element equality, such as a test on containment for an element, a test for equality, and the set theoretical operations, can be offered for maps. A

SortedMap is sorted by key. In addition to the operations supported for a Map, all

operations related to ordering are offered. For example, operations exploiting the

ordering like “set_to_previous / set_to_next” and “access via position” are supported.

Relation, SortedRelation

A Relation is an unordered collection of zero or more elements with a key. Multiple keys

are supported. As defined element equality is assumed, test for equality of two collections

is offered as well as the set theoretical operations. A SortedRelation is sorted by key. In

addition to the operations supported for a Relation, all operations related to ordering are

offered. For example, operations that exploit ordering such as “set_to_previous /

set_to_next” and “access via position” are supported.

Set, SortedSet

A Set is an unordered collection of zero or more elements without a key. Element equality

is supported; therefore, operations that require the capability “test on element equality”

such as intersection or union can be offered. A SortedSet is sorted with respect to a user-

defined element comparison. In addition to the operations supported for a Set, all

operations related to ordering are offered.

Sequence

A Sequence is an ordered collection of elements without a key. There is a first and a last

element. Each element (except the last one) has a next element and each element (except

the first one) has a previous element. No element equality is supported; therefore,

multiples may occur and access to elements via the element value is not possible. Access

to elements is possible via position/index.

Deque

A double-ended queue may be considered as a sequence with restricted access. It is an

ordered collection of elements without a key and no element equality. As there is no

element equality, an element value may occur multiple times. There is a first and a last

element. You can only add an element as first or last element and only remove the first or

the last element from the Deque.

PriorityQueue

A PriorityQueue may be considered as a KeySortedBag with restricted access. It is an

ordered collection with zero or more elements. Multiple key values are supported. As no

element equality is defined, multiple element values may occur. Access to elements is via

key only and sorting is maintained by key. Accessing a PriorityQueue is restricted. You

can add an element relative to the ordering relation defined for keys and remove only the

first element (e.g., the one with highest priority).

Queue

A queue may be considered as a sequence with restricted access. It is an ordered

collection of elements with no key and no element equality. There is a first and a last

element. You can only add (enqueue) an element as the last element and only remove (dequeue) the first element from the Queue. That is, a Queue exposes FIFO behavior.

Stack

A Stack may be considered as a sequence with restricted access. It is an ordered

collection of elements with no key and no element equality. There is a first and a last

element. You can only add (push) an element as last element (at the top) and only remove

(pop) the last element from the Stack (from the top). That is, a Stack exposes LIFO

behavior.

CORBA Externalization Service

The Externalization Service specification defines protocols and conventions for

externalizing and internalizing objects. To externalize an object is to record the object’s

state in a stream of data. Objects which support the appropriate interfaces and whose

implementations adhere to the proper conventions can be externalized to a stream (in

memory, on a disk file, across the network, etc.) and subsequently be internalized into a

new object in the same or a different process. The externalized form of the object can

exist for arbitrary amounts of time, be transported by means outside of the ORB, and can

be internalized in a different, disconnected ORB. Many different externalized data

formats and storage mediums can be supported by service implementations. But, for

portability, clients can request that externalized data be stored in a file using a

standardized format that is defined as part of this Externalization Service specification.

Externalizing and internalizing an object is similar to copying the object. The copy

operation creates a new object that is initialized from an existing object. The new object

is then available to provide service. Furthermore, with the copy operation, there is an

assumption that it is possible to communicate via the ORB between the “here” and

“there”. Externalization, on the other hand, does not create an object that is initialized

from an existing object. Externalization “stops along the way”.

New objects are not created until the stream is internalized. Furthermore, there is no

assumption that it is possible to communicate via the ORB between “here” and “there.” The

Externalization Service is related to the Relationship Service. It also parallels the Life

Cycle Service in defining externalization protocols for simple objects, for arbitrarily

related objects, and for graphs of related objects that support compound operations. The

Externalization Service defines protocols in these areas:

• Client’s view of externalization, composed of the interfaces used by a client to

externalize and internalize objects. The client’s view of externalization is defined

by the Stream interface.

• Object’s view of externalization, composed of the interfaces used by an

externalizable object to record and retrieve their object state to and from the

stream’s external form. The object’s view is defined by the StreamIO interface.

• Stream’s view of externalization, composed of the interfaces used by the stream

to direct an externalizable object or graph of objects to record or retrieve their

state from the stream’s external form. The stream’s view of externalization is

given by the Streamable, Node, Role and Relationship interfaces.

CORBA Time Service

Time Service Requirements

The requirements explicitly stated in the RFP ask for a service that enables the user to obtain the current time

together with an error estimate associated with it. Additionally, the RFP suggests that the service also

provide the following facilities:

• Ascertain the order in which “events” occurred.

• Generate time-based events based on timers and alarms.

• Compute the interval between two events.

The general architectural pattern used is that a service object manages objects of a specific category as

shown in Figure 1-1.

The service interface provides operations for creating the objects that the service manages

and, if appropriate, also provides operations for getting rid of them. The Time Service

object consists of two services, and hence defines two service interfaces:

• Time Service manages Universal Time Objects (UTOs) and Time Interval Objects

(TIOs), and is represented by the TimeService interface.

• Timer Event Service manages Timer Event Handler objects, and is represented by the

TimerEventService interface.
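
A hedged sketch of obtaining the current time with its error estimate, assuming an ORB that provides the org.omg.CosTime mapping and exposes the service under the initial reference name "TimeService" (both assumptions vary by product):

import org.omg.CORBA.ORB;
import org.omg.CosTime.TimeService;
import org.omg.CosTime.TimeServiceHelper;
import org.omg.CosTime.UTO;

public class TimeSketch {
    static void printTime(ORB orb) throws Exception {
        TimeService ts = TimeServiceHelper.narrow(
                orb.resolve_initial_references("TimeService"));
        UTO now = ts.universal_time();  // current time as a Universal Time Object
        System.out.println("time=" + now.time()
                + " inaccuracy=" + now.inaccuracy());
    }
}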

CORBA Persistent State Service

The Persistent State Service presents persistent information as storage objects stored in

storage homes. Storage homes are themselves stored in datastores. A datastore is an

entity that manages data, for example a database, a set of files, a schema in a relational

database.

In order to manipulate a storage object, you need a programming-language object that

represents it in your program. In Java and C++, this programming language object is an

instance of a class: therefore we call it a storage object instance.

A storage object instance may be bound to a storage object in the datastore, and provides

direct access to the state of this storage object: updating the instance updates the storage

object in the datastore. Such a connected instance is called a storage object incarnation.

Likewise, to use a storage home, you need a programming language object called a

storage home instance. Storage home instances themselves are provided by catalogs.

To access a storage object, you need a logical connection between your process and the

datastore that contains the storage home of this storage object. This logical connection,

called session, can give access to more than one datastore.

The management of sessions is either explicit (you create and manage sessions yourself)

or implicit (you create one or more session pools that manage sessions for you). Sessions

and session pools are the two kinds of catalogs defined by this specification.

Conceptually, a datastore is a set of storage homes. Each storage home has a type. Within

a datastore, a storage home is a singleton: there is at most one storage home of a given

type in this datastore. A storage home contains storage objects. Each storage object has

an ID unique within its storage home (its short-pid) and a global ID (its pid). The scope

of the pid is all storage objects that can be accessed through the same catalog. Each

storage object has a type, which defines the state members and operations (also known as

stored methods) of instances of this type. A storage object type can derive from another

storage object type.

A storage home can only contain storage objects of a given type. The type of a storage

home defines this storage object type, plus operations and keys (defined below). A

storage home type can derive from another storage home type: the storage object type of

the base storage home type must be a base of the storage object type of the derived

storage home type.

Within a datastore, a storage home manages its own storage objects and the storage

objects of all derived storage homes. A storage home and all its derived storage homes is

called a storage home family.

A storage home can ensure that a list of state members of its storage object type forms a

unique identifier for the storage objects it manages. Such a list of state members is called

a key. A storage home can have any number of keys.

CORBA Component Model (CCM)

The CCM, part of the CORBA 3.0 specification, is a server-side component model for

building and deploying CORBA applications. It is very similar to Enterprise Java Beans

(EJB) because it also allows system services to be implemented by the container

provider rather than the application developer. The CCM extends the CORBA object

model by defining features and services in a standard environment that enable application

developers to implement, manage, configure and deploy components that integrate with

commonly used CORBA Services. These server-side services include transactions,

security, persistence, and events. The CCM is language-independent. This means that the

OMG must package the technology more tightly to provide a complete vocabulary for

describing binary executables.

CCM Terminology:

Facets

The component facets are the interfaces that the component exposes.

Receptacles

These allow components to "hook" themselves together. Component systems contain

many components that work together to provide the client functionality. The receptacle

allows a component to declare its dependency on an object reference that it must use.

Receptacles provide the mechanics to specify interfaces required for a component to

function correctly.

Event sources/Sinks

Allow components to work with each other without being tightly linked. This is loose

coupling as provided by the Observer design pattern. When a component declares its

interest to publish or emit an event, it is an event source. A publisher is an exclusive

provider of an event while an emitter shares an event channel with other event sources.

Other components become subscribers or consumers of those events by declaring an

event sink.

Attributes

An extension of the traditional notion of CORBA interface attributes that allow

component values to be configured, the CCM version of attribute allows operations that

access and modify values to raise exceptions. This is a useful feature for raising a

configuration exception after the configuration has completed and an attribute has been

accessed or changed. These new terms are illustrated in Figure 1.

Figure 1. Component model

Component Implementation Definition Language (CIDL)

To support all the above features, the CCM extends the CORBA Interface Definition Language (IDL) to the point that it "introduces a new declaration language," the Component Implementation Definition Language (CIDL). CIDL, like

IDL, relieves developers from having to provide much of the "plumbing" needed to

include components within a container. To be exact, CIDL is a superset of the Persistent

State Definition Language (PSDL), which itself is a superset of IDL. You should be

aware that OMG IDL is morphing. Since CIDL is a language to emphasize the

description of components, it has many new keywords. All of these keywords have been

added to support the features introduced above. Listing 1 provides an example:

Listing 1. An example

interface VideoPlayer {
    VideoStream play();
    int fastForward(in Speed spd);
    int rewind(in Speed spd);
};

component VCR supports VideoPlayer {
    provides AudioStream audioPort;
    provides VideoStream videoPort;
};

component DVD supports VideoPlayer {
    provides AudioStream audioPort;
    provides VideoStream videoPort;
};

A component will have a set of interfaces that will let it "behave nicely" in its designated

container, and it will have to have one or more interfaces that clients will call to perform

business logic. You could say the former would be a management interface and the latter

a service interface. OMG IDL allowed you to express a set of interfaces as a new

interface that inherited all the interfaces. This combined interface was usually long and

complicated. An alternative is to define an entry point interface that is used to navigate to

the other interfaces of the object.

The OMG has long known about the Multiple Interface gap. In fact it issued a Request

for Proposal (RFP) regarding this topic back in 1996. That RFP never resulted in a concrete specification, but the CCM can be looked at as fulfilling the requirements of

that RFP. Component types specified by CIDL are allowed a combination of different

interfaces that are not related by inheritance.

A component type name is defined with the component keyword. The component has its own interface, termed its "equivalent" interface, which is implicitly defined in the

component definition. The equivalent interface for the components VCR and DVD is

VideoPlayer; the relationship is declared through the supports keyword.

The component facets provide the business logic to expose to clients. These "facets" of

our component come to life through the keyword provides. Provided interfaces are the

"ports" that clients or other components can connect to. This is important because another

keyword introduced in CIDL is the uses keyword. Uses declarations define a dependency

of a component on an interface provided by another component or CORBA object. Clients

can then navigate to a specified interface at runtime using Navigation operations

generated by the provides declarations.
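To make this navigation concrete, here is a minimal client-side sketch under the standard CORBA C++ mapping; the stub header name and the way the VCR reference is obtained are assumptions, not something defined by Listing 1 itself:

// Hypothetical client navigation for the VCR component of Listing 1.
// "VCRC.h" stands in for whatever stub header the IDL/CIDL compiler
// generates for the VCR component.
#include "VCRC.h"

void PlayMovie(VCR_ptr vcr)
{
    // Each provides declaration generates a provide_<port>() operation
    // on the component's equivalent interface.
    AudioStream_var audio = vcr->provide_audioPort();
    VideoStream_var video = vcr->provide_videoPort();

    // The equivalent interface inherits the supported interface
    // (VideoPlayer), so its operations can be called directly.
    VideoStream_var stream = vcr->play();
}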

CCM Containers:

These act as the interface between a CORBA component and the outside world. A CCM

client never accesses a CORBA component directly. Any component access is done

through container-generated methods which in turn invoke the component's methods.

There are two basic types of containers: transient containers, which hold transient, non-persistent components whose states are not saved at all, and persistent containers, which hold persistent components whose states are saved between invocations. Depending upon the types of components that they can execute, CCM containers may be divided into:

• Service containers

• Session containers
• Entity containers
• Other containers

Container model

The container manages component instances depending on the component category. It offers all its services through simplified APIs: it becomes the component's window to the outside world. The container offers a series of local interfaces (internal interfaces) for establishing the component's context and allows the component to implement a series of callback methods.

Container Architecture

As shown in the figure above, the container manages a component. It creates and uses a POA with the features requested for a specific component category. A client can use the external interfaces of a component to interact with the component, and the home interfaces to manage the component's life cycle.

External API Types

The external API types of a component are the contract between the component developer and the component client. They consist of two kinds of interfaces: the home interface and the application interfaces. The home interface allows a client to obtain a reference to one of the application interfaces the component implements. From the client's perspective, two design patterns are supported: factory patterns for creating new components and finder patterns for locating existing components. These patterns are distinguished by the presence of a primary key on the home declaration.

Container API Type

The container API defines an API framework, i.e. the contract between a specific component and its container. There are two types of container APIs, session APIs and entity APIs, depending on the component category explained below.

CORBA Usage Model

A CORBA usage model specifies the required interaction pattern between the container,

the POA and the CORBA services. Three usage models are defined, distinguished by reference persistence and servant-to-ObjectId mapping:

• Stateless - which uses transient object references in conjunction with a POA

servant which can support any ObjectId.

• Conversational - which uses transient references with a POA servant that is dedicated to a specific ObjectId.
• Durable - which uses persistent references with a POA servant that is dedicated to a specific ObjectId.

Application Servers
Application servers, which have become very popular in the last few years, provide the

platforms for the execution of transactional, server-side applications in the online world.

They are the modern cousins of traditional transaction processing monitors (TPMs) like

CICS. They play a central role in enabling electronic commerce in the web context. They

are built on the basis of more standardized protocols and APIs. One of the most important

features of application servers is their ability to integrate modern application

environments with legacy data sources like IMS, CICS, VSAM, etc. They provide a

number of connectors for this purpose, typically using asynchronous transactional

messaging technologies like MQSeries and JMS.

Traditional TPM-style requirements for industrial strength features like scalability,

availability, reliability and high performance are equally important for application servers (ASs). Security

and authentication issues are additional important requirements in the web context. ASs

support DBMSs not only as storage engines for user data but also as repositories for

tracking their own state. Several caching technologies have been developed to improve

performance of ASs.

An Application server is a software framework dedicated to the efficient execution of

procedures (programs, routines, scripts) for supporting the construction of applications.

Advantages of application servers

Data and code integrity

By centralizing business logic on an individual server or on a small number of

server machines, updates and upgrades to the application for all users can be

guaranteed. There is no risk of old versions of the application accessing or

manipulating data in an older, incompatible manner.

Centralized configuration

Changes to the application configuration, such as a move of database server, or

system settings, can take place centrally.

Security

A central point through which service-providers can manage access to data and

portions of the application itself counts as a security benefit, devolving

responsibility for authentication away from the potentially insecure client layer

without exposing the database layer.

Performance

By limiting the network traffic to performance-tier traffic, the client–server model improves the performance of large applications in heavy-usage environments.

Total Cost of Ownership (TCO)

In combination, the benefits above may result in cost savings to an organization

developing enterprise applications. In practice, however, the technical challenges

of writing software that conforms to that paradigm, combined with the need for

software distribution to distribute client code, somewhat negate these benefits.

Transaction Support

A transaction represents a unit of activity in which many updates to resources (on

the same or distributed data sources) can be made atomic (as an indivisible unit of

work). End-users can benefit from a system-wide standard behaviour, from

reduced time to develop, and from reduced costs. As the server does a lot of the

tedious code-generation, developers can focus on business logic.

Borland Enterprise Server

Borland Enterprise Server, also known as Borland Application Server, was Borland's

Java EE Application Server. The product was developed in 1999 within the team of

former Visigenic company that was acquired by Borland in 1997. Borland's Java Studio

was supposed to have BES and JBuilder tightly integrated, but in reality this integration

never happened. BES suffered compatibility problems even with Borland's own products

(JDataStore, OptimizeIt). The appearance of free commercial grade (and more mature)

application servers, like JBoss, made BES unattractive and unable to compete with them.

The Orbix E2A Application Server Platform
The Orbix E2A Application Server Platform is a programmer-oriented application

development and deployment platform that spans the entire spectrum of hardware and

operating systems, from Windows 2000 to the mainframe, and provides a complete

application integration solution. It provides a complete middleware suite, including:
• The world's most advanced and widely used CORBA ORB, which implements C++, Java, COBOL, and PL/I language bindings (the latter two on OS/390).

• A J2EE-compliant Application Server whose unique architecture supports full network

distribution, avoiding performance bottlenecks and allowing full use of multiple

machines.

• Messaging middleware, including an implementation of EJB 2.0 Message Driven Beans

(part of the J2EE 1.3 specification), complete with ART-based Java Messaging Service

for J2EE-based applications, and the CORBA Notification Service for CORBA

applications.

• Web services support: Orbix E2A XMLBus is a development and deployment

environment that lets developers expose existing applications as XML-based Web

services. Built on such standards as SOAP, WSDL, and UDDI, it automatically generates

WSDL interfaces and the SOAP client required to invoke the Web service in question. It

lets you define Web services for publication and access over the Web, bridging .Net,

CORBA, and EJB servers. XMLBus provides support for both Java and CORBA/C++

environments. The platform's reliability, availability, and scalability make it suitable for

enterprise-scale, mission-critical systems, as attested to by many of our 4500 customer

deployments.

Because Orbix E2A is neutral with respect to software component models and

programming languages, it allows developers to use their preferred tools to develop

business logic, and to expose business functions in their preferred deployment

environment. Further, Orbix E2A includes XMLBus technology, which not only

simplifies the creation of XML-based Web service applications, but also provides support

for XML communication to new and existing applications. Orbix E2A provides

integration tools for connecting to specific systems and technologies. For example, the

Orbix E2A Application Server Platform provides adapters for MQSeries, CICS, IMS, and

RDBMSs. Further, the Orbix E2A Application Server Platform can easily be integrated

with the Orbix E2A Web Services Integration Platform, to provide adapters for specific

ERP software products, including SAP, Siebel, PeopleSoft, Baan, and so on. Thus, Orbix

E2A supports point-to-point application integration; for example, for a project that needs

to use SAP R/3 as a data source. Orbix E2A makes it easy to use a combination of J2EE,

CORBA, Mainframe integration, and Web services to provide true peer-to-peer service

oriented architectures for enterprise-scale applications and environments. The Orbix E2A

Application Server Platform provides support for enterprise applications by way of three

broad classes of technology: CORBA, J2EE, and the Web services-related technologies

such as XML, SOAP, WSDL, UDDI, and ebXML.

OMG Model Driven Architecture

Model-driven architecture (MDA) is a software design approach for the development

of software systems. It provides a set of guidelines for the structuring of specifications,

which are expressed as models. Model-driven architecture is a kind of domain

engineering, and supports model-driven engineering of software systems. It was launched

by the Object Management Group (OMG) in 2001.

The Model-Driven Architecture approach defines system functionality using a platform-

independent model (PIM) expressed in an appropriate domain-specific language (DSL). Then,

given a platform definition model (PDM) corresponding to CORBA, .NET, the Web, etc.,

the PIM is translated to one or more platform-specific models (PSMs) that computers can

run. This translation requires mappings and transformations, which should themselves be modeled.

The PSM may use different Domain Specific Languages (DSLs), or a General Purpose

Language (GPL) like Java, C#, PHP, Python, etc. Automated tools generally perform this

translation. The OMG organization provides rough specifications rather than

implementations, often as answers to Requests for Proposals (RFPs). Implementations

come from private companies or open source groups.

MDA principles can also apply to other areas such as business process modeling (BPM)

where the PIM is translated to either automated or manual processes.

Figure 3: A simplified example of PIM to PSM transformation

The simple PIM in Figure 3 represents a Customer and Account. At this level of

abstraction, the model describes important characteristics of the domain in terms of

classes and their attributes, but does not describe any platform-specific choices about

which technologies will be used to represent them. Figure 3 illustrates three specific

mappings, or transformations, defined to create the PSMs, together with the standards

used to express these mappings. For example, one approach is to export the PSM

expressed in UML into XMI format, using standard definitions expressed as either XML

Schema Definitions (XSD) or Document Type Definitions (DTD). This can then be used

as input to a code generation tool that produces interface definitions in Java for each of

the classes defined in the UML.

Usually, a set of rules is built into the code generation tool to perform the transformation.

However, the code generation tool often allows those rules to be specifically defined as

templates in a scripting language.
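For instance, a template-driven generator walking the PIM of Figure 3 might emit one plain C++ class per UML class. The sketch below is purely illustrative; the attribute names are assumptions, since the real output depends on the PIM and on the transformation rules:

#include <string>

// Hypothetical C++ output of a PIM-to-PSM code generation template for
// the "Customer" class of Figure 3. A real tool derives the names and
// types from the UML/XMI input rather than hard-coding them.
class Account; // forward declaration for the associated class

class Customer {
public:
    Customer() : account_(0) {}

    // One accessor/mutator pair is generated per UML attribute.
    const std::string& getName() const { return name_; }
    void setName(const std::string& name) { name_ = name; }

    // One pair per UML association end.
    Account* getAccount() const { return account_; }
    void setAccount(Account* account) { account_ = account; }

private:
    std::string name_;
    Account* account_;
};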

COM

Component Object Model (COM) is a binary-interface standard for software

componentry introduced by Microsoft in 1993. It is used to enable interprocess

communication and dynamic object creation in a large range of programming languages.

The term COM is often used in the Microsoft software development industry as an

umbrella term that encompasses the OLE, OLE Automation, ActiveX, COM+ and

DCOM technologies.

The essence of COM is a language-neutral way of implementing objects that can be used

in environments different from the one in which they were created, even across machine

boundaries. For well-authored components, COM allows reuse of objects with no

knowledge of their internal implementation, as it forces component implementers to

provide well-defined interfaces that are separate from the implementation. The different

allocation semantics of languages are accommodated by making objects responsible for

their own creation and destruction through reference-counting. Casting between different

interfaces of an object is achieved through the QueryInterface() function. The

preferred method of inheritance within COM is the creation of sub-objects to which

method calls are delegated.

Although the interface standard has been implemented on several platforms, COM is primarily used with Microsoft Windows. For some applications, COM has been

replaced at least to some extent by the Microsoft .NET framework, and support for Web

Services through the Windows Communication Foundation (WCF). However, COM

objects can be used with all .NET languages through .NET COM Interop.

Networked DCOM uses binary proprietary formats, while WCF encourages the use of

XML-based SOAP messaging. COM is very similar to other component software

interface technologies, such as CORBA and Java Beans, although each has its own

strengths and weaknesses. The characteristics of COM make it most suitable for the

development and deployment of desktop applications, for which it was originally designed.

COM Interfaces

All COM components must (at the very least) implement the standard IUnknown

interface, and thus all COM interfaces are derived from IUnknown. The IUnknown

interface consists of three methods: AddRef() and Release(), which implement

reference counting and control the lifetime of interfaces; and QueryInterface(), which

by specifying an IID allows a caller to retrieve references to the different interfaces the

component implements. The effect of QueryInterface() is similar to dynamic_cast<>

in C++ or casts in Java and C#.
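As a concrete illustration, a client holding only an IUnknown pointer can probe for IStream support as sketched below; this is a minimal example using the standard COM headers, not code from any particular component:

#include <objidl.h>   // IStream, IID_IStream

// pUnk is an IUnknown* obtained elsewhere, e.g. from CoCreateInstance().
void UseAsStream(IUnknown* pUnk)
{
    IStream* pStream = NULL;

    // Ask the object whether it implements IStream. On success the
    // returned reference has already been AddRef()'d on our behalf.
    HRESULT hr = pUnk->QueryInterface(IID_IStream, (void**)&pStream);
    if (SUCCEEDED(hr))
    {
        // ... use pStream->Read() / pStream->Write() here ...
        pStream->Release(); // balance the reference taken for us
    }
    // E_NOINTERFACE simply means the object does not support IStream.
}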

A COM component's interfaces are required to exhibit the reflexive, symmetric, and

transitive properties. The reflexive property refers to the ability for the

QueryInterface() call on a given interface with the interface's ID to return the same

instance of the interface. The symmetric property requires that when interface B is

retrieved from interface A via QueryInterface(), interface A is retrievable from

interface B as well. The transitive property requires that if interface B is obtainable from

interface A and interface C is obtainable from interface B, then interface C should be

retrievable from interface A.

An interface consists of a pointer to a virtual function table that contains a list of pointers

to the functions that implement the functions declared in the interface, in the same order

that they are declared in the interface. This technique of passing structures of function

pointers is very similar to the one used by OLE 1.0 to communicate with its system

libraries.

COM specifies many other standard interfaces used to allow inter-component

communication. For example, one such interface is IStream, which is exposed by

components that have data stream semantics (e.g. a FileStream component used to read

or write files). It has the expected Read and Write methods to perform stream reads and

writes. Another standard interface is IOleObject, which is exposed by components that

expect to be linked or embedded into a container. IOleObject contains methods that

allow callers to determine the size of the component's bounding rectangle, whether the

component supports operations like 'Open', 'Save' and so on.

COM Classes

A class is COM's language-independent way of defining a class in the object-oriented

sense. A class can be a group of similar objects, or simply a representation of a type of object; it should be thought of as a blueprint that describes the object.

A coclass supplies concrete implementation(s) of one or more interfaces. In COM, such

concrete implementations can be written in any programming language that supports

COM component development, e.g. Delphi, C++, Visual Basic, etc.

One of COM's major contributions to the world of Windows development is the

awareness of the concept of separation of interface from implementation. An

extension of this fundamental concept is the notion of one interface, multiple

implementations. This means that at runtime, an application can choose to instantiate an

interface from one of many different concrete implementations.

COM as an object framework

The fundamental principles of COM have their roots in Object-Oriented philosophies. It

is a platform for the realization of Object-Oriented Development and Deployment.

Because COM is a runtime framework, types have to be individually identifiable and

specifiable at runtime. To achieve this, globally unique identifiers (GUIDs) are used.

Each COM type is designated its own GUID for identification at runtime (versus compile

time).

In order for information on COM types to be accessible at both compile time and

runtime, COM uses type libraries. It is through the effective use of type libraries that

COM achieves its capabilities as a dynamic framework for the interaction of objects.

Consider the following example coclass definition in IDL:

coclass MyObject

{

[default] interface IMyObject;

[default, source] dispinterface _IMyObjectEvents;

};

The above code fragment declares a COM class named MyObject which must implement

an interface named IMyObject and which supports (not implements) the event interface

_IMyObjectEvents. Ignoring the event interface bit, this is conceptually equivalent to

defining a C++ class like this:

class CSomeObject : public ISomeInterface

{

...

...

...

};

where ISomeInterface is a C++ pure virtual class.

Referring once again to the MyObject COM class: once a coclass definition for it has

been formalized in an IDL, and a Type Library compiled from it, the onus is on the

individual language compiler to read and appropriately interpret this Type Library and

then produce whatever code (in the specific compiler's language) necessary for a

developer to implement and ultimately produce the binary executable code which can be

deemed by COM to be of coclass MyObject.

Once an implementation of a COM coclass is built and is available in the system, next

comes the question of how to instantiate it. In languages like C++, we can use the

CoCreateInstance() API in which we specify the CLSID (CLSID_MyObject) of the

coclass as well as the interface (specified by the IID IID_IMyObject) from that coclass

that we want to use to interact with that coclass. Calling CoCreateInstance() like this:

CoCreateInstance(CLSID_MyObject,

NULL,

CLSCTX_INPROC_SERVER,

IID_IMyObject,

(void**)&m_pIMyObject);

is conceptually equivalent to the following C++ code:

ISomeInterface* pISomeInterface = new CSomeObject();

In the first case, the COM sub-system is used to obtain a pointer to an object that

implements the IMyObject interface and coclass CLSID_MyObject's particular

implementation of this interface is required. In the second case, an instance of a C++

class CSomeObject that implements the interface ISomeInterface is created. A coclass,

then, is an object-oriented class in the COM world. The main feature of the coclass is that

it is (1) binary in nature and consequently (2) programming language-independent.

COM Registry

In Windows, COM classes, interfaces and type libraries are listed by GUIDs in the

registry, under HKEY_CLASSES_ROOT\CLSID for classes and

HKEY_CLASSES_ROOT\Interface for interfaces. The COM libraries use the registry to

locate either the correct local libraries for each COM object or the network location for a

remote service.

Under the key HKCR\CLSID, each class specifies:

• InprocServer32: indicates that the object is to be loaded into a process, and gives the path to the file implementing the object together with a readable name.

Under HKCR\Interface (examples: ISTREAM, IRPCSTUB, IMESSAGEFILTER), each interface entry connects to a CLSID; the number of methods (NumMethods) and a proxy/stub entry (for remoted objects) can also be specified.

Under HKCR\TypeLib, one or more CLSIDs can be grouped into a type library, which contains the parameters used for linking in COM. The rest of the information in the COM parts of the registry serves to give an application or object a CLSID.
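Clients normally do not read these keys directly; the COM library performs the lookup. For example, CLSIDFromProgID() consults the registry to map a human-readable ProgID onto its CLSID. A minimal sketch follows (the ProgID shown is only an example of a registered component):

#include <objbase.h>

// Resolve a ProgID to the CLSID recorded for it in the registry.
CLSID clsid;
HRESULT hr = CLSIDFromProgID(L"Shell.Application", &clsid);
if (SUCCEEDED(hr))
{
    // clsid now holds the GUID stored under HKCR\CLSID, which
    // CoCreateInstance() can use to locate and load the server.
}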

Reference counting

The most fundamental COM interface of all, IUnknown (from which all COM interfaces

must be derived), supports two main concepts: feature exploration through the

QueryInterface method, and object lifetime management by including AddRef() and

Release(). Reference counts and feature exploration apply to objects (not to each

interface on an object) and thus must have a centralized implementation.

The COM specifications require a technique called reference counting to ensure that

individual objects remain alive as long as there are clients which have acquired access to

one or more of its interfaces and, conversely, that the same object is properly disposed of

when all code that used the object has finished with it and no longer requires it. A COM

object is responsible for freeing its own memory once its reference count drops to zero.

For its implementation, a COM Object usually maintains an integer value that is used for

reference counting. When AddRef() is called via any of the object's interfaces, this integer

value is incremented. When Release() is called, this integer is decremented. AddRef()

and Release() are the only means by which a client of a COM object is able to influence

its lifetime. The internal integer value remains a private member of the COM object and

will never be directly accessible.

The purpose of AddRef() is to indicate to the COM object that an additional reference to

itself has been created, and hence it is necessary to remain alive as long as this reference

is still valid. Conversely, the purpose of Release() is to indicate to the COM object that a

client (or a part of the client's code) has no further need for it and hence if this reference

count has dropped to zero, it may be time to destroy itself.

Certain languages (e.g. Visual Basic) provide automatic reference counting so that COM

object developers need not explicitly maintain any internal reference counter in their

source code. When using COM in C, explicit reference counting is needed. In C++, a coder

may write the reference counting code or use a smart pointer that will manage all the

reference counting.
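When written by hand, the usual C++ pattern looks like the following minimal sketch (not taken from any particular component); the interlocked primitives keep the counter safe under concurrent access:

#include <windows.h>
#include <unknwn.h>

class CMyObject : public IUnknown
{
    LONG m_cRef;

public:
    CMyObject() : m_cRef(1) {} // creation hands out the first reference

    ULONG __stdcall AddRef()
    {
        return InterlockedIncrement(&m_cRef);
    }

    ULONG __stdcall Release()
    {
        ULONG cRef = InterlockedDecrement(&m_cRef);
        if (cRef == 0)
            delete this; // the object frees itself at count zero
        return cRef;
    }

    HRESULT __stdcall QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown)
        {
            *ppv = static_cast<IUnknown*>(this);
            AddRef(); // returned references are pre-counted
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
};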

The following are general guidelines for calling AddRef() and Release() to facilitate proper reference counting in a COM object:

• Functions (whether object methods or global functions) that return interface

references (via return value or via "out" parameter) should increment the

reference count of the underlying object before returning. Hence internally within

the function or method, AddRef() is called on the interface reference (to be

returned). An example of this is the QueryInterface() method of the IUnknown

interface. Hence it is imperative that developers be aware that the returned

interface reference has already been reference count incremented and not call

AddRef() on the returned interface reference yet another time.

• Release() must be called on an interface reference before that interface's pointer is

overwritten or goes out of scope.

• If a copy is made on an interface reference pointer, AddRef() should be called on

that pointer. After all, in this case, we are actually creating another reference on

the underlying object.

• AddRef() and Release() must be called on the specific interface which is being

referenced since an object may implement per-interface reference counts in order

to allocate internal resources only for the interfaces which are being referenced.

• Extra calls to these functions are not sent out to remote objects over the wire; a

proxy keeps only one reference on the remote object and maintains its own local

reference count.

COM Instantiation

COM standardizes the instantiation (i.e. creation) process of COM objects by requiring

the use of Class Factories. In order for a COM object to be created, two associated items

must exist:

• A Class ID.

• A Class Factory.

Each COM Class or CoClass must be associated with a unique Class ID (a GUID). It

must also be associated with its own Class Factory (that is achieved by using a

centralized registry). A Class Factory is itself a COM object. It is an object that must

expose the IClassFactory or IClassFactory2 (the latter with licensing support) interface.

The responsibility of such an object is to create other objects.

A class factory object is usually contained within the same executable code (i.e. the

server code) as the COM object itself. When a class factory is called upon to create a

target object, this target object's class id must be provided. This is how the class factory

knows which class of object to instantiate.

A single class factory object may create objects of more than one class. That is, two

objects of different class ids may be created by the same class factory object. However,

this is transparent to the COM system.

By delegating the responsibility of object creation into a separate object, a greater level of

abstraction is promoted, and the developer is given greater flexibility. For example,

implementation of the Singleton and other creation patterns is facilitated. Also, the

calling application is shielded from the COM object's memory allocation semantics by

the factory object.

In order for client applications to be able to acquire class factory objects, COM servers

must properly expose them. A class factory is exposed differently, depending on the

nature of the server code. A server which is DLL-based must export a

DllGetClassObject() global function. A server which is EXE-based registers the class

factory at runtime via the CoRegisterClassObject() Windows API function.

The following is a general outline of the sequence of object creation via its class factory:

The object's class factory is obtained via the CoGetClassObject() API (a standard

Windows API). As part of the call to CoGetClassObject(), the Class ID of the object (to

be created) must be supplied. The following C++ code demonstrates this:

IClassFactory* pIClassFactory = NULL;

CoGetClassObject(CLSID_SomeObject, CLSCTX_ALL, NULL, IID_IClassFactory,
                 (LPVOID*)&pIClassFactory);

The above code indicates that the Class Factory object of a COM object, which is

identified by the class id CLSID_SomeObject, is required. This class factory object is

returned by way of its IClassFactory interface. The returned class factory object is then

requested to create an instance of the originally intended COM object. The following

C++ code demonstrates this:

ISomeObject* pISomeObject = NULL;
if (pIClassFactory)
{
    pIClassFactory->CreateInstance(NULL, IID_ISomeObject,
                                   (LPVOID*)&pISomeObject);
    pIClassFactory->Release();
    pIClassFactory = NULL;
}

The above code indicates the use of the Class Factory object's CreateInstance() method

to create an object which exposes an interface identified by the IID_ISomeObject GUID.

A pointer to the ISomeObject interface of this object is returned. Also note that because

the class factory object is itself a COM object, it needs to be released when it is no longer

required (i.e. its Release() method must be called).

The above demonstrates, at the most basic level, the use of a class factory to instantiate

an object. Higher level constructs are also available, some of which do not even involve

direct use of the Windows APIs.

For example, the CoCreateInstance() API can be used by an application to directly

create a COM object without acquiring the object's class factory. However, internally, the

CoCreateInstance() API itself will invoke the CoGetClassObject() API to obtain the

object's class factory and then use the class factory's CreateInstance() method to create

the COM object.

VBScript supplies the New keyword as well as the CreateObject() global function for

object instantiation. These language constructs encapsulate the acquisition of the class

factory object of the target object (via the CoGetClassObject() API) followed by the

invocation of the IClassFactory::CreateInstance() method. Other languages, e.g.

PowerBuilder's PowerScript may also provide their own high-level object creation

constructs. However, CoGetClassObject() and the IClassFactory interface remain the

most fundamental object creation technique.

COM Reflection

At the time of the inception of COM technologies, the only way for a client to find out

what features an object offered was to actually create an instance and call into its

QueryInterface method (part of the required IUnknown interface). This way of

exploration became awkward for many purposes, such as selecting appropriate components for a certain task, or building tools that help a developer understand how to use the methods provided by an object.

As a result, COM Type Libraries were introduced, through which components can

describe themselves. A type library contains information such as the CLSID of a

component, the IIDs of the interfaces the component implements, and descriptions of

each of the methods of those interfaces. Type libraries are typically used by Rapid

Application Development (RAD) environments such as Visual Basic or Visual Studio to

assist developers of client applications.
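Programmatically, a type library can be loaded and inspected through the ITypeLib interface; the following minimal sketch shows the call pattern (the .tlb path is hypothetical, since components often embed the type library as a resource instead):

#include <oleauto.h>

ITypeLib* pTypeLib = NULL;
HRESULT hr = LoadTypeLib(L"C:\\Components\\MyObject.tlb", &pTypeLib);
if (SUCCEEDED(hr))
{
    BSTR bstrName = NULL;

    // Index -1 requests documentation for the library itself;
    // indexes 0..n-1 describe the individual types it contains.
    pTypeLib->GetDocumentation(-1, &bstrName, NULL, NULL, NULL);

    // ... bstrName now holds the library name for display ...
    SysFreeString(bstrName);
    pTypeLib->Release();
}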

Sample Code

Following are two examples that illustrate the COM concepts covered in the article. The

code is also contained in the article's sample project.

Using a COM object with a single interface

The first example shows how to use a COM object that exposes a single interface. This is

the simplest case you'll ever encounter. The code uses the Active Desktop coclass

contained in the shell to retrieve the filename of the current wallpaper. You will need to

have the Active Desktop installed for this code to work.

The steps involved are:

1. Initialize the COM library.

2. Create a COM object used to interact with the Active Desktop, and get an

IActiveDesktop interface.

3. Call the GetWallpaper() method of the COM object.

4. If GetWallpaper() succeeds, print the filename of the wallpaper.

5. Release the interface.

6. Uninitialize the COM library.

WCHAR wszWallpaper[MAX_PATH];
HRESULT hr;
IActiveDesktop* pIAD;

// 1. Initialize the COM library (make Windows load the DLLs).
// Normally you would call this in your InitInstance() or other
// startup code. In MFC apps, use AfxOleInit() instead.
CoInitialize(NULL);

// 2. Create a COM object, using the Active Desktop coclass provided
// by the shell. The 4th parameter tells COM what interface we want
// (IActiveDesktop).
hr = CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER,
                      IID_IActiveDesktop, (void**)&pIAD);
if (SUCCEEDED(hr))
{
    // 3. If the COM object was created, call its GetWallpaper() method.
    hr = pIAD->GetWallpaper(wszWallpaper, MAX_PATH, 0);
    if (SUCCEEDED(hr))
    {
        // 4. If GetWallpaper() succeeded, print the filename it
        // returned. Note the use of wcout to display the Unicode
        // string wszWallpaper; wcout is the Unicode equivalent of cout.
        wcout << L"Wallpaper path is:\n " << wszWallpaper << endl << endl;
    }
    else
    {
        cout << "GetWallpaper() failed." << endl << endl;
    }

    // 5. Release the interface.
    pIAD->Release();
}
else
{
    cout << "CoCreateInstance() failed." << endl << endl;
}

// 6. Uninitialize the COM library. In MFC apps, this is not necessary
// since MFC does it for us.
CoUninitialize();

Distributed Component Object Model (DCOM)
DCOM is a proprietary Microsoft technology for communication among software

components distributed across networked computers. DCOM, which originally was

called "Network OLE", extends Microsoft's COM, and provides the communication

substrate under Microsoft's COM+ application server infrastructure. It has been

deprecated in favor of the Microsoft .NET Framework.The addition of the "D" to COM

was due to extensive use of DCE/RPC (Distributed Computing Environment/Remote

Procedure Calls) – more specifically Microsoft's enhanced version, known as MSRPC.

In terms of the extensions it added to COM, DCOM had to solve the problems of

• Marshalling – serializing and deserializing the arguments and return values of

method calls "over the wire".

• Distributed garbage collection – ensuring that references held by clients of

interfaces are released when, for example, the client process has crashed or the network connection has been lost.

One of the key factors in solving these problems is the use of DCE/RPC as the underlying

RPC mechanism behind DCOM. DCE/RPC has strictly defined rules regarding

marshalling and who is responsible for freeing memory.

DCOM was a major competitor to CORBA. Proponents of both of these technologies saw

them as one day becoming the model for code and service-reuse over the Internet.

However, the difficulties involved in getting either of these technologies to work over

Internet firewalls, and on unknown and insecure machines, meant that normal HTTP

requests in combination with web browsers won out over both of them. Microsoft, at one

point, attempted and failed to head this off by adding an extra http transport to DCE/RPC

called ncacn_http (Network Computing Architecture, Connection-based, over HTTP).

This was later resurrected to support a Microsoft Exchange 2003 connection over HTTP.

COM Object Reuse

Reusability is the mantra for the success of any technology in the field of programming.

C++ supports the reusability by providing the concept of inheritance. Inheritance is the

way by which the subclass inherits the functionality from its base class (parent class).

The subclass has the option to modify the functionality, which has been provided by the

base class (overriding), or to continue with the same functionality as that of the base

class. Inheritance has made the life of the programmers easier, but this has a problem

associated with it. Implementation inheritance, which is supported by C++, creates a

strong bonding or the contract between the base class and a subclass. Any changes in the

base class can cause a drastic affect on the child class and may cause the clients of the

child class to break. Implementation Inheritance is not an appropriate technology to

create a reusable components, which could be used anywhere, at anytime, by anybody

without worrying about the internal implementation of the component.

COM doesn’t support the Implementation inheritance as it violates the motto of the COM

technology i.e. to create the reusable components. COM does support the Interface

inheritance, in which the subclass inherits the interface of the base class and not the

implementation. Interface inheritance protects the clients of a component from change.

Implementation inheritance can be simulated in COM by using the concept of component

containment. In COM, the reusability is achieved by using containment and aggregation.

This article covers the containment technique, in which the outer component uses the inner component, in detail. Aggregation will be covered in a separate article.

Containment

Everything in COM is related to interfaces, and containment too is implemented at the interface level. COM containment is the same as C++ containment, in which the

outer component is a client of an inner component. The outer component has pointers to

interfaces on the inner component. The inner component is not exposed directly to the

client and hence only the IUnknown of the outer component will be exposed to the client.

In Containment, the outer component forwards (delegates) the calls to the inner

component.

There are two scenarios in which containment can be implemented. In the first, the outer component implements its own interfaces and uses the interfaces of the inner component. In the second, the outer component reimplements an interface supported by the inner component and forwards the calls to the inner component's interface.

In containment, the outer component acts as a client, using the interface of the inner component. When implementing containment, the inner component and the client are unaware of the fact that they are part of the containment implementation. The

outer component has to be modified to support the containment.

Sample code
This article will explore the first scenario to explain the containment technique. In this

sample code, the outer component utilizes the functionality, which is being provided by

an inner component. The outer component needs some modification to accommodate the

inner component as a contained object. The client and the inner component won't be

affected and will be unaware of the fact that they are taking part in the containment

implementation. This sample code will demonstrate that the client is unaware of the

fact that the outer component is using the services of an inner component.

The outer component, i.e. CMath, will have a new member variable m_pISquare, which is a pointer to the ISquare interface on the inner component.

class CMath : public IMath {
public:
    // Implementing the IUnknown interface.
    virtual HRESULT __stdcall QueryInterface(const IID& iid, void** ppv);
    virtual ULONG __stdcall AddRef();
    virtual ULONG __stdcall Release();

    // Implementing the IMath interface.
    virtual void __stdcall SumSquare(int Val1, int Val2, int* pResult);

    // Constructor
    CMath();
    // Destructor
    ~CMath();

    // Pointer to the ISquare interface on the inner component.
    ISquare* m_pISquare;

private:
    long m_cRef;
};

As this COM server (ContainmentSample.dll) supports two COM components, CMath and CSquare, DllGetClassObject should validate both of these class IDs.

// Code snippet for DllGetClassObject.
// The COM SCM creates a class object only when the request has come
// for CLSID_CMath or CLSID_CSquare. After creating the class object,
// the IClassFactory interface pointer on the class object is returned
// back to the client.
STDAPI DllGetClassObject(const CLSID& clsid, const IID& iid, void** ppv)
{
    // This server supports two COM components, hence the validation.
    if ((clsid == CLSID_CMath) || (clsid == CLSID_CSquare)) {
        cout << "The requested component is supported by "
                "COM server (ContainmentSample.dll)" << endl;
    }
    else {
        return CLASS_E_CLASSNOTAVAILABLE;
    }

    CFactory* pFactory = new CFactory();
    if (pFactory == NULL) {
        return E_OUTOFMEMORY;
    }

    HRESULT hResult = pFactory->QueryInterface(iid, ppv);
    static_cast<IUnknown*>(pFactory)->Release();
    return hResult;
}

The CreateInstance of the outer component's class object has to be modified to accommodate the creation of the inner component and to store the ISquare interface of the inner component in the member variable m_pISquare. The outer component calls CoCreateInstance with CLSID_CSquare as the CLSID parameter and queries for the ISquare interface on the inner component (CSquare); if the call succeeds, it stores the interface pointer in m_pISquare.

// Code snippet for CreateInstance of the class object.
// This snippet shows the part of the code which is executed during
// the creation of the outer component. It runs when the client calls
// CreateInstance after getting the IClassFactory interface pointer on
// CMath's class object. The client gets the IClassFactory interface
// pointer by calling CoGetClassObject.
if ((iid == IID_IMath) || (iid == IID_IUnknown)) {
    CMath* pMath = new CMath();
    if (pMath == NULL) {
        return E_OUTOFMEMORY;
    }

    // Here, the outer component initializes the inner component.
    // CoCreateInstance is called by the outer component during its
    // creation; it queries for the ISquare interface on the inner
    // component, and if the call succeeds the pointer is stored in
    // the member variable m_pISquare.
    cout << "Call to create the inner component" << endl;
    hResult = CoCreateInstance(CLSID_CSquare, NULL,
                               CLSCTX_INPROC_SERVER,
                               IID_ISquare, (void**)&pMath->m_pISquare);
    cout << "CoCreateInstance for CSquare has been called" << endl;

    hResult = pMath->QueryInterface(iid, ppv);
    if (SUCCEEDED(hResult)) {
        pMath->Release();
    }
}
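The sample omits the business method itself, so the following is a minimal sketch of how CMath::SumSquare() might delegate to the contained component; ISquare::Square() is a hypothetical method of the inner component, assumed here for illustration:

// Hypothetical delegation inside the outer component. The client calls
// IMath::SumSquare() on CMath and never learns that an inner CSquare
// object does the actual squaring.
void __stdcall CMath::SumSquare(int Val1, int Val2, int* pResult)
{
    int sq1 = 0, sq2 = 0;

    // Forward (delegate) the work to the inner component.
    m_pISquare->Square(Val1, &sq1);
    m_pISquare->Square(Val2, &sq2);

    *pResult = sq1 + sq2;
}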

Interfaces and Versioning

A good versioning mechanism allows one system component to be updated without

requiring updates to all the other components in the system. Versioning in COM is

implemented using interfaces and IUnknown::QueryInterface. The COM design

completely eliminates the need for things like version repositories or central management

of component versions.

When a software module is updated, it is generally to add new functionality or to improve

existing functionality. In COM, you add new functionality to your component object by

adding support for new interfaces. Since the existing interfaces don't change, other

components that rely on those interfaces continue to work. Newer components that know

about the new interfaces can use those newly exposed interfaces. Because

QueryInterface calls are made at run time without any expensive call to some

"capabilities database" (as used in some other system object models), the current

capabilities of a component object can be efficiently evaluated each time the component

is used; when new features become available, applications that know how to use them

will begin to do so immediately.

Improving existing functionality is even easier. Because the syntax and semantics of an

interface remain constant, you are free to change the implementation of an interface,

without breaking other developers' components that rely on the interface. For example,

say you have a component that supports the (hypothetical) IStack interface, which would

include methods like Push and Pop. You've currently implemented the interface as an

array, but you decide that a linked list would be more appropriate. Since the methods and

parameters do not change, you can freely replace the old implementation with a new one,

and applications that use your component will get the improved linked list functionality

"for free."

Windows and OLE use this technique to provide improved system support. For example,

in OLE today, structured storage is implemented as a set of interfaces which currently use

the C run-time file input/output functions internally. In Windows 2000 (the next version

of Windows NT), those same interfaces will write directly to the file system. The syntax

and semantics of the interfaces remain constant; only the implementation changes.

Existing applications will be able to use the new implementation without any changes;

they get the improved functionality "for free."

The combination of the use of interfaces (immutable, well-defined "functionality sets"

that are exposed by components) and QueryInterface (the ability to cheaply determine

at run time the capabilities of a specific component object) enable COM to provide an

architecture in which components can be dynamically updated, without requiring updates

to other reliant components. This is a fundamental strength of COM over other proposed

object models. COM solves the versioning/evolution problem where the functionality of

objects can change independently of clients of that object without rendering existing

clients incompatible. In other words, COM defines a system in which components

continue to support the interfaces through which they provided services to older clients,

as well as support new and better interfaces through which they can provide services to

newer clients. At run time old and new clients can safely coexist with a given component

object. Errors can only occur at easily handled times: bind time or during a

QueryInterface call. There is no chance for a random crash such as those that occur

when an expected method on an object simply does not exist or its parameters have

changed.

Language Independence

Components can be implemented in a number of different programming languages and

used from clients that are written using completely different programming languages.

Again, this is because COM, unlike an object-oriented programming language, represents

a binary object standard, not a source code standard. This is a fundamental benefit of a

component software architecture over object-oriented programming (OOP) languages.

Objects defined in an OOP language typically interact only with other objects defined in

the same language. This necessarily limits their reuse. At the same time, an OOP

language can be used in building COM components, so the two technologies are actually

quite complementary. COM can be used to "package" and further encapsulate OOP

objects into components for widespread reuse, even within very different programming

languages.

.NET Components

• It is a platform-neutral framework.
• It is a layer between the operating system and the programming language.

• It supports many programming languages, including VB.NET and C#.

• .NET provides a common set of class libraries, which can be accessed from any

.NET-based programming language. There is not a separate set of classes and

libraries for each language. If you know any one .NET language, you can write

code in any .NET language!

• In future versions of Windows, .NET will be freely distributed as part of

the operating system, and users will never have to install .NET separately.

.NET Framework Assemblies
The .NET framework introduces assemblies as the main building blocks of your application. An application can contain one or more assemblies. An assembly can be

formed in one or more files. This all depends on your programming needs.

An assembly can consist of the following four elements:

1. Your code, compiled into Microsoft intermediate language (MSIL). This code file can be either an EXE file or a DLL file.
2. The assembly manifest, which is a collection of metadata that describes the assembly name, culture settings, the list of all files in the assembly, security identity, version requirements, and references to resources. The assembly manifest can be stored with the intermediate code, or in a standalone file that contains only assembly manifest information.
3. Type metadata.
4. Resources.

The only required element of the above four is the assembly manifest.

The remaining elements are optional depending on your requirements.

As we have mentioned above, an assembly can be formed into a single physical file. In

this case all the above four elements will be stored inside this file (either an EXE, or a

DLL file). Or it can be formed of more than one file, in which case we call it a multi-file assembly. In a multi-file assembly the above four elements can be stored in separate files, such as module files for code, resource files for images, or other files required by the application. Note that the files that form the multi-file assembly are not physically linked; instead, they are linked through the assembly manifest.

You may ask yourself: when should I use the multi-file assembly technique? You should use this form of assembly when you want to combine modules written in different languages, when you want to optimize downloading an application that consists of more than one module, or when you have a huge resource file: by putting it in a separate file, the .NET framework downloads it only when it is referenced, which optimizes your memory usage and system resources.

Benefits of Using Assemblies

Assemblies are mainly introduced to solve the problems of versioning, DLL conflicts,

and simplifying the process of deployment. Most end users have encountered versioning

or deployment problems when installing a new application or a new version of an existing one. There are many situations where you install a new application only to find an existing one has stopped working, and the system cannot recover from that. Many developers have spent a lot of time trying to keep registry entries consistent in order to

activate a COM class. All this frustration occurs because of versioning problems that

occur with component-based applications.

Versioning Problems

There are two versioning problems that arise with WIN32 applications. The first one is

that versioning rules are enforced by the operating system, not between the pieces of an application. Backward compatibility between the new piece of code and the old one is the current approach to versioning, and this is hard to maintain in most applications. Besides that, only a single version of an application is allowed to be present and executing on a computer at any given time. The second problem is that there is no way to preserve consistency between the group of components that were built together and the group actually present at run time.

DLL Conflicts

As a result of the above two versioning problems, DLL conflicts occur: when installing a new application, an existing one may break because the new application installed a new version of a component or a DLL that is not fully backward compatible

with the previous one.

The Solution

To solve the above problems, Microsoft began a new approach in its Windows 2000

platform. Windows 2000 gives you the ability to place DLL files used by your

application in the same directory as your application's exe file, so that your application

can use the right version, the one it was designed for. Besides that, Windows 2000 locks files

that exist in the System32 directory to prevent their replacement when new applications

are installed, and this prevents the DLLs that are used by existing applications from being

replaced and so prevents the crashing of existing applications.

The .NET framework introduces assemblies as an evolution towards the complete

solution of versioning problems and DLL conflicts. Assemblies, by their core design, give developers the ability to specify version rules between components, offer the infrastructure required to enforce these rules, and allow multiple versions of a component to run side by side at the same time.

How Does It Work?

You may recall that an assembly manifest contains the versioning requirements of the

current assembly. The version of the assembly and the versions of the required

assemblies and/or components are recorded in the manifest. So, when you run an

application, the .NET runtime checks the assembly manifest of your application and

executes the versions of the assemblies or components that are recorded in the manifest. To

gain the advantages of versioning you must give your assembly a strong name (will be

explained later).

Assembly Version

An assembly can have two types of versions. The first one which we call "Version

Number" consists of a four-part string with the following format:

<Major Version>.<Minor Version>.<Build Number>.<Revision Number>

For example a version number of 3.5.20.1 indicates 3 as the major version, 5 as the minor

version, 20 as the build number, and 1 as the revision number. Note that the version

number is stored in the assembly's manifest. The second type of version is called the "informational version". The informational version consists of a string that contains the version number plus additional information such as packaging, marketing literature, or product name. This type of version is used for informational purposes only, and is not used at runtime for versioning decisions.

Assembly Locations

An assembly can be placed into one of the following three locations:

1. Under the application directory or subdirectories. This is the most common

location for placing an assembly. If your assembly uses a culture other than the

default one which is "en-US", you have to put it under a subdirectory with this

culture name.

2. In the global assembly cache, which is a machine-wide code cache installed whenever

the common language runtime is installed. You deploy your assembly to the

global assembly cache when you want to share it with multiple applications.

3. On an FTP server.

The location of an assembly determines whether the common language runtime can

locate it when it is referenced and whether this assembly can be shared with other

applications or not.

The Manifest

How does .NET make sure the right version of every assembly is used?

In the bad old days of COM, shared components were usually dumped into that

unorganized Windows warehouse, the System directory, and then "registered" in

Windows so other programs could locate them. This is a big reason why the registry is

such a critical and sensitive part of Windows. Without it, one piece of software can't

locate another one. (The registry is still used in .NET, by the way. But .NET doesn't use it

to identify components anymore.) In .NET, every assembly starts with something called a

manifest that takes the place of the information that was formerly placed in the registry.

The manifest contains metadata (data about data) telling the CLR what it needs to know

to execute the assembly instructions. One of the things the CLR needs to know is the

version number of the components used by the assembly. To illustrate the point, let's compare the manifests of a very simple 1.1 and a 2.0 .NET component (DLL file) using the ILDASM utility. This is a standard tool that is installed with the .NET Framework, and it lets you look inside the IL code in an assembly.
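The same manifest information can also be read programmatically through reflection. A minimal sketch, assuming it is compiled into the application whose manifest you want to inspect:

    using System;
    using System.Reflection;

    class ManifestDump
    {
        static void Main()
        {
            // The assembly's own identity, as recorded in its manifest.
            Assembly asm = Assembly.GetExecutingAssembly();
            Console.WriteLine(asm.GetName().FullName);

            // Each referenced assembly is recorded together with the exact
            // version this assembly was built against.
            foreach (AssemblyName dep in asm.GetReferencedAssemblies())
                Console.WriteLine("  references {0}, Version={1}", dep.Name, dep.Version);
        }
    }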

.NET Application Domains

Overview

Before the .NET Framework, the only way to isolate applications running on the same machine was by means of process boundaries. Each application ran within a process, and each process had its own boundaries and memory addresses; this is how isolation from other processes was achieved.

The .NET Framework introduces an additional boundary called the application domain. Each application runs within its main process boundaries and within its application domain boundaries. So, you can think of the application domain as an extra shell that isolates the application, making it more secure and robust.

This, however, is not the main advantage of application domains. The main advantage is the ability to run several application domains in a single process or application, while maintaining the same level and quality of isolation that would exist in separate processes, without the need to make cross-process calls or to switch between processes.

Advantages

Application domains offer the following advantages.

• In terms of isolation, code running in one application domain cannot access code or resources running in another application domain.

• In terms of security, you can run more than one set of web controls in a single browser process. Each set runs in a separate application domain, so no set can access the data or resources of the other sets. You can control the permissions granted to a given piece of code by controlling the application domain inside which the code is running.

• In terms of robustness, a fault in code running in one application domain cannot affect other applications even though they all run inside the same process. An individual application domain can be stopped without stopping the entire process; you simply unload the code running in that application domain.

So, from the above advantages, you can observe that by using application domains you can create rather robust .NET applications; they increase the isolation, stability, and security of your application.
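A minimal sketch of these advantages in code ("Plugin.exe" is a placeholder for an assembly of your own):

    using System;

    class AppDomainDemo
    {
        static void Main()
        {
            // Create an isolated application domain inside the current process.
            AppDomain sandbox = AppDomain.CreateDomain("Sandbox");

            // Load and execute an assembly inside the new domain.
            sandbox.ExecuteAssembly("Plugin.exe");

            // Unload the domain: its code is discarded without
            // stopping the hosting process.
            AppDomain.Unload(sandbox);
        }
    }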

Relation Between Application Domains and Assemblies

Most development and runtime environments have a definition for the building blocks of an application. Assemblies are the building blocks of .NET Framework applications; they are the fundamental unit of deployment. An assembly consists of types and resources working together to form a logical unit of your application's functionality. You can divide your .NET application into assemblies. An assembly file can have an .EXE or a .DLL extension.

As mentioned previously, you can run more than one application domain within your application. Each application domain runs a given piece of code, and an assembly is exactly the piece of code meant here. So, each application domain can run an assembly within the overall application. This is the relationship between application domains and assemblies.

Whenever we start an application, we're actually starting a Win32 process and running our application inside it. These processes use resources such as memory and kernel objects. Each Win32 process contains at least one thread (eventually we may end up running multiple threads), and if we run other tasks or open other applications through our application, those tasks belong to our particular Win32 process, running on a collection of multiple threads.

One of the characteristics of a Win32 process is that it acts very much like a virtual boundary. It's easy to communicate within a process, but communication is restricted across the boundary of that particular Win32 process. To interact with other Win32 processes, we require special mechanisms, since there are security contexts to take into consideration as well as the need to restrict what a Win32 process can and should do on a particular system. So who takes care of running a process, and what factors are involved in running a process successfully? The execution of a process, and the running of our code within it, is the domain and prerogative of the operating system. There are many complex situations and issues that the operating system has to handle while maintaining an active process.

.NET Context

All applications run within an application domain, and a domain can contain numerous contexts. An application executing within a given application domain isn't necessarily tied to one specific context; it is free to switch contexts (meaning it is context agile). Getting a reference to a specific object therefore means obtaining a direct reference, which makes it impossible to hook into and perform processing on the messages the object receives (method calls, exceptions, etc.). By deriving an object from ContextBoundObject you force the runtime to isolate the object into a single context, where it will remain for its entire lifetime. Any hooks into this object from other contexts are made via a runtime-generated proxy that stands in for the object; there are no direct hooks into the object itself. Because it is a proxy, it is now possible to write your own sinks to hook into the message chain and perform any type of processing you'd like. This is comparable to remoting in .NET (and indeed most of the objects needed for this live in the System.Runtime.Remoting namespace), albeit on a smaller scale.

So how does this relate to real-world programming? There are certain services that need to be applied to every layer of an application and are not domain specific. In the typical three-tier application, which contains a data layer, a business layer, and a presentation layer, all the layers need common services such as logging and exception management. The most common approach is to write a separate utility library that accomplishes the functionality you need and then just reference it from your projects. This can create unneeded dependencies, though, and in the case of exceptions it means lots of try/catch blocks with logging code in each catch block (in a web application, I realize this can be centralized in Application_OnError, which is great in theory but usually not used out in the real world). This can get quite messy in larger applications. Here I'll attempt to provide an alternative solution: a simple example that centralizes exception logging by using a ContextAttribute and a ContextBoundObject, which allows the attribute to hook into the message chain.

A channel is needed inside an application domain for calling objects across contexts. If you've previously written COM+ components, you already know about COM+ contexts. Contexts in .NET are very similar. A context is a boundary containing a collection of objects. As with a COM+ context, the objects in such a collection require the same usage rules, which are defined by the context attributes.

As you already know, a single process can have multiple application domains. An application domain is something like a subprocess with security boundaries, and an application domain can have different contexts. A context is used to group objects with similar execution requirements. Contexts are composed from a set of properties and are used for interception: when a context-bound object is accessed from a different context, an interceptor can do some work before the call reaches the object. This can be used for thread synchronization, transactions, and security management, for example.

A class derived from MarshalByRefObject is bound to the application domain. Outside

the application domain a proxy is needed to access the object. A class derived from

ContextBoundObject, which itself derives from MarshalByRefObject, is bound to a

context. Outside the context, a proxy is needed to access the object.

Context-bound objects can have context attributes. A context-bound object without

context attributes is created in the context of the creator. A context-bound object with

context attributes is created in a new context or in the creator’s context if the attributes

are compatible.

To further understand contexts, you must be familiar with these terms:

• Creating an application domain creates the default context in this application domain.

If a new object is instantiated that needs different context properties, a new context is

created.

• Context attributes can be assigned to classes derived from ContextBoundObject. You

can create a custom attribute class by implementing the interface IContextAttribute.

The .NET Framework has one context attribute class in the namespace

System.Runtime.Remoting.Contexts: SynchronizationAttribute.

• Context attributes define context properties that are needed for an object. A context

property class implements the interface IContextProperty. Active properties

contribute message sinks to the call chain. The class ContextAttribute implements

both IContextProperty and IContextAttribute, and can be used as a base class for

custom attributes.

• A message sink is an interceptor for a method call. With a message sink, method calls

can be intercepted. Properties can contribute to message sinks.
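To make this concrete, here is a minimal sketch using SynchronizationAttribute, the one context attribute mentioned above; the Logger class is invented for the example:

    using System;
    using System.Runtime.Remoting.Contexts;

    // [Synchronization] is a context attribute: instances are created in a
    // context whose interception sinks serialize all incoming calls.
    [Synchronization]
    public class Logger : ContextBoundObject
    {
        public void Write(string message)
        {
            // Only one thread at a time can execute inside this context,
            // with no explicit locking code in the class itself.
            Console.WriteLine(message);
        }
    }

    class Demo
    {
        static void Main()
        {
            Logger log = new Logger(); // actually yields a proxy
            log.Write("hello");        // the call is intercepted and forwarded
                                       // across the context boundary
        }
    }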

Activation

A new context is created if an instance of a class that’s created needs a context different

from the calling context. The attribute classes that are associated with the target class are

asked if all the properties of the current context are acceptable. If any of these properties

are unacceptable, the runtime asks for all property classes associated with the attribute

class and creates a new context. The runtime then asks the property classes for the sinks

they want to install. A property class can implement one of the IContributeXXXSink

interfaces to contribute sink objects. Several of these interfaces are available to go with

the variety of sinks.

Communication Between Contexts

How does the communication between contexts happen? The client uses a proxy instead

of the real object. The proxy creates a message that is transferred to a channel, and sinks

can intercept. Does this sound familiar? It ought to. The same mechanism is used for

communication across different application domains or different systems. A TCP or

HTTP channel is not required for the communication across contexts, but a channel is

used here too. CrossContextChannel can use the same virtual memory in both the client

and server sides of the channel, and formatters are not required for crossing contexts.

Introduction to .NET Reflection

Programming languages like C++ had the ability to collect information on types, but that ability had limited scope. .NET offers a powerful mechanism called .NET Reflection that not only allows you to introspect types but also to invoke methods on those types at runtime. Although retrieving type information through .NET Reflection is slow compared with direct access to a method, property, or field, .NET Reflection provides valuable dynamic execution of code when used sparingly. All types, together with their methods, are stored in assemblies.

Assemblies are the most integral part of any .NET application; all the functionality of a .NET application can be exposed through assemblies. .NET Reflection provides you with application programming interfaces (APIs) to inspect assemblies. Apart from inspecting assemblies, the reflection APIs also allow you to dynamically create an assembly in memory and use it in your program code. All the APIs related to .NET Reflection live in the System.Reflection namespace. These APIs are also used to develop application editors, class browsers, and add-ons for integrated development environments (IDEs).

Because .NET makes such heavy use of types, .NET Reflection is frequently required when you do not know at compile time the particular type you are dealing with. For example, if you dynamically create server-side controls or objects at runtime, or use metadata-based programming where some of the data is created dynamically from data stored in a database, the compiler knows nothing about the resulting objects. This is where .NET Reflection comes to the rescue: it can be used against these 'dynamic', runtime-generated objects.

There are other uses for .NET Reflection: compilers for languages such as JScript use reflection to construct symbol tables, and the classes in the System.Runtime.Serialization namespace use reflection to access data and to determine which fields to persist. Moreover, the classes in the System.Runtime.Remoting namespace also use reflection, through serialization.

Reflection – the process of getting the metadata from modules/assemblies. When .NET code is compiled, metadata about the types defined in the modules is produced; these modules are in turn packaged as assemblies. The process of accessing this metadata is called reflection. The namespace System.Reflection contains classes that can be used for interrogating the types of a module/assembly. We also use reflection for examining data type sizes for marshalling across process and machine boundaries.

Reflection is also used for:

1) To dynamically invoke methods (using System.Type.InvokeMember)

2) To dynamically create types at runtime (using System.Reflection.Emit.TypeBuilder)
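A minimal sketch of the first use, dynamic invocation through System.Type.InvokeMember; the target type and method are arbitrary choices for the example:

    using System;
    using System.Reflection;

    class DynamicInvoke
    {
        static void Main()
        {
            Type t = typeof(string);

            // Invoke string.ToUpper on "hello" without a compile-time call.
            object result = t.InvokeMember(
                "ToUpper",
                BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance,
                null,              // default binder
                "hello",           // the instance to invoke on
                new object[0]);    // no arguments

            Console.WriteLine(result); // prints HELLO
        }
    }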

The System.Type Class

The System.Type class is the main class of the .NET Reflection functionality and the primary way to access metadata. System.Type is an abstract class that represents a type in the Common Type System (CTS). It represents type declarations: class types, interface types, array types, value types, enumeration types, type parameters, generic type definitions, and open or closed constructed generic types.

Use the members of Type to get information about a type declaration, such as the constructors, methods, fields, properties, and events of a class, as well as the module and the assembly in which the class is deployed. There are three ways to obtain a Type reference: calling GetType() on an instance, applying the typeof operator to a type name, or calling the static Type.GetType method with a fully qualified type name.

Metadata & Reflection

Reflection is the ability to read metadata at runtime. Using reflection, it is possible to

uncover the methods, properties, and events of a type, and to invoke them dynamically.

Reflection also allows us to create new types at runtime, but in the upcoming example we

will be reading and invoking only.

Reflection generally begins with a call to a method present on every object in the .NET Framework: GetType. The GetType method is a member of the System.Object class, and it returns an instance of System.Type. System.Type is the primary gateway to metadata. System.Type is actually derived from another class that is important for reflection: the MemberInfo class from the System.Reflection namespace. MemberInfo is a base class for many other classes that describe the properties and methods of an object, including FieldInfo, MethodInfo, ConstructorInfo, ParameterInfo, and EventInfo, among others. As you might suspect from their names, you can use these classes to inspect different aspects of an object at runtime.
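A minimal sketch of this pattern, starting with GetType and walking the MemberInfo-derived descriptions:

    using System;
    using System.Reflection;

    class MemberWalk
    {
        static void Main()
        {
            // GetType is the usual entry point into an object's metadata.
            Type t = DateTime.Now.GetType();

            // PropertyInfo and MethodInfo both derive from MemberInfo and
            // each describes one kind of member.
            foreach (PropertyInfo p in t.GetProperties())
                Console.WriteLine("property {0} {1}", p.PropertyType.Name, p.Name);

            foreach (MethodInfo m in t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
                Console.WriteLine("method {0}", m.Name);
        }
    }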

.NET Remoting

Remoting enables software components to interact across application domains. The components interacting with each other can be in different processes and on different systems. This enables us to create n-tier distributed applications. Here is a simplified illustration of the .NET Remoting architecture. The key players are:

• Client object
• Server (remote) object
• Proxy object
• Formatter
• Channel

The client object is the object or component that needs to communicate with (call) a remote object.

The server (remote) object receives the request from the client object and responds.

Proxy object: when the client object needs to call a method of the remote object, it uses a proxy object to do so. Every public method defined in the remote object class can be made available in the proxy and thus can be called from clients. The proxy object acts as a representative of the remote object. It ensures that all calls made on the proxy are forwarded to the correct remote object instance. There are two types of proxies: the transparent proxy and the real proxy.

The TransparentProxy holds a list of all the classes and interface methods of the remote object. It examines whether the call made by the client object is a valid method of the remote object and whether an instance of the remote object resides in the same application domain as the proxy. If so, a simple method call is routed to the remote object. If the object is in a different application domain, the call parameters on the stack are packaged into an IMessage object and forwarded to a RealProxy class by calling its Invoke method. This class is then responsible for forwarding messages to the remote object.

Formatter

The formatting can be done in three ways:

a) Binary
b) SOAP
c) Custom

The remoting framework comes with two formatters: the binary and SOAP formatters. The binary formatter is extremely fast and encodes method calls in a proprietary binary format. The SOAP formatter is slower. Developers can also write their own custom formatter and use that instead.

Channels

Channels are used to transport messages to and from remote objects. You can choose a TcpChannel or an HttpChannel, or extend one of these to suit your requirements.

HTTP channel: The HTTP channel transports messages to and from remote objects using the SOAP protocol. All messages are passed through the SOAP formatter, where the message is converted into XML and serialized and the required SOAP headers are added to the stream. The resulting data stream is then transported to the target URI using the HTTP protocol.

TCP channel: The TCP channel uses a binary formatter to serialize all messages to a binary stream and transports the stream to the target URI using the TCP protocol. It is also possible to configure the TCP channel to use the SOAP formatter.
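Putting the key players together, a minimal server-side sketch; the Calculator class, port 8085, and the "Calc" URI are all invented for the example:

    using System;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Tcp;

    // A remotable type must derive from MarshalByRefObject.
    public class Calculator : MarshalByRefObject
    {
        public int Add(int a, int b) { return a + b; }
    }

    class Server
    {
        static void Main()
        {
            // Register a TCP channel (binary formatter by default).
            ChannelServices.RegisterChannel(new TcpChannel(8085), false);

            // Expose Calculator at tcp://host:8085/Calc as a singleton.
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(Calculator), "Calc", WellKnownObjectMode.Singleton);

            Console.WriteLine("Calculator service running; press Enter to quit.");
            Console.ReadLine(); // keep the server alive
        }
    }

A client would then obtain a transparent proxy with Activator.GetObject(typeof(Calculator), "tcp://localhost:8085/Calc") and call Add on it as if the object were local.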

Connectors

An Architecture Description Language (ADL) is a computer language used to describe software and/or system architectures, that is, a language designed to model a system. ADLs often have a graphical as well as a plain-text syntax. The description may cover software features such as processes, threads, data, and subprograms, as well as hardware components such as processors, devices, buses, and memory. The connections between the components can be described in logical as well as physical terms. The difference between an ADL and a programming language or a modelling language is not totally clear. However, there are some requirements for a language to be classified as an ADL:

• It should be suitable for communicating architecture to all interested

parties.

• It should support the tasks of architecture creation, refinement and

validation.

• It should provide a basis for further implementation, so it must be possible to add information to the ADL specification to enable the final system specification to be derived from the ADL.

• It should provide the ability to represent most of the common architectural

styles.

• It should support analytical capabilities or provide for quickly generating prototype implementations.

Connector components

The connector for SAP is written in Java and consists of two parts: the vision connector

framework and connector modules (the connector's application-specific component, the

connector framework, and business object handlers). The vision connector framework

provides a metadata-driven layer of abstraction to the connector framework used by all

WebSphere business integration system adapters. The vision connector framework

extends the methods in the Adapter Framework. The connector modules extend the

methods in the vision connector framework and communicate with an SAP application.

Figure illustrates the architecture of the connector and the relationship of the system-wide

and vision connector frameworks. The visionConnectorAgent class can implement any

number of connector modules.

Figure Architecture of the connector for SAP

Real-Time Object-Oriented Modeling (ROOM)

Model real time systems based on timeliness, dynamic internal structure, reactiveness,

concurrency and distribution, using the ROOM notation.

ROOM is an object-oriented methodology for real-time systems developed originally at

Bell-Northern Research. ROOM is based upon a principle of using the same model for all

phases of the development process. ROOM models are composed of actors which

communicate with each other by sending messages along protocols. Actors may be

hierarchically decomposed, and may have behaviors described by ROOM charts, a

variant of Harel's state charts. Descriptions of actors, protocols, and behaviors can all be

reused through inheritance. Edraw contains special shapes and setting for creating

ROOM diagrams. In Edraw Professional, the ROOM Diagrams template and shapes are

in the Software folder.

ROOM Diagram Symbols

Some symbols can be changed into other shapes. When you drag them onto the canvas, a dialog pops up in which you can choose the type.

For example:

Modified Actor Ref. An actor is an active architectural component of a software system.

Actors interact with their environment via ports. A dynamic actor is created and

destroyed by the containing actor.

Ports provide an interface between actors using protocols that define how information

should be accessed and changed.

ROOM Port Type

• Relay port - Shares the interface between a contained class and the container

class.

• Conjugated port - Handles both the outgoing and incoming messages of its

protocol. Conjugated ports are usually white colored.

• External end port - Communicates with the actor's state machine or behavior.

• Internal end port - Connects a component actor to the container actor's behavior.

It is illustrated using the same notation as an external end port, but the port is

placed inside the container border rather than on it.

Transition points

• Initial transition point - Indicates the first transition within the state.

• Choice point - Indicates a choice between transition path segments. One path will

be the default.

• Non-extending transition point - Marks the end of a transition that does not extend

outside of the state context.

• Non-extending transition point - Illustrates an incoming transition.

Contextual Framework: Contextual Composition

The final form of composition in our little list is contextual composition. The typical

examples here are EJB containers and .NET enterprise services contexts. Unlike

connection-oriented composition, contextual composition is implicit and asymmetric. A

component instance inside a container instance benefits from container-supplied services

and from a container-maintained abstraction (simplification) of the world outside the

container. For example, a container can add transactional processing and limit what a

contained instance can and cannot see.

How is a container different from a platform? It depends. Some platforms actually have

container properties. For example, operating systems that strongly isolate processes form

process containers. As a result, two applications running in separate processes of such an

operating system can communicate with each other only through the mechanisms

provided and controlled by the operating system. Other platforms don't form containers:

They don't intercept all incoming and all outgoing communications and thus can be

bypassed.

How is a container different from a set of connections? A container completely encloses

its contained instances, while the same can be established only by convention for peer-to-

peer connections. Also, a container is implicitly connected to all instances inside—all

enclosed instances uniformly benefit from their single shared container instance's

services and all are uniformly constrained by that container's policies.

EJB containers

An Enterprise JavaBeans (EJB) container provides a run-time environment for enterprise

beans within the application server. The container handles all aspects of an enterprise

bean's operation within the application server and acts as an intermediary between the

user-written business logic within the bean and the rest of the application server

environment.

One or more EJB modules, each containing one or more enterprise beans, can be installed

in a single container. The EJB container provides many services to the enterprise bean,

including the following:

• Beginning, committing, and rolling back transactions as necessary.

• Maintaining pools of enterprise bean instances ready for incoming requests and

moving these instances between the inactive pools and an active state, ensuring

that threading conditions within the bean are satisfied.

• Most importantly, automatically synchronizing data in an entity bean's instance

variables with corresponding data items stored in persistent storage.

EJB Roles

In RMI there are two fundamental roles in the RMI environment: the client of the remote

object, and the object itself, which acts as a kind of server or service provider. These two

roles exist in the EJB environment as well, but EJB adds a third role, called the container

provider. The container provider is responsible for implementing all the extra services for

an EJB object: transaction processing, security, object persistence, and resource pooling.

If you're familiar with CORBA, you can think of the EJB container as being roughly

equivalent to the ORB in CORBA, with a few of the CORBA services thrown in as well.

In EJB, however, the container is strictly a server-side entity. The client doesn't need its

own container to use EJB objects, but an EJB object needs to have a container in order to

be exported for remote use. Figure shows a conceptual diagram of how the three EJB

roles interact with each other.

Figure The basic roles in an EJB environment

The EJB Container

The EJB container represents the value-added features of EJB over standard remote

objects built using RMI or CORBA. The EJB container manages the details of

transactional processing, resource pooling, and data persistence for you, which reduces

the burden on client applications and EJB objects and allows them to deal with just the

business at hand.

An EJB application server can contain multiple EJB containers, each managing multiple

EJB objects. In this chapter, I'll refer to EJB servers and EJB containers somewhat

interchangeably, depending on the context. In general, though, the container is strictly the

runtime elements that interact directly with your EJB objects to provide client proxy

services and notifications, while the server is the other glue outside the core EJB standard

that integrates the EJB containers into a larger application management structure of some

kind.

An EJB container is the heart of an EJB environment, in the same way an ORB is the

heart of a CORBA environment. The container registers EJB objects for remote access,

manages transactions between clients and EJB objects, provides access control over

specific methods on the EJB, and manages the creation, pooling, and destruction of

enterprise beans. The container also registers the home interface for each type of bean

under a given name in a JNDI namespace, allowing remote clients to find the home

interfaces and use them to create enterprise beans.

Once you provide the EJB container with the home and remote interfaces and the

implementation class for your bean, along with a deployment descriptor, the container is

responsible for generating the various classes that connect these components, as shown in

Figure. The home and remote interfaces you provide are RMI Remote interfaces; the

container generates both the client stubs and the server-side implementation for these

interfaces. When a client looks up a bean's home interface through JNDI, it receives an

instance of the home stub class. All methods invoked on this stub are remotely invoked,

via RMI, on the corresponding home implementation object on the EJB server. Similarly,

if the client creates or finds any beans through the home stub, the client receives remote

object stubs, and methods invoked on the stubs are passed through RMI to corresponding

implementation objects on the server. These remote objects are linked, through the EJB

container, to a corresponding enterprise bean object, which is an instance of your bean-

implementation class. Optionally, the EJB container may also generate a container-

specific subclass of your bean implementation (e.g., if it wants to augment some of your

bean methods to facilitate synchronization with the container).

Figure 7-2. Relationship of bean-provider classes and container-generated classes

The container receives client requests to create, look up, and/or remove beans. It either

handles them itself or passes the requests to corresponding methods on the EJB object.

Once the client obtains a reference to a remote interface for an EJB object, the container

intercedes in remote method calls on the bean, to provide the bean with required

transaction management and security measures. The container also provides support for

persistence of enterprise beans, either by storing/loading the bean state itself or by

notifying the bean that it needs to store or reload its state from persistent storage.

A container can maintain multiple EJB objects and object types during its lifetime. The

container has some freedom to manage resources on the server for performance or other

reasons. For example, a container can choose to temporarily serialize a bean and store it

to the server filesystem or some other persistent store; this is called passivating a bean.

The EJB object is notified of this and given a chance to release any shared resources or

transient data that shouldn't be serialized. The bean is also notified after it is activated

again, to allow it to restore any transient state or reopen shared resources.

An EJB container can make any EJB object type available for remote use. When you

deploy an EJB object within an EJB server, you can specify how the container should

manage the bean during runtime, in terms of transaction management, resource pooling,

access control, and data persistence. This is done using deployment descriptors, which

contain parameter settings for these various options. These settings can be customized for

each deployment of an EJB object. You might purchase an EJB object from a vendor and

deploy it on your EJB server with a particular set of container management options, while

someone else who purchased the same bean can deploy it with a different set of

deployment options.

CLR Contexts and Channels

The CLR context infrastructure attempts to provide an extensible infrastructure for contextual

composition. Third parties can introduce new properties to context boundaries. CLR

objects come in four top-level flavors:

• Value types

• Pass-by-value types

• Pass-by-reference types

• Context-bound types

Standard types include marshalling over SOAP/HTTP and DCOM.

Contexts, ContextBoundObject, and cross-context channels behave exactly as described earlier in the .NET Context section; those same interception mechanisms underlie the CLR's contextual composition.

Black Box Component Framework

"Another way to customize a framework is to supply it with a set of components that

provide application specific behavior. Each of these components will be required to

implement a particular protocol. All or most of the components might be provided by a

component library. The interface between components can be defined by protocol, so the

user needs to understand only the external interface of the components. Thus, this kind of

a framework is called a black-box framework."

BlackBox Component Builder is an integrated development environment optimized for

component-based software development. It consists of development tools, a library of

reusable components, a framework that simplifies the development of robust custom

components and applications, and a run-time environment for components.

In BlackBox, the development of applications and their components is done in

Component Pascal. This language is a descendant of Pascal, Modula-2, and Oberon. It

provides modern features such as objects, full type safety, components (in the form of

modules), dynamic linking of components, and garbage collection. The entire BlackBox

Component Builder is written in Component Pascal: all library components, all

development tools including the Component Pascal compiler, and even the low-level run-

time system with its garbage collector.

Directory Objects:

A software design pattern, the abstract factory pattern provides a way to encapsulate a

group of individual factories that have a common theme. In normal usage, the client

software creates a concrete implementation of the abstract factory and then uses the

generic interfaces to create the concrete objects that are part of the theme. The client does

not know (or care) which concrete objects it gets from each of these internal factories

since it uses only the generic interfaces of their products. This pattern separates the

details of implementation of a set of objects from their general usage.

In software development, a Factory is the location in the code at which objects are

constructed. The intent in employing the pattern is to insulate the creation of objects from

their usage. This allows for new derived types to be introduced with no change to the

code that uses the base class.

Use of this pattern makes it possible to interchange concrete classes without changing the code that uses them, even at runtime. However, employing this pattern, as with similar design patterns, may result in unnecessary complexity and extra work in the initial writing of code. Used correctly, the "extra work" pays off the second time the factory is used.

The factory determines the actual concrete type of object to be created, and it is here that

the object is actually created (in C++, for instance, by the new operator). However, the

factory only returns an abstract pointer to the created concrete object.

This insulates client code from object creation by having clients ask a factory object to

create an object of the desired abstract type and to return an abstract pointer to the object.

As the factory only returns an abstract pointer, the client code (which requested the object

from the factory) does not know - and is not burdened by - the actual concrete type of the

object which was just created. However, the type of a concrete object (and hence a

concrete factory) is known by the abstract factory; for instance, the factory may read it

from a configuration file. The client has no need to specify the type, since it has already

been specified in the configuration file. In particular, this means:

• The client code has no knowledge whatsoever of the concrete type, not needing to

include any header files or class declarations relating to the concrete type. The

client code deals only with the abstract type. Objects of a concrete type are indeed

created by the factory, but the client code accesses such objects only through their

abstract interface.

• Adding new concrete types is done by modifying the client code to use a different

factory, a modification which is typically one line in one file. (The different

factory then creates objects of a different concrete type, but still returns a pointer

of the same abstract type as before - thus insulating the client code from change.)

This is significantly easier than modifying the client code to instantiate a new

type, which would require changing every location in the code where a new object

is created (as well as making sure that all such code locations also have

knowledge of the new concrete type, by including for instance a concrete class

header file). If all factory objects are stored globally in a singleton object, and all

client code goes through the singleton to access the proper factory for object

creation, then changing factories is as easy as changing the singleton object.
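A minimal C# sketch of the pattern described above; IButton, IWidgetFactory, and the concrete class names are invented for the example:

    using System;

    // Abstract product and abstract factory: clients see only these.
    public interface IButton { void Paint(); }
    public interface IWidgetFactory { IButton CreateButton(); }

    // One concrete family; another family (e.g. MacButton/MacFactory)
    // could be added without touching the client code below.
    public class WinButton : IButton
    {
        public void Paint() { Console.WriteLine("Windows-style button"); }
    }
    public class WinFactory : IWidgetFactory
    {
        public IButton CreateButton() { return new WinButton(); }
    }

    class Client
    {
        static void Main()
        {
            // The one line that picks the product family; everything
            // below it works purely against the abstract types.
            IWidgetFactory factory = new WinFactory();
            IButton button = factory.CreateButton();
            button.Paint();
        }
    }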

H-MVC Framework

First, let's start with the famous MVC pattern; we will come back to the 'H' of H-MVC later. MVC is decomposed into three different components called the Model, the View, and the Controller. These components are connected to each other in such a way that the Controller acts as a mediator between the Model and the View. Each component can send and receive events from its connected counterpart(s). Let's see how MVC can help us in organizing and reusing our code:

• The View is where the interactions with the GUI are programmed. This is typically the place where user-input validation rules are programmed. The View fires events (called ViewEvents) to the Controller, which processes them accordingly. The View also receives view events from the Controller to perform GUI updates.

• The Controller is a true mediator: it receives events from either the Model or the View and decides which component must be triggered to process the action. There is no business logic here; it's just a relay, which "forwards" events from one component to the other.

• The Model contains all the rest! In other words, anything that is not GUI related or process-flow related. Typically, business logic and data-model methods are defined here. The Model, like the above components, can send and receive events (called ModelEvents).

HMVC Design rules

There are a few important rules about this pattern:

1. The Model and the View don't know anything about each other. They use the Controller to exchange information.

2. There are no constraints on how components talk to each other. It can be a direct method call, listeners, and so on; this framework implements a listener approach.

3. Finally, don't be surprised if you see other variations of this pattern. MVC has been widely used in the computer industry and hence widely adapted, but the main idea remains the same. Just for information, the framework implements late instantiation in order to minimize memory usage.
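A minimal C# sketch of these rules using the listener approach; all class and event names are invented for the example:

    using System;

    public class Model
    {
        public event Action<string> ModelEvent;
        public void Update(string data)
        {
            if (ModelEvent != null) ModelEvent(data); // notify listeners
        }
    }

    public class View
    {
        public event Action<string> ViewEvent;
        public void UserInput(string input)
        {
            if (ViewEvent != null) ViewEvent(input); // fire a ViewEvent
        }
        public void Render(string data)
        {
            Console.WriteLine("View shows: " + data);
        }
    }

    public class Controller
    {
        public Controller(Model m, View v)
        {
            // The controller is a pure relay: it forwards events between
            // the Model and the View, which never reference each other.
            v.ViewEvent += delegate(string input) { m.Update(input); };
            m.ModelEvent += delegate(string data) { v.Render(data); };
        }
    }

    class Demo
    {
        static void Main()
        {
            Model model = new Model();
            View view = new View();
            new Controller(model, view);
            view.UserInput("hello"); // View -> Controller -> Model -> Controller -> View
        }
    }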

Cross-Development Environment

Developers new to embedded development often struggle with the concepts and

differences between native and cross-development environments. Indeed, there are often

three compilers and three (or more) versions of standard header files such as stdlib.h.

Debugging an application on your target embedded system can be difficult without the

right tools and host-based utilities. You must manage and separate the files and utilities

designed to run on your host system from those you intend to use on your target.

When we use the term host in this context, we are referring to the development

workstation that is sitting on your desktop and running your favorite Linux desktop

distribution. Conversely, when we use the term target we are referring to your embedded

hardware platform. Therefore, native development denotes the compilation and building

of applications on and for your host system. Cross-development denotes the compilation

and building of applications on the host system that will be run on the embedded system.

Keeping these definitions in mind will help you stay on track through this chapter.

Figure shows the layout of a typical cross-development environment. A host PC is

connected to a target board via one or more physical connections. It is most convenient if

both serial and Ethernet ports are available on the target. Later when we discuss kernel

debugging, you will realize that a second serial port can be a very valuable asset.

Figure Cross-development setup

In the most common scenario, the developer has a serial terminal on the host connected to

the RS-232 serial port, possibly one or more Telnet terminal sessions to the target board,

and potentially one or more debug sessions using Ethernet as the connection medium.

This cross-development setup provides a great deal of flexibility. The basic idea is that

the host system provides the horsepower to run the compilers, debuggers, editors, and

other utilities, while the target executes only the applications designed for it. Yes, you

can certainly run compilers and debuggers on the target system, but we assume that your

host system contains more resources, including RAM, disk storage, and Internet

connectivity. In fact, it is not uncommon for a target embedded board to have no human-

input devices or output displays.

Cross compiler

A cross compiler is a compiler capable of creating executable code for a platform other than the one on which the compiler is run. Cross compiler tools are used to generate executables for embedded systems or for multiple platforms. A cross compiler is used to compile for a platform on which it is not feasible to do the compiling, such as microcontrollers that don't support an operating system. It has become more common to use this tool for paravirtualization, where a system may have one or more platforms in use.

Uses of cross compilers

The fundamental use of a cross compiler is to separate the build environment from the

target environment. This is useful in a number of situations:

• Embedded computers where a device has extremely limited resources. For

example, a microwave oven will have an extremely small computer to read its

touchpad and door sensor, provide output to a digital display and speaker, and to

control the machinery for cooking food. This computer will not be powerful

enough to run a compiler, a file system, or a development environment. Since

debugging and testing may also require more resources than are available on an

embedded system, cross-compilation can be less involved and less prone to errors

than native compilation.

• Compiling for multiple machines. For example, a company may wish to support

several different versions of an operating system or to support several different

operating systems. By using a cross compiler, a single build environment can be

set up to compile for each of these targets.

• Compiling on a server farm. Similar to compiling for multiple machines, a

complicated build that involves many compile operations can be executed across

any machine that is free regardless of its brand or current version of an operating

system.

• Bootstrapping to a new platform. When developing software for a new platform,

or the emulator of a future platform, one uses a cross compiler to compile

necessary tools such as the operating system and a native compiler.

• Compiling native code for emulators of older, now-obsolete platforms like the Commodore 64 or Apple II by enthusiasts who use cross compilers that run on a current platform (such as Aztec C's MS-DOS 6502 cross compilers running under Windows XP).

Use of virtual machines (such as Java's JVM) resolves some of the reasons for which

cross compilers were developed. The virtual machine paradigm allows the same compiler

output to be used across multiple target systems.

Typically the hardware architecture differs (e.g. compiling a program destined for the

MIPS architecture on an x86 computer) but cross-compilation is also applicable when

only the operating system environment differs, as when compiling a FreeBSD program

under Linux, or even just the system library, as when compiling programs with uClibc on

a glibc host.

Component-Oriented Programming

"Component Oriented Programming offers a unique programming-centered approach to

component-based software development that delivers the well-developed training and

practices you need to successfully apply this cost-effective method. There will be a

unified component infrastructure for building component software using JavaBeans, EJB,

OSGi, CORBA, CCM, .NET, and Web services. Component-oriented programming

supports constructing software systems by composing independent components into a

software architecture. However, existing approaches decouple implementation code from

architecture, allowing inconsistencies, causing confusion, violating architectural

properties, and inhibiting software evolution.

Principles of Component-Oriented Programming

Systems that support component-oriented programming and the programmers that use

them adhere to a set of core principles that continues to evolve. The most important

of these include:

• Separation of interface and implementation

• Binary compatibility

• Language independence

• Location transparency

• Concurrency management

• Version control

• Component-based security

Separation of Interface from Implementation

The fundamental principle of component-oriented programming is that the basic unit in

an application is a binary-compatible interface. The interface provides an abstract service

definition between a client and the object. This principle contrasts with the object-

oriented view of the world that places the object rather than its interface at the center. An

interface is a logical grouping of method definitions that acts as the contract between the

client and the service provider. Each provider is free to provide its own interpretation of

the interface—that is, its own implementation. The interface is implemented by a black-

box binary component that completely encapsulates its interior. This principle is known

as separation of interface from implementation.

To use a component, the client needs to know only the interface definition (the service

contract) and be able to access a binary component that implements that interface. This

extra level of indirection between the client and the object allows one implementation of

an interface to be replaced by another without affecting client code. The client doesn’t

need to be recompiled to use a new version. Sometimes the client doesn’t even need to be

shut down to do the upgrade. Provided the interface is immutable, objects implementing

the interface are free to evolve, and new versions can be introduced. To implement the

functionality promised by an interface inside a component, you use traditional object-

oriented methodologies, but the resulting class hierarchies are usually simpler and easier

to manage. Another effect of using interfaces is that they enable reuse. In object-oriented programming, the basic unit of reuse is the object.

In theory, different clients should be able to use the same object. Each reuse instance

saves the reusing party the amount of time and effort spent implementing the object.

Reuse initiatives have the potential for significant cost reduction and reduced product-

development cycle time. One reason why the industry adopted object-oriented

programming so avidly was its desire to reap the benefits of reuse.
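A minimal C# sketch of the principle; the interface and class names are invented for the example:

    using System;

    // The contract: clients program against this interface only.
    public interface IStockQuote
    {
        decimal GetPrice(string symbol);
    }

    // One black-box implementation behind the interface.
    public class RandomQuoteService : IStockQuote
    {
        private readonly Random rng = new Random();
        public decimal GetPrice(string symbol)
        {
            return (decimal)(rng.NextDouble() * 100.0);
        }
    }

    class Client
    {
        static void Main()
        {
            // The only point where a concrete type is named.
            IStockQuote quotes = new RandomQuoteService();
            Console.WriteLine(quotes.GetPrice("ACME"));
        }
    }

Because Client depends only on IStockQuote, RandomQuoteService could later be replaced by, say, a cached or remote implementation without recompiling the client.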

Binary Compatibility Between Client and Server

Component-oriented programming revolves around packaging code into components, i.e.,

binary building blocks. Changes to the component code are contained in the binary unit

hosting it; you don’t need to recompile and redeploy the clients. However, the ability to

replace and plug in new binary versions of the server implies binary compatibility

between the client and the server, meaning that the client’s code must interact at runtime

with exactly what it expects as far as the binary layout in memory of the component entry

points. This binary compatibility is the basis for the contract between the component and

the client.

Language Independence

In component-oriented programming, the server is developed independently of the client.

Because the client interacts with the server only at runtime, the only thing that binds the

two is binary compatibility. A corollary is that the programming languages that

implement the client and server should not affect their ability to interact at runtime.

Language independence means exactly that: when you develop and deploy components

your choice of programming language should be irrelevant. Language independence

promotes the interchangeability of components, and their adoption and reuse.

Location Transparency

A component-based application contains multiple binary components. These components

can all exist in the same process, in different processes on the same machine, or on

different machines on a network. Recently, with the advent of web services, components

can also be distributed across the Internet. The underlying component technology is

required to provide a client with location transparency, which allows the client code to

be independent of the actual location of the object it uses. Location transparency means

there is nothing in the client’s code pertaining to where the object executes.

The same client code must be able to handle all cases of object location although the

client should be able to insist on a specific location as well. Note that in the figure, the

object can be in the same process (e.g., Process 1 on Machine A), in different processes

on the same machine (e.g., Process 1 and Process 2 on Machine A), on different

machines in the same local network, or even across the Internet (e.g., Machines B and C).

Location transparency is crucial to component-oriented programming for a number of

reasons. First, it lets you develop the client and components locally (which leads to easier

and more productive debugging), yet deploy the same code base in distributed scenarios.

Second, the choice of using the same process for all components, or multiple processes

for multiple machines, has a significant impact on performance and ease of management

versus scalability, availability, robustness, throughput, and security. Organizations have

different priorities and preferences for these tradeoffs, yet the same set of components

from a particular vendor or team should be able to handle all scenarios. Third, the

location of components tends to change as the application’s requirements evolve over

time.

Concurrency Management

A component developer can't possibly know in advance all the possible ways in which a

component will be used and particularly whether it will be accessed concurrently by

multiple threads. The safest course is for you to assume that the component will be used

in concurrent situations and to provide some mechanism inside the component for

synchronizing access. However, this approach has two flaws. First, it may lead to

deadlocks; if every component in the application has its own synchronization lock, a

deadlock can occur if two components on different threads try to access each other.

Second, it’s an inefficient use of system resources for all components in the application to

be accessed by the same thread.

The underlying component technology must provide a concurrency management service: a way for components to participate in some application-wide synchronization mechanism,

even when the components are developed separately. In addition, the underlying

component technology should allow components and clients to provide their own

synchronization solutions for fine-grained control and optimized performance.

Versioning Support

Component-oriented programming must allow clients and components to evolve

separately. Component developers should be able to deploy new versions (or just fixes)

of existing components without affecting existing client applications. Client developers

should be able to deploy new versions of the client application and expect it to work with

older versions of components. The underlying component technology should support

versioning, which allows a component to evolve along different paths and allows different versions of the same component to be deployed on the same machine, side by side. The component technology should also detect incompatibilities as soon as possible and alert the client.

Component-Based Security

In component-oriented programming, components are developed separately from the

client applications that use them. Component developers have no way of knowing how a

client application or end user will try to use their work. A benign component could be

used maliciously to corrupt data or transfer funds between accounts without proper

authorization or authentication. Similarly, a client application has no way to know

whether it’s interacting with a malicious component that will abuse the credentials the

client provides. In addition, even if both the client and the component have no ill intent,

the end application user can still try to hack into the system or do some other damage

(even by mistake).

Component Design Tools

Rapid Application Development (RAD) refers to a type of software development

methodology that uses minimal planning in favor of rapid prototyping. The "planning" of

software developed using RAD is interleaved with writing the software itself. The lack of

extensive pre-planning generally allows software to be written much faster, and makes it

easier to change requirements.

Components based on the Rapid Application Development paradigm include:

• Add-in Express – a visual RAD tool for developing COM add-ins, smart tags, RTD servers and Excel user-defined functions in Visual Studio .NET and Delphi.

• Panther – a cross-platform (Windows, Unix, Linux; TUI, GUI, Web), cross-database RAD toolset for developing n-tier, component-based, database-oriented applications. It builds native components using the same visual paradigm used for client screens. Editions for middleware from IBM, BEA and Microsoft exist and can be combined.

Component Testing Tools

Testing software components

When to test a component

One of the first issues in testing software components is whether the effort is justified at all. When is it worthwhile to test a component in a system? If the cost of the component failing would be greater than the effort required to test it, then the component should be tested.
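One way to read this rule is as an expected-cost comparison. All figures in the sketch below are purely illustrative.

public class TestDecision {
    public static void main(String[] args) {
        double failureProbability = 0.05; // estimated chance the component misbehaves in production
        double failureCost = 100_000.0;   // cost if it does (rework, downtime, liability)
        double testingEffort = 2_000.0;   // cost of designing and running the tests

        // Test the component when its expected failure cost exceeds the testing effort.
        double expectedFailureCost = failureProbability * failureCost;
        System.out.println(expectedFailureCost > testingEffort
            ? "Test the component"
            : "Testing effort outweighs the risk");
    }
}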

Which components to test

When risk classification of the use cases is mapped onto components, we find that not all

components need to be tested to the same coverage level [10].

• Reusable components - Components intended for reuse should be tested over a wider range of values (see the test sketch after this list).

• Domain components - Components that represent significant domain concepts

should be tested both for correctness and for the faithfulness of the representation.

• Commercial components - Components that will be sold as individual products

should be tested not only as reusable components but also as potential sources of

liability.
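For instance, a reusable component can be exercised over boundary and extreme values with a parameterized test. The sketch below uses JUnit 5, with the JDK's ArrayDeque standing in for a reusable stack component.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

// The wider range of values includes zero, typical values, and both int extremes.
class StackComponentTest {

    @ParameterizedTest
    @ValueSource(ints = {0, 1, 7, 1_000, -1, Integer.MAX_VALUE, Integer.MIN_VALUE})
    void pushThenPopReturnsSameValue(int value) {
        java.util.ArrayDeque<Integer> stack = new java.util.ArrayDeque<>();
        stack.push(value);
        int popped = stack.pop();
        assertEquals(value, popped);
    }
}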

The ultimate goal of testing

Testing a software component is done to resolve the following issues:

• Check whether the component meets its specification and fulfills its functional requirements.

• Check whether the correct and complete structural and interaction requirements, specified before the development of the component, are reflected in the implemented software system.

Problems in testing software components

The focus now shifts to the most important problem in component software technology, i.e. coming up with efficient testing strategies for component-integrated software systems.

Building reusable component tests

Current software development teams use ad-hoc approaches to create component test suites. It is also difficult to come up with a uniform and consistent test-suite technology that caters to the differing requirements of test tools, such as different information formats, repository technologies, database schemas and test access interfaces, for testing such diverse software components. With the increasing use of software components, the tests used for these components should be reused as well [25]. Systematic tools and methods are required to set up these reusable test suites and to organize, manage and store the various component test resources, such as test data and test scripts.
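One common way to make component tests reusable is to write the suite once against the component's interface and let each concrete component supply a factory method. The JUnit 5 sketch below uses hypothetical class names, with JDK collection classes standing in for components.

import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Written once against the interface; reused by every component that implements it.
abstract class CollectionComponentContract {

    // Each component under test supplies a fresh instance.
    protected abstract java.util.Collection<String> createComponent();

    @Test
    void addedElementIsContained() {
        java.util.Collection<String> c = createComponent();
        c.add("x");
        assertTrue(c.contains("x"));
    }
}

// The same test suite reused for two different components:
class ArrayListComponentTest extends CollectionComponentContract {
    protected java.util.Collection<String> createComponent() {
        return new java.util.ArrayList<>();
    }
}

class HashSetComponentTest extends CollectionComponentContract {
    protected java.util.Collection<String> createComponent() {
        return new java.util.HashSet<>();
    }
}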

Component Assembly Tools

After defining a component and writing its business logic, we can use the component from a simple client program. A big advantage of the CORBA Component Model is the ability to connect components together to form larger structures called component assemblies.

A component assembly is a set of components (with their component descriptors) together with a Component Assembly Descriptor (CAD). Based on the CAD we can generate an assembly object that instantiates the components and connects them together to form a component assembly.

CCM component assembly tools

Assembly Descriptor Generator

The component descriptor files form the basis of the higher-level assembly descriptor, which describes the components of the assembly and their connections. That means that all the information in the assembly descriptor comes from the component descriptors of the related components (plus some additional data supplied through a GUI).

Assembly Object Generator

At runtime a managing object is needed that can establish an assembly instance. The

assembly object creates the component instances and connects their receptacles and

facets. All information for generating an assembly object comes from the assembly

descriptor (or its DOM model in memory). Note that this object must eventually be able

to create local or local/remote assembly instances.
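A minimal Java sketch of what such a generated assembly object does, with hypothetical stand-ins for the CCM-generated interfaces: it instantiates the components and plugs one component's facet into another component's receptacle.

// All names here are illustrative; a real CCM assembly object works with
// generated homes, facets and receptacles instead of plain Java classes.
public class OrderAssembly {

    interface PricingFacet {
        double priceOf(String item);
    }

    static class PricingComponent implements PricingFacet {
        public double priceOf(String item) { return 9.99; }
    }

    static class OrderComponent {
        private PricingFacet pricing; // the receptacle

        // Connect step: plug a facet into the receptacle.
        void connectPricing(PricingFacet facet) { this.pricing = facet; }

        double totalFor(String item, int quantity) {
            return pricing.priceOf(item) * quantity;
        }
    }

    public static void main(String[] args) {
        // Derived from the assembly descriptor: instantiate, then connect.
        PricingComponent pricing = new PricingComponent();
        OrderComponent orders = new OrderComponent();
        orders.connectPricing(pricing);
        System.out.println(orders.totalFor("widget", 3));
    }
}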

UML Parser

As with components, there should be a way to define component assemblies in a UML

diagram. Therefore we need a UML parser that reads the UML-XMI file and translates

the data into the DOM model used by the assembly descriptor. The OMG has not yet

defined a mapping between UML and CCM assemblies.
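The first step of such a parser can be sketched with the standard JAXP DOM API; the file name is hypothetical.

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Read the UML-XMI file into a DOM tree; later stages would walk this tree
// and translate it into the assembly descriptor's model.
public class XmiReader {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // XMI makes heavy use of namespaces
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document xmi = builder.parse(new java.io.File("assembly.xmi"));
        System.out.println("Root element: " + xmi.getDocumentElement().getNodeName());
    }
}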

Assembly Packaging Tool

After generating the component assembly and its descriptor file, we have to package these files into a zip file called a component assembly package. The assembly packaging tool provides this functionality to the assembly developer.
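A sketch of the packaging step using java.util.zip; the file names and the archive name are assumptions for illustration.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Bundle the assembly descriptor and the component packages into a single
// zip archive, the component assembly package.
public class AssemblyPackager {
    public static void main(String[] args) throws Exception {
        String[] files = {"assembly.cad", "pricing.zip", "orders.zip"};
        try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream("order-assembly.zip"))) {
            for (String name : files) {
                zip.putNextEntry(new ZipEntry(name));
                try (FileInputStream in = new FileInputStream(name)) {
                    in.transferTo(zip); // copy the file's bytes into the archive
                }
                zip.closeEntry();
            }
        }
    }
}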

Assembly Deployment Tool

On the target host the component assembly package must be unzipped, and the assembly must be deployed in the application server. The assembly deployment tool provides this functionality to the assembly deployer.