
Winter Issue, January 2000

Cover Story

Super Models, Super Geeks, and Design Patterns
Christian Buckley and Darren Pulsipher

Features

Fine Tuning Rose to Complement Your Process
Steve Franklin

The UML C++ Implementation Model
D. J. Supel

Integration Focus: Use Case Management
Catherine Connor

Grokking COM+ and UML
Dr. Al Major

Driven to Success: A Rose Case Study Featuring Cambridge Technology Partners and Carey International
Lisa Dornell

Departments

Publisher's Note Magnus Christerson

Magnus Opus Magnus Christerson & Wojtek Kozaczynski
Rose, Rational's Architecture Practice, and Architecture Reuse

Editor's Email Terry Quatrani

Rose Around the World Nasser Kettani
We Buried the Millennium Bug...

Amigo Page Dr. James Rumbaugh
Bridging the Gap—Building Complex Systems by Leveling and Layering

Rose 101 Matt Terski
Building ATL Components

Extending Rose John Hsia
Using COM in Your Rose Extensibility Solution

Copyright © 1999 Rational Software and Miller Freeman, Inc., a United News & Media company.

Super Models, Super Geeks, and Design Patterns

The Dawn of the Digital Sweater Christian Buckley and Darren Pulsipher

Christian Buckley, co-founder of QOSES.

Darren Pulsipher, co-founder of QOSES.

It is a cold winter afternoon in the European fashion capital — Paris. People have been sitting patiently in their seats for hours, photographers poised to shoot, all waiting for the lineup of world-renowned designers to display the latest in their spring collections. The designers — even the most critically acclaimed members of the largest fashion houses in Europe — know that this show could very well determine their share of the $250 billion fashion and footwear industry. Everything must be perfect. Hundreds of thousands of dollars are spent on this event. Of specific concern is finding the perfect models for their designs. The designers have hand-picked teams of supermodels to showcase their best designs. Each garment is meticulously crafted to the measurements of the model. In turn, each model is selected for features that accentuate the nuances of each individual garment. If done correctly, dress and model are perfectly matched, and the design will all but guarantee millions of dollars in sales for the designer.

We now transport to a parallel universe ... It is a cold summer afternoon (who keeps messing with the AC?) in a cube farm somewhere in the heart of the software capital of the world — Silicon Valley. People have been sitting patiently in their ergonomically-correct chairs for hours trying to decipher the latest design pattern threads on the wiki wiki web (http://www.c2.com/cgi/wiki?WelcomeVisitors) while cross-referencing proceedings from the last PloP conference. These pattern designers have spent countless hours trying to clearly define "the" design pattern that could bring millions of hits to their website. They have selected the perfect model to demonstrate their design. Time has been spent elaborating the model and design, creating the perfect example.

What can we learn from this parallel between cloth creators and code manipulators? Plenty.

Back in the real world, it's yet another Saturday afternoon filled with spillover work from the previous week. I look at my watch and realize I have only about an hour and a half to get to the mall and buy my mother a birthday present. My time has been consumed by work this weekend — I've been trying to locate a design pattern to help a project come in on time and on budget. After taking a moment to prioritize my daily tasks, I grab my car keys and head out the door, intent upon finding the perfect gift for Mom. I arrive at the mall in moments, and witness a breathtaking array of clothing and accessory selling grandeur. I enter Nordstrom's. Truly, it is a veritable cornucopia of shopping splendor.1

My first task is to hunt for the model that most closely resembles my Mom, because clothing designers create outfits for specific body types, right? Of course not. I ponder, "So how does all of this transfer to software development?" The birthday present is selected easily enough — I grab a sweater. How can you go wrong with a sweater? I'll personalize it with a great card, of course. But again I ask — how does this apply to my software project? Maybe I'm looking for the design pattern equivalent to the sweater — something that fills the general requirements, but can be personalized to fit the specifics of the project.

As I pay for the sweater, I start running through my other tasks for the day. So much to do. If only Nordstrom's had a software development department (located somewhere between the shoes and cosmetics). Even in my daydream, I can't seem to locate the design patterns I require. I hunt through the racks and shelves for that perfect design pattern - the one that matches the constraints of my project precisely. Of course, there is no perfect match, so I try to find something close to what I need — close enough that I can adapt it to my project and speed up my time to delivery. My mind reeling with thoughts of cyber models wearing digital sweaters, I wake from my daydream just as the pianist in the lobby starts into the second verse of the elevator music version of U2's "I Still Haven't Found What I'm Looking For". I hurriedly exit the mall. There is work to be done.

Modeling Software Design Patterns in UML

For those of you unfamiliar with the history of reusable design, object-oriented technology is based on a software design and programming methodology created at AT&T Bell Laboratories that enables the creation of reusable modules of software code that can be easily combined together to form a complete application. Early tests of object-oriented systems, although limited by the primitive state of the technology, have demonstrated the order-of-magnitude improvements in programming time and cost that these systems can deliver. By 1994, object-oriented systems became widely recognized as the predominant enabling technology allowing applications developers to deliver faster, cheaper, more easily extensible, and more customized solutions to customers2. The theory behind design patterns follows this same premise — the creation of templates or working examples that can be easily adapted to different projects. But where do you find these patterns?

This is where the software industry mimics the fashion industry as described above. Designs are largely proprietary, and not easy to use. While some work has been done standardizing software design patterns so that they can be easily interpreted and reused (http://www.c2.com/ppr/about/portland.html), all of these forms and standards are in text format and remain difficult to map to graphical modeling languages such as the UML. Even if you locate what looks to be an applicable design pattern, it can be very tedious and time consuming to apply that pattern to multiple elements in your model.

In the course of the last year or so, the industry has seen a huge leap in the number of design patterns being described using UML. These models have been extremely useful tools in everyday software development, but they are still tedious to apply. Each time a design pattern is to be applied, the relevant information in the pattern must be copied, pasted, and modified in the working model. After performing this action repeatedly — and for multiple projects — the need for a way to apply these patterns to our designs automatically became apparent. Mass production demands it.

The search began with the companies closest to the heartbeat of Rational Software — Rational's Rose Technology Partners (formerly known as RoseLink Partners). Looking at the latest partner list, it was easy to identify a couple of companies working with design patterns and Rational Rose. Each one had great ideas, offering through their web sites several design patterns from some of the available design pattern books. However, they could not offer the level of functionality necessary to automate the process. What was needed was the ability to locate and adapt patterns to fit individual needs, as well as the ability to create new patterns and to make them available to be used again in the future.

OCL and UML: A Powerful Combination

About this time, Object Constraint Language (OCL) (http://www.rational.com/uml/resources/documentation/index.jtmpl) was adopted into the UML standard. Upon investigation, I discovered this to be an interesting tool that could be used as more than just a constraint language — it could be used to describe design patterns in terms of model elements. OCL allows designers to describe aspects of the model using the constructs developed in the model. As an experiment, I tried modeling some of the GOF patterns (http://hillside.net/patterns/DPBook/GOF.html) using UML and OCL together. I was glad to see that the combination of UML and OCL was a useful and powerful way of describing the pattern in a way that could be applied to my design.

Based on these findings and some additional ideas gathered from the Rose Forum, I set out to create a formal tool that harnessed both the UML and the OCL. Using the Rose Extension Interface (REI) and Visual Basic, I wrote an OCL parser and a lexical analyzer that read the pattern from one Rose model, the input from a design in another Rose model, and produced a new design according to the pattern. Over the next few weeks, the Quarry pattern-mining tool was assembled (http://www.qoses.com/products). A few days of experimenting resulted in the discovery of several patterns in my current development project. From these patterns, I was able to create reusable design patterns, which could then be applied to model elements in the design. With Quarry, I found that consistent and accurate designs could be generated faster than before, decreasing the amount of time I had to spend performing redundant design work and allowing me to focus almost entirely on the quality of the design.

I found my digital sweater.

Making the Pattern Re-useable

The joy of discovery! After utilizing Quarry on several projects, I created a simple process to define patterns and reuse them in my designs:

● Define the Pattern in Generic Terms in UML The first step in creating reusable patterns is the most important. In my experience, it is often difficult dealing with patterns that have abstract meaning. Create the pattern in a UML model and give it meaningful, yet generic names. In this way, the pattern remains descriptive but non-specific. For example, take a look at the ever-popular Model-View-Controller pattern (Figure 1).

As you can see, the names of the classes are descriptive, yet generic enough that they do not tie the pattern to any particular project.

● Prototype the Pattern It has always been a good idea to make sure the pattern can be used in a practical manner. You should be able to take the generic model and generate and run the code. How else will you know if your pattern is going to work or not?

● Determine the Context of Each Element in the Pattern Determining the context of each element in the pattern can be the hardest part of the process. It is important to distinguish three things when performing this step: the Input, the Pattern, and the Output. The input is the selected model element. This model element is then applied to the pattern, creating the output design element.

For example, if I have a class named "Foobar" and I need to apply my "MVC" pattern, I must choose the context to be that of the selected class. Using the keyword "context" in OCL, I specify which part of the model I will use as the input to the pattern.

● Replace the Generic Terms with OCL Expressions

Now that the input has been selected for each model element in our pattern, we need to specify the string value that results from each input. All of the OCL expressions for this part of the process must result in strings. These strings are then included as the values of the properties for the model elements. Let's look at our example.

OCL is a fairly rich language: you can create conditional statements, iterations, and set and collection operations. More examples can be found on our web site at http://www.qoses.com.

Finding Patterns in Your Design

As I drive up to Mom's house, I can see that the party is in full swing. I pull the emergency brake, and again my thoughts drift to my projects, my customers, and design patterns. There are patterns in my current project that are prime candidates for reuse. Come to think of it, every project I've ever worked on has had familiar patterns throughout the code. In the past, I've personally written dozens of macros in my favorite editor to perform repetitive tasks as a result of not having an automated pattern-generating tool. Clearly there is an enormous benefit to building this same functionality into a visual modeling tool such as Rational Rose.

Here are some tips that I have used to identify patterns in my software designs:

● Look for things that are similar or that you find yourself typing repeatedly.

● Look for commonality in attributes and methods. For example, you may decide that all attributes should be private and have accessor operations. Another example may include third-party tool requirements: many OODB systems require specific methods for attributes to store data persistently. This is also a pattern.

● Look at scenarios. For example, for each known accessor operation there is also a scenario that contains all of the parameters of the operation and shows the primary flow of that operation. Another example might be that you start noticing similarities in the scenarios for a number of classes, but the objects and messages are slightly different.

● Look at state diagrams. For example, for each state in the state diagram there is a corresponding value in an enumeration. Or you may notice that all of your controller classes have the same set of initialization states and termination states. This is another possible pattern in your design.

● Be careful not to apply a design pattern where inheritance or a parameterized class will suffice. Typically, patterns are more dependent on the input class and its attributes, methods, scenarios, states, and so forth.

● Remember, the key to design patterns is to decrease time and to increase product quality.

Birthday Presents

Reuse is the key to the order-of-magnitude improvements in programming time that object-oriented systems promise. As stated by Burgelman, it is "the predominant enabling technology allowing applications developers to deliver faster, cheaper, more easily extensible, and more customized solutions to customers."2 The creation of design patterns that can be easily adapted to different projects is a step in the right direction, and tools that automate this process are important to development teams.

Mom always encourages me to do the best that I can in my work. Translation: quality is important. Mom may not know much about design patterns, but I think she understands the basics of customer satisfaction. She may have liked the sweater, but the birthday card made all the difference. It would have been nearly impossible for me to find an article of clothing to fit her exact physical model, and so I chose something that fit the general model and added the personal touch. As a result, I increased the quality of my gift and satisfied my customer — Mom. Applying this concept to your software project is just as simple. Patterns reduce the amount of time you spend coding, and increase the time you can spend building quality into your products.

About the Authors

Darren Pulsipher and Christian Buckley are co-founders of QOSES, a software and process development company based in Tracy, CA. (http://www.qoses.com) The QOSES Web site offers the Internet's largest collection of free Rose scripts and models, as well as the Quarry Pattern Mining Tool. By using the UML's Object Constraint Language (OCL), users are able to define design patterns that can be used to generate designs, documentation, and code. QOSES also offers Bouquet (formerly Florist), an add-on to Rational Rose. Darren has been modeling (visual) for over nine years, while Christian dreams of someday playing the piano in the lobby of a Nordstrom's.

Footnotes

1 For those of you unfamiliar with Nordstrom's, it's a chain of upscale department stores distinguishable from other upscale department stores by the existence of a grand piano in the lobby, at which sits a pianist fluent in Broadway show tunes, light jazz, and the translation of good music into bad.

2 Strategic Management of Technology and Innovation, Burgelman, p. 280.


Fine Tuning Rose to Complement Your Process

Steve Franklin, MacDonald Dettwiler and Associates

Steve Franklin of MacDonald Dettwiler and Associates

The Rose Extensibility Interface (REI) provides a tremendous capability to customize the appearance and informational content of a given Rose model. Prior articles by John Hsia in Rose Architect have presented valuable information on the REI and property customization. This article will continue with the thrust of enhancing and automating process through use of the REI. Customization of REI properties will be demonstrated through presentation of an enhanced use case documentation interface.

The key parts of the REI required to customize the model include the API and the scripting engine. By extending the capabilities of Rational Rose, the tool can more effectively accommodate your internal needs and processes in a number of areas. Some areas where the REI has improved capability include use case documentation, requirements traceability, model completeness and consistency, automated document generation, and enhanced support for configuration management. One of the most powerful features is the customization of model properties that provides the ability to "add" properties to any REI class at run-time. These properties can customize the information associated with any Rose class and be manipulated and displayed through additional scripts.

The UseCase class is a perfect candidate for this property customization. Rose provides an excellent visual interface for documenting and interacting with use cases. The REI, however, gives us the ability to further customize information associated with use cases to satisfy contractual, documentation, or problem domain requirements. In addition to demonstrating property customization, I would like to take this opportunity to propose a potential enhancement to use case documentation within Rational Rose.

Improving Use Case Documentation

Use cases have always been the subject of healthy and vigorous discussions whenever questions are raised regarding contents and granularity. There are logical arguments that justify massive and coarse use cases, and there are also equally valid reasons to have very specific and granular use cases. Because of this, it is vital to define the role of use cases in accordance with the needs appropriate for your specific project or phase of the software lifecycle.

Given that use case documentation needs are dependent on many variables, it would be unwise for Rose to enforce specific documentation or informational content on users. As a result, Rose has provided a relatively generic and open means of capturing documentation for use cases (Figure 1).

This interface is sufficient for capturing use case information, but there may be room for more detail if the project is striving for completeness, consistency, and/or support for document generation.

If you feel that you have a need to enforce or increase use case documentation within your project, you will have to decide:

● How the information will be stored.
● How the information will be entered.
● What information will be entered.

This article will address the first two points in some detail, and will touch on the third point through presentation of a sample use case entry form. The additional use case information presented in the sample serves as one possible option. Many discussions on use case documentation and templates reside in the archives of comp.object, OTUG and other forums. I encourage you to investigate alternative templates that may better suit your project's needs.

Customizing Properties in Your Model

Rose provides the ability to add properties to a specific instance of a model. This allows you to customize portions of your model to extend information associated with an object's state.

For example, it might be sensible to add a "requirements" property to use cases to support traceability. This can be easily achieved using the REI and some very simple scripting techniques. A brief outline of property customization will serve as a quick review of this.

Properties are added to classes of the model as shown in Listing 1.

When adding properties using AddDefaultProperty, several things must be identified:

● The class name ("UseCase") which will be given the new default property.

● The tool name ("Demo") identifying the tool that will contain the property - C++, Default, etc.

● The set name ("Default") which contains the new property.

● The name of the default property ("ID" or "Requirements") itself.

● The data type of the new property ("String").

● The default value for the property ("" meaning no initial value).

Various Rose classes can be customized in this way, and new tools can be created to organize this information. It will become clear in the next section why the tool "Demo" was used.

Sample Use Case Interface

Before deciding how to customize the use case, it is important to identify the objectives of the use case. As mentioned previously, the content of a use case will be greatly affected by its purpose. High-level use case documentation to support a heavily incremental RAD process will be very different in appearance from a very detailed analysis document using use cases in the analysis phase of a mission-critical waterfall project (it is possible!).

For this example, we will assume that our primary objectives are documentation consistency across a large analysis team and granular use case details to support automated document generation. At the very least, the following should be associated with the use case:

Identification:

● Name
● Unique ID
● Stereotype
● Traceable requirement
● Overview

Description:

● Start state
● End state
● Description
● Variations

Notes:

● Temporary "scratchpad" notes

In order to manipulate this information for a given use case, a user interface will have to be developed that can set and change these properties. All of this logic will be executed using RoseScript facilities and interactions with the well-documented Rose API. This approach has been very effective, although the reader should note that we have since pursued a Visual Basic DLL solution to allow for greater control, GUI customization, and maintainability of the code.

The User Interface is designed first by creating a new script (Tools->New Script ... ) and inserting a dialog from the Edit menu. A GUI builder allows you to create prompts, text boxes, drop boxes, group items, and more. The following interface (Figure 2) was created very quickly using the GUI builder.

Once the interface has been established and reviewed by colleagues for completeness, the Main() logic should be structured to support the dialog. Listing 2 outlines the major tasks that must be achieved to support customized use case documentation.

The getSelectedUseCase method selects a single use case (error handling for multiply-selected or unselected use cases is not shown).

The addUseCaseProperties method updates the Rose model with new properties associated with a given use case. The logic of this method is essentially an extension of that demonstrated in Listing 1.

The initUCDialog method updates a dialog object (based on your custom dialog created in Figure 2) with the information contained within the selected use case. If the use case has no custom properties with assigned information, then all of the fields of the dialog will be blank. However, for a use case that has already been modified with some custom property information, the fields of the dialog will be updated with that information.

The dialog is then created and model control is passed to the dialog for display. Once the user has completed changes within the dialog interface and selected "OK", the return value is tested by the code. If "OK" is selected, then getUIContents and saveUIContents are called and the updated field values are copied from the dialog back into the use case via the intermediate options data structure. Finally, as a convenience that takes advantage of the documentation window, a text preview is rendered upon closure of the dialog box, summarizing the attributes that were entered into the user interface.

To use this functionality, the script must be mapped to a Toolbar menu item via the rose.mnu configuration file. When a use case is selected, the user can choose this menu item. The dialog pops up as shown in Figure 3. The user can then make changes and hit OK or Cancel. Assuming that "OK" is selected, the preview is updated and shown in the documentation window (Figure 4).

Supporting Your Decisions

In the context of model customization, there may be no need to further enhance your model content. Rational Rose already does a great job of providing extensibility through support for UML. Notes, stereotypes, and other features allow for excellent documentation of your model throughout the software lifecycle. However, specific needs may arise that justify model customization.

When considering customization, be sure to factor in the following issues:

● If I customize a portion of my model, will it be useful and easily accessible, or will I reduce usability?

● By adding new properties, is it possible that I am duplicating information that could be entered elsewhere?

● By providing an interface to manipulate these properties am I making things more efficient, or am I imposing constraints and overheads that will reduce the quality of my work?

With respect to the use case interface shown above, these are legitimate concerns. For example, this interface is not accessible by double-clicking on the use case. Instead, the user must select "Tools->Modify Use Case..." or Alt-T, M. Opening the use case by double-clicking will bring up the familiar dialog with an additional "Demo" tab (Figure 5).

Custom properties are still accessible from the Demo tab, but are not easily modified. When accessed in this manner, multi-line items (such as UseCaseText) are not easy to enter or update.

Furthermore, the documentation window has been provided with a preview of the properties for convenience when traversing the model (Figure 4). However, if a user accidentally updates the information in this window, those changes will be lost when the use case is modified through the dialog interface. There is no way to set the documentation window to read-only.

Finally, a GUI interface breaks up the information into specific fields. This can be problematic when a large amount of information has to be moved from documents into the model. Cutting and pasting becomes much more tedious when it must be executed field-by-field, rather than once for each use case.

These limitations simply reiterate that you must consider each customization's benefits and limitations. Rose is already a powerful tool, and cost versus value must be considered when extending the software to complement or fine tune your team's process.

When used appropriately, property customization is yet another example of the strength provided by the REI. Users and teams gain a number of advantages when they explore customization through the UML's flexibility and the extensible architecture and design of Rational Rose itself.

When deciding how and where to add new properties to Rose's framework, care should be taken to ensure that the information is useful, unique, and easily accessible. Otherwise, the benefits of a domain- or process-specific model are outweighed by the cumbersome means of accessing and manipulating this information.

The author would like to thank John Hsia and Chris Risley for excellent technical feedback and Lisa Dornell for rapid and valued editorial contributions. Dave Morash and Thomas Kunst also made contributions through early and valuable discussions related to this topic.

Full source code for the sample presented in this article can be downloaded here.

About the Author

Steve Franklin works at the Halifax office of MacDonald Dettwiler and Associates (http://www.mda.ca). His main responsibilities include the architecture and design of military and commercial software systems. He can be reached at [email protected].


The UML C++ Implementation Model

D. J. Supel, Logicon Advanced Technologies

D. J. Supel, Scientist at Logicon Advanced Technologies

The Unified Modeling Language (UML) has quickly been adopted by the industry as the standard object-oriented modeling notation. Developed by Rational Software Corporation and its partners, it is the successor to modeling notations found in Booch, OOSE, OMT, and other methods. The language provides a mechanism for specifying, visualizing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.

Past notations have provided the basics for object-oriented development: class, object, and state diagrams, among others. These notations, however, were rigid and lacked the robustness and extensibility required by most practitioners. The UML, on the other hand, when applied properly, can be extended to support most methodologies or domains, and its robustness allows diagrams so detailed that implementation becomes nearly trivial.

The most common diagram, and often considered the most important, is the class diagram. This diagram presents classes, their associations, generalizations, methods, attributes, dependencies, constraints, and more. Those familiar with Booch or the Object Modeling Technique (OMT) notation find basic UML class diagrams familiar in nature and easy to grasp. Advanced UML class diagrams, however, can be extremely complicated and require a thorough understanding of the notation's standard.

This article discusses why modelers require a thorough understanding of the UML. It presents two modeling efforts in the same domain space, along with their respective C++ implementation models. The first class diagram is one that, unfortunately, is most commonly found in industry. To put it bluntly, this effort is close to useless: it lacks clarity and has possibly hundreds of corresponding implementations, none of which may be what the original designer attempted to represent. The second class diagram presents a slightly more advanced usage of the UML. Its corresponding implementation model is very robust and differs considerably from the Figure 1 implementation model.

A basic UML class diagram is presented in Figure 1 [1]. Here the developer has failed to use any of the features of the UML to provide an accurate and specific visualization of his or her solution:

● Associations do not have names, rolenames, or rolename visibility defined.
● Association multiplicity use is minimal.
● Dependencies have not been stereotyped.
● Notes have not been used to assist in understanding.
● All association navigability has been defined as bi-directional, resulting in a more complex implementation where uni-directional would have been sufficient in most cases.

Figure 2 presents a more thorough representation of the same model. The most noticeable change is the rectangles representing classes have changed into a different icon. This introduces the concept of "stereotypes", one of the built-in extensibility mechanisms of the UML. A stereotype is, in effect, a new class of modeling element introduced at modeling time. It represents a subclass of an existing modeling element with the same form but different intent. Stereotypes are represented as a keyword within matched guillemets or, as in the example, an icon. The UML predefines some 40+ stereotypes, all with unique definitions.

The classes stereotyped in Figure 2 conform to the Objectory Software Engineering methodology's extension to the UML. This methodology instructs designers to stereotype classes as boundary, entity, or control classes. The Objectory stereotypes, however, play no role in the implementation model. They are simply concepts that assist in defining a system design. Other stereotypes, as we will see, can dramatically affect the implementation model.

Also introduced is the concept of a "property". A property is a named value denoting a characteristic of an element. Properties are displayed as a comma-delimited sequence of specifications of the form "keyword = value", all inside a pair of braces ( {} ). In Figure 2, properties have been associated with operations and attributes; all properties are of type Boolean, thus the default value is omitted. Properties are a very powerful tool for adding robustness to modeling efforts. The UML predefines several properties.

The following sections present and compare the default code generation between the basic class diagram (Figure 1) and the advanced class diagram (Figure 2). The following C++ code generation rules are followed:

● UML classes that generate C++ classes will have [2]

  ❍ a default constructor
  ❍ a virtual destructor
  ❍ a copy constructor and assignment operator if the class manages pointers
  ❍ all methods defined as virtual
  ❍ all attribute names and rolenames preceded with an underscore

● UML parameters of class type "Foo" for parameter kind

  ❍ "in" generates "const Foo&"
  ❍ "out" generates "Foo&"
  ❍ "inout" generates "Foo&"

● UML generalizations that generate C++ class inheritance will be defined as virtual.

The rationale for the virtual destructor is simplicity. The C++ language only requires a virtual destructor when an object of a derived class may be deleted through a pointer to its base class; classes derived from such a base are then safe from memory leaks originating in the base. By making all destructors virtual, we avoid this issue throughout developmental evolution and at the same time cause no harm.

The above example presents

● abstract classes
● property "query"
● property "update"
● "in", "out", and "inout" parameter kinds

The advanced class diagram depicts class Detonation as an abstract class (note the italicized name [3]). As an abstract class, no instances of it can be created except by derived classes; thus its constructor is inaccessible to everything except derived classes. The advanced class diagram also defines two operations named "location". The property "query" denotes that the operation does not modify the system state [4], whereas "update" denotes that the operation may alter the system state, although there is no guarantee that it will do so. In C++, "query" operations always map to constant methods. Operations with no return value generate C++ methods with return type "void". In the advanced class diagram, note that the "update" operation fails to denote parameter kind or name. When parameter kinds are absent, the default kind "in" is assumed [5], and parameters without names are left blank. Parameter kinds of "out" and "inout" must be denoted on the diagram and are placed at the beginning of each formal parameter expression.

The above example presents

● generalizations
● stereotype "protected"
● stereotype "private"
● association rolenames
● association rolename visibility

Class Nuclear Detonation is a generalization of class Detonation. The default code generation of generalizations is "public virtual". Had the generalization been something other than "public", you would have seen a "protected" or "private" stereotype on the generalization. The basic class diagram depicts the association between a Nuclear Detonation and a Debris Patch; however, it fails to label the role the object plays or its visibility. The advanced class diagram defines the rolename as "debris patch" and its rolename visibility as "private". The visibility of rolenames is depicted by the standard UML visibility characters: '+' for public, '#' for protected, and '-' for private [6].

The above example presents

● stereotype "becomes" ● association rolenames ● association rolename visibility

Again, the basic class diagram fails to define the rolename and rolename visibility of an association. This time, class Debris Patch is impacted by the failure to provide association role information. The most confusing issue about the code generation of class Debris Patch is its dependency on class Plume. For the basic class diagram's code generation, we can only assume we must #include the file containing class Plume, although we see no utilization of the class. The advanced class diagram, however, has applied the predefined UML stereotype "becomes". The UML defines "becomes" as a dependency whose source and target represent the same instance at different points in time, but each with potentially different values, states, and roles [7]. In our example, an instance of a Debris Patch will become a Plume at a moment in time or space. This shows how a minimal effort, applying a stereotype, adds understandability to a model. The only thing missing is documentation of the transition from one object to another, which could have been associated with the dependency by the use of a note. The code generation of a "becomes" dependency is not straightforward; many implementations are possible. The code generated here supports an implementation where a client operates directly with an instance of Debris Patch, but never a Plume. The implementation of Debris Patch will pass through operation requests to its held instance of Plume when the Debris Patch becomes a Plume. We must assume classes Debris Patch and Plume have common or compatible specifications.

The above example presents

● abstract classes
● stereotype "actor"
● property "abstract"

The icon associated with class Simulated Element in the advanced class diagram represents a class stereotyped as an "actor". An actor is a role of an object or objects outside the system that interacts directly with it as part of a coherent work unit [8]. In this example, the developer is modeling the interactions with the actor; thus it has associated operations. Similar to class Detonation, Simulated Element is an abstract class, but unlike class Detonation, the constructor for class Simulated Element is declared "public". This is possible since no client can create an instance of Simulated Element because it contains "abstract", or "pure virtual" in C++ terminology, operations. Operations denoted "abstract" are considered incomplete and need to be completed by a derived class [9]. We can now conclude that the UML provides two valid means of denoting an entity as abstract: an italic specification and the property "abstract".

The above example presents

● stereotype "constructor" ● constraint "or"

● association rolenames ● association rolename visibility ● association multiplicity

The difference between the basic and advanced class diagrams is quite dramatic in this example. The advanced class diagram defines three construction operators, which define what parameters are required to create instances of the class. These "constructors" support the "or" constraint, which demands that only one of the two associations from class Account may exist at a time [10]. By defining association multiplicity, rolename, and rolename visibility, we see that the association from Account to Person is two instances representing primary and secondary accounts, while the association from Account to Corporation consists of one corporation reference.

The above example presents

● enumerations
● stereotype "enumeration"
● stereotype "import"
● association rolenames
● association rolename visibility
● association multiplicity

By using an association rolename, visibility, and multiplicity, the advanced class diagram better defines the association between Person and Account. This example introduces a means of defining enumerations. The advanced class diagram defines the enumeration "Gender" in the attribute type-expression to be either "male" or "female". Since the definition of Gender does not utilize a namespace, it must be defined internal to class Person (i.e., Person::Gender). Another way of defining enumerations is by creating a class named the enumeration-type name and stereotyping it with "enumeration". The use of a dependency between a class and the enumeration stereotyped with "import" will designate the namespace to which it belongs.

The above example presents

● association rolenames
● association rolename visibility
● association multiplicity

By using association rolenames and rolename visibility, the advanced class diagram generates slightly more descriptive code for Corporation. The effect of using association multiplicity, however, is the major difference between the two class diagrams. The advanced class diagram now correctly shows that a corporation can have zero or more associated accounts.

The above example presents

● stereotype "structure"

The advanced class diagram applies the stereotype "structure" to class Location. In the C++ implementation model, any class stereotyped as "structure" results in a "struct" being generated rather than a class.

The above example presents

● stereotype "bind"

This is another situation where the basic class diagram presents a dependency but fails to give any hint why a String List depends upon a List. The default code generation for the basic class diagram's class String List would just have a #include statement referring to template class List. The advanced class diagram uses the UML predefined stereotype "bind", signifying that the bound element is fully specified by its template; declaration of new attributes or methods is not permitted [11]. The parameter passed to the template is denoted within parentheses.

The above example presents

● collaborations
● stereotype "framework"
● association rolenames
● association rolename visibility
● association names
● association name-direction arrow
● aggregation

The basic class diagram presents a somewhat simplistic view of the design. Again, the association between class Event Channel and Event is not named, and rolenames and rolename visibility are not defined. The advanced class diagram depicts class Event Channel playing a role in a collaboration [12] (or design pattern) [13]. Its interface is thus inherited from the design pattern Singleton. The Singleton design pattern was first published in Design Patterns: Elements of Reusable Object-Oriented Software by Gamma, Helm, Johnson, and Vlissides. The definition of design patterns is usually held in a model supported by a CASE tool within a package stereotyped as "framework" [14]. The advanced class diagram also provides an association name, rolename, and rolename visibility. Note the small triangle on the association name [15]; this is an association name-direction arrow. This optional arrow indicates the direction in which to read the association name [16]; therefore, in this example, each Event Channel instance manages events. This example also introduces the aggregation association. Aggregation (the hollow diamond) denotes that object references are associated to, and possibly disassociated from, the aggregate (here the Event Channel) during its lifetime. If the association multiplicity of the aggregate is one, the objects and their references are destroyed when the aggregate is destroyed. If the association multiplicity of the aggregate is greater than one, a possibly confusing state occurs: the implementation must define which aggregate class (or possibly object) is responsible for destroying the aggregate parts when they are no longer needed.

The above example presents

● abstract classes
● association navigability
● property "class-scope"
● property "abstract"
● stereotype "constructor"

Since the basic class diagram defines all association navigability as bi-directional by default, class Event must manage a pointer back to class Event Channel. The advanced class diagram designates the association navigability as uni-directional. Class Event is another abstract class with an abstract operation. An additional constructor is defined to complement the default constructor. The placement of the constructors is different from the last abstract class example, where an abstract operation was defined. Both examples are correct; as long as the compiler does not permit non-derived classes to create instances of the abstract class, the goal is met. The advanced class diagram uses another property on attribute "counter" and also assigns that attribute a default value. The UML standard states that a class-scope attribute is shown by underlining the name and type expression string [17]. Since the CASE tool utilized to generate the class diagram did not have this capability, the alternative was to use a property string. Class-scope attributes in the C++ implementation model are denoted as "static".

The above example presents

● stereotype "utility"

Any class stereotyped as a "utility" is a class in which all its methods and attributes are treated as global18, or in C++ terminology "static". An instance of the class can never be created, therefore the constructor is private.

The above example presents

● constraint "ordered" ● association ordering ● association names ● association rolenames ● association rolename visibility ● association name-direction arrow ● composition aggregation

Again the basic class diagram fails to denote association name, rolename, and rolename visibility. Both class diagrams, however, do constrain one association end by the keyword "ordered". By default, the class being ordered must define the ordering. Therefore, in the example, class Point will be required to define and implement operators for greater-than, less-than, and possibly others. Code generation for both diagrams makes the assumption that class Priority List is an applicable implementation for the ordering. Had the model desired a specific implementation to perform the ordering, an implementation extension might substitute the data structure holding the elements for the generic specification "ordered" [19]. This example also introduces composition aggregation. Composition aggregation (the solid diamond) denotes that the associated objects are created when the aggregate is created and destroyed when the aggregate is destroyed. The association multiplicity of the aggregate can be no value other than one; any other value assigned is invalid.

The above example presents

● property "frozen" ● notes ● constraints ● comments ● stereotype "requirement"

This example makes the assumption that the data types for month, day, and year are "unsigned int", although the types should have been specifically provided. The advanced class diagram makes use of a property and a note. Non-changeable attributes, or constants in C++ terminology, are denoted with the property "frozen" [20]. Constraints can also be denoted as an associated note with the text string enclosed in braces [21]. If the braces were omitted, however, the note would be interpreted as a comment. Had the note been stereotyped as a "requirement", it would represent a responsibility or obligation [22]. By providing the note, the algorithm supporting dates can be better scoped.

About the Author

D. J. Supel studied computer science at the University of Pittsburgh. His career has focused on the application of object technology to a variety of domains. He currently is a Scientist at Logicon Advanced Technologies whose research includes complex adaptive systems and artificial life. He can be reached at [email protected].

References

[1] Visibility is denoted by icon. A lock represents "private", a key represents "protected", and a plain tag represents "public".

[2] Default code generation should also include the definition of equality and inequality operators. This rule has been ignored in an effort to present cleaner interfaces.

[3] UML Notation Guide, Rational Software Corporation, Version 1.1, Section 5.4.4

[4] UML Notation Guide, Rational Software Corporation, Version 1.1, Section 5.8.2

[5] Ibid., Section 5.8.2

[6] Ibid., Section 5.7.2

[7] UML Semantics, Rational Software Corporation, Version 1.1, Appendix A

[8] UML Notation Guide, Rational Software Corporation, Version 1.1, Section 6.3.1

[9] Ibid., Section 5.8.2

[10] Ibid., Section 5.20.5

[11] Ibid., Section 5.12.2

[12] Ibid., Section 8.3.2

[13] The version of the CASE tool used to generate the class diagram did not support the collaboration icon, thus a note was used instead. The collaboration icon is a dash-lined oval.

[14] UML Semantics, Rational Software Corporation, Version 1.1, Appendix A

[15] The version of the CASE tool used to generate the class diagram did not support the name-direction arrow, thus greater-than or less-than characters were used instead.

[16] UML Notation Guide, Rational Software Corporation, Version 1.1, Section 5.20.2

[17] Ibid., Section 5.7.2

[18] Ibid., Section 5.13

[19] Ibid., Section 5.21.2

[20] Ibid., Section 5.7.2

[21] Ibid., Section 4.12

[22] UML Semantics, Rational Software Corporation, Version 1.1, Appendix A


Copyright © 1999 Rational Software and Miller Freeman, Inc., a United News & Media company.

Use Case Management

Rational Product Integration Focus: Rose and RequisitePro

Catherine Connor, Rational Software

Catherine Connor, Requirements Management Evangelist for Rational Software.

1. How do you organize your use cases?

2. Can you tell in which release a particular use case is implemented?

3. How do you know that the entire system functionality is tested?

4. Which tests are affected by a change in a use case?

If the inability to answer these questions has caused you frustration, schedule setbacks, or the delivery of products that have missed the mark, read on.

New in Rational Rose 2000 and Rational RequisitePro 4.5, Integrated Use Case Management enhances Rose use case modeling with powerful requirements management capabilities. By extending use cases beyond diagrams with sortable attributes, documents, and traceability, Integrated Use Case Management helps manage large numbers of use cases across your team. This is the tightest and most robust integration in the market between a visual modeling tool and a requirements management tool.

Why Manage Requirements?

Requirements management is a systematic approach to finding, documenting, and managing requirements. Without it, more than two-thirds of projects end up missing user needs, late, or over budget (Standish Group, CHAOS Report). Why so many? Primarily because managing requirements is about managing change — and managing change is hard. Plus, change is pervasive. We live in a dynamic world: customers change their minds, competitors come up with better solutions before we deliver ours, the business environment changes. Being open to this changing environment is a good thing, but it can bog you down. Change in itself isn't bad. The evil beast is uncontrolled change — change whose impact is not measured before it happens. By managing requirements you are more likely to deliver a timely solution that solves your customers' real problems.

Why Manage Use Cases?

By providing a user's view of what the system should do, use cases are requirements. As such, use cases should participate in the management of all system requirements. Most software projects have numerous use cases, all of which have different priorities and dependencies — just like any other requirement. For example, a use case describing the processing of an order on the Web might stem from the business need of generating more revenue through the Web. By establishing a tangible dependency between the use case and its business need, you can better respond to change affecting either of these requirements. And prioritizing the importance of implementing this use case versus another helps you know where to start.

Managing use cases along with all other requirements is key to understanding the state of your project and better enables you to deliver the right system. The value of Integrated Use Case Management is that it seamlessly integrates use cases with your requirements information.

Setting up the Integration

Integrated Use Case Management begins by associating your Rose model with a RequisitePro project. This association provides the context for selecting use case document templates and use case attributes from the Rose environment. You can establish this association either at the model level or at the package level, where each package may be associated with its own RequisitePro project. The package association lends itself to large software projects that might use either multiple RequisitePro projects (typically one per subsystem) or different use case document templates (for business use cases vs. system use cases).

A RequisitePro project consists of a number of Microsoft Word documents and a database (Microsoft Access, Microsoft SQL Server, or Oracle) to organize the requirement information. Use case documents in RequisitePro contain use case textual descriptions, just like what you may be writing today. Requirements in these documents are linked to a database that stores additional requirement information, such as attributes, traceability links, versioning, change history, project security, and more. From the RequisitePro database, you can query the requirement information to check coverage and measure the impact of change. You can also easily navigate to the RequisitePro Word environment and back to Rose.

Requirements Management Capabilities in Rose

Requirements management capabilities are visible from the standard shortcut menu in the Rose browser: right-click on a use case in Rose to view the new use case menu options (Figure 1). They include:

● Use Case Document to create a new use case document or associate the use case with an existing RequisitePro use case document.

● Requirement Properties to view and edit attributes and traceability links to the use case.

● View RequisitePro Association to view the RequisitePro context for that use case (the associated use case document template and use case attributes — set via a requirement "type").

Let's look at these capabilities in more detail.

Use Case Document

Today, you may be writing use case documents and attaching them to your use case model via the Rose External File property. The new Integrated Use Case Management capability goes beyond simply attaching a file to a Rose use case. Because the documents attached to your use cases are RequisitePro documents, you benefit from the following advantages:

● Use case documents are based on proven use case document templates. Integrated Use Case Management provides Rational Unified Process use case templates. These templates contain informative guidelines as well as use case formatting, saving you time and providing consistency from document to document.

● Requirement text is clearly marked. Requirement text is visually differentiated from additional descriptive information in the document (Figure 2). This makes it easier to "see the trees in the forest."

● Any modification to use case documents is automatically tracked. Information about who modifies what, when, and why is stored in the RequisitePro database. These revisions help you gain control of use case changes.

● Requirements in use case documents can be linked to other requirements they may relate to. By tracing use cases to business requirements, tests, or even other use cases you can more easily measure the impact of change on related requirements and verify coverage.

To associate a use case document with a Rose use case, right-click on the use case in the Rose browser, and select Use Case Document—New from the shortcut menu. The RequisitePro Word environment is launched and your template-based document is displayed, ready for editing. You can also associate an existing RequisitePro document to a Rose use case by using the Use Case Document—Associate menu item.

Requirements Properties

The second new addition to the use case shortcut menu is "Requirements Properties." Requirements and, similarly, use cases are not just text. They have additional properties, such as attributes, traceability links, revision history, and, particularly for use cases, diagrams.

Attributes help manage scope. Especially helpful in iterative development, attributes provide an easy way to manage the scope of each iteration of your project. They make the process of deciding which use cases to implement in a particular release more objective. Too often, organizations decide which use cases to implement based upon personal agendas, emotions, or pet peeves. Poor decisions made early carry through to implementation and become more costly to change the farther the project gets in its lifecycle. Attributes provide a simple way to assign non-emotional weight to use cases and requirements alike.

Traceability helps measure the impact of change and ensure requirements coverage. For instance, if a business need changes, what use cases might be impacted? By establishing traceability links, you can query the requirements to answer questions like "Are all business needs implemented at the use case level?" or "Are there test requirements for all the use cases?"

(Note for Rational Unified Process users: you may want to review the Traceability Strategies for Requirements Management with Use Cases white paper available in RUP 5.5. The paper outlines various traceability approaches, depending on your needs.)

Revisions help you track who changes what, when, and why, providing an audit trail of requirement changes. This helps you measure the stability of requirements and concentrate on more stable requirements first, inherently diminishing the amount of change.

To set use case attributes in Rose, right-click on the use case in the Rose browser, and select Requirement Properties—Open from the shortcut menu. In the dialog box (Figure 3), click the Attributes tab, and set attribute values. Note that you can change the out-of-the-box use case attributes and their values in the RequisitePro project associated with your model. From this same dialog box, you can also establish traceability and view revisions.

Benefits of Managing Use Cases

Once you've attached a use case document, or assigned requirement properties to a use case in Rose, the use case is part of your requirement set in RequisitePro. As such, you can use all RequisitePro capabilities to sort your use cases (by priority, by iteration number, etc.), to query on specific use cases (i.e., only the use cases planned for the next iteration), and even produce requirements metrics.

Using an Attribute Matrix in RequisitePro (Figure 4), you can view all or a select subset of use cases and their respective attributes. This helps you organize the use case information answering the first question at the beginning of this article: How do you organize your use cases? You can run queries to determine which use cases are assigned to which designer, how difficult they are to implement, or in which release they should be implemented (Can you tell in which release a particular use case is implemented?).

Once you have selected the use cases to be implemented in the next iteration, you should verify that test requirements are traced to use cases, ensuring that all the functionality will be tested. The Traceability Matrix in Figure 5 shows the relationships established between use cases and test cases. Using Traceability Matrices, you can query on use cases not yet traced to test requirements (answering the question How do you know that the entire system functionality is tested?). Additionally, testers can query test requirements potentially impacted by modifications in use cases, ensuring that they're testing the latest functionality. A suspect link (red slashed arrow in Figure 5) indicates that test case TC1.2 may need to be revisited due to a change in use case UC1.3. Querying on suspect links answers our last question: Which tests are affected by a change in a use case?

Summary

Integrated Use Case Management extends use cases with requirement information. This benefits the Rose user by establishing a real-time window to modify use case attributes, check traceability, and view revision history from Rose. And the RequisitePro use case document is just a click away. Integrated Use Case Management gives requirements managers more accurate and timely information to work with, because that information is available at Rose users' fingertips. By managing use cases in conjunction with other requirements, your project's scope can be better managed, change can be controlled, and coverage can be verified. In short, Integrated Use Case Management helps ensure that you are implementing the functionality that was agreed upon, and that this functionality will be fully tested.

Integrated Use Case Management is available in RequisitePro 4.5 and works with Rose 2000. These products are available separately or packaged within Rational Suite AnalystStudio 1.5, Rational Suite DevelopmentStudio 1.5 and Rational Suite Enterprise 1.5. To check out this feature, visit http://www.rational.com/products/reqpro/tryit where you can download an evaluation of RequisitePro 4.5 to be used with Rose 2000.

About the Author

Catherine Connor is a Requirements Management Evangelist for Rational. Her role consists of helping Rational teams and their customers to be successful in deploying Rational requirements management solutions, RequisitePro and Rational Suite AnalystStudio. Catherine has more than 10 years of experience in programming and supporting software customers in implementing software solutions.



Grokking [1] COM+ and UML

Dr. Al Major

Dr. Al Major is the author of COM IDL and Interface Design.

It is clear that COM+ based applications are going to play an increasingly important role within the architecture and modeling universe over the next few years. Unfortunately, the basics of UML modeling in COM+ are not very well understood and a number of common COM+ related misconceptions are prevalent in the UML community. There are two reasons for this:

● Interface-based architectures are relatively recent, and different enough from straight OO architectures to need special treatment.

● The design of COM+ has largely been product driven, and it has a number of idiosyncrasies whose consequences are not fully appreciated, even by gurus.

In the past, this lack of understanding has been reflected in the UML tools that were used for object modeling. Fortunately, newer tools, such as Rose 2000, do a much better job at representing COM+ object models in UML.

I explored many of the issues that surround COM+ application design in my book COM IDL and Interface Design. However, the usual constraints of space and publication deadlines prevented me from fully exploring the relationship between COM+ and UML. In this article, I'll begin this exploration by looking at the structural properties of COM UML models.

Using a simple example application, I'll show you the elements of COM+ models. I'll be making the following important points:

● COM+ naturally encourages three layers of refinement: a client-centric interface-only model, a client-centric classes-and-interfaces model, and a component-centric realization model. Understanding where the COM+-only "client" models end and the C++/VB/Java™ "realization" model begins is a necessary prerequisite to producing clear architectural documents.

● The relationships between classes, objects, and interfaces in COM+ are much more constrained than those in a C++/Java/VB based OO model. A great deal of confusion has been caused by attempting to force-fit some of the more traditional relationships (such as implementation inheritance) onto COM+ architectures.

In this article I will look exclusively at class diagrams. I decided not to examine use cases, since these are not particularly different from those of other kinds of UML models. I had a tougher problem with interaction/collaboration, component and deployment diagrams. These diagrams have interesting COM+ characteristics, but space constraints prevent me from covering them in this article. If there is sufficient interest, I will cover them in a subsequent article.

This article looks at UML-based COM+ modeling in general terms and covers the "client" models. It can be understood outside the context of any specific UML tool (although Rose 2000 has very good support for this perspective). The details of the VC/VB-based realization models are somewhat tool specific, and will be covered in the companion article on Rose 2000 ATL/VB support in the next issue of Rose Architect.

A Home Automation Application

As my sample application, I'll look at some parts of a make-believe (but utterly plausible) architecture being developed by the venture-backed firm, http://www.HomePortal.com. The goal of HomePortal.Com is to capture the large and growing (and needless to say, utterly unprofitable) market for home automation products. Their software will allow the happy homeowner to log on (naturally, for free) to HomePortal's site and perform far-out and futuristic things like:

● Turn up the heating from their two-way pager on the commute home.

● Check on the images on the inside video-cams while on holiday.

● Remotely disable the burglar alarm to allow the neighbor in to feed their pets.

● Turn off the sprinkler system from work if the weather web site shows rain.

● Control the oven settings for that turkey dinner.

● Arrange to have e-mail sent to the police and fire department when the pizza delivery person inadvertently tries to deliver to the wrong house.

Not having designed billg's palatial (and undoubtedly COM+ based) automated residence, I can only postulate a reasonable design for such an architecture. Each home will have controllers (either a PC for the whole house, or more likely, simple WinCE-based embedded controllers attached to various pieces of hardware) that are hooked together by a relatively high-bandwidth, high-availability LAN. Each controller will be running COM+ objects that are suited to its function. Thus the COM+ object that implements sprinkler functionality will be different from the one that is attached to the heating plant.

The COM+ objects will be remotely accessed from one or more client programs that invoke the simpler hardware operations to provide the desired end-user functionality. The next section shows the scripting model that might be used for this purpose.

Interfaces-Only: The Scripting Client Model

This section examines an interfaces-only COM+ model. The rationale for such a model is two-fold:

● Client-component decoupling is enhanced if a client does not make any assumptions about class structure and knows only about interfaces that it must use.

● Many, if not most, COM+ scripting languages (such as VBScript, JScript, Perl, and Python) cannot access the IUnknown::QueryInterface functionality directly. This means that class structure is not directly visible to scripting clients.

I need to emphasize that this model is a concrete model, i.e., code (client-side code) can be written to this model. It is not just an abstract specification (although I did call it a Specification Model in "COM IDL", reflecting its position in the modeling technique that I demonstrated in that book).

Figure 1 shows the interfaces that may be associated with a single control. In our example, there are three interfaces of interest, each representing a capability of a control. The interface IValueInRange represents the ability to read the value of a control (such as the reading on a thermometer, the timer setting on a sprinkler, or the speed of the fan). A related interface, IModValueInRange, is used to set the value of a control (such as the azimuth of the video camera). The IPowerMode interface is used to turn a control (such as the TV or VCR) on, off, or into stand-by. The methods on these interfaces are more or less self-explanatory.
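Since Figure 1 is not reproduced here, the capability interfaces can be sketched in language-neutral terms. The following Python sketch is purely illustrative: the interface names come from the article, but every method name is an assumption extrapolated from the description, not the actual signatures in the diagram.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the three capability interfaces from Figure 1.
# Method names are illustrative assumptions; the real signatures appear
# only in the article's class diagram.

class IValueInRange(ABC):
    """Read the value of a control (a thermometer reading, a fan speed)."""
    @abstractmethod
    def value(self): ...

class IModValueInRange(ABC):
    """Set the value of a control (the azimuth of the video camera)."""
    @abstractmethod
    def set_value(self, value): ...

class IPowerMode(ABC):
    """Turn a control on, off, or into stand-by."""
    @abstractmethod
    def set_power_mode(self, mode): ...  # mode: "on" | "off" | "standby"

class Thermometer(IValueInRange):
    """A read-only control: implements IValueInRange and nothing else."""
    def value(self):
        return 21.5

assert Thermometer().value() == 21.5
assert isinstance(Thermometer(), IValueInRange)
```

The point of the sketch is only that each interface captures exactly one capability, so a given control mixes in only the capabilities it actually has.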

This diagram was drawn with the Visual C++ support in Rose 2000. Several features of Rose 2000 are worth mentioning here:

● The interfaces are drawn using the VC++ interface stereotype in Rose. This gets translated into the appropriate IDL/C++ or TypeLib/VB using the corresponding language support module (to be covered in next issue's article).

● Every interface/method/parameter can have IDL attributes associated with it. These are accessed by the COM tab in the specification dialog as shown in Figure 2. These attributes are inserted into the Rose 2000 generated IDL/ TypeLib file as necessary.

● Rose 2000 allows a COM+ interface to be declared either as a dual (derived from IDispatch) or as a custom (derived directly from IUnknown) interface.

● Rose 2000 distinguishes between interface properties and methods, although it (correctly) displays properties as operation-pairs (get/set), rather than as attributes.

● It allows for the specification of associations between interfaces.

Although this is not graphically represented in the diagram, the interfaces above are all dual interfaces. Hopefully, this graphical omission will be fixed in the next iteration of the product (possibly by use of the BoxSpoon diagramming notation that I described in "COM IDL and Interface Design").

The translation to IDL is trivial. For example, the interface IControlScript in Figure 1 is translated into the IDL shown in Figure 3.

If you're familiar with COM IDL, you'll recognize that this is almost the same information as shown in the class diagram. The "missing" pieces of information, the IDL attribute information and the derivation from IDispatch, are found in the "COM" and "Relations" tabs of the appropriate (interface/method/parameter) specification.

So, what does IControlScript actually do? Its function, in the absence of a true IUnknown::QueryInterface, is to act as an interface-discovery stand-in. This is accomplished by means of the three read-only properties IVIR, IMVIR, and IPM, which return the corresponding interfaces IValueInRange, IModValueInRange, and IPowerMode.

These read-only properties form the basis for the associations shown in the diagram. In general, an out-only parameter to a method call is the only way in which an interface can have an association or dependency on another interface (or data type).

However, not every control implements each of these interfaces. A control which implements IModValueInRange doesn't need to implement IValueInRange separately, and a read-only control (such as a thermometer) might implement the latter but not the former. Similarly, not every control will actually be capable of being switched on and off. For example, you would never willingly want to turn off a fire-alarm. The read-only properties (IVIR, IMVIR, and IPM) shown earlier can therefore return a null pointer value for certain controls. This is modeled in the diagram by showing a cardinality of 0..1 on the association end-point. Each association start-point has a cardinality of 1..1 reflecting the fact that each of the capability interfaces (IValueInRange, IModValueInRange, and IPowerMode) can be associated with exactly one IControlScript interface.
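The discovery pattern just described can be simulated in a few lines. In this Python sketch the property names IVIR, IMVIR, and IPM come from the article, while everything else is an illustrative assumption; a missing capability is modeled by returning None, the scripting analog of a null interface pointer.

```python
class ControlScript:
    """Simulation of IControlScript as an interface-discovery stand-in
    for scripting clients that cannot call QueryInterface. Each read-only
    property returns a capability interface, or None when the control
    lacks that capability (the 0..1 cardinality in the diagram)."""

    def __init__(self, ivir=None, imvir=None, ipm=None):
        self._ivir, self._imvir, self._ipm = ivir, imvir, ipm

    @property
    def IVIR(self):   # -> IValueInRange, or None
        return self._ivir

    @property
    def IMVIR(self):  # -> IModValueInRange, or None
        return self._imvir

    @property
    def IPM(self):    # -> IPowerMode, or None
        return self._ipm

# A thermometer is readable, but neither settable nor switchable:
thermometer = ControlScript(ivir=object())
assert thermometer.IVIR is not None
assert thermometer.IMVIR is None and thermometer.IPM is None
```

A scripting client simply tests each property for null before using the corresponding capability.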

By now, the eagle-eyed among you have probably noticed a curious fact: IModValueInRange and IValueInRange are very closely related. Couldn't I have derived (specialized) the former from the latter? Shouldn't I have used a generalize relationship in the class diagram?

To get to the bottom of this issue, consider the class diagram in Figure 4. It represents a collection of controls (for example the sprinklers in the front yard could be one collection, the sprinklers in the back yard could be another, etc.). The two interfaces IControls and IModControls represent a non-modifiable and a modifiable collection respectively.

IControls, which follows the conventions for a COM collection interface, has an Item method that returns a numbered element of the collection. This element is returned as an out-only parameter of type IControlScript. This sets up an association between the collection interface and the individual controls. The association has a cardinality of 0..n on both ends, reflecting the fact that a collection might have zero (there are no sprinklers in the driveway collection) or more controls, and that a control might belong to more than one collection (a front yard sprinkler also belongs to the collection of all sprinklers). The size of the collection can be accessed via the Count method.
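The collection convention can be sketched as follows. The method names Item and Count come from the article; the representation is an assumed stand-in for the real COM collection machinery.

```python
class Controls:
    """Minimal simulation of the IControls collection convention."""

    def __init__(self, members):
        self._members = list(members)

    def Item(self, index):
        # In IDL this would be an out-only IControlScript parameter,
        # which is what establishes the association with the controls.
        return self._members[index]

    @property
    def Count(self):
        return len(self._members)

# A control may belong to more than one collection (0..n on both ends):
front_yard = Controls(["sprinkler-1", "sprinkler-2"])
all_sprinklers = Controls(["sprinkler-1", "sprinkler-2", "sprinkler-3"])
assert front_yard.Count == 2
assert front_yard.Item(0) == all_sprinklers.Item(0)
```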

Now look at the IModControls interface. We capture the common collection methods by specializing the IControls interface and adding two new methods, Add and RemoveByID (which have the obvious semantics). This is shown by the generalize relationship in the diagram. Is this OK to do?

As it turns out, the generalize relationship is almost meaningless (the exception is that inheritance from IUnknown is mandatory) when applied to COM interface models. This is a source of considerable confusion to those of us who are accustomed to C++, Java, or some other language that has "deep" inheritance semantics.

COM IDL's interface inheritance semantics are "shallow" rather than deep. IDL inheritance only means that the first n method signatures of the derived interface exactly match the n method signatures of the base interface. That's it. There are no semantic consequences whatsoever. No part of the COM run-time makes use of the inheritance relationship (although it is documented in the type library). The lack of "deep" semantics stems from the fact that there is no notion in COM of a type-cast from a derived interface to a base interface. The only kind of type casting present in COM is the dynamic_cast semantics implicit in the QueryInterface mechanism, and as we'll see later, IDL inheritance is meaningless even in that context.

IDL inheritance therefore serves only two relatively unimportant functions:

● It's a shorthand notation that saves on having to duplicate the method signatures from another interface.

● It documents the evolution of one interface from another.

In fact, using IDL inheritance (remember that we're talking about UML generalization) can actually cause problems in practice. Many tools (including the VB design time and parts of Rose 2000) have difficulty dealing correctly with IDL inheritance relationships. Until these features are fixed, the minor benefits gained from the use of generalization are hugely outweighed by its disadvantages. My advice: avoid the use of generalization.

To wrap up our tour of the home-automation interfaces-only model, let's quickly examine the complete model as shown in Figure 5.

In addition to the interfaces that we have already seen, there are a couple of new interfaces. IControllerScript represents a controller (all the sprinklers in the front yard might be actuated by a single controller). This association relationship between a controller and a control is interesting because we assume that each control can only belong to a single controller, and that each controller must have at least one control. This dictates the start-point cardinality of 1..1 and the end-point cardinality of 1..n.

IModControllers represents a collection of controllers (all the sprinkler controllers taken together) and is very similar to the collection interface that we've already seen.

The IAlarm interface represents an outgoing interface, i.e., one that a COM+ object uses to communicate asynchronously with another COM+ object (frequently living in a client). Notice that it has no associations with any other interface (recall that associations between interfaces can arise only through out-only interface parameters).

As far as class diagrams go, this is not a tremendously complicated one. There are only a few simple structural associations that are possible between different COM+ interface types.

Classes-and-Interfaces: The Compiled Client Model

So far, we've seen an interface-only class diagram. As I mentioned earlier, you can actually come up with an interface-only model simply by examining the use cases seen by the scripting programmer. I show how to do this in "COM IDL". The interface-only diagram is a very pure perspective on our COM+ application. For one thing, it says absolutely nothing about actual implementation. This is why I called it a "Specification Model" in "COM IDL".

Clearly, we need to implement this model somehow. This is where the next level of client-centric COM+ models enters the picture. The classes-and-interfaces model shows more detail about the implementation of the object model. It does so by showing the COM classes that realize (implement) the different interfaces of the interfaces-only model. Typically it also includes additional interfaces that are not directly available in the interfaces-only model.

Figure 6 shows one generic realization of the interfaces-only model that we saw earlier. We define four classes: Controllers, Controller, Controls, and Control. Each class realizes one or more of the interfaces that we saw earlier.

Once again, this diagram is drawn using the Visual C++ support in Rose 2000. Rose 2000 is the first UML tool I've worked with that allows COM+ class structures to be correctly drawn. A few points about Rose 2000 coclasses:

● They are drawn using the new VC++ coclass stereotype.

● As with the interface stereotype, it is possible to attach IDL attributes to the coclass type.

● They may be associated with interfaces as well as with coclasses.

I will not show you the generated IDL. It is fairly straightforward.

Like the interface stereotype, the coclass stereotype is automatically inserted into the IDL file. Unlike the interface stereotype, the coclass stereotype is not currently used by Rose 2000 VB support. VB support in Rose skips the coclass representation and maps interfaces directly on to the VB implementation (we'll see more about this in next issue's article). This is conceptually incorrect, even though it maps well on to VB COM+ from an operational standpoint. I would like to see this fixed, since the coclass permits greater architectural clarity and flexibility at very little extra operational cost.

Let's go back and examine the diagram in greater detail. What we've done is take all the interfaces from the interfaces-only model and assign each to the coclass that seems best suited to realize it. For example, IValueInRange, IModValueInRange, and IPowerMode are all related to IControlScript by an association with endpoint cardinalities of 1..1 and 0..n, indicating that they are all (optionally) implemented by the same identity. We therefore assign them all to the Control coclass. In general, this kind of assignment is relatively obvious, although it might change as requirements change over implementation iterations.

The correct way to view the relationship between a coclass and an interface is that the former realizes the latter. It is definitely not an inheritance relationship.

Take another look at the Controls coclass. Notice that it implements the IModControls interface, but not the IControls interface. Isn't this incorrect, since the latter is a generalization of the former? Shouldn't the Controls class be forced to implement both interfaces? Or perhaps the implementation of IModControls automatically ensures the implementation of IControls?

Once again, COM+ turns out to have "shallow" semantics where generalization is concerned. There is no requirement that Controls implement (i.e., expose via IUnknown::QueryInterface) the generalization IControls as well as IModControls. And if the class should happen to implement both interfaces, it is not required that the interfaces have the exact same implementation for the common methods. This is non-intuitive to those who are accustomed to languages with "deep" inheritance semantics. Nevertheless, it is the actual behavior of the COM+ run-time.
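The shallow behavior can be made concrete with a small simulation. The Python sketch below is not COM code; it merely models QueryInterface as an exact-match lookup table, with string names standing in for interface GUIDs, to show that exposing IModControls implies nothing about its IDL base, IControls.

```python
class ComObject:
    """Toy model of COM+ identity: QueryInterface is an exact-match
    table lookup; no walk up the IDL inheritance chain ever happens."""

    def __init__(self, interfaces):
        self._interfaces = dict(interfaces)  # IID -> implementation

    def QueryInterface(self, iid):
        # Returning None models an E_NOINTERFACE failure.
        return self._interfaces.get(iid)

# A Controls object that chooses to expose only IModControls:
controls = ComObject({"IID_IModControls": "modifiable collection impl"})
assert controls.QueryInterface("IID_IModControls") is not None
# Even though IModControls derives from IControls in IDL, the run-time
# does not expose the base interface automatically:
assert controls.QueryInterface("IID_IControls") is None
```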

Of course, you the architect are totally free to create object models which always follow the rule that interface generalizations have "deep" semantics. Just recognize the fact that COM+ run-time, left to itself, will not enforce such rules.

Figure 6 also shows relationships between coclasses and coclasses, and relationships between coclasses and interfaces. These relationships are required to satisfy the associations between the interfaces themselves and should have the same characteristics (type, cardinality, etc.) as the originals. Thus the Controls class is associated with the IControlScriptinterface in order to fulfill the association between IModControls and IControlScript. A similar rationale is used to create the association between Controller and IModControls. A variation on this theme is the association between Controllers and Controller. In this case, we satisfy the association between IModControllers and IControlScriptby associating with a class that realizes the interface, instead of the interface itself. Although this is a legitimate relationship, I advocate against it because it is less flexible than the alternative of associating

with an interface. My advice: unless you have a really good reason, avoid associations between a coclass and another coclass.

One point I should make is that all class relationships are private. This is the only visibility that makes sense, since a client cannot see any part of a class except its interfaces.

Recall that I called the diagram in Figure 6 a generic model. What makes it generic? Take a closer look at the Control class. It is represented as having all four interfaces. We know that different controls (for instance, controls for a TV, a sprinkler, a thermometer, or a smoke detector) may implement one or more of the optional interfaces. Is this an incorrect representation, then? It turns out that the answer is somewhat complex.

The requirement of having optional interfaces is actually fulfilled. COM+ allows different objects instantiated from a single coclass to expose different sets of interfaces, although it is rare to use this fact in practice. In theory then, the individual controls we mentioned earlier (TV, sprinkler, etc.) could all be instances of the generic control class.

The practical problem, of course, is that we almost certainly want different code modules to implement each of these controls. This is beneficial for convenient implementation as well as for efficient deployment. The requirement of independent implementation forces us to create specific classes for each control type. These are represented in Figure 7.

Figure 7 shows the coclass diagram for a TV, a sprinkler, and a thermometer.

It turns out that a TV only has the on-off-standby capability, and therefore only implements the IPowerMode interface. Similarly, the Sprinkler class implements IModValueInRange in addition to IPowerMode. The Thermometer coclass only implements IValueInRange.

There actually is a COM+ notation that can be used to represent this situation. It's called a CoType, and in this case, the Control class is really the following CoType:

CoType Control
{
    [mandatory] interface IControlScript;
    [optional]  interface IValueInRange;
    [optional]  interface IModValueInRange;
    [optional]  interface IPowerMode;
};

Unfortunately, the CoType notation is informal (even though it's used in the specification of OLE-DB) and not part of IDL. Although it probably makes sense to have a UML stereotype for CoType, it is not a concrete entity that can be realized in code. Should it make it into the next version of Rose (the Space Odyssey version?), or if you should choose to implement it yourself before then, it can be thought of as a generic coclass. A specific coclass can be considered a realization of a cotype.

All of these coclasses continue to provide correct realizations of the interfaces-only model.

Figure 8 shows a Smoke Detector coclass, which implements none of the optional capabilities. However, that isn't the only interesting feature of the model. What's more interesting is the fact that the class has an association with the IAlarm interface (remember, the interface that had no association with any other interface). This represents the semantics of a Smoke Detector: it asynchronously raises an alarm when the situation warrants it. The association is many to one on both ends.

This is the first example we've seen of a class association that isn't simply extrapolated from an interface association (which is completely determined by parameter signatures). This association is adding meaningful structural semantics to the model.

Realizing Class Associations

There is an obvious code realization of this association: a collection of IAlarm pointers in the Smoke Detector class. In fact this would be the implementation used by the connection point protocol for asynchronous events. However this is not the only possible realization. A more sophisticated realization would be one that uses a Pub-Sub service to connect the two entities. Depending on your preferences, you may wish to model the Connection-Point or Pub-Sub relationship explicitly in a further refinement of this model.
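The obvious realization mentioned above can be sketched directly. In this Python simulation, Advise and Unadvise follow the general naming of the connection-point protocol, and OnAlarm is a hypothetical method on the IAlarm sink; none of this is code from the article.

```python
class SmokeDetector:
    """Realizes the SmokeDetector-to-IAlarm association as a collection
    of alarm sinks, in the style of the connection-point protocol."""

    def __init__(self):
        self._sinks = []

    def Advise(self, sink):      # a client registers its IAlarm sink
        self._sinks.append(sink)

    def Unadvise(self, sink):    # a client withdraws its registration
        self._sinks.remove(sink)

    def raise_alarm(self, reason):
        # The outgoing IAlarm call, made to every registered client.
        for sink in self._sinks:
            sink.OnAlarm(reason)

class AlarmSink:
    """A client-side IAlarm implementation that records notifications."""
    def __init__(self):
        self.events = []
    def OnAlarm(self, reason):
        self.events.append(reason)

detector, client = SmokeDetector(), AlarmSink()
detector.Advise(client)
detector.raise_alarm("smoke detected in kitchen")
assert client.events == ["smoke detected in kitchen"]
```

A Pub-Sub realization would replace the in-object sink list with an external broker, which is exactly why the diagram leaves the realization abstract.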

This raises the general question of whether this class diagram is concrete (remember, I made the claim that both client-centric models were concrete). It turns out that the classes-and-interfaces diagram is not completely concrete. A client can see into a class only to the extent that it can navigate between the interfaces exposed by the class. As we've just seen, relationships cannot be viewed or manipulated by code (they are private and abstract) and therefore permit alternate realizations.

For example, an association with a cardinality of 1..1 on both ends might be realized by a COM connection, or it might be realized by containment which would include the COM "aggregation" technique (different from the UML term aggregation) as a special case (check out "COM IDL" for explanations of these terms).

Since many of these realization options (Aggregation, Connection Points, Pub-Sub, etc.) follow a well-defined pattern, they can potentially be supported as new UML relationship stereotypes (better still, as options within the specification of a stereotype). Tools such as Rose 2000 can then generate language-specific code from these stereotypes.

Representing Identities

If you've read some of my recent work, you know that I have an identity problem, bordering on a fetish. The issue of representing identities in UML is tricky enough that I thought I'd address it separately.

Object identity is one of the major sources of gotchas in COM+. It turns out that a single "object", instantiated from a COM+ class, can have multiple identities. Some of these identities are "natural", caused by the proxying that takes place when a COM+ call is remoted (or intercepted). Others are "artificial", created by devious developers who need to get around COM marshaling constraints. The commonest ones are "Alternate Identity" (AI) and "Weak Identity" (WI), which are used to expose multiple dispinterfaces on an object and break reference cycles, respectively.

It can be argued that certain aspects of the identity issue are not relevant in the class diagram, since they have to do with object instances rather than classes. In the case of proxying, and simple WI and AI, each identity can be regarded as a separate object instance that happens to share state with the original. This is not dissimilar to having multiple objects in a class that differ in the set of interfaces that they implement, although it can be argued that the latter situation should not be permitted either.

If you favor the last argument, you might wish to model each identity as an independent coclass in the UML. Indeed, there are certain situations where this (displaying separate coclasses in the model) is the only proper approach. For example, there is an application of Alternate Identity that permits a "single" COM object to expose multiple IDispatch interfaces. In this situation, each identity actually has a different implementation of IDispatch than that exposed by others (since that's the whole point of the exercise).

In the diagrams presented in this article, I have finessed this issue. This was done deliberately in order to avoid confusing the presentation of the main elements of COM+ structural modeling. However, you should be aware that this issue exists, even if you choose (as I have here) to avoid modeling it explicitly.

Interception is a little more interesting, since it allows interposition of services that may not be visible to the server implementor. There are several ways to address the issue, including a separate, client-centric, interception services model. However, space constraints prevent me from addressing it any further in this article.

Summary: UML and COM+ Structure

Let me summarize the points that I've made about COM+ structural models.

An interface:

● is represented in UML as the interface stereotype.

● is simply a collection of operation signatures.

● may be generalized to another interface, but this is not meaningful, and potentially confusing.

● depends on all types in its operation signatures.

● cannot have any attributes, since it has no state.

● can have associations, but only with interfaces that are defined as out parameters in its message signature.

● cannot be instantiated.

A class:

● provides a realization of its interfaces. The set of interfaces is object specific, not class specific.

● has all the dependencies of all its interfaces. It also has the associations of all its interfaces.

● can have attributes and associations, but they're private and abstract.

● is instantiable to one or more COM identities.

● cannot participate in generalize relationships with either classes or interfaces.

That's it. COM+ structural models are actually very simple beasts, with only a limited number of possible class and relationship stereotypes.

Much of the early confusion about COM+ models was caused by the fact that many standard OO relationships, such as interface or implementation inheritance, UML OO aggregation, etc., do not have any analog in COM+. Also adding confusion is the fact that a COM interface is mapped onto abstract base classes in C++, and interfaces in Java. These language constructs have much deeper inheritance semantics than the simple COM interface.

It's my hope that I have clarified the situation.

What's Next?

In this article I've covered the basics of structural UML modeling of COM+ applications. I talked solely about the client-centric models that serve as a concrete specification of a COM+ architecture. I did not discuss the server-centric code-realization model. This final model, which is a refinement of the classes-and-interfaces model, is the layer in which actual C++/VB/Java code is present, and in which the full expressive power of these languages can be used to more completely specify the implementation of the application.

In the follow-up article, coming in the spring issue of Rose Architect, I will look at the code-realization model that is provided by Rose 2000 to convert the classes-and-interfaces model into working VC++/ATL and VB executables.

References

1. I'm paying homage to the current Retro trend by resurrecting a wonderful word. Here's the definition from "The New Hacker's Dictionary": grok: /grok/, var. /grohk/ [from the novel Stranger in a Strange Land, by Robert A. Heinlein, where it is a Martian word meaning literally 'to drink' and metaphorically 'to be one with'] vt. 1. To understand, usually in a global sense. Connotes intimate and exhaustive knowledge. Contrast zen, which is a similar supernal understanding experienced as a single brief flash. See also glark. 2. Used of programs, may connote merely sufficient understanding. "Almost all C compilers grok the void type these days".

About the Author

Dr. Al Major is the author of COM IDL and Interface Design from Wrox Press. His current passion is the Scaling to Sagan project, a scalability initiative based on the DNA 2000 architecture.

In his murky past Al has, among other things, been a Wall Street Rocket Scientist, co-founded one of the earliest e-commerce companies "BrainPlay.Com" (now "KBKids.Com") and consulted for Microsoft. He has the dubious distinction of having had his picture appear on the front page of the Wall Street Journal. The "Dr" is a more-or-less legitimate honorific, since Al received his PhD in Computer Science from Yale University. He can be reached at [email protected].



Driven to Success

Lisa Dornell, Rational Software

Lisa Dornell is Editor of Rose Architect.

"Why is it no one ever sent me yet, one perfect limousine, do you suppose? Ah no, it's always just my luck, to get one perfect rose."

Dorothy Parker, 1927

Poor Dorothy could have ordered her own limousine if she'd contacted Carey International — and her definition of "rose" might have extended beyond flowers and into the world of software development, had she contacted Rational. But then, Dorothy was a writer, not a businesswoman, and she had other things on her mind ...

Carey International offers a system of chauffeur-driven services in 420 cities, in 60 countries around the world. Their wide variety of professional services and transportation options, not to mention the number of countries in which they do business, combine to make their reservation center one of the most advanced and complex in the industry. And, with approximately 100,000 reservations per month, one of the busiest as well. Developing and maintaining this reservation system is crucial to Carey's continued ability to provide reliable service, so when it needed updating, Carey International looked for a company with an equal reputation for quality. They found it in Cambridge Technology Partners, an international management consulting and systems integration firm. Cambridge is using its years of experience in the business, along with tools like Rational Rose and the UML, to make this project happen.

"We're developing a custom software solution, a multi-business unit, enterprise-wide project," explains David Raal, Team Architect at Cambridge. "They need a lot of functionality, since they're expanding and providing more service to more cities. They need a strong IT product to do that, so that's what Cambridge Technology Partners is here to provide. It's a Java™/CORBA system with a Java front-end and a Java back-end on top of an Oracle database."

I'm sure most of us really haven't put much thought into what goes into reserving a limousine. You make a call, leave a credit card number, and the car shows up, right? Well, it may appear that way, but what really happens, according to Raal, is more complicated than that. "You talk to an operator who finds out who you are. If you've booked with us before, the system will pull up your passenger record and associated information. If you have any existing preferences, they can be applied to your reservation. Maybe you'd like to have champagne or a copy of the New York Times in your car. What methods of payment have you used in the past? Would you prefer to use your corporate account or is this a private expense?" Because a great deal of Carey's business is with corporate executives who appreciate efficiency, it's important to Carey to get the details right. Does the customer need a telephone in their car? Have they requested a specific driver? Will they need a larger car than usual if they are traveling with guests?

Then, once the reservation is confirmed, it's put in a centralized database where the entire enterprise can access it. At the appropriate time, it is sent, along with any additional information required, to the city where the dispatch is performed. This same system then handles the job of routing and assigning drivers and cars. When the job is complete, attention turns to a bewildering array of billing options. Will the reservation be charged to a corporate account or to the individual? Is there a prearranged corporate discount rate? Was a travel agent involved and do they get a commission for the sale? What will the chauffeur be paid? Because Carey is an international corporation, variations in exchange rates must also be considered. And upon completion of the job, there might be extra charges (phone charges, drinks, unscheduled stops, for example) that must be calculated before the final bill is sent out.
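The billing arithmetic described above (a prearranged corporate discount, extra charges incurred during the job, currency conversion) can be sketched in a few lines of Java, the system's own language. This is purely illustrative: the class, method, and parameter names are hypothetical, not taken from the Carey system, and real billing would also handle agent commissions and chauffeur pay.

```java
// Hypothetical sketch of the billing arithmetic described above.
// Names are illustrative, not from the actual Carey system.
public class BillCalculator {

    // Apply any prearranged corporate discount to the base rate, add
    // extra charges incurred during the job (phone, drinks, unscheduled
    // stops), and convert the total into the customer's billing currency.
    public static double computeTotal(double baseRate,
                                      double corporateDiscountRate,
                                      double extraCharges,
                                      double exchangeRate) {
        double discounted = baseRate * (1.0 - corporateDiscountRate);
        return (discounted + extraCharges) * exchangeRate;
    }
}
```

For example, a 100-unit base rate with a 10% corporate discount and 5 units of extra charges, billed in the same currency, comes to 95.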

Because of the vast number of variables to be considered in this system, the project is, to put it mildly, complex. "It's huge," explains Raal. "This goes way, way beyond your average Java applet or web-based application. This system has the sophistication and feature set traditionally associated with a Visual Basic or PowerBuilder application, but it must perform and be deployed across a wide area network in the US and overseas. It's a large system with 114 screens and dialogs supported by 250 business objects just for the Reservations portion. On the 36 major screens, each screen has links to between 5 and 15 different dialogs and screens. That indicates a lot of complex relationships and functionality."

A Great Team, Hard Work, and Rose

With so much to keep straight and so much riding on getting this project done right, how does the Cambridge team do it? "A great team, hard work, and Rose," explains Raal. Rose is their primary tool for analysis, design, and development. They capture use cases in Microsoft Word, and then capture all their business objects inside Rose, giving them an analysis model of what the major domain objects are, their attributes, their relationships, and how they are packaged and partitioned.

"Our middle tier team," Raal explains, "designs and implements all the business objects. They use Rose every single day to design their classes and then generate Java code from the Rose model. We have some fairly advanced scripts that help us build frameworks to reduce the amount of time it takes to code all of this. We're over two hundred thousand lines of commented code in seven months, so we've been cranking pretty seriously. Our middle tier team could not survive without Rose because it takes away so much of the grunt level work of coding. We use Rose to model all of the classes that we implement in the system, with the exception of some of the UI classes. We use JBuilder to design and implement the UI classes. At the moment we are actively modeling 300 plus classes. With EOJ, Billing, and Dispatch, we might end up with 500 classes in Rose."

Rose is also earning its keep as an analysis and architecture tool according to Raal. "We're able to discuss the domain models and concepts much better with the customer and end-users. This allows us to gain a deeper understanding of their business and system requirements. As a designer and architect, I can't design systems without Rose any more. I haven't done a serious OO project in four years without Rose, and I don't intend to do any without it either. The advantages of being able to visually model, to partition and layer your architecture and see your class relationships and dependencies, that's huge. On a project like this, if a customer made me try to do it without Rose, I'd walk away, because I know I'd fail."

Levels of Complexity

The Carey reservation system is built using a 3-tier (layered) architecture. The first tier is the presentation layer (the user interfaces), the second tier is the application layer (business objects, services, and workflow logic), and the third tier is the foundation layer (base-level services and data access). Within these three layers, Raal is able to package up different interfaces and functionality and parcel out the work, having different team members working on separate parts of the same layer at the same time. By generating a view out of the model, he can search for things like circular dependencies, and fix them before they become a problem.
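The one-way dependencies between the three tiers can be illustrated with a minimal Java sketch. The interface and class names here are hypothetical (the article does not give the actual ones); the point is that each layer is written only against the layer directly below it, which is exactly the property the circular-dependency checks protect:

```java
// Foundation layer: base-level services and data access.
interface PassengerStore {
    String lookupName(String passengerId);
}

// Application layer: business objects and workflow logic, written
// only against foundation-layer interfaces.
class ReservationService {
    private final PassengerStore store;

    ReservationService(PassengerStore store) {
        this.store = store;
    }

    String describeReservation(String passengerId) {
        return "Reservation for " + store.lookupName(passengerId);
    }
}

// Presentation layer: user interfaces, written only against the
// application layer. Nothing here reaches down to PassengerStore.
class ReservationScreen {
    private final ReservationService service;

    ReservationScreen(ReservationService service) {
        this.service = service;
    }

    String render(String passengerId) {
        return service.describeReservation(passengerId);
    }
}
```

Because each constructor accepts only the layer beneath it, a dependency running from the foundation layer back up to the presentation layer would show up in the model as a cycle and could be caught before it became a problem.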

"I don't know how you'd build a system like this without being able to visually model it. Just take, for example, the relationships of our passenger object. It's got credit cards, addresses, comments, service alerts, preferences, accounts, arrangers — a whole list of items. Using Rose, I can see everything that's in there very easily. I can look at all the attributes, or take the attributes off and look purely at the relationships. I can look at inheritance hierarchies. There's no way you could ever get this level of understanding and detail without modeling. It's fabulous. And I think it's a large part of our success in putting together a good software architecture and a quality design. I wouldn't attempt it without Rose."

Complexity of requirements is one facet of this project; another is technical complexity. In addition to Rational Rose, Rational SoDA, and Microsoft Word, the project relies on Java, CORBA (VisiBroker), and the UML, and the system must integrate with PeopleSoft, which handles Carey's general ledger and accounts payable and receivable business. The system architecture is a PC client, distributed over a wide area network using CORBA as the communication protocol. Application servers implement both the business services and data access. Oracle is the centralized database repository and is accessed using TOPLink, an OO-relational mapping tool from The Object People. TOPLink provides advanced OO-RDBMS mapping facilities as well as runtime transaction control and object caching.

"One of the interesting things we've done technically," Raal explains, "is that we've made our business objects available on both the client and the server. This dramatically increases performance and reduces network traffic. One of the problems with web or thin client systems is that you're going across the network all the time. You enter a form, it goes across the network, validates it, and it comes back. We serialize business objects to the client and have the code available on the client side to provide rapid feedback to the user. We didn't think we could take reservations quickly if every business rule had to be checked across the network. We had to come up with a kind of distributed client/server architecture that centralizes business rules and business objects in the software design, and yet allows those rules to be executed on the client. Rose has really helped us keep all this straight. In fact, we have a Rose script that generates the framework for our business objects to achieve this goal."
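The pattern Raal describes (the same business object, carrying its validation rules, present on both client and server) might look roughly like this in Java. The CreditCard class and its rule are hypothetical stand-ins for the generated business-object frameworks he mentions:

```java
import java.io.Serializable;

// Hypothetical business object illustrating the pattern described above:
// because the class implements Serializable, the server can ship instances
// to the client, where the same validation rule runs locally for instant
// feedback instead of a network round trip.
class CreditCard implements Serializable {
    private final String number;
    private final int expiryYear;

    CreditCard(String number, int expiryYear) {
        this.number = number;
        this.expiryYear = expiryYear;
    }

    // The identical check runs on both tiers: client-side for rapid
    // feedback, server-side as the authoritative validation.
    boolean isValid(int currentYear) {
        return number != null
                && number.length() >= 13
                && expiryYear >= currentYear;
    }
}
```

Shipping the object rather than just its data keeps each rule in one place in the model: a change to isValid reaches both tiers from a single class.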

Rapid Delivery — Excellent Quality

One of the benefits achieved through the use of Rose comes in the area of reuse. The system is really three separate efforts combined into one major project. Reservations, Dispatch, and End of Job and Billing are all captured in the same Rose model. The team works off this centralized business object model to leverage previous work and ensure consistency.

"One of the great things about this project," Raal states, "is the foundation layer with all the re-usable services and patterns. We've got persistence, event services, object distribution, report service, cache management, security ... they're all reusable packages or subsystems that will spread across the entire project. We're working just on the reservations functionality now, and are building a foundation level infrastructure that everyone can reuse. An important by-product of visual modeling using Rose is the ability to harvest and communicate designs. Another Cambridge team can plug portions of our model into their Rose model, and they immediately have access to all our classes, interfaces, and diagrams. They can now browse the Rose model to understand how it all works. It's a very powerful tool for reuse throughout the enterprise. And everyone loves using Rose, especially with hot technology like Java, CORBA and the UML."

Additional benefits to using Rose have come to Cambridge Technology as a consulting firm. Cambridge, unlike the majority of their competitors, works on a fixed price/fixed time schedule. So it's crucial to their profitability and their reputation to get this project done on time. "If we significantly underestimate a project or phase, then we pick up the tab for it," Raal explains, "so we're under very significant time and scheduling cost pressures. All that we've done with Rose, all the scripts, modeling and code generation, we're not doing it because we have a tremendous amount of time to spare. We're doing it because without the scripts and without Rose, we couldn't develop the system anywhere near as fast. I think visual modeling is a must for a complex system on a tight deadline. Our approach is to model everything in Rose. Model our system using UML, contain it, understand it in Rose, generate code from it, and generate documents from it. That increases our efficiency to such a level that we can rapidly deliver good projects with excellent quality. My opinion is go Rational, or don't go at all."

In early August, the first phase of development for the reservation system was completed. Final code count was 215,000 lines of commented code. There are 114 screens and dialogs supported by over 270 business objects. The end-users describe the system as the "Mercedes Benz" of applications. Qualitatively, the software is stable, robust, and easy to maintain. Of the 600 bugs detected during internal integration testing, only 50 or 60 remain open. Bugs are easy to analyze and fix, and new bugs are seldom introduced as old ones are fixed. This indicates a solid design and architecture of the system, which will provide high ROI for many years.

The Cambridge team is now turning their attention to the Dispatch, End of Job and Billing phases of the project — building off of the functionality and Rose models of the reservations application.

With all this reliability, the next time you book a limousine for a special occasion, or a car to pick up an important client at the airport, you might want to call Carey. Thanks to Cambridge Technology (with a little help from Rational Software), you'll never end up like poor Dorothy Parker, endlessly waiting for a perfect limousine that never arrives.

For more information on Cambridge Technology Partners, please visit http://www.ctp.com. For more information on Carey International, please visit http://www.careyint.com.

About the Author

Lisa Dornell is Editor of Rose Architect. She likes limousines and hates referring to herself in the third person. When not working on the magazine, she inhabits a postcard-wallpapered cubicle at Rational Software where she writes marketing material and exquisite e-mails. She believes there's a novel in the making there. Book contracts and fan mail can be sent to [email protected].

Copyright © 1999 Rational Software and Miller Freeman, Inc., a United News & Media company.

Publisher's Note

Magnus Christerson

Magnus Christerson, Director of Product Management for Rose Visual Modeling Products.

Now that we've all survived the turnover to the year 2000, it's time to start looking toward the future. And, according to a recent survey in Software Development Magazine, the future is in Object-Oriented Development and Modeling and Design. The November 1999 edition of SD printed the opinions of nearly 4000 readers, visitors to the SD web site, and attendees at the Software Development Conference in San Francisco, who responded to a variety of industry-related questions. According to the survey, the most crucial skills needed today are Object-Oriented Development skills, and Modeling and Design skills. The skills most people voted as being crucial in five years? Modeling and Design skills, and Object-Oriented Development skills.

To those of us at Rose Architect, and to you, the Rose community, this is probably old news. We already know the importance of OO development and the difference that modeling can make to the quality, reliability, and predictability of our software projects. That's why this, and every issue of Rose Architect, is dedicated to furthering your skills in these crucial areas by bringing you articles designed to help you develop better software, faster.

This time around we are focusing a little more on the technical and implementation side of modeling. Our Amigo Page is by Jim Rumbaugh, who shares with us his insight into building complex systems. And instead of an Inside Rose column, this issue features an Inside the UML column by guest contributor D.J. Supel of Logicon Advanced Technologies: The UML C++ Implementation Model. We also feature the first in a series of articles by Dr. Al Major on COM and UML.

Darren Pulsipher and Christian Buckley of QOSES once again continue to enlighten and entertain by introducing us to patterns, courtesy of the digital sweater (you'll have to read the article to understand). Steve Franklin of MacDonald Dettwiler and Associates helps us fine tune Rose, and I am joined in Magnus Opus by my colleague Wojtek Kozaczynski for an article on architecture practice and reuse. Also in this issue is an integration focus on RequisitePro, and articles by regular contributors John Hsia, Nasser Kettani, and Lisa Dornell.

Also, if you've used Rose and/or the UML in an interesting way and would like to publish an article in Rose Architect, please let us know. We'd love to feature your work in future issues. Just send a note to [email protected].

All the best,

Magnus Christerson, Publisher



Magnus Opus

Wojtek Kozaczynski and Magnus Christerson

Magnus Christerson, Director of Product Management for Rose Visual Modeling Products.

Wojtek (Voytek) Kozaczynski is the Director of the Architecture Practice at Rational. The practice is a combination of consulting, mentoring, education, and reusable assets development services specializing in software architectures. Wojtek has 20 years of industrial and academic experience in building software. He can be reached at [email protected].

Rose, Rational's Architecture Practice, and Architecture Reuse

The three key principles of modern software development processes, and of the Rational Unified Process (RUP) in particular, are iterative development, architecture-centric development, and a use case-driven process. Producing a robust architecture is a major milestone dividing the software lifecycle into two stages: the Engineering Stage, dominated by analysis, discovery, and design; and the Production Stage, dominated by code development, integration, and testing.

As modern development processes are accepted and adopted by the software industry, Rational has seen rapidly growing demand for services and products in the broadly defined area of architecture. To meet these demands, Rational created an Architecture Practice.

The Architecture Practice can be best described as an initiative coordinating and facilitating different architecture-related activities. The practice is facilitated by a small group called, quite appropriately, the Architecture Practice Group. This article briefly introduces the practice and its objectives and then concentrates on what role tools in general, and Rose in particular, play with respect to architectures and architecture asset reuse.

The Rational Architecture Practice

The practice has three facets:

1. Education concentrates on creation and delivery of both open-enrollment and customer-specific education. Rational offers a course on the Principles of Architecting Software Systems. The course covers a broad spectrum of topics such as definitions, architecture representations, benefits, stakeholders, deliverables, process, reusable assets, etc.

Also popular is a Customizable Process-centric Workshop created primarily for teams wanting to compare their architecture practices with the recommended best practices.

2. Services offers prepackaged services, architecture mentoring, and consulting. The prepackaged services are the Architecture Capability Assessment and the Architecture Review. The assessment looks at a company's readiness and ability to architect complex software systems in its specific domain. The review looks at a specific system and evaluates its architecture against the system's requirements, quality attributes, and development risks. Both packages are structured very much as self-assessments, with evaluations facilitated, coordinated, and advised by Rational.

3. Creation/collection, management, and reuse of architecture assets. The above aspects of the practice lay the foundation for its third and most promising facet: the creation of processes, tools, and assets for reusing architecture designs.

Due to the immaturity of the domain, a false belief in protecting companies' competitiveness, and a lack of strong market leadership, architects work in isolation, learning and reinventing what otherwise should have been captured as canonical, reusable architecture designs. For example, there are no standards for architecting banking systems, yet all of them look about the same. The same is happening right in front of our eyes in the e-business space. An incredible amount of creative energy is going into unnecessary "reinvention of the wheel". The creation, maintenance, and reuse of architecture assets is where tools will play a significant role.

Reusing Architectural Designs

In order to be able to reuse architecture designs one must answer the following questions:

1. What are the reusable architecture designs?

2. Can they be generalized, specified, and captured using tools?

3. How can the reusable designs be stored, maintained, and shared?

4. How can generalized reusable designs be selected, combined, and specialized to produce specific system designs?

Let us try to answer these questions one at a time.

In order to answer the first question we have to agree, at least in principle, on what architecture is. The architecture course mentioned above has an entire module devoted to this very subject, and it takes over an hour to deliver it. We don't have the space to discuss all the nuances of architecture, so we'll use the following definition, which should be suitable for the purpose of our discussion: "Architecture is that part of the system design which captures the significant decisions about the organization of the system, the selection of its structural elements and their interfaces, the collaborations among these elements, and the composition of the structural and behavioral elements into larger subsystems."

So what are the design artifacts that either capture the significant structural or behavioral elements of a system or provide significant input to making architecture design decisions? Our list is as follows:

● Domain Models: Domain models capture the key concepts of the domain for which software is built. These concepts are then represented either as static or dynamic elements of the system design. Precise definition of domain concepts is the first step in agreeing on a canonical architecture for a family of systems.

● Use cases: A set of use cases captures the system's behavior from the external, user perspective. Similar to a domain model, an agreement on what is the "typical" system behavior forms a foundation of, and a major input to, its architecture.

● Collaborations: A collaboration is a grouping of system elements that collectively perform a meaningful function. Collaborations are used to represent use case realizations and mechanisms.

● Subsystems: By subsystem we mean a "design subsystem" as defined by RUP. A design subsystem is a modeling element that inherits from both the class and the package. It is a grouping of elements (including subsystems) that have an external behavior defined by interfaces. Due to its package heritage, a subsystem is opaque from the outside. Although it may contain collaborations, it is different from a collaboration — a collaboration can group elements from different subsystems. Subsystems are the fundamental architecture building blocks of a system.

● Components: Components are the packaging construct for the final application. They control how the application can be delivered and distributed.

● Interfaces: An interface is a contract that can be met by many architectural elements. In a sense, it is the most general architectural expression.

● Mechanisms: A mechanism describes a general, horizontal capability of a system. A list of typical mechanisms would include persistence, error handling, event management, security, etc. Mechanisms are very important architecture concepts since they provide an abstraction of the capabilities of the lower (architectural) layers of a system.

● Data Models: Data models include not only the persistent data structures and their dependencies, but also information about database partitioning. In enterprise systems in particular, they are a dominant element of system architecture (sometimes referred to as "data architecture" or "information architecture").

● Deployment and Process Models: The above design artifacts all describe the functional aspects of a system structure. The deployment and process models, on the other hand, capture the operational aspects; that is, the physical nodes of the system, the allocation of processes, threads, and other resource managers (like containers) to the nodes, and the allocation of functional elements to these resource managers.

A careful reader may be surprised that we have not included architecture patterns in our list above. This is for two reasons. First, we believe that patterns have already been applied to derive a canonical architecture. Second, parameterizable (or abstract) collaborations are a way of describing pattern structures, and those are already on our list.

Which of the above artifacts can be generalized? Most of them. Particularly useful from the architecture point of view are abstract use-cases and abstract (or parameterizable) collaborations.

How many of the artifacts can be captured formally in a tool? All of them, although there are still discussions on their exact representations and semantics.

Do we know how to store, maintain, and share reusable architecture assets? The first answer should be "yes". There has been a large body of work done in the reuse community on reusable asset libraries, asset classification, etc. However, there are no good examples of successful, general-purpose, and generally available reusable architecture asset libraries (as opposed to class or component libraries). The main reasons for this seem to be lack of market leadership, immaturity of the domain, and narrow scope of the previous attempts.

Do we know how to select, combine, and specialize generalized architecture assets to produce an architecture for a system? The honest answer is "not yet". This is the most challenging aspect of the entire architecture reuse puzzle.

Architecture Reuse and Rose

What is the role of tools, and of Rose in particular, in architecting systems? What is the role of Rose in architecture asset creation and reuse? The answer is simple: one need only look at the list of artifacts above to realize that all of them are Rose artifacts. (In previous issues we have described how these constructs can be described in Rose.)

Many of you use these constructs, but usually to describe a particular system. Unfortunately, the specifics make the constructs unusable in more than one application. A growing number of our customers recognize this and are taking active measures to allow at least some of these artifacts to become more reusable at the architectural level. We see more and more use of concepts like subsystems and interfaces to enable more robust and reusable architectures.

We are currently working with a set of customers to advance this practice much further. By capturing architectures and architectural patterns for reuse in more than one application, we start to learn how these pieces of knowledge can be captured and reused over and over again. Over time we will be adding more and more of these reuse capabilities into the product. In Rose 2000 you start to see some of this. If you are a VC++ developer using ATL and COM, you can apply patterns to create COM ATL objects. If you are a Java developer doing EJB development, you can see similar mechanisms by applying EJB patterns in the Rose add-on product jointly developed by Rational Software and Inline Software.

These two examples represent a new generation of features in our Rose product development where, by applying specific architectural knowledge to a model, Rose can assist in developing the model for you and make sure that the right architectural code gets generated. In a sense, you can build and assemble a running application by reusing architectural knowledge. This will not only accelerate your own individual development, but also ensure that your team is designing by reusing common architectural mechanisms. Within Rational, we call this initiative Executable Architectures. It means you can get an architecture that is also executable by reusing a set of architectural knowledge from a specific domain. This reuse is not only applicable to a technology domain, like the COM ATL and EJB examples, but can also be applied at a more generic level, like the banking systems we mentioned initially.

Editor's note: For more information on Rational's Architecture Practice please send a message to [email protected].



Editor's E-mail

Terry Quatrani

Terry Quatrani is the author of the best-selling book Visual Modeling with Rational Rose and UML.

Dear Terry,

Hi, I've been reading various sections of your web site about how to build Rose Add-Ins. The following link points to a zip file that, apparently, contains an example oriented approach to building a Rose add-in: http://www.rosearchitect.com/db_area/RAArticle.zip. I am unable to open the zip file. I am fairly sure I've downloaded the entire file... the zip file appears to be corrupt ... pkzipfix is unable to fix the file. Please could you look into this? If you are able to fix this, please could you be so kind as to mail me and point me to the fixed zip file. Thanks.

Regards, Stuart

Dear Stuart,

Thank you for bringing this to our attention. The file at that location was indeed corrupted. This has been corrected and all should be fine. TQ

Hello Terry,

I have been teaching object-oriented systems analysis and design for two years, and before that I taught structured analysis and design. I like your book Visual Modeling with Rational Rose and UML and I am using it to help structure my new tutorials in the computer labs with the students. I would like to know why you use the unidirectional association to represent the relationship between an actor and a use case, as opposed to the regular <<communicates>> association (see Figures 3.9 and 3.10 in your book).

Regards, David Moss

Dear David,

I have always liked to use a unidirectional association in my use case diagrams to show where the use case is started. To me, this is a nice visualization to have. Remember that information may flow in both directions, so even though most of the information may flow from a use case to an actor, an actor still must start the use case. Hope this helps. Happy modeling, TQ

Dear Terry,

There are times that I would like to move from one diagram to another diagram in my model. Is this possible with Rational Rose?

Mark McGowan

Dear Mark,

I also find it nice to be able to move from diagram to diagram in my model, and the capability to link diagrams is built into Rational Rose. Here's how it's done:

Step 1: Create a note on the originating diagram.

Step 2: Select the successor diagram in the Browser and drag it onto the note.

Now, to move to the linked diagram, simply double-click on the diagram in the note. Happy Linking, TQ



Rose Around the World

Nasser Kettani

After 13 years working on several projects in different domain areas, Nasser Kettani joined Rational as a senior consultant, helping customers implement effective software development processes and technologies. He is the Marketing Manager of Rational Software France. He recently co-authored the book De Merise à UML.

We Have Buried the Millennium Bug, the Next One is an Intentional Fault

Good news: by the time you get this issue of Rose Architect in hand and read this article, humanity will have reached the new millennium. All this "Y2K, Millennium Bug mass hysteria" is hopefully behind us. (This is really optimistic, since at the time I am writing this, I am betting that nothing really critical will happen, only a few humorous stories that we will see on the Internet.)

Much has been said, and still is to be said, about the "millennium bug". So let's agree on the following few evident lessons learned from this interesting experience:

● While software has long been considered a commodity, it is now widely accepted that software is the cornerstone of our economies. Almost everything depends on software and in some manner runs on it. This means that we need to take software (and software development) more seriously than we have before.

● The software lifecycle starts when somebody gets an idea and ends when nobody uses the system anymore, and the software undergoes several changes during that cycle. We thought for a long time that the software lifecycle ends when we deliver the software to end users, and that what follows is just a maintenance effort. We also thought that some of the systems we built would last only a few years, forgetting that as long as they were working, there was no reason for us to change them. This is clearly not the case; the change frequency of actual software systems is weekly, if not daily.

● Though technology evolves at high speed, we need to deal with existing assets for a long time. Most of the systems that we build today are connected in some form to legacy systems, and new technology alone is no reason to throw away existing software assets, given the time, energy, and high costs required to build new software systems.

The bad news is that there is yet another "date-related bug". Some computer systems will face exactly this problem in the next few years. And there are many other similar situations that will challenge us. The millennium bug is not unique. Hopefully, we will be well prepared to deal with these situations.

Let's change that

"The Web changes everything, the Web does not change anything."

— Grady Booch, 1997.

The Web changes everything

The Internet was probably the last major IT move of the past decade. Its promises are important, and it is clear now that the new economies are increasingly dependent on it. Most companies worldwide are now moving to some form of Internet-based business (extranets, intranets, telecommunications, Palm Pilots, fridges with Web browsers, cars with Web browsers...).

It is evident that the technologies we are using (component models, distributed computing, OO programming languages) have changed in the last ten years. This is the natural, constant movement of technology. These technologies provide unique opportunities to build the sophisticated systems that today's economies depend on. They allow new types of architectures, more interconnectivity, more distribution, and so on.

The Web does not change anything

However, the Internet did not change the fundamental problem of software development, which is only getting more complex. Using sophisticated programming languages, middleware, and databases does not by itself ensure that software projects succeed, stay on schedule, implement the business requirements, and achieve the required level of quality.

Software developers are dealing with the software paradox: that is, they need to simultaneously accelerate time to market while improving reliability and quality.

Only proven software engineering best practices will help companies develop the software systems they depend on, and leverage the software technologies they use to build these sophisticated systems.

This is where Visual Modeling and the UML play a major role in helping software designers to:

● Master complexity

● Make important decisions explicit

● Communicate with the overall team

● Continuously evolve the systems

Web development is not just a bunch of HTML pages; it raises real, critical architectural problems that must be addressed:

● Reliability: the e-system should not crash in the face of a customer

● Performance: the e-system should not crash when several thousand customers are using it

● Resilience: the e-system changes every day (and not every year) to accommodate business requirements

Let's avoid the next major fault

The millennium bug should be considered by all (business managers and software engineers) as a revelation and an indicator that IT organizations need to treat software construction as a major, critical business process. Let's apply software engineering best practices and not implement the same bug again. Otherwise, the next such bug should be considered an intentional fault.


Copyright © 1999 Rational Software and Miller Freeman, Inc., a United News & Media company.

Amigo Page

Jim Rumbaugh

Dr. James Rumbaugh is one of the leading software development methodologists in the world. Along with Rational colleagues Grady Booch and Ivar Jacobson, Jim developed the Unified Modeling Language (UML). Jim has a B.S. in physics from MIT, an M.S. in astronomy from Caltech, and a Ph.D. in computer science from MIT.

Bridging the Gap

Building Complex Systems by Leveling and Layering

Using Modules

No one can understand a system with a million parts directly. There are too many potential interactions to consider. The only effective approach to building complex systems that humans have invented over thousands of years is to build a system from a few large modules, then build each of those modules from a few medium modules, and so on until the smallest modules can be constructed directly from existing parts. This approach goes under many names: divide-and-conquer, hierarchical decomposition, and recursive structure, among others.

The problem is that the complexity of a system increases faster than the number of elements (Figure 1). If we consider the number of potential connections among elements, it grows as the square. If we consider the number of possible states of the elements in the system, it grows exponentially. So it is fairly easy to build a system directly out of three or four parts, possible to build one out of ten or twenty parts, very difficult from a hundred, and impossible from a thousand. (In this discussion, I am referring to different kinds of parts or parts in different relationships to the system, not arrays of identical parts, which are easy to construct but don't provide complex behavior unless they are used in unique ways, which makes them essentially different.)

But if we build ten small modules of ten elements each, and then build one larger system from the ten modules, we have a hundred-element system for the design cost of 11 ten-element systems, many orders of magnitude simpler to design. Figure 2 shows a system built from five modules of five elements each. Such a system has two layers — the system layer, in which the system is constructed out of modules, and the subsystem layer, in which each of the five modules is constructed out of basic elements, whatever they happen to be — Java™ statements, transistors, or human workers. The process can obviously be repeated using more than two layers. We end up with a tree or a partial order graph (Figure 3).

What's the trick? We just made a large system with many parts without much work. Well, there is a catch. The parts within different subsystems can't interact directly, otherwise the subsystems really have many elements each and become difficult to understand. Furthermore, the subsystems have to behave as units when they interact with other subsystems, otherwise the top level doesn't really work. These are very stringent restrictions, but they are exactly what is needed to make complex design of any kind possible. We restrict the structure of a system by forcing interactions into limited, regular patterns. Put in other words, each subsystem must encapsulate its contents and must have a well-defined interface toward other subsystems. The encapsulation allows us to consider the structure of each subsystem independently, because its parts are not affected by the contents of the other subsystems. The interfaces allow us to build the top-level system by considering each subsystem as a single entity and ignoring its parts.

Consider driving a car. A car is a fairly simple device to operate — four basic elements for controlling motion (steering, accelerator, brake, shift) with a few other auxiliary controls thrown in, such as ignition switch, lights, windshield wipers, ventilation, and so on. Most persons can drive a car, but they couldn't drive four cars at once. (If you think you can do it, I'd like to see you prove it using four Nintendo controllers and a racing game.) But most drivers can maneuver on a freeway containing scores of other cars. It works because we treat the other cars as units and only think about the nearer ones individually. We don't think about the accelerator pedals or shift levers in other cars. We think of the other cars as units that can turn, speed up, slow down, tailgate, and so on, and just worry about the controls of our own car. We drive using a hierarchical model.

Unfortunately, many modules are not completely isolated. It is difficult to prevent all interaction among the parts of different modules. Sometimes they are "mostly" independent, but the encapsulation shells leak slightly. We can usually live with this situation if the leaks are small, compared to the main interactions among the subsystems. We can construct a system as if the subsystems were encapsulated, and then make minor adjustments to account for the encapsulation violations. For example, Isaac Newton analyzed the gravitational interaction between two bodies and obtained a closed-form equation for their motion. It is impossible, however, to derive a closed-form equation for more than two bodies (the so-called n-body problem). But the solar system consists of the sun and nine planets, plus many smaller bodies. The sun is much larger than anything else in the solar system, however, and its gravitation dominates the motion of the planets. Astronomers analyze the movement of each planet around the sun independently to get a close approximation to the actual motion of the planets. Then, based on this approximation, they calculate the effect of each planet on the others, and correct the original approximation.

This is called perturbation analysis — you compute an approximate solution and then perturb it with adjustments — and it has been used to obtain good approximations to many physics problems that are not directly solvable. It is an example of breaking a complex subsystem with many interacting parts into subsystems that are independent (the sun and each planet), but in this case they are not completely independent. Since the interactions among subsystems are small compared to the behavior within each subsystem, we can perform the original computation and then adjust for the fact that the subsystems are not completely independent. Complete encapsulation is desirable, but we can often live with some leakage if it is small and well understood. In a software system, such leaks increase the number of interfaces of a module or the complexity of the interfaces. If the interactions are large compared to the internal behavior, however, the technique breaks down and the system remains intractable.

Coupling and Cohesion

Over 20 years ago, Larry Constantine proposed words for the ways that the modules within a system interact. Coupling is the connectedness of different modules. If you must understand most of module B to understand module A, then A and B are highly coupled. If you need to understand only a bit of module B, they are loosely coupled. If you don't need to know anything about module B to understand module A (and vice versa), they are uncoupled. Obviously things are easier to understand if most modules are uncoupled or loosely coupled. Coupling can be shown as a UML dependency relationship (Figure 4).

Cohesion is the functional relatedness of the elements of a module. If everything in module A works to carry out a single purpose, then A is cohesive. If the elements of A do several unrelated things, then A is not cohesive. The more different things a module does and the less they are related, the less cohesive it is. Cohesion makes a module easier to use within a larger module, because its purpose is clear. If it lacks cohesion, someone examining its use within a larger module may not understand why it is there. Cohesion cannot usually be seen directly on a model, because it depends on the meaning of the parts of a module and not on their structure (Figure 5).

I often use a couple of catch phrases to describe these design principles. "Minimize dependencies" means to avoid unnecessary dependencies in a system to minimize coupling. "Do one thing well" means to keep a module focused on a single goal to keep it cohesive. Break incohesive modules up into smaller, cohesive modules.

Leveling and Layering

There are two different ways in which a module can be built from other modules. I will use the words leveling and layering. Each of these words has been used in the way that I describe, but I am using them together to make a distinction that has not usually been made explicit. I think it is a useful distinction, and these words will serve as well as any.

The term leveling has been used to mean hierarchically expanding a high-level element into finer pieces of narrower scope and adding detail to the pieces. A level corresponds to a depth in the hierarchy. Each level provides a view of a system using elements of a particular granularity. For example, an automobile can be expanded into a frame, propulsion system, electrical system, ventilation system, and skin. The propulsion system in turn can be expanded into a power generation system and a power transmission system. The power transmission system can be expanded into a drive shaft, transmission, universal joint, and so on. At each level of detail, we understand the car at a given precision, but we can always go to the next level of detail to learn more. This leveling structure allows the detail of the system to be matched to the need. We don't have to think of a car as an assembly of thousands of basic parts. Leveling corresponds to UML composition. The elements in the composition are at different levels, but they are all in the same semantic domain.

The term layering has been used to mean implementing an element out of elements with a different semantic basis. Each layer represents a consistent semantic system with its own syntax, rules, and meaning. Elements from two different layers are not compatible; they represent different views of the world that cannot be mixed together. For example, biological structures, such as nerves or muscles, can be explained in terms of chemical interactions. Chemical interactions can be explained in terms of forces between electrons. Forces between electrons can be explained in terms of unified field theories. Each layer provides a comprehensive description of the world, but going from one layer to another represents a shift in viewpoint. Each layer provides meaning of a different kind. You can view the world as a collection of Feynman events among fundamental particles, but this description is not of much help in understanding the biology of the heart, for example. It's not that the quantum physics description is wrong — it happens to be the most accurate description in all of science — but it doesn't provide insight at the biological level. In going to higher layers, we discover emergent behavior — meaning that cannot be reduced to lower layers, even though it is constructed out of the lower layers. The emergent concepts are usually the key to understanding a given layer.

Layering represents a semantic shift in meaning. To make this shift, a computer system requires an interpreter or a compiler. For example, a C++ program is compiled into machine instructions. The C++ program and the machine language object module imply the same result, but they are expressed at different levels. Meaning has been lost in the compilation process. The program works in terms of functions and variables, while the machine code just pushes bits. Machine instructions, in turn, are interpreted by the run time processor, which converts them into commands to various subunits, such as adders or memory units. These commands, in turn, are implemented by electrical networks of transistors. The transistors, in turn, are implemented by electric fields applied to complex materials. Each layer has its own semantic meaning, but elements from different layers have no meaning together.

Each layer is often called a virtual machine because it provides the infrastructure for the higher layer.

Both layering and leveling are needed to construct complex systems. Leveling reduces the size of the functionality of a system into finer and finer pieces until functional units are small and simple. You could take a requirements list and check off the requirements against the set of bottom-level elements. Leveling reduces complex functionality by breaking it up.

Layering reduces the level of abstraction of the language of expression to a more basic language that is easier to physically construct. It provides the infrastructure for implementing functionality. But you can't take a list of functional requirements and check them off against elements from a lower layer. Layering usually cuts across functionality. Several implementation mechanisms may support a module from a higher layer, and one implementation mechanism may support several unrelated modules. For example, the loop construct and a container class in a programming language support many different domain-level semantic constructs.

How does layering show up in UML? In some cases, it doesn't appear at all. We usually don't model the Java language or the machine instruction set, but just take them for granted. In other cases, however, layering is modeled, particularly in representing implementation mechanisms and other aspects of system architecture that are implemented as part of the system infrastructure. Consider a container class, for example. It is a mechanism, not an element of a problem domain. Container classes should not appear in logical models. You can build applications using container classes, but they don't make sense in a problem domain. You use them to provide mechanisms to implement algorithms. This is an example of layering rather than leveling. It can be modeled using the UML realize relationship. The realize relationship represents a shift in semantic layer.

To summarize: leveling is expansion of detail within a semantic domain. It permits us to understand a system by treating it as being composed of a few pieces, and then expanding the functionality of each of the pieces in turn. You expect to see a tree or a partial order graph with a moderate fan-out at each node. It is somewhat arbitrary how to group the elements, because they all exist at the same level of meaning. The goal is to balance the levels so that the elements at a level are all at the same granularity. The levels may have names (such as chapter, section, and paragraph), but they aren't rigid, and you can move an element from level to level. You could expand all the levels of the tree to get a flat system, and it would still make sense, although you probably couldn't comprehend all of its consequences. Leveling is modeled by UML composition.

Layering is a shift of meaning, the implementation of one virtual world in terms of another, presumably more fundamental level. The layers are rigid — you can't move elements from one layer to another layer, because each layer has its own unique meaning and interpretation. Domain-level functionality gets sliced and diffused across layers, so it is not useful to trace functional requirements across layers. Layering is modeled by realization. Layering is not needed in logical analysis models that are entirely in the problem domain, because they don't deal with implementation.

The Nature of the Design Task

Figure 6 shows the essence of the design task. There is a set of user-level features that you want your system to achieve. You have a set of existing facilities out of which to build the system. Think of the distance between them as a gap. Your job is to build a bridge across the gap. Each span of the bridge can represent layering or leveling, but each span has a limited length.

If each feature can be directly constructed from the existing facilities, you are done. For example, a salesman can use a spreadsheet to construct a formula for his commission based on various assumptions. The existing facilities are a good match for the task.

But usually it's not so easy. Suppose you want to build a Web-based ordering system. Now it's not so easy to build the system from a spreadsheet or a programming language. The features are at a much higher level than the facilities. You want to do complicated things and have a tailored user interface. You can't build each feature directly in terms of the facilities. The gap is too big. You must invent some intermediate elements to bridge the gap, so that each element can be implemented in terms of a few elements at the next lower level (Figure 7). The intermediate elements may be operations, classes, components, or other UML elements.

But the intermediate operations don't pop out of the air. There are usually many ways to break up a high-level operation into parts. Bridging the gap requires you to guess at a likely set of intermediate operations, then see if you can implement them easily. You also hope to reuse the lower-level operations to implement more than one higher-level operation. But there is no guarantee that this will happen. It takes good intuition to come up with a good initial set, and then iterative adjustment to make the fit among the various elements better. Often you find that the intermediate operations are similar but not identical. You need to try to come up with a single operation that will suffice, then go back to the previous level and reimplement the higher-level operations in terms of the new common operation. Sometimes this is less than ideal for one or more of the higher-level operations. You have to compromise, and sometimes use a general purpose facility when a custom facility would be simpler. You have to make trade-offs. A good system architecture does not optimize each separate decision. It optimizes the design of the entire system. Sometimes that means using choices that are locally suboptimal to avoid proliferation of special cases.

This trade-off occurs in every domain. Electrical devices do not all run ideally on 110 or 220 volts (depending on your country). Some would work better on 85 volts or 147 volts. But appliance designers understand that people don't want special power supplies for each appliance in their houses, so they design to the existing standards. Maybe the power supply has to be a bit larger than optimal. Standards always involve this kind of compromise to a suboptimal solution in the individual case. What we gain is a more optimal overall solution.

Sometimes the intermediate elements have already been built, in which case you can use them, but the principle of bridging the gap is the same. Even if they already exist, in a framework or a class library, you still have to find them, select them, and fit them together. The problem isn't making up individual modules — anybody can do that well. The problem is to fit the entire system together — to bridge the gap cleanly.

Design is difficult because it is not a pure analytic task. You can't analyze system requirements and derive the ideal system. The essence of design is bridging the gap: there are far too many possible choices of intermediate elements to try them all, so the engineer must apply heuristics to narrow the choice. Design requires synthesis: you have to invent new intermediate elements and try to fit them together in a successful manner. It is a creative task, like solving puzzles, proving theorems, playing chess, building bridges, or writing symphonies. You can't expect to push a button or follow a recipe and automatically get a design. A development process provides a lot of useful guidance, just as chess books and engineering handbooks and music theory courses help, but eventually it takes an act of creativity to bridge the gap in any field of engineering or art.

Top Down and Bottom Up

How should we approach the task of bridging the gap? Should a system be designed top down or bottom up? In other words, do we start with what we want to achieve, break it into modules and connect them to achieve it, and then drop down to implement the modules? Or do we start with existing implementation elements, put them together into modules, and work upward until we build the desired system? Both approaches have been used effectively many times. Or do we work from both sides at once, like workers boring a tunnel through a mountain? That may also work, provided they actually meet in the middle, which requires careful planning.

Top-down decomposition works well for leveling functionality. Leveling divides and distributes functionality among submodules, so starting with a high-level function and breaking it into cohesive pieces often works well for covering the overall functionality. Top-down design doesn't necessarily work so well for layering. It is not so obvious which mechanisms would best serve to implement a system, especially since the mechanisms have to be implementable themselves in terms of existing lower layers. Layering often works best bottom up, because it is usually clear what kinds of mechanisms can be easily implemented using an existing lower layer. So top down seems to work best for decomposing functionality, but bottom up seems to work best for building implementations. On real systems you need to use both leveling and layering. Who said that life was supposed to be easy?

In most cases, you can't build a design in just one pass, whether you work top down or bottom up. If you work top down, for example, you have to divide up the functionality at each level into subsystems. You have to do this in such a way that the pieces within each of the subsystems don't interact directly, or at least not very much. That takes guesswork. Sometimes you aren't so lucky. You break up a system in a way that appears clean, but the lower-level pieces interact too much across subsystem boundaries. Then you need to go back and refactor the higher level. If you used good judgment in the first place, you probably won't have to change everything, but that depends on your skill as an architect. But even a good architect doesn't get everything right the first time, so don't be afraid to iterate the architecture. Deriving a good architecture requires iterative development and a willingness to occasionally go back and revisit a higher-level decision. The same principles apply if you work bottom up. You have to make some decisions before you know all their consequences, and then be prepared to modify them as you learn more. Iterative development is the only rational approach.

History

The ideas in this article are not original — they have been discovered repeatedly in many different fields of human creativity. The concept of hierarchical systems probably dates back to Mesopotamia. Large empires learned that layers of bureaucracy could do what a single monarch could not accomplish alone. Hierarchical systems come from the early, but not the earliest, days of computing. Functional decomposition is an old design approach. Tom DeMarco used the term leveling to describe balanced hierarchical data flow charts. The principles of coupling and cohesion were expounded by Constantine, based on analogs from other engineering fields. The concept of layers of virtual machines starts with the work of Turing and Von Neumann, but was gradually extended to allow multiple layers.



Rose 101

Matt Terski

Matt Terski is a Software Engineer on the Rose development team. His professional interests include distributed computing, visual modeling, and COM programming in ATL. He can be reached at mterski@rational.com.

Building ATL Components

C++ is the lingua franca of COM. But COM development in C++ can be laborious, so many developers turn to the Active Template Library (ATL). ATL provides a set of classes, wrappers, and wizards that perform the toilsome but necessary tasks required to build COM components. Rational Rose 2000 extends the power of ATL by giving developers the ability to perform round-trip-engineering with their ATL-based projects.

In this article, I'll help you get your feet wet with the new ATL features of Rose by building a component that implements a simple lookup table. The lookup table allows clients to insert, remove, and fetch named elements.

Creating the Interface

In COM, design begins with interfaces, so I'll begin my design by creating an interface for the lookup table. But before I get started, I'll set the default language in Rose to Visual C++ (Tools->Options->Notation). This ensures Rose will set the language model property to Visual C++ on any newly created model elements. To create the interface, I'll insert a new Interface onto a class diagram in Rose and name it ILookup.

I want to give clients of my lookup table the ability to insert and remove elements. To do this, I'll add two methods to the interface with the help of the Visual C++ add-in (right-click the interface->COM->Add Method). I could have created the methods by simply adding operations to my interface, but using the VC++ add-in defaults the methods' return type to HRESULT and parses any IDL keywords in the argument specification. The add-in also lets me add additional IDL attributes for the method. Experienced ATL developers will notice that the 'add method' dialog looks similar to ATL's 'add method' dialog in Visual C++ (Figure 1).
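For readers who haven't used the add-in, the resulting IDL looks roughly like this. The method names, parameter types, and uuid below are illustrative guesses, not the wizard's actual output; Rose and ATL generate a real GUID and whatever signatures you specify:

```idl
import "unknwn.idl";

// Illustrative sketch of the IDL produced for ILookup.
// The uuid here is a placeholder, not a generated GUID.
[
    object,
    uuid(00000000-0000-0000-0000-000000000001),
    helpstring("ILookup Interface")
]
interface ILookup : IUnknown
{
    HRESULT Insert([in] BSTR name, [in] VARIANT element);
    HRESULT Remove([in] BSTR name);
};
```

The [in] attributes and the HRESULT return type are exactly the IDL details the add-in fills in for you when you type the method prototype.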

Creating the Component

Whether you want to generate ATL code from a model or reverse-engineer a model from the code, you're going to need a COM server to house your component's implementation. A COM server can be a dynamic link library, an executable, or a system service (if you're running Windows NT). Besides hosting the implementation of the classes in the component, the COM server registers itself and the classes it holds, provides access to COM's Service Control Manager, and manages the lifetime of the component.1

Visual C++ developers typically create COM servers using the ATL COM AppWizard. This wizard creates the boilerplate code required to implement a COM server. Rose identifies a COM server through its ATL COM AppWizard project and lets you create a new project, use an existing project, or even attach to a project that is currently running.

To create my lookup component, I'll right-click the interface and select Assign to component. This invokes the component assignment tool. Since I'm working in Visual C++, I'll drag the ILookup interface onto the Visual C++ icon. After confirming the assignment, Rose prompts me to select a new, existing, or running project as the component. I'll create a new Visual C++ project. Rose then starts Visual C++ and lets me perform the steps in creating an ATL AppWizard project.

Creating the ATL Object

Although I've defined the ILookup interface, I still need to tell Rose that it is actually an ATL object. I'll right-click the interface and under the COM heading, select New ATL Object. Rose presents a dialog that looks like the ATL Object Wizard in Visual C++ (Figure 2). The dialog uses the interface name (ILookup) to set default values for the Class, CoClass, and ProgID. I'll select custom interface and hit OK. Rose then creates the Class and CoClass on my class diagram (Figure 3).

Generating the Code

I want a working component, not just a model, so I need to forward engineer the model into source code. I'll right-click on the ILookup CoClass in the class diagram and select Update Code. After generating the code, I'll switch into Visual C++, compile, and link my component. I now have a working component, albeit one with an empty implementation.

Updating the Model

A quick examination of the source code shows that I haven't provided a way for clients to fetch items in the lookup table. I'll add another method to the ILookup interface, but this time I'll do it in Visual C++. I'll right-click the ILookup interface in Visual C++'s ClassView and choose Add Method. Visual C++ presents the familiar Add Method dialog, letting me specify the method name and parameters. After I accept the changes, Visual C++ updates the IDL and generates the required C++. I can recompile and link my component, which contains the new method.

I'd like this change to be reflected in my model, so I'll switch back to Rose and invoke the model update tool by right-clicking the class and selecting Update Model. After I select which component to update, the model update tool adds the method I created in Visual C++ to the ILookup interface. The updated model is shown in Figure 4.

I now have a working component that implements the ILookup interface. Of course, I still need to build the functionality behind those interfaces. But using Rose and ATL allowed me to focus on the important aspects of the implementation.

Conclusion

ATL makes life easier for COM developers working in C++. Rose 2000 integrates with Visual C++ and ATL to provide round-trip-engineering on ATL-based projects. As I've demonstrated in this article, using Rose and ATL together lets developers focus on the unique aspects of a software design by freeing them from the mundane parts of COM development.

Footnotes

1 Brent Rector and Chris Sells, ATL Internals (Reading, Massachusetts: Addison-Wesley, 1999), p. 157.


Copyright © 1999 Rational Software and Miller Freeman, Inc., a United News & Media company.

Extending Rose

John Hsia

John Sunda Hsia is the Technical Lead in Business Development at Rational Software. He has spent the last three years helping Rational's 100+ Rose Technology Partners (formerly known as RoseLink) learn how to create Rose Add-Ins. Currently, he manages the technical folks responsible for supporting Rational's Technology Partners for Rose, RequisitePro, ClearCase, and ClearQuest. Today, he can still be found slaving behind a hot projector teaching the training class "Extending Rational Rose." John can be reached at jhsia@rational.com.

Using COM in Your Rose Extensibility Solution

Have you ever wished for more advanced controls in the dialog support within RoseScript? Did you ever want completion facilities so you didn't have to look up interface names and signatures? Would you be more comfortable scripting in another language — e.g., C++, Java™, or Smalltalk? Do you need access to information or functionality in other applications such as Excel, MS Repository, or Project? Like many advanced Rose users, you may have yearned for more functionality than is available through RoseScript or the RoseScript environment. If this describes some of your frustrations, then I'd suggest you investigate using COM (Component Object Model) as part of your overall Rose Extensibility solution.

What is COM?

Within the context of extending Rose, COM is simply a communication mechanism by which an application can gain access to objects from other applications as well as reusable components. For the purposes of this article, let's define some basic COM terminology. COM objects are objects exposed by an application to be shared with other applications. In fact, the REI objects (e.g., model, class, operation, attribute) exposed by Rose are COM objects. When two applications communicate, one application always initiates the relationship. The initiating application is called the COM client or COM controller, while the other application is called the COM server. So the concept of client or server is not an indication of who is serving up the information, but rather of who initiated the relationship.

With COM, applications can manipulate objects from other apps, or expose objects for other apps to manipulate. Additionally, a neutral client can access objects from multiple servers (e.g., VB app accessing Rose model objects and populating MS Excel spreadsheets). Since this is a programmatic interface, no parsing of formatted ASCII files is called for — definitely an advantage.

For Rose Windows users, COM comes natively with the operating system, so no additional software is needed. For Rose UNIX users, a third-party development product, MainWin from MainSoft (http://www.mainsoft.com), is needed to develop COM-based applications.

Accessing COM Servers from RoseScript

Let's start with a simple Excel example. Suppose we wanted to create an Excel spreadsheet with the names of the packages in our Rose model. The output of such a RoseScript (see Listing 1) when run on the model ordersys (which comes with Rose) would look like the screen snapshot in Figure 1. There are a couple of interesting things to note about the RoseScript that generates this spreadsheet:

4: The CreateObject call creates the server application. The name of the server ("Excel") and the name of the externally creatable COM object ("Application") are passed together in dot notation. If the server is already running, an equivalent call to GetObject can be used instead. The result of either call is the equivalent of the RoseApp variable that is preset for RoseScripts in Rose.

6-10: There are always some objects and method calls that are domain specific. In this case, the creation of a new book, a new sheet, and the labeling of some column header cells is necessary. These could just as easily have been calls into the COM server for the MS Repository or the COM server for one of Rational's products (e.g., RequisitePro, ClearQuest, or ClearCase).

11-16: These lines are specific to the REI so no explanation is needed. Note that cells require a string value so the integer counter needs to be converted on line 14.

In Listing 1, there are no declarations for the Excel COM objects. By default, these variables (e.g., ExcelApp) become variants, which can be of any type. If we were to declare them, they would be declared as "Object". Since Rose cannot reference the interface definitions of other COM servers, Rose simply doesn't know what the possible types are. Rose will bind these at run time (late binding) rather than at compile time (early binding). The REI types in RoseScript are early bound. Because COM server objects in Rose are late bound, this does provide for interesting, if not challenging, debugging.

One might also observe how clean Listing 1 is. Now that the method calls to create a book and a sheet are in front of you, it becomes very obvious what those calls are. So how does one discover the appropriate calls to make in the first place? One option is to wade through the documentation for the respective COM server. An easier approach is to use the macro recording facility (if one is available). In Excel, record a macro (i.e., Tools/Macro/Record...) of the specific manual operation(s) you want to automate. Then just look at the objects and methods used in the source code.
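To make the annotations above concrete, here is a minimal sketch in the spirit of Listing 1. The Excel calls (CreateObject, Workbooks.Add, Range) are standard Excel object model members; the REI traversal (GetAllCategories, Count, GetAt) is one plausible way to enumerate packages and may differ in detail from the printed listing:

```vb
Sub Main
    Dim ExcelApp As Object                ' late bound: a variant at run time
    Dim allCats As CategoryCollection     ' REI type: early bound
    Dim i As Integer
    ' Line 4: create the Excel server application
    Set ExcelApp = CreateObject("Excel.Application")
    ExcelApp.Visible = True
    ' Lines 6-10: Excel-specific setup — new book, header cell
    ExcelApp.Workbooks.Add
    ExcelApp.Range("A1").Value = "Package"
    ' Lines 11-16: REI-specific traversal, one row per package
    Set allCats = RoseApp.CurrentModel.GetAllCategories()
    For i = 1 To allCats.Count
        ' Range wants a string address, so the counter is converted
        ExcelApp.Range("A" & CStr(i + 1)).Value = allCats.GetAt(i).Name
    Next i
End Sub
```

Running this against ordersys should produce a column of package names like the one in Figure 1.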

Accessing Rose from COM Clients

Using the same example, what would this script look like if it were written as a VBA macro in Excel? Specifically, Rose is now a COM server being called from the COM client Excel. The new script (see Listing 2) remains almost the same except for:

2-3: Now Rose is the server, so we need to create it and load the appropriate model. Note that most REI objects are accessed as return values from REI method calls.

4-8: This is the same as the previous script but we're running within the Excel process so there's a little less legwork.

As you can see in this script, with very little change, we have converted the same script into a VBA macro. Fortunately, VBA syntax and RoseScript syntax are almost identical, so this was fairly painless. This same subroutine could be part of a VB application, since VB and VBA share the same language. With a little more effort, this could be expressed in languages such as C++, Java, Pascal, or Smalltalk; basically, any language and IDE that is COM enabled.

To enter the above script, you must be in VBA (menu: Tools/Macros/VB Editor). Then, from within VBA, a module (a container for VBA code) must be created (menu: Insert/Module). This will bring up an editor in which you can type the new code and, of course, run it.
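A sketch of what the VBA version (in the spirit of Listing 2) might look like follows. The Rose ProgID ("Rose.Application") and the model file name are illustrative assumptions, not taken from the printed listing:

```vb
Sub ListPackages()
    Dim RoseApp As Object
    Dim allCats As Object
    Dim i As Integer
    ' Lines 2-3: Rose is now the server; create it and load a model
    ' (the path below is a placeholder for wherever ordersys lives)
    Set RoseApp = CreateObject("Rose.Application")
    RoseApp.OpenModel ("ordersys.mdl")
    ' Lines 4-8: the same spreadsheet work as before, but since we are
    ' already inside the Excel process, Range can be used directly
    Range("A1").Value = "Package"
    Set allCats = RoseApp.CurrentModel.GetAllCategories()
    For i = 1 To allCats.Count
        Range("A" & CStr(i + 1)).Value = allCats.GetAt(i).Name
    Next i
End Sub
```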

A Little Help from the IDE

In the previous example, all the REI objects were still late bound. Since the IDE won't know anything about the COM server until run-time, we can hardly expect it to help us with browsing of the COM server or completion of our method names.

To achieve early binding in VB or VBA, one simply needs to add a reference to the Rational Rose type library. This is done through the following steps:

1. Bring up VB or VBA.

2. Bring up the Object Browser (menu: View/Object Browser).

3. Select "References" from the Object Browser's context menu (mouse right click in browser).

4. Check the reference to "RationalRose" in the "Available References" list box and click OK.

5. In the Object Browser, select "RationalRose" from the list box that currently has the default of "<All Libraries>" selected.

Now we can browse the REI through the Object Browser (see Figure 2). By selecting a COM object in the "Classes" sub-window, the respective attributes and operations are displayed in the "Members ..." sub-window. Additionally, selecting a "Member" will display the respective signature in the status sub-window.

In addition to the convenience of browsing the REI, we can also have the IDE help us write code. As we declare our REI objects, the editor displays a list of matches (see "RoseCat ... " in Figure 3). Pressing the tab key will insert the current selection. This time-saving feature also applies to methods (see Figure 4) and attributes.
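With the type library referenced, REI declarations can use specific types, which is what enables the member-list completion shown in the figures. A small illustrative fragment (the variable names are mine):

```vb
' Early bound: the IDE resolves these types from the RationalRose
' type library, so it can complete member names as you type
Dim RoseApp As RationalRose.Application
Dim theModel As RationalRose.Model
Set RoseApp = CreateObject("Rose.Application")
Set theModel = RoseApp.CurrentModel   ' typing "RoseApp." lists members
```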

COM Clients in Other Languages

As stated earlier, COM clients can be written in a myriad of languages. In fact, most Rose Add-Ins incorporate COM into their architecture. Between Rational's own Rose Add-In developers and Rational's Rose Technology Partners (formerly known as RoseLink partners), the logic for add-ins has been implemented in languages such as C++, Java, J++, Pascal, Smalltalk, ASP (really VBA), VB, and VBA. Ultimately, the support of COM in the respective IDE is the essential ingredient.

Performance Sacrifice

You're probably thinking that all this sounds too good to be true: you can communicate between different applications, but surely there is a price to pay for this convenience. As it turns out, you're right. There is a price, but it depends on your configuration. If your server runs as a separate process (e.g., an .exe such as Excel in Listing 1), then yes, there is a noticeable performance penalty from marshaling calls across process boundaries. On the other hand, if your server is an ActiveX DLL, it runs in the same process as Rose, and the performance hit is negligible.

Conclusion

The RoseScript language and the RoseScript IDE are definitely much more functional than one expects from a scripting environment. In most cases, they are more than enough to put together useful RoseScripts. For those who need more, you no longer have to restrict yourself to RoseScript alone. By mixing a bit of COM into your Rose Extensibility solution, you can:

● Choose to implement your logic in the most appropriate language or even languages (e.g., C++ for parsing, VB for the GUI).

● Write your code in a richer IDE (e.g., Visual Studio).

● Gain access to functionality of other apps and components (e.g., spell checkers, GUI tabs).

Amazingly, with a little thought to architecture, all of these gains can be achieved without any noticeable loss of performance.

