



PROFESSIONAL COMPUTING
THE MAGAZINE OF THE AUSTRALIAN COMPUTER SOCIETY, JUNE 1991

Information Technology International

Realising the Dreamtime

[Cover artwork: an original dot-painting work produced for ITI by Iris Smith; see the cover note on the contents page.]


“When Sony executives want the big picture they look with FOCUS”


—Robert A. Schwartz, Vice President, Business Systems, Sony Corporation of America

“We wanted an Executive Information System that would take the large amounts of data we gather on our mainframe and turn it into the useful information our management needs to know,” explained Mr. Schwartz. “Two programmers using FOCUS built our entire EIS system in about two months’ time.”

“First thing every morning our top executives turn towards their PCs and get up-to-date sales and month-to-date sales-versus-budget. Menu selections let them easily work at the summary or detail level with direct access to their on-line data.”

Gary Fischer, Manager End User Computing, explains how Sony selected FOCUS. “We looked very carefully at seven competing products. We ultimately chose FOCUS because of a number of factors that were important to us. For example, we wanted a strong PC/mainframe connection. Although most of our data is on the mainframe, we wanted to minimize the use of mainframe resources. That called for a robust, full function PC version. And that’s PC/FOCUS.

“We also run a number of operating environments at Sony—AS/400, UNIX, DOS, OS/2 and LANs—in addition to MVS and VM. FOCUS can run on all of them.

“We wanted a powerful application development product that would substantially improve our programmers’ productivity, yet still be able to be used by our end users. Again, that was FOCUS.

"And we wanted a strong vendor who had a strong commitment to their PC product.

"FOCUS won."For more information on how FOCUS can

help you, contact FOCUS Technologies, 22 Upton Road, Windsor or call (03)525 2099.

FOCUS technologies

— Information Builders, Inc.

This advertisement refers to numerous software products by their trade names. In most, if not all cases, these designations are claimed as trademarks or registered trademarks by their respective companies.


A pox on you sir! maybe; I think.

ANYONE with an interest in the medicine of the nineteenth century will be familiar with the story of how the observation that smallpox was uncommon among milkmaids who had suffered the fairly minor disease of cowpox may have led to the discovery of vaccination. A little hurt can save you from greater and perhaps permanent harm, and it possibly could, even with computer viruses.

In the Bits ’n Bytes section of Pacific Computer Weekly, April 19th, is an item ‘Computer virus talent harnessed’. The first two paragraphs make the absurd claim that the Internet virus held the record for parallel processing of 384 billion operations per second; the sum of the cycles on all of the machines it was known to have infected. Ignore this part of the item, and go on to read about the “useful virus” competition.

“To be eligible for the prize, contestants must write and test their viruses ONLY on systems which have given permission for the viruses to run, must provide the means for their easy removal from the affected systems, and must submit source code for the viruses and documentation describing what they do and how they operate.

“All non-malicious viruses submitted to the contest will be made available for licensing by interested parties, with the authors maintaining copyrights to their submissions and earning licensing fees from those who choose to use them.”

Is the competition a hoax? I don’t know, and it’s not that matter that has led me to discuss the item. An irate reader wrote to the editor condemning her and her paper for promoting such an evil, and cancelling his subscription. Clearly, he is a person of no great intellectual capacity: he has not read the item correctly, he clearly supports the evil of censorship, and he has avoided a discussion in favor of the “dramatic gesture” of cancelling his subscription.

The issues raised by the competition are really quite interesting. I can see a benefit to the user very much like catching a case of cowpox. A beneficial virus could at least reveal that a user was vulnerable. With more elegance it could establish a system of learning how to detect and reveal other viruses and so on. And it would give an opportunity for virus writers to come clean and work with the virus busters. But there remains the offence of unauthorised entry into the victim’s system.

Please don’t write to me cancelling your subscription; pick up the topic and write your opinions to our Forum column.

Tony Blackmore

PROFESSIONAL COMPUTING
CONTENTS: JUNE 1991

A CASE FOR INTEGRITY: The second part of an interview with Michael Oldham, of ITI, makers of DBQ. 2

TALES OF TWO CONFERENCES: There are major conferences in Adelaide and Sydney. Important in their own right; doubly so for the PCP. 4

PCPssst!: Letters to Kate Behan cast light on the PCP program. 6

ACHIEVING SUCCESS WITH JAD: What are the key factors leading to the successful use of Joint Application Development techniques? This article looks at how JAD has been applied by Synthesys and Axis for projects ranging from business planning to detailed design. 8

DRIVING THE PERFORMANCE MANAGEMENT INDUSTRY: IBM, the growing sophistication of DP/MIS operations, and the dramatic shortage of skilled, senior personnel are key forces influencing the performance measurement business. 10

4GLs PRESS ON INTO THE 1990s: “Those who fear to learn the lesson of history are doomed to repeat it.” (Napoleon). 12

OPENING MOVES: 386 Monitoring the Stallion way 14

REGULAR COLUMNS: ACS in View 17

COVER: For over 40,000 years Aboriginal people have used a graphical means to capture and pass on information. Today, graphical interfaces are used in CASE tools to analyse and design management information systems.

ITI, Australia’s leading indigenous relational technology company, provides an integrated CASE environment which combines NIAM technology, a GUI based upper CASE tool, artificial intelligence techniques and the DBQ RDBMS for a comprehensive CASE solution.

ITI’s CASE tools include: Conceptual Designer, an upper CASE analysis and design tool that provides the most complete implementation of NIAM, the world’s leading fact-based database methodology; and Intelligent Developer, a lower CASE tool which captures the data definitions from the Conceptual Designer or other CASE products such as IEW and, using artificial intelligence concepts, creates complete 4GL applications and documentation.

With more than 8000 DBQ sites worldwide, ITI has formed strategic partnerships with government, academic and commercial organisations. Major international DBQ users include the French Army, Aerospatiale and the UK Open University. ITI Europe, the company’s Amsterdam-based subsidiary, serves as a European Portation Centre and supports a growing network of overseas DBQ distributors.

The cover uses an original work produced for ITI by Iris Smith — Brisbane, 1990.



IN THE March issue of Professional Computing, we interviewed Information Technology International’s director of research and development, Michael Oldham, discussing this Australian company’s relational database DBQ, and its particularly strong properties when concurrency is important. This month we explore the company’s CASE technology and the relationship with DBQ.

Changing operating conditions, the revised needs of users, new laws, economic conditions and recruitment problems are some of the factors dictating that organisations continually reassess their software systems requirements.

Not surprisingly, computer-aided software engineering (CASE) tools have been seen as an answer by many of the DP departments struggling to reduce maintenance costs and improve programmer productivity. However, productivity is only part of the CASE story. Oldham discusses the company’s work in leading edge CASE technology and the added benefit users should expect from their CASE tools.

A Case for Integrity
Michael Oldham interviewed by your editor

Editor: Does CASE technology represent a new stream of business for ITI?

No, since the mid-1980s we’ve held the view that CASE was a key component in achieving rapid application development for high performance multi-user applications with data integrity. Our first product, SDD, allowed specification of NIAM conceptual schemas and was commercially released in 1987. Since then, we’ve continued research and development of both upper and lower CASE tools for developing client applications and large internal projects.

How important was the choice of NIAM as the underlying methodology in your CASE strategy?

Very few CASE tools enforce any rigorous database design methodology, but the reality is that a CASE tool is only as effective as the methodology on which it’s based. We believe NIAM offers a simple but precise methodology. NIAM detects errors in the conceptual schema far earlier in the development cycle than would be the case with, say, ER. It offers a “cookbook” approach to modelling, so changes are also more easily made.

There’s also more semantic richness in NIAM. It offers many different types of constraint — subset, exclusion and equality constraints as well as domains and frequencies. All are vital in faithfully modelling many enterprise considerations.

How integrated is your CASE offering?

We offer a full CASE line starting at the top with Conceptual Designer, which produces very powerful, clean and precise user specifications. Conceptual Designer’s GUI is a mouse-driven front end which allows the graphical entry of the NIAM conceptual schema. This output then passes through a semantic checker and the standard ONF algorithms to produce either a relational schema for your target database, or it can be fed directly into our lower CASE tool, Intelligent Developer. This generates 4GL application code, which in itself is a highly maintainable form of code. 3GL code, of course, is not only generally more voluminous, but is also much more difficult to maintain.

Intelligent Developer’s artificial intelligence capabilities enable it to interpret the NIAM constraints and map them into 100 per cent 4GL application code, complete with automatically generated on-line help, system and user documentation.

Because of the lack of integration in other tools, typically this constraint information is either seen as irrelevant (because the lower CASE tool can’t handle it) or the upper CASE tool doesn’t gather the information in the first place. With an integrated solution, you are able to pass all constraints down and make maximum use of this information at the lower CASE end.

How does this sit with an open systems approach?

Our underlying philosophy in developing this CASE environment has always been an ‘open system’ approach. The upper CASE tool can address different output targets and the lower CASE tool can accept input from different models.

Conceptual Designer produces relational schemas for the DBQ RDBMS or for alternative targets such as DB2. Because the relational output from the Conceptual Designer is stored in a sophisticated SQL database repository, a programmer can then perform ad hoc SQL or QBE queries, or use the power of the 4GL to write an application to access that data. All the information about a given relational schema, the pre- and post-ONF schema information, is stored in the repository. This information is now openly available to any other product via standard gateways. This also opens up the possibility of performing reverse engineering.

Intelligent Developer also accepts input from other CASE tools such as IEW. The only disadvantage with IEW is that it can’t easily supply Intelligent Developer with the same amount of semantic information as the Conceptual Designer. So, in cooperation with Knowledgeware, we added semantic richness to the ER “Comments” field for a filter to Intelligent Developer. Obviously, the more information Intelligent Developer is supplied, the better the code it generates. But regardless of the code generated, it still makes assumptions about the sophistication of the engine underneath it, which is the essential difference in our offering.

How have you addressed some of the main limitations of existing CASE tools, for example, multi-user limitations?

One of the misleading assumptions about CASE tools at the upper end is that there is infinite memory available and relatively small schemas to be managed. For some reason, it is assumed that requirements will never exceed initial expectations. We see it as important to provide CASE tools which not only utilise memory efficiently but also have the ability to divide schemas into manageable chunks.

Most CASE products are impossible to use multi-user. You can’t easily divide work up and have many people working in parallel without getting into difficulties. Conceptual Designer introduces the concept of “scoping”, where you can divide the meta schema into smaller chunks or “scopes” which can be managed independently and later merged in a background process. The ability to sub-divide the task gives true multi-user capabilities.



[Figure: the DBQ application environment, showing Conceptual Designer, an ER filter for IEW/ADW and other CASE products, ONF generators, relational table design, Intelligent Developer, DBQ/AG, Easiform, a 3GL API interface, DBQ applications, and other RDBMSs such as DB2.]

Another drawback of common CASE tools is the fact that the hardware is often so expensive, you end up with only two or three people using the product. Conceptual Designer will run on anything from a PC up, which does away with the need for expensive hardware.

And what are the multi-user issues at the code generation level?

Lower CASE is basically the area where the code generation is performed. Typically, this code generation is approached with a single user philosophy. When coding for an application, most programmers tend to think in terms of a single thread, or a single stream of thought. We discussed in our previous interview how I believe programmers rarely think in terms of concurrent processes. CASE exacerbates this by having even less consideration for what might happen in parallel by producing single user code.

The problem with this approach is that it assumes that the transaction model offered by the database offers full protection from transaction anomalies. Unfortunately, most database products cannot offer this protection. In fact, when DBAs run their database with the highest level of integrity (the ANSI Isolation Level 3 or IBM’s Repeatable Read) the concurrency is generally so poor the database is effectively running in single user anyway. To counteract this, DBAs then try to lower the integrity level of the transactions or split transactions. This introduces massive problems of integrity loss.

How does CASE assist transaction integrity?

An essential requirement for CASE tools is that they have the ability to run on a database engine which supports high integrity at high transaction concurrency. ANSI Isolation Level 3 is the only level of integrity which is generally considered adequate for transaction correctness — all others do not conform to the serialisability constraint. DBQ offers such an environment, and is unique in the levels of concurrency it offers at the necessary integrity level.
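For reference, standard SQL expresses this level with a SET TRANSACTION statement. The C fragment below is a minimal sketch only: db_exec() is a hypothetical helper standing in for a real database API, and is not part of DBQ or any product discussed in this interview.

#include <stdio.h>

/* Hypothetical helper that would hand an SQL string to the
   database engine; stubbed with puts() so the sketch is
   complete and compilable. */
static int db_exec(const char *sql) { puts(sql); return 0; }

int main(void)
{
    /* Ask for ANSI Level 3 (serialisable) before doing any work.
       The statement wording follows the draft ANSI SQL standard;
       individual products vary. */
    db_exec("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
    return 0;
}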

With transaction integrity intact, application developers can then begin to address the larger issue of “application integrity”. Transaction integrity is only a building block — it’s not enough. Your transaction needs to encompass enough SQL statements in order to offer application correctness. The only mechanism that can reliably determine transaction size is the CASE tool, because it is the only mechanism which reliably knows all the relevant constraints.

So it’s possible to lose application integrity while still maintaining transaction integrity?

Yes. Consider a database managing tax requirements on people’s sources of income. Suppose income is divided into two separate tables for income from “SALARY” and income from “INVESTMENTS”. Let’s say the tax laws deem it mandatory that an individual be taxed at the premium rate on only one source of income, either from a salary or an investment, and all other incomes are taxed at the lower rate.

So we have a mandatory constraint on the relationships of SALARY and INVESTMENT “having” a premium tax rate, and an exclusion constraint across these relationships.

Let’s say a person is already being taxed at the premium rate on investments and you need to add new income from a salary which is now to be taxed at the higher rate. The transaction could run something like:

start transaction
ADD NEW SALARY AT PREMIUM RATE
IF OTHER SALARY AT PREMIUM RATE, CHANGE TO LOWER RATE
IF OTHER INVESTMENT AT PREMIUM RATE, CHANGE TO LOWER RATE
commit transaction

Conversely, the transaction could run:

start transaction
IF INVESTMENT AT PREMIUM RATE, CHANGE TO LOWER RATE
IF SALARY AT PREMIUM RATE, CHANGE TO LOWER RATE
ADD NEW SALARY AT PREMIUM RATE
commit transaction
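To make the grouping concrete, here is the first ordering as a single transaction, sketched in C. The db_exec() helper is again hypothetical, and the table names, column names and key values (salary, investment, rate, person 42, income 7) are invented for illustration rather than taken from ITI’s schema.

#include <stdio.h>

/* Hypothetical helper that would hand an SQL string to the
   database engine; stubbed so the sketch is compilable. */
static int db_exec(const char *sql) { puts(sql); return 0; }

int main(void)
{
    /* All three steps sit inside ONE transaction, so no concurrent
       reader can ever observe two premium-rated incomes (exclusion
       constraint broken) or none at all (mandatory constraint
       broken). Names and values are invented. */
    db_exec("BEGIN TRANSACTION");
    db_exec("INSERT INTO salary (person, income_id, rate) "
            "VALUES (42, 7, 'PREMIUM')");
    db_exec("UPDATE salary SET rate = 'LOWER' "
            "WHERE person = 42 AND rate = 'PREMIUM' "
            "AND income_id <> 7");
    db_exec("UPDATE investment SET rate = 'LOWER' "
            "WHERE person = 42 AND rate = 'PREMIUM'");
    db_exec("COMMIT");
    return 0;
}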

Because this transaction involves updates on two tables it may seem appropriate to split it into two transactions — effectively maintenance transactions on each of the tables. But depending upon the order in which the updates were then done, you would break constraints.

In the first transaction, if you commit after updating the SALARY table you then have two incomes taxed at the premium rate, breaking the exclusion constraint. If another multi-user transaction happens to calculate that person’s tax after this incorrect commit, that person will be taxed too much.

In the second case, if you commit after updating the INVESTMENT table you have no income taxed at the premium rate and break the mandatory constraint. In both cases, application integrity is destroyed.

So, you are saying the upper CASE tool should pass down these cross-table constraints?

Yes. The CASE tool should logically group the required SQL statements into the right transaction so the 4GL can generate appropriately sized transactions to ensure no constraints are broken.

So integration is the key to true quality control?

Yes. I believe the much touted benefits of quality control that CASE can provide are really only possible if you have an integrated CASE environment, as this is the only mechanism whereby you can actually generate correct application code. Then you can really claim that CASE removes individual coding errors and offers “consistent bugs”.

If you have a bug in the design, this can be changed at the upper level and automatically implemented at the lower level. This is also necessary in order to reverse engineer existing applications. You then have the ability to take basic enterprise models, render them into a conceptual schema, render that into a relational schema and 4GL application, and know that this is all supported by a database engine with a sophisticated locking system guaranteeing integrity of your data. This is the logical extension of the CASE “correctness” argument.



In Sydney ...

Japanese 5th generation computer project to exhibit at major international artificial intelligence conference in Sydney

IN A break from the long northern hemisphere dominance of previous events, the 12th International Joint Conference on Artificial Intelligence (IJCAI-91) will be held in the southern hemisphere for the first time later this year. Running from 24 to 30 August, 1991, at the Sydney Convention Centre at Darling Harbour, the conference reflects the growing awareness of the extraordinary development potential of AI research in the SE Asian, Japanese and Pacific Rim regions, as well as a strong practical applications emphasis.

IJCAI-91 has scored a major coup by securing the participation in the conference, as a major exhibitor, of Japan’s internationally famous Institute for New Generation Computer Technology, known by its Japanese acronym as ICOT but much better known all over the world as the 5th Generation Computer Project. The challenge that ICOT was perceived as presenting to the West’s dominance of advanced computing technology when it was established in the early 1980s is widely regarded as having been responsible for the explosion of interest and research in Artificial Intelligence that followed and which continues unabated today.

IJCAI-91 marks the first time that ICOT has exhibited at a major international AI conference. The ICOT exhibition at IJCAI-91 will be the first international public unveiling of their Psi-3 Artificial Intelligence workstations. These in turn will be connected to the just-completed PIM parallel supercomputer in Tokyo using the Australian Academic Research Network (AARNet). The PIM machine is the first special purpose Artificial Intelligence supercomputer ever built and represents the final stage of the ICOT Project. The ICOT exhibition will feature demonstrations running on the Psi-3 machines, demonstrations running on the PIM machine over AARNet and a video presentation about ICOT. This will be a unique opportunity to see in Australia the work of ICOT and some of its computers.

Expanding awareness of the impact of artificial intelligence in terms of its practical applications in business and industry will be the task of a series of panels, tutorials and workshops. A special emphasis has been made to orientate many of these to the practical applications of AI in the field — business, finance, engineering, telecommunications and medicine.

This program will provide an excellent opportunity for computer managers and programming officers from the financial, banking and accounting sector, the service sector, the government sector and engineering and architectural companies to obtain the most up to date information about the present and future impact of artificial intelligence in their industries.

Participants in this program will include some of the most famous names in Artificial Intelligence, such as Marvin Minsky and Rod Brooks (an Australian) from MIT, Feng-hsiung Hsu from IBM, Danny Bobrow from the Xerox Palo Alto Research Centre, and Woody Bledsoe, former director of Artificial Intelligence research at MCC in Texas.

During IJCAI-91, the Computers and Thought Awards and Lectures, regarded by some as the Nobel Prizes of Artificial Intelligence, will be awarded to Rod Brooks (the first time an Australian has won the award) and Martha Pollack, of SRI International. Marvin Minsky will also accept the prestigious IJCAI Award for Research Excellence at IJCAI-91, while Woody Bledsoe will receive the IJCAI Distinguished Service Award, awarded only three times previously. Feng-hsiung Hsu, who designed the computer chess world champion “Deep Thought”, will participate in a panel discussion on AI in computer chess. This panel will be followed by a chess match between Deep Thought and a highly rated international player. A plenary address on “The Commercial and Industrial Impact of Artificial Intelligence” will be given by Mr Shigeru Sato, director of Fujitsu Laboratories.

IJCAI-91 is co-sponsored by IJCAI Incorporated and the Australian Computer Society. IJCAI-91 has assembled one of the most powerful collections of corporate sponsors ever put together for a conference of this kind in Australia. The principal corporate sponsor is IBM, while the major corporate sponsors are Andersen Consulting, the federal government’s Department of Industry, Technology and Commerce and Telecom Australia. Other corporate sponsors include three of Australia’s most prestigious research centres — ANU, CSIRO and DSTO — as well as the Australian Computing and Communications Institute, the Commonwealth Bank, Digital Equipment Corporation and the NSW State Government. The general chair for the conference is Professor Barbara Grosz of Harvard, and the chair of the Australian national committee is Professor Michael McRobbie of ANU.

The deadline for early registration, representing substantial cost benefits, is June 30, 1991. For further information, contact the IJCAI-91 secretariat on Tel: 02 357 2600.



... and in Adelaide

Quo Vadis?

BEING asked to deliver a keynote speech for the 25th ACS national conference has caused me to reflect on the past 25 years of the information industry in Australia and to ponder the next 25. Readers will appreciate that while the past does not necessarily determine the future, there are frequently historical patterns or events which will give clues as to what the future may hold.

Perhaps an example might help to illustrate this point. In hardware we started out with single user, single tasking systems. Twenty years later we had returned to single user, single tasking systems, but this time they were on desktops in the form of PCs. If this cycle were to repeat, what form would the next single user, single tasking system take? Would it again take 20 years to appear?

In software we have moved from cryptic, primitive low-level languages through third generation languages to fourth generation languages and on to CASE tools. The widespread adoption of C as a language again marks a return to the beginning of a cycle, since it too is a cryptic language. As libraries of C subroutines are built, the language becomes richer and more productive. We’ve seen this before of course; remember how COBOL compilers changed English-style languages into assembler code? The conflict between personal productivity and machine efficiency has always been there and to my mind always will be. For that reason we’ll always have high and low level languages, but what will they look like? Will we always program in alphanumerics structured in a linear fashion? (By the way, did you notice how many 4GLs became CASE tools without any obvious changes?)
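As a toy illustration of that point about subroutine libraries (mine, not drawn from the conference material): once a site routine gives a cryptic idiom a task-level name, callers stop touching the pointer arithmetic underneath.

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* A typical site library routine: strip trailing blanks from a
   fixed-width field, in place. Callers think in terms of the
   task name, not the pointer arithmetic. */
static void trim_trailing(char *s)
{
    char *end = s + strlen(s);
    while (end > s && isspace((unsigned char)end[-1]))
        *--end = '\0';
}

int main(void)
{
    char field[] = "ACS   ";
    trim_trailing(field);
    printf("[%s]\n", field);   /* prints [ACS] */
    return 0;
}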

In communications we’ve moved from simple, slow, custom built utilities to complex, fast, increasingly more standard utilities. We used to think that the sky was the limit, but with the advent of satellites, maybe that isn’t true anymore!

MOSAIC — Fitting the IT pieces together — reflects the intention of the organisers to bring together different groups from within the Information Technology industry under the ACC’91 umbrella. The conference will include:

• Australian Computer Conference, ACC’91
• Shaping Organisations Shaping Technologies Conference, SOST’91
• Australian Medical Informatics Association Conference, AMIA’91
• Port Information Systems Conference, Port Systems Strategies’91

A preview from a keynote speaker

JOHN Carwardine is an Information Systems professional with more than twenty years’ industry experience.

Early in his career he became a qualified Telecommunications Technician and moved into data processing shortly afterwards. He has carried out almost all of the tasks one would encounter in the IS business — from programming, design, specification, etc up to high level business consulting, project management and the establishment of a large, independent Information Systems business. His industry experience spans the Telecommunications, Glass, Automotive, Aluminium, Food, Travel, Insurance and Computing fields and he has operated in both the private and public sectors.

He has a B.Sc in Electrical Engineering and an MBA with electives in investments, takeovers and mergers, producing a thesis dealing with Management Stress. He is a Member of the Australian Computer Society and he has worked as a guest lecturer and consulting author for Deakin University. In his capacity as a Management Consultant he is involved with the formulation of Business Strategy, defining the role of Information Systems within an enterprise and the interaction between both of these items.

What will the communications facilities look like in the future?

These changes have obviously been brought about by people, but what of the impact of technology on people? End-users are now better educated and new generations of children are less intimidated by technology, but are there aspects of IT that they should be concerned about? For the person in the street much of the mystique has disappeared and, far from being insular, computer industry inspired language has become part of everyday speech, with expressions such as “user friendly” and “methodology” appearing in unlikely places. The image of the computer professional has progressed from that of a sandals and beard clad boffin to a pin stripe conservative pillar of society; no longer are we looked upon nervously by bank managers and policemen. We have a professional society, a code of ethics and a significant slice of the national economy. What will happen to computer professionals in the future? Will they become even more conservative or will they help change the social structure of Australia?

The industry has matured into more discernible market segments, each with its own profile. For example, the contract programming market segment now has quite modest gross margins of about 12 per cent compared with 30 per cent in the late seventies. Vendors are now offering ‘business solutions’ rather than just selling hardware. PCs are sold in chain stores. Education in computer related topics is provided by vendors, independent companies and educational institutions. What other changes will there be to the information industry structure?

The theme of this year’s national ACS conference (MOSAIC) is “Fitting the IT Pieces Together”, and as organisations and individuals try to come to terms with rapidly unfolding change, diversity and uncertainty it is a particularly appropriate theme. In my keynote speech I expand on these topics and offer some personal opinions on what the future information industry might be like. It promises to be a truly stimulating conference. I hope to see you there.

MOSAIC: Adelaide, October 6th-10th. A registration brochure is included in this issue of Professional Computing, so that enthusiasts can take advantage of the early bird concession, valid until June 30th. The concession is also available until August 8th for those attending the COMTEC conference in Adelaide, 6th-8th August.




The Behan letters: Enlightening PCP

I READ with interest of the PCP scheme and, in particular, of the requirements for 30 hours of endorsed professional education, or its equivalent.

Within this school, we mount a major in Computing in our Bachelor of Applied Science (Mathematics) degree, which is accredited at Level 1 by your society. The majority of the staff who are involved in the teaching of computing hold ACS membership at least at the grade of Member, and a number of them have expressed an interest in the concept embodied in the PCP. I am therefore writing to request some further information concerning this scheme.

Could our full-time staff who hold society membership at the grades of MACS or FACS and who are involved in the teaching of our major in computing be accorded PCP status on the basis of their occupation? As you can imagine, teaching in this area requires a considerable effort to maintain the necessary level of currency in the courses. As such, I believe that they should fall within the sphere of practising computer professionals by virtue of their occupation and the status which has been accorded by your society to the course in which they teach.

The concept of the PCP is indeed sound and the society should be congratulated for its moves in this direction. In fact, we are considering submitting for your endorsement some of our Graduate Certificate courses, which are presently under development. I look forward to your response with some interest.

K.B.

The development of a new course, if supported by the academic’s manager, would be considered for PCP hours, but not the regular delivery of the same course. All IT professionals could claim that there is a continuing intellectual effort involved in doing their jobs.

It is a requirement that all PCPs accrue at least 15 hours attending courses. I know that when I was an academic I learned a great deal of useful material and techniques from attending commercial conferences and courses as well as academic ones. I am sure your staff will easily accrue their 30 hours and that they will all participate in the PCP Scheme.

Re the PCP scheme:

1. Tertiary courses are left out. While undergraduate courses can be considered “entry-level” qualifications, higher degrees are not. They are an extension of an established knowledge base.

2. Presentation of courses is not covered. While I am teaching the course, certain amounts of research (often exceeding the length of the course) must be done to verify/update course content. Perhaps award half of the course length to the course presenter.

3. What of non-computer related (but equally important) courses such as time management skills, person management skills and so on? Professional development courses of a non-computer content.

M Software Associates Pty Ltd
Inc. Digital Consulting Pacific & Datacom Education. Quality, skills-based IS training.

“ENTITY & FUNCTION MODELLING TECHNIQUES”

25-28 June, Sydney
2-5 July, Melbourne
6-9 August, Canberra

For analysts, database administrators and others seeking a thorough grounding in EFM. Includes hands-on use of CASE tools for data modelling on screen.

Maximum of 15 participants on most courses!! Call us for a competitive quote on quality in-house courses.

For registration/information on our courses, please contact Catriona. Ph: (02) 290 3555, Fax: (02) 290 1046.

4. What about research performed before presenting papers at conferences, newsletter articles, journal articles, and so on? This mostly amounts to “private” study, but is still “enhancing professional skills” and “keeping up to date”, cf. P VI.

5. (A tricky one.) What of research work? You aren’t keeping “up to date”; you’re setting the standards for others to follow.

K.B.

Thanks for your feedback on the PCP scheme. To reply to your comments:

1. Many undergraduate courses have second and third year units that could be valuable for IT professionals who qualified four or more years ago. For example, courses on information analysis, AI, etc. As the invitation to members stated, units in accredited courses are eligible. But it depends on the member. If you are working as a data modeller, ‘INTRO TO DATA ANALYSIS’ does not extend your knowledge and skills. But if you are a systems programmer it certainly does. So, the member has to determine the relevance of an activity.

2. Development of a new course is eligible, but not each delivery of it.

3. The invitation to participate clearly indicated that professional skills in any area are eligible. There are many such courses listed in the PCP directory.

4. If an article or paper is accepted for publication/presentation then some hours can be claimed. Mostly people write/present on things they already know about, so the test is “Did you acquire new skills/knowledge or did you just package it differently?”

5. Research is indeed a tricky one. You need to provide some objective evidence that it has happened, e.g. a supervisor’s support if relevant.

I cannot speak for every branch PCP committee in the ACS, but my attitude is that we want to encourage as many members as possible to participate in the scheme. I trust most members to claim honestly and am a little surprised at the energy some people go to to “find 30 relevant hours”. I trust your staff will do so easily and that many will participate in the scheme.



BUYING A CASE TOOL NEEDN’T COST YOU AN ARM & A LEG

Voted Best CASE Tool 1990-’91

POSE™ for affordable CASE development.

Comprehensive — POSE, with its comprehensive data dictionary, supports an extensive range of analysis and design techniques such as: ♦ Data Modeling ♦ Dataflow Diagramming ♦ Functional Decomposition ♦ Dependency and Structure Charting ♦ Normalisation ♦ Transaction Path Analysis ♦ Rapid Prototyping ♦ New Data Modeling Bridge, which allows the Data Modeling Tool Kit to communicate with the IEW™ encyclopedia.

Powerful — Advanced graphics capabilities easily manage large diagrams at sub-second speeds.

Easy to Use — Consistent user interface across all modules utilises pull-down/pop-up menus, on-screen icons and colour graphics.

Source Code Generation — MS/DOS users can build applications in a fraction of the time.

Modular and Integrated — Order only the modules you need now . . . expand later.

Proven — ACSI’s CASE tools are used at more than 6000 sites in 25 countries worldwide.

Inexpensive — A complete CASE product with Upper & Lower CASE at PC prices. Upper CASE from $2,300, Lower CASE from $4,000. Modules start at $800 each.

To find out how to save your arms and legs and still buy a CASE Tool: Call today and ask about our free demos and our 30 day money back guarantee.

POSE® CASE Tool, winner of the Readers’ Choice Award.

ACSI
SYDNEY: Suite 10, 174-180 Pacific Highway, NORTH SYDNEY NSW 2060. Tel: (02) 957 2233, Fax: (02) 957 2968.
MELBOURNE: 1st Floor, 50 Camberwell Rd, EAST HAWTHORN 3123. Tel: (03) 813 3944, Fax: (03) 882 3515.
ACT: CPMC Pty Ltd, National Surveyors House, 27-29 Napier Close, DEAKIN ACT 2600. Tel: (06) 285 3393, Fax: (06) 285 3394.
IEW™ is a trademark of KnowledgeWare Inc.


Achieving success with JAD

WHAT are the key factors leading to the successful use of Joint Application Development techniques? This article looks at how JAD has been applied by Synthesys and Axis for projects ranging from business planning to detailed design.

Norm Beck

THE use of IE lifecycle methodologies, structured analysis/design techniques and CASE tools has enhanced our ability to manage and engineer systems. However, in themselves they have not substantially improved systems quality. Methodologies have improved our focus on deliverables for each lifecycle phase, structured techniques have improved our ordering of system information, while CASE tools have improved our ability to produce deliverables and inter-relate system information.

What is necessary to complete the framework for quality systems delivery are techniques that actively involve business users. Proactive involvement from users is vital in setting system strategy and in all phases in the development of system requirements and designs.

JAD workshops

Joint Application Development (JAD) is one of the most successful mechanisms for involving users. The focus within projects using JAD is the conduct of one or more workshops. In workshops users actively participate alongside IT staff in formulating strategy and/or making system requirement and design decisions.

JAD workshops are more than group meetings. The key focus is on participation, where system documentation is produced as the workshop progresses. This implies active participation in decision making rather than listing problems and issues for later resolution. JAD workshops achieve their full potential when agendas are set and group energies focused to produce deliverables defined by an organisation’s lifecycle methodology, and CASE is used to capture session output. This output then forms the basis of system documentation.

JAD also differs from group meetings in that workshops are led by a facilitator, or unbiased session leader. The role of the facilitator is to ensure the group stays on track, objectives are met and all participants have the opportunity to state their requirements. The facilitator must ensure a full and open discussion occurs that leads to issue resolution and decision making. Ideally the facilitator should have IT skills in strategy formulation and analysis/design, along with skills in behavioural psychology and group dynamics and knowledge of techniques for maximising group effectiveness. The facilitator may be supported by one or more additional staff to act as design analysts, scribes or CASE tool drivers. Often support staff will work at staggered times to the workshop to turn around information over breaks or overnight for validation the next day.

JAD benefits

The key benefits from the effective use of JAD techniques are:
— ownership of workshop output by the group promotes longer term project commitment,
— the utilisation of user knowledge leads to systems that more accurately reflect business needs,
— by minimising iterative serial interviews it improves the quality of deliverables while reducing design time and costs,
— the group dynamics enhance the identification and resolution of issues, shortening decision making time, cutting across organisation boundaries and improving communication of requirements between all interested parties, and
— JAD projects have key focus points for the generation and delivery of system information, improving project management and project momentum.

Typical environments

Consider Telecom, with nine regions and a head office. Systems may impact six or more different sections in each region and head office, and have requirements to satisfy the needs of three or four levels within each section.

A simple sum of 10 x 6 x 3 indicates 180 different user classes whose needs must be met before personal views on system requirements are considered. In this environment, gaining user commitment and determining system requirements is a complex task. Without workshops the time to understand requirements is too great, and many system decisions may be left to analysts isolated from the business.

Another organisation requiring JAD was TAFE NSW. Here IT personnel, auditors, registrars from colleges, properties and stores personnel, finance and risk management personnel plus other staff were to be brought together to define total requirements for an Asset Management system. In this environment, gaining a full understanding of the relationships between the units, considering all information flows and functional processing requirements would have been extremely difficult without JAD.

Virtually all organisations can benefit from the application of JAD. When there are user groups with different information needs, where politics or policy impact system success, where users have competing priorities and requirements, or where gaining commitment to the system development processes has been difficult in the past, JAD is likely to provide significant benefits.

[Figure: the JAD steps, running from project scoping through planning, briefing, work analysis (the workshop) and validation to transition, with supporting inputs including scope information, an executive sponsor presentation, focus interview information, existing documentation, logistics and participant information, and project management team goals. Source: Synthesys.]



Key success factors

At Synthesys and Axis we have used the METHOD JAD techniques successfully over many projects, from business planning, IT policy formulation, preparing IT strategic plans, investigating organisation structural change, defining system lifecycle methodology standards and system requirements definitions, to detailed design. Throughout these projects we have either produced deliverables to an organisation’s methodology or have produced deliverables to requirements as specified in the METHOD. Various tools including IEW, Excelerator, Oracle CASE, spreadsheets and word processing packages have been used to produce documentation as the workshops progress. These projects have identified a number of key factors in the success of JAD.

i. Apply the full process

Within the METHOD we use a seven-step process to ensure proper preparation so that appropriate deliverables are produced. Ignoring steps can lead to inadequate workshop preparation, unclear objectives and ultimately poor quality project output. Often for small projects these steps are condensed or combined.

ii. Adequate project scoping

Often there are overly optimistic expectations as to what scope can be covered in a single workshop. It may be necessary to split a project into multiple workshops. Proper scoping helps ensure project objectives are realistic, deliverable requirements are known, participants are identified and project commitment is established.

iii. Focus on objectives

Throughout the project, continually focus on objectives and do not allow the workshop to be sidetracked. The facilitator and project manager should always critically evaluate whether issues are within the scope of the project and workshop. If not, they should be posted for later resolution. Attempts to change the scope and resolve issues without adequate preparation by the project team or participants generally result in time wastage and poor deliverable quality.

iv. Brief participants

Often participants are unfamiliar with their role in JAD sessions. Participants should be made aware that workshops are working sessions where active participation is expected and decisions will be made. Consequently, detailed preparation by all participants is required.

v. Plan the workshop approach

Obviously workshops must be tailored to meet specific project needs and the agendas planned accordingly. Agendas may be detailed down to the type of information gathering techniques to be used and what each workshop session should achieve.

However, it is vital to remain flexible and vary techniques and format depending on the group dynamics. Further, it is not always possible to predict accurately the time necessary to resolve particular requirements.

vi. Logistic preparation

Prepare in advance requirements for rooms, lunch and coffee breaks, and support tools such as electronic whiteboards, CASE tools, photocopiers, stationery, etc. Participants should not be distracted from focusing on issues and information requirements.

vii. Deliverable planning

Ensure the workshop is structured for sessions to produce information that matches objectives and deliverable specifications. For requirements and design projects this usually entails system context modelling, and information and process modelling, along with system access definitions.

viii. Knowledgeable support

The amount of quality information that can be generated at a well run JAD workshop is enormous. The use of scribes, design analysts and CASE tool drivers can greatly assist the facilitator in managing the collection of the information throughout the workshop. In one project, for the specification of a Business Funding model, support included information capture on a combination of word processors, IEW and spreadsheets to document requirements both during the workshop and after hours.

ix. Generate enthusiasm

The facilitator needs to remain positive throughout sessions and keep participants motivated. This is best done by varying the techniques used to gather information and resolve issues. It also helps to generate a sense of commitment and ownership of the workshop and project outputs.

x. Validate information as the workshop progresses

Conclusion

JAD provides a focus for the identification of the central business issues and information requirements. The active participation of stakeholders in workshop sessions is vital in building commitment and ownership in the difficult task of building systems of lasting benefit to an organisation.

Author: Norm Beck is a senior consultant for Synthesys. He has over eighteen years’ experience in the IT industry and specialises in the application of JAD for planning and system requirements definition.

TURBOCHARGE YOUR CASE TOOL

Maximise your CASE tool investment with The METHOD, a proven JAD technique from Performance Resources Inc. (PRI) that accelerates the requirements definition and design process.

The METHOD works with leading CASE tools — like IEW® and Excelerator® — to provide the most powerful fusion of tool and technique available today.

Call now. We’ll show you how leading companies and government agencies worldwide are cutting their application development and maintenance backlogs with CASE and The METHOD.

Consultants in Information Technology and Systems Development

SYNTHESYS
NSW and ACT: 32 York Street, Sydney NSW 2000. Phone (02) 299 4334. Contact: Brian Lees.
Victoria: 113-115 Little Lonsdale Street, Melbourne VIC 3000. Phone (03) 662 1800. Contact: David Mills.

Performance. Not PromisesIEW is a trademark of KnowledgeWare. Inc. Excelerator is a trademark of Index Technologies. Inc

PROFESSIONAL COMPUTING, JUNE 1991 9


Driving the performance management industry

IBM, the growing sophistication of DP/MIS operations, and the dramatic shortage of skilled, senior personnel are key forces influencing the performance measurement business.

Mike Wynd

IT has only been in recent years that the importance of the performance management industry has become apparent. Today there are three primary forces driving the industry. These are IBM, the growing sophistication of DP/MIS operations, and the dramatic shortage of qualified or seasoned personnel at senior level.

Viewed individually, DP staff could deal with these industry influences. Viewed together, the whole is significantly greater than the sum of its parts. For many firms in the computer industry, IBM is viewed as a formidable competitor. However, companies in the performance management field generally view the computer giant as an asset to their balance sheet.

IBM most certainly wants its systems to perform well. However, from a business perspective, IBM emphasises spending its resource dollar on hardware development and getting applications onto its systems to increase their capabilities and attractiveness to the market.

Tracking the giant

Since 1964, with the introduction of the System/360, IBM has carried out a program of introducing a series of fairly compatible machines. Today, these systems are at least 40 times more powerful than the originals.

To keep pace with corporate and government user demands, IBM is creating an increasingly complex environment. It is one that involves high-powered central processors connected to departmental processors and workstations.

As part of what appears to be the firm’s master plan, IBM is tying distributed elements to the centralised data processing environment — an IBM strength.

User sophistication increases

A great number of DOS, VM, and MVS systems are still being used for batch work and software development projects. However, the relatively recent thrust of online systems into the consumer marketplace, as evidenced by automatic teller machines (ATMs) and point-of-sale (POS) systems, has been making performance management and capacity planning efforts critical.

The principal added dimension is that the performance problems in batch operations, which in the past caused concern among DP/MIS personnel but were not catastrophic, have become critical with online systems. Here performance problems affect customers directly and can have a very negative impact on an organisation. Over the past few years, this change has been instrumental in the parallel evolution of operating systems and supporting performance management products.

The management information systems of most large organisations have:

IBM-equipped data centres running IMS, with a great degree of organisational dependence on system performance for survival.

CICS, a rapidly-growing online production environment that is becoming increasingly sophisticated and important.

[Photo: Mike Wynd]

The MVS operating system, which with TSO, batch operations and a myriad of other components promises to be with us for several years to come.

VSE and VM, for smaller installations forced to meet increasing demands with limited budgets.

The performance management products which have developed in conjunction with these MIS features include:

Realtime products, employed for day-to-day fire fighting.

Current-time products, assisting the organisation in manipulating and configuring the systems to reduce the need for fire fighting.

Future-time products, generally used by organisations that have been able to reduce fire fighting and optimise their system, and are now in a position to plan for the various possibilities they may encounter in the future.

These products include modelling tools to answer the “what if” questions.
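As an invented, minimal example of the arithmetic behind such “what if” modelling (a textbook single-server M/M/1 queue, not any vendor’s product): response time is service time divided by one minus utilisation, so a planner can see directly what a growing arrival rate does to response time.

#include <stdio.h>

int main(void)
{
    /* Toy capacity "what if": for a single-server queue with
       random (M/M/1) arrivals, response = service / (1 - util).
       The arrival rates and service time are invented figures. */
    double service = 0.10;                         /* seconds each */
    double rates[] = { 5.0, 6.0, 8.0, 9.5, 10.5 }; /* per second */
    int i;

    for (i = 0; i < 5; i++) {
        double util = rates[i] * service;
        if (util >= 1.0)
            printf("%4.1f/s: saturated\n", rates[i]);
        else
            printf("%4.1f/s: utilisation %3.0f%%, response %.2fs\n",
                   rates[i], util * 100.0, service / (1.0 - util));
    }
    return 0;
}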

The importance of these product categories at any single data processing organisation depends on the maturity of the shop and how long it has been carrying out various performance management activities.

Systems analysts and programmers realise that they should be using future-time tools to project capacity requirements, although most are forced to worry about realtime problems. Fortunately, there are strong, user-proven tools available to assist them in this area.

While these people would like to be able to plan, many are in the survival and reactive mode, so they can focus only on solving today’s problems — fire-fighting. They cannot be worried about planning for tomorrow when customers are demanding immediate service and support.

By implementing available realtime tools, they are able to do a better job of fighting fires and meeting customer demands.

This provides them with the breathing room they need to use current-time tools to minimise their fire-fighting activities. Over a reasonable period of time, realtime problems become the exception rather than the rule. Of course, the Utopian dream is to have the system pre-empt the fires and take corrective action before users are impacted. As their operations mature, they will be positioned to plan for the future.

In the early 1960s, systems programmers became experts because IBM operating systems didn't work.

They were able (and were encouraged) to get inside the operating system to regularly debug and enhance operating system code. In the '70s, their level of involvement with the operating system decreased. By the '80s, IBM's operating systems had become relatively bug free.

'People who would like to plan are in survival and reactive mode'

As a result, younger systems programmers no longer have the hands-on experience with operating system code that their predecessors had.

As an industry, we have made major strides in hardware (and software) reliability. Unfortunately, managing the performance of those systems has not stabilised. Multiple systems have different presentation services, different command/message formats, multiple logs, multiple consoles, multiple sources of information, and multiple recovery or fallback procedures.

The challenge that faces suppliers in the 1990s is to integrate the pieces and tools. To have value for DP/MIS management and their users, software vendors have to help bring companies' performance management activities to a manageable level. More universal performance management tools and systems are needed to meet the changing needs of the data centre.

Single-site, single-CPU systems are the exception, rather than the rule. We now see multiple-site, multiple-CPU distributed systems with applications talking to applications and interaction between user terminals and networks.

As systems programmers' tasks become more complex, the products they use must not only do a better job for them but must also contribute to their learning process.

This integration is occurring and is frequently based on automated operations. Expert systems are playing a part. These newer tools and systems will not replace the human experts, but will become labour-saving devices that allow experts to focus on real issues rather than trivial, recurring problems.

As IBM continues to introduce more system layers and the underlying complexity increases, tools for automated detection and correction will become vital in this talent-starved arena. As DP/MIS organisations become larger and more indispensable to their firms' daily operations, the potential for disaster can only increase.

Integrated performance management systems based on automated operations and expert systems technology will be vital if DP organisations are to economically meet critical, constantly expanding user requirements.

Author: Mike Wynd is manager, Enterprise Systems Group, Distributed Data Processing.


4GLs press on into the '90s

"Those who fail to learn the lesson of history are doomed to repeat it."

Napoleon

Graeme Burt

FOR some time, commentators have been prophesying the demise of fourth generation languages (4GLs). The talk in the streets is that they have outlived their usefulness. Several commercial ventures have been bold enough to announce the arrival of fifth generation software solutions.

In these (relatively) early days of the 1990s, it is appropriate to pause and consider where 4GLs are headed. Are they a tool of yesterday or will they evolve and continue to serve business? How will they adapt to the changing business and technical environment?

In this article we explore the future of 4GLs and how they could evolve and maintain their solution capabilities.

Before peering into the future, however, it is worth taking some time to review the past. To avoid confusion, a standard definition of a 4GL should be agreed upon. James Martin, in Application Development Without Programmers (Prentice Hall, 1982), defines the 4GL as "a high-level language that provides an integrated syntax for data definition, data maintenance, and data analysis, yielding productivity gains of 10:1 or more over COBOL."

15-year history

To understand where 4GLs are going, it is worth reviewing their 15-year history. The term Fourth Generation Language was first used by Datamation in the United States to describe FOCUS, which is widely regarded as the world's first 4GL.

Until 1975-76, computer systems were primarily batch systems. Input from the user environment, in written form, created requirements for the data processing people, who used languages such as COBOL to develop applications.

During the era of 1976-1989, Third Generation Languages were primarily in use and all in-house development, reports and inquiries were performed by DP professionals. Business typically operated in a mainframe environment, and the growing backlog of dissatisfied user requests created an environment where 4GLs could flourish.

The world of the end user changed. The personal computer was launched and quickly took hold in the office.

More and more end users used their own tools to access information. 4GLs began to appear: report writers, languages, and application development systems.

PC versions of mainframe 4GLs began to emerge, although in most cases they were not fully compatible with the mainframe versions. Report writers and application development systems were data specific and tied to proprietary DBMSs.

End-user computing accelerated through the growing use of decentralised midrange computers. During this period many vendors began installing disconnected, standalone and midrange machines.

End users became computer literate. Everyone wanted their own source of computing power because of the traditional lack of responsiveness of the centralised data processing organisation. Different kinds of machines and DBMS technology became a barrier to productivity and success.

It soon became apparent that decentralised systems needed to be able to communicate. Business needed data distributed, but it also needed corporate-wide access to the data. Cross-platform connectivity became very important. Business and information systems people recognised the need for standards in terms of communications, operating environment and DBMS.


In the late 1980s we saw new technologies such as CASE and knowledge-based systems begin to emerge. The 4GL became a principal vehicle for connecting and accessing data across systems and across operating environments.

Where to from here?

There are five strategic challenges for departmental Information Systems managers in the 1990s.

1. Find ways to make the information assets of data and applications sharable.

2. Seamless interoperability will become the norm by the mid-1990s.

3. An open information system architecture must be able to integrate new hardware and software technologies cost-effectively.

4. For some time to come, business will have to deal with proprietary vendor architectures and standards; no new enterprise system should be built upon a single vendor's standards.

5. The ability to control decentralisation and manage the information asset on a strategic basis will continue to be a major challenge.

Enterprise information architecture

Information architecture in the 1990s will be vastly different from that of earlier decades. The 1990s will see the establishment of an infrastructure of capabilities such as tools, services and controls.

Success in an enterprise-wide, multivendor environment will be dictated by the degree to which the information architecture is open.

Users will need the ability to deal with proprietary hardware architectures installed across the business enterprise; to have a completely open and independent operating environment; to be able to choose DBMSs freely; and to be able to cope with the differences in systems-to-systems communications.

The ability to distribute data and applications to the most cost-effective processing location is a critical requirement.

Many tend to believe that a single vendor, such as IBM with its SAA standard or DEC with its AIA/NAS strategies, or the UNIX environment, will be a total solution for their enterprise information environment.

In reality, most businesses today use multivendor environments. When planning, they need to think about what is important or strategic in building their information architecture.

Business runs on data, and the objective of any enterprise-wide information architecture is for applications to be able to use data as though it resides in a single, logical, common data foundation.

An environment is needed where an application can use the concept of a data warehouse: back the truck up to the warehouse, ask for the data it needs, and have that data extracted and given to the application or the user. Users need not care where data comes from, nor whether it is remote or local.

On the base of a common data model or common data foundation, users will be able to develop a spectrum of distributed and interoperable applications that run in system-independent environments and are portable and scalable between systems. The hardware and/or software platform used would become irrelevant. On any platform, the application would look and feel the same.

A single set of tools is needed that can be used in any environment, on any system, to develop distributable and interoperable applications. Those tools need to integrate seamlessly into business environments and have control facilities that allow users to easily maintain applications regardless of the hardware and software environment.

Role of the 4GL in enterprise systems

The 4GL is a strategic tool in the architecture of enterprise information systems. It has come a long way, and these six points summarise what it will be in the 1990s.

1. The 4GL will be used as a complete enabling tool for applications, that is, the ability to build any kind of application that is appropriate in the business environment.

2. It will also be a tool for accessing information from any platform and any database structure within the environment.

3. The 4GL, having been a connectivity solution in the 1980s, will extend that capability in the 1990s to operate in both distributed and interoperable systems environments.

4. It will also be an enabling technology for the integration of new technologies, such as object-oriented programming and image processing, that we see rapidly becoming part of the environment of the 1990s.

5. In addition, the 4GL will be an excellent migration aid, allowing users to move from their existing environments into the environment of the '90s while maintaining full integrity of existing applications and existing data.

6. Finally, the 4GL of the 1990s will be a management control tool, providing for the integrity and control of applications across multivendor systems.

From EIS and DSS systems up through and including transaction processing systems, integration with CASE technology and integration with expert systems, 1990s 4GLs will enable a broad variety of solutions.

4GLs will incorporate support for graphical user interfaces, such as Presentation Manager, DECwindows, Motif and the other GUIs that become important, and will be portable across these GUIs. This will allow the user to develop an application under, for instance, Presentation Manager and move it into the DECwindows or Motif environment without modification.

The 4GL will also take full advantage of the ever-improving price/performance of hardware and operating environments, allowing 4GL-based applications to be downsized to more cost-effective computing environments, and will provide a vehicle for co-operative processing.

As object-oriented programming techniques improve the productivity of programmers in the 1990s, the 4GL will evolve to support them, providing the ability to develop reusable application components and have those components distributed across different computer technologies throughout the connected enterprise systems.

The 4GL will be integrated with the knowledge-based systems technology of the expert systems environment, providing data-based and knowledge-based applications that extend beyond the common applications of today. In addition, 4GLs will incorporate the ability to manipulate images, pictures and documents.

4GLs today can join non-relational and relational data, and can be used to help move data from non-relational environments such as IMS into relational environments such as DB2.

In the area of management and control, each of the major hardware vendors is providing its own repository and control environment: IBM with its AD/Cycle announcement and Repository Manager/MVS, and DEC with its CDD+ repository services. The 4GL will provide its own controls that allow you to maintain the control and integrity of applications across environments.

4GL role summary: a strategic solution

4GLs are a strategic solution. The right 4GL will give the ability to access data from any location transparently and treat it as though it resides in a common data model, and to create a full set of integrated applications, all using a single language solution.

4GLs are evolving to meet the challenge of change. That evolution is forcing a broader definition of the term Fourth Generation Language. It is also ensuring that 4GLs continue to be a key tool at all levels in business.

Fifth generation languages, which are still some considerable distance away, will be designed to solve a new class of problems yet to be defined. Their characteristics and capabilities will be quite different from what we know today as a 4GL.

Napoleon advocated that we ought to learn from history. Some 4GL vendors have learned from their 15-year history and face the 1990s strong and ready for a challenge.

Author: Graeme Burt is Technical Manager of FOCUS Technologies, a supplier of the FOCUS 4GL product line.

The Australian Computer Society

Office bearers
President: Alan Underwood. Vice-presidents: Peter Murton, Geoff Dober. Immediate past president: John Goddard. National treasurer: Glen Heinrich. Chief executive officer: Ashley Goldsworthy. PO Box 319, Darlinghurst NSW 2010. Telephone (02) 211 5855. Fax (02) 281 1208.

Peter Isaacson Publications (Incorporated in Victoria)

PROFESSIONAL COMPUTING
Editor: Tony Blackmore. Editor-in-chief: Peter Isaacson. Publisher: Susan Coleman. Advertising coordinator: Linda Kavan. Subscriptions: Jo Anne Birtles. Director of the Publications Board: John Hughes.

Subscriptions, orders, editorial, correspondence
Professional Computing, 45-50 Porter St, Prahran, Victoria, 3181. Telephone (03) 520 5555. Telex 30880. Fax (03) 510 3489.

Advertising
National sales manager: Peter Dwyer.
Professional Computing, an official publication of the Australian Computer Society Incorporated, is published by ACS/PI Publications, 45-50 Porter Street, Prahran, Victoria, 3181.

Opinions expressed by authors in Professional Computing are not necessarily those of the ACS or Peter Isaacson Publications.

While every care will be taken, the publishers cannot accept responsibility for articles and photographs submitted for publication.

The annual subscription is $50.


386 monitoring the Stallion way

WHEN taking an airplane flight over a city, you can see road traffic with great clarity. You can see bottlenecks, roadblocks and under-utilised roads. Road planners can say "let's put a motorway in here and go around the bottleneck; that will solve our problem."

Solving bottlenecks in this way may yield dramatic improvement, providing there is no further bottleneck lurking just down the road. How many times have we seen major highway expenditures merely move the location of the bottleneck? Too often, money is spent on increasing the power of a computer, which remains under-utilised due to unforeseen bottlenecks in certain parts of the system.

We tune and service our cars regularly, but neglect our computer systems, even though tuning could yield an extra 25 per cent of capacity.

Unfortunately, there has been a lack of understanding of the tools and methods available to support systems tuning, so it has remained a mysterious art. While SCO has streamlined the installation of its UNIX operating systems to provide good performance on initial installation, dealers and system integrators are often unaware of the gradual degradation of a system's performance as the job mix changes over time. Consequently, they rarely become involved with tuning during the system's lifetime.

This paper addresses the various system resources that require regular tuning: the CPU, memory and disk. We will discuss the tools available to monitor performance, and the methods available to improve it. Figures given are only a rough guide, as acceptable performance statistics vary widely from system to system. Changes should not be made to the system without carefully monitoring and evaluating the results. It is just as easy to make matters worse as to improve them.

The CPU

Selecting the correct CPU is critical to the ongoing success of any UNIX installation. It remains immune to tuning, and usually cannot be upgraded after installation.

First, make sure the CPU architecture is 32-bit (386/486). Although most industry-standard microcomputer platforms also support 16-bit (286) processors, these handle 32-bit arithmetic and memory sharing inefficiently. A 386 CPU clocked at the same speed as a 286 will run applications two to three times faster.

CPU resources are measured in time, which cannot be stored for later use. They must be rationed by load-sharing. Spread the load evenly throughout the day, and avoid running programs at peak times if they can be run out of work hours. A good example is any large file transfer using UUCP or NFS; these can consume up to 50 per cent of CPU time. Pinpointing programs that make inefficient use of CPU resources and replacing them with efficient ones can also make a big difference. Any system experiencing a consistent shortage of CPU resources will probably need a more powerful processor; load-sharing and other tuning methods will not be enough.
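As a hedged sketch of load-sharing in practice (the file, remote system name and schedule are hypothetical), a large UUCP transfer can be deferred to the small hours with at(1):

    # Queue a large transfer for 2 am instead of running it at peak time.
    at 0200 <<'EOF'
    uucp /usr/spool/reports/month_end.dat remotesys!~/uucppublic/
    EOF

For transfers that recur on the same schedule every night, a cron entry is the more natural home.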

Available tools

The most commonly known CPU monitoring tool is the ps command. With the a, d, e and f flags, it provides valuable information about processes running on the system. The most important statistic is found in the TIME column, which gives the minutes and seconds the CPU has spent executing each process. This will highlight "runaway processes", which can occur on ports to which no terminal is attached when electrical interference creates interrupts. These interrupts can spawn another login, which in turn creates an interrupt, and so on. A cure for this problem is to have gettys running only on ports that are regularly used and are connected to terminals.

ps has shortcomings. System status can change while ps itself is running, so it can only give an approximation of what is happening. Running ps under nice -20 will increase its priority, reducing the risk that interruptions to the CPU distort the results. However, this will have an effect on system performance if run at regular intervals, as the CPU will frequently be executing ps rather than other important tasks.
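A hedged example of putting ps to work (the TIME column is field 7 in typical ps -ef output, but positions vary between UNIX variants, so check your own output first):

    # Surface heavy CPU consumers by sorting on cumulative TIME.
    # The sort is lexical, so treat the ranking as rough.
    ps -ef | sort -r +6 | head -20

    # Run ps itself at raised priority so a saturated CPU does not
    # distort the snapshot (superuser only), as suggested above.
    nice -20 ps -ef

A process showing hours of TIME against a port with no terminal attached is the classic runaway signature.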

Figure 1: Monitor's Total CPU Usage screen (bars showing the percentage of CPU time spent idle, in disk delay, in system time, in user time and in floating point).

Figure 2: Monitor's System Memory Usage screen (sizes in KB: system code 368, system data 290, driver and miscellaneous 130, buffering 1150, user programs 3334, free memory 428; total 5700).

The time command gives the amount of CPU time a process consumes, in real (the length of time the process has existed for), user (the time spent computing) and sys (time in system space, processing system calls).

Adding the user and sys values gives the total time the CPU spent running the program. If you suspect a certain program is consuming large quantities of CPU time, run time on it and compare the results with other programs.
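For instance (the program name is hypothetical, and the output format varies by shell and system):

    $ time ./month_end_report
    real    1m12.4s     elapsed wall-clock time
    user    0m38.2s     CPU time spent in the program's own code
    sys     0m09.1s     CPU time spent in system calls on its behalf

Here user plus sys is roughly 47 seconds of CPU consumed; the remaining 25 seconds of elapsed time were spent waiting, for the disk or for other processes.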

uptime provides information on the load average of the system over the past minute, five minutes and 15 minutes. Load average refers to the average number of jobs waiting in the queue to be processed. It is a good indicator of the load the system has experienced over the past few minutes; however, it cannot be used to determine instantaneous load.

The first two fields of the vmstat command can be used to get the instantaneous load on a system. They give the number of processes waiting for the CPU, the number of processes waiting for resources that may soon require the CPU, and the number of processes that are swapped out but were recently run. Run with no arguments, vmstat gives the average values since the system was booted. With arguments, it gives the average or total values since the last sample, at set intervals (e.g. vmstat 5 5 will report the values every five seconds, five times). Once a certain threshold is reached, where too many processes are vying for the CPU's resources, performance will become intolerable. This threshold varies from system to system (around 5 to 10). The last three fields of vmstat give the user and system time and the percentage of time the CPU was idle. An idle value consistently less than 30 per cent suggests the presence of a bottleneck; this value normally oscillates between 40 and 80 per cent.
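A hedged sample (the figures are invented for illustration, and column layout varies between ports):

    $ vmstat 5 5        # report every 5 seconds, 5 times
    procs                cpu
     r  b  w   ...   us  sy  id
     2  0  0   ...   45  20  35
     6  1  0   ...   55  35  10

The first line is the average since boot; later lines are live samples. A run queue (r) sitting at 6 or more while idle (id) is stuck near 10 per cent is the signature of a CPU bottleneck.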

The sar command is similar in execution to vmstat, but is not available on SCO XENIX. With the -u flag, it gives the %idle value and the %wio value (%wio specifies the amount of time the CPU waits for I/O requests to and from disk to complete). A %wio value greater than 7 could be an indicator of a disk bottleneck. sar can be run in the background during peak demand (via cron), logged to a different file for every day of the month, and reviewed at a later date to obtain minute-by-minute and daily average performance statistics.
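A hedged example of such logging (the sa1 collector and its paths follow the usual System V layout, but they differ between ports):

    # crontab entry: sample every 10 minutes, 8 am to 6 pm, weekdays.
    # sa1 appends to /usr/adm/sa/saDD, one file per day of the month.
    0,10,20,30,40,50 8-18 * * 1-5 /usr/lib/sa/sa1

    # Later, review CPU and %wio figures for, say, the 6th:
    sar -u -f /usr/adm/sa/sa06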

Floating point arithmetic

Certain applications are heavy users of floating point arithmetic. A maths co-processor can execute these functions 100 times faster than a 386 alone and is an inexpensive way of boosting system processing power. An alternative is to purchase a 486-based system, which has an integrated floating-point co-processor. Your system will need a co-processor if you regularly use spreadsheets, CAD software and other graphical packages, or awk.

In addition to the above tools, Stallion Technologies has developed a user-friendly UNIX performance analyser software package called Monitor. Monitor identifies and diagnoses system bottlenecks and gives a graphical realtime presentation. It focuses on the four main areas of system performance: the CPU, disk, system memory and disk buffering. For each problem identified, Monitor suggests possible solutions.

The screen depicted in Figure 1 is used to determine CPU load. In this case, the load is user programs and floating point processing. Other screens can then be accessed to determine exactly which processes are using CPU time and floating point emulation.

System memory

There are three uses of memory in a UNIX system:

1. To hold the operating system text (object code)

This component of system memory is largely inflexible. It is permanently held in memory and cannot be swapped out.

2. To hold the operating system data

The kernel data region is partly made up of the various system tables. The size of these, and the amount used, can be determined by executing sar -v. The -sz columns show the ratio of used to allocated entries for each of the four tables: text, process, inode and file. These should never be allowed to overflow, so great caution should be exercised before you reduce the size of these tables. You can also reduce the kernel data region by reducing the number of virtual consoles. As only three or four are normally used, eliminating six or seven may gain an extra 100 KB.
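For example (output abridged and figures invented; exact column names vary):

    $ sar -v 60 5
    proc-sz  ov  inod-sz  ov  file-sz  ov
     64/100   0  170/200   0  140/200   0

Each pair reads used/allocated, and an ov (overflow) count above zero means that table filled at some point; that is the cue to enlarge it, not to shrink it.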

The kernel data region mainly consists of the disk I/O buffers. The most recently accessed copies of disk blocks (those most likely to be re-accessed) are kept in memory to reduce the number of requests to the disk. Disk buffers can consume well over 1 MB of system memory. Correctly sizing them can have a big effect on system performance.

Determining the correct balance between memory allocated to disk buffers and to user programs can be tricky. A general guide is to allocate 20 to 25 per cent of memory to disk buffers, up to a limit of 4 MB (about 600 KB on systems with limited memory, up to 4 MB on systems with abundant memory).
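A worked instance of that guideline (machine sizes are illustrative; on SCO systems the cache is typically sized by the NBUF kernel parameter, counted in 1 KB buffers, though parameter names vary between ports):

    # 16 MB system: 25% of 16 MB = 4 MB, which is also the ceiling,
    # so configure roughly 4000 x 1 KB buffers (e.g. NBUF=4000).
    #  8 MB system: 20-25% of 8 MB = about 1.6-2 MB (e.g. NBUF=2000).

Resize in modest steps and re-measure the hit rates described below after each change.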

Available tools

sar -b gives a good gauge of hit rates for disk reads (%rcache) and writes (%wcache). A hit is when the system finds data in the disk buffers; a miss is when it has to go to disk to get the data, which takes many thousands of times longer. The read hit rate should be sustained around 95 per cent most of the time. Note that a 5 per cent miss rate will result in half as many disk accesses as a 10 per cent miss rate. The hit rate values should not fall below 80-85 per cent. Allocating ample memory to disk buffers will maintain high hit rates and remove disk I/O bottlenecks.
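For example (figures invented for illustration):

    $ sar -b 30 10
    bread/s  lread/s  %rcache  bwrit/s  lwrit/s  %wcache
          3      160       98        5       17       71

A %rcache of 98 is healthy; a %wcache of 71 would argue for more buffers if writes dominate the workload.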

3. To run user programs

A system should have sufficient memory for all active programs to be resident. When a UNIX system consistently has less than 10 per cent of memory free, active paging will become a serious disk bottleneck.

Paging and swapping

When a program accesses some code or data that is not in memory, the operating system must swap in a page of the program from disk. This may take over 5000 times longer to access and execute than a direct access from system memory. All paging and swapping must be virtually eliminated, as they are potentially the two greatest causes of performance degradation.

Available tools

The easiest and quickest remedy for excessive paging is to add more memory to the system. Alternatively, you can steal memory by reducing the size of your buffer cache. However, this will affect the buffer hit rate, in turn creating a disk bottleneck. It is very important to balance the memory allocated to disk buffers against the memory allocated to user programs.

Memory bottlenecks can be determined with one of Monitor's Memory screens. Figure 2 shows a system with less than 10 per cent free memory, indicating a paging overhead. Other screens can be used to determine the correct balance between user space and disk buffers.

Fixed disks

CPU speeds have been doubling every two to three years, and memory sizes and disk capacities have increased similarly. Program and database sizes are also growing dramatically. However, industry-standard fixed-disk access times are improving relatively slowly. Disk drives are approaching mechanical technology limits that do not look like being solved in the near future. Thus, the disk bottleneck on UNIX systems has become considerable.

File fragmentation

File fragmentation, where the blocks that make up a file are scattered over the entire disk, can severely limit disk throughput. Average disk throughput on UNIX systems is often restricted to 100-200 KB per second, even though some disk controllers can support 1 to 2 MB-per-second transfers. Even caching disk controllers, which boast phenomenal disk transfer rates, sometimes yield only moderate disk improvement in a UNIX system environment, as their buffering may be constrained by file fragmentation.

Figure 3: Typical block distribution of a fragmented file (percentage of blocks allocated per cylinder for /etc/termcap).

Disks suffer badly from fragmentation after heavy use. As files are created or grow over time, UNIX places data wherever spare blocks exist. Conse­quently, the file system gradually de­grades. Figure 3 illustrates a fragmented file.Tools Availablesar -d gives the amount of time the disk is busy (%busy), the average amount of time in milliseconds, requests wait on the queue to be processed (avwait) and the average amount of time to service each request (avserv). Aim for values better than the rated average seek time of the disk. High avwait and avserv val­ues indicate data fragmentation. Controlling data placement

Controlling data placement

File systems can be arranged to minimise disk seeks. Making files contiguous and sorting the free list can also minimise disk seek overhead. Extra disk drives can be added to spread the load. These mechanisms may sound crude, but they are effective.

Contiguising files

To reduce fragmentation of a file or database:

1. Back it up to tape;
2. Delete the file or database;
3. Trim as much unwanted space from your disk as possible;
4. Sort the freelist;
5. Finally, restore the file or database back onto the disk.
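A hedged sketch of that cycle for a single database file (device names, paths and backup commands are examples only, and a verified backup should exist before the delete):

    cd /usr/data
    tar cvf /dev/rct0 orders.dbf       # 1. back the file up to tape
    rm orders.dbf                      # 2. delete it
    rm -f /usr/adm/oldlog*             # 3. trim unwanted space first
    # 4. sort the freelist (XENIX; from maintenance mode on the
    #    unmounted file system, as described below): fsck -S /dev/usr
    tar xvf /dev/rct0 orders.dbf       # 5. restore; the blocks now land
                                       #    relatively contiguously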

Many sites do this regularly as part of their backup procedure and achieve 100 per cent throughput improvement on database accesses.

Sorting the freelist

Most 3.2 and 4.0 UNIX systems use a bit-map strategy for disk block allocation, making sorting the freelist unnecessary. However, it is important for XENIX file systems to have the freelist sorted before restoring. To do this, use the file system check utility fsck with the -S option under maintenance mode. This should also be done as part of the normal daily backup procedure. Not only do you gain the assurance that the file systems are totally checked and correct, but the freelist will be sorted at the start of each day, so new files will be created relatively contiguously.

The disk should be kept below 95 per cent allocated; file systems fragment far more critically when allocation approaches 100 per cent. An easy way to make more space on your file systems is to locate large unwanted files by typing

    find / -size +200 -print

and determining which are log files that are no longer needed.

Another way of reducing fragmentation is to use Stallion Technologies' intelligent disk optimiser, Crocodile. Crocodile uses a built-in intelligent monitoring function that analyses the system's daily file activity and places files on the disk surface in order of importance. In a typical SCO XENIX or UNIX system, this can boost file throughput by an average of 30 per cent, with a peak measured improvement of 500 per cent for certain disk configurations.

File systems

As you cannot easily resize file systems once they have been configured, you must size them carefully before installation. Files cannot be split across file systems.

Directories should be kept to a manageable size. By typing

    find / -type d -size +4 -print

you can locate all directories larger than four blocks. You should not allow users' directories to grow beyond four blocks (126 files). An indirect block read (where it takes several reads to find a block) occurs when reading blocks of files in large directories, incurring a huge disk and CPU overhead. Also, directories cannot shrink, even when files are removed; a directory that previously held a lot of data will contain a large number of empty slots. Such a directory can be resized by:

1. Creating a new directory;
2. Copying the contents of the old directory (using cpio) into the new one;
3. Removing the old one;
4. Renaming the new one with the old name.
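A hedged sketch of those four steps (directory names are hypothetical; quiesce the directory first, as any process holding files open in it will be left pointing at deleted files):

    cd /usr/fred
    mkdir reports.new                           # 1. create a new directory
    (cd reports && find . -print | cpio -pdm ../reports.new)  # 2. copy across
    rm -rf reports                              # 3. remove the old one
    mv reports.new reports                      # 4. rename to the old name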

Do not resize the lost+found directory; it is used for data recovery after a crash and needs many empty slots. If a directory is large and most of the slots are used, it is a good idea to split it into two or more smaller directories.

Two drives better than one

Up to 40 per cent improvement can be obtained on some systems simply by having two drives instead of one. Statistically, multiple disk heads have a greater chance of being near the required data than a single-drive system.

Authors: Michael O'Brien and Richard Fay co-ordinate marketing at Stallion Technologies in Brisbane, Australia. Stallion develops a range of productivity enhancement and communication products for UNIX systems.


ACS in VIEW

BERNIE Glasson, Associate Professor, Information Systems, Curtin University, Western Australia, was elected Australian Representative to IFIP TC8 in 1990. He is also a member of the WA branch executive and a member of the national education, membership and international boards. Through his work, he has participated in eight TC8 working conferences over the past 10 years, principally those of WG8.1 and WG8.2. He is known in TC8 for his efforts to make the activities of IFIP more relevant to practitioners. He convened and chaired an international panel at the 1990 WG8.2 conference on Information Systems Research to discuss ways this might happen. The 90 or so quite specific suggestions resulting from that panel are currently under discussion within TC8.

International information systems activity

PRESIDENT'S MESSAGE

IT HAS been said that in this world nothing is certain except death and taxes. To this statement we should add "and that information technology is an essential and integral component of our everyday lives".

This statement is based on the extraordinary growth in computing since the introduction of the computer into the commercial environment in the early 1950s.

The long-term ability of Australians to maximise the benefits accruing from technological change, and to minimise adverse consequences, is crucially dependent upon the provision of appropriate training for all members of the community. Unfortunately, in the past decade particularly, we have seen a trend by students entering tertiary institutions away from science and technology courses. Perhaps this is a result of a poor standard of teaching of mathematics and science disciplines, or perhaps it reflects the views recently expressed by Barry Jones, Professional Fellow, in his University Day address at the University of Wollongong, where he was reported as saying that Australia had a profoundly anti-intellectual tradition compounded by a materialist obsession that all values had dollar signs on them. Indeed, he said, the concept of non-material value was a contradiction in terms among the materialists.

We have a responsibility to stem the flood of students entering business and law courses, presumably searching for the material returns mentioned above. Equally disconcerting is that many of these students are Australia's most academically gifted. In addition, it is essential that the Australian government recognise its responsibility by revising its relative funding model to equate the teaching of information technology with other laboratory-based disciplines such as science and engineering.

If Australian industry is to prosper in open competition, the application of technology must proceed quickly and on a wide front. Higher productivity, cost efficiency and improved management techniques will demand use of the latest information technology. This relationship between competitive effectiveness and technological change is just as valid for service industries as it is for government.

Alan Underwood, MACS, President, Australian Computer Society

IFIP Technical Committee 8 (TC8) was established in 1966 and revised in 1990 to address the domain of “Information Systems”.

The scope of TC8 is such that it should be of interest to most professional members: the planning, analysis, design, construction, modification, implementation, utilisation, evaluation and management of information systems that use information technology to support and coordinate organisational activities.

Its aims are to promote and encourage interaction among professionals from practice and research, and the advancement of investigation into concepts, methods, techniques, tools and issues related to:

• Effective utilisation of information technologies in an organisational context.

• Interdependencies of information technologies and organisational structure, relationships and interaction.

• Evaluation and management of information systems.

• Analysis, design, construction, modification and implementation of computer-based information systems for organisations.

• Management of knowledge, information and data in organisations.

• Information systems applications in organisations, such as transaction processing, routine data processing, decision support, office support, computer-integrated manufacturing, expert support, executive support and support for strategic advantage, plus the coordination and interaction of such applications.

• Relevant research and practice from associated fields, such as computer science, operations management, economics, organisation theory, cognitive science, knowledge engineering and systems theory.

Like the other IFIP Technical Committees, TC8 operates through a number of Working Groups; in the case of TC8 there are five.

WG 8.1 (Design and evaluation of information systems) was established in 1976 and has as its scope the development of approaches for the analysis, design, specification and evaluation of computer-assisted information systems. Its aims are to identify concepts and develop theories relevant to the design of information systems; develop methods and tools for applying those theories to the design process; develop methods for the specification of information needs within an enterprise, with emphasis on interface aspects; develop methodologies for evaluating proposals for information systems; and develop methodologies for evaluating the operational effectiveness of information systems.

WG 8.1 has focused its activities through two task groups. The CRIS (comparative review of information systems design methodologies) task group ran through the early to mid '80s, and the FRISCO (framework of information systems concepts) task group started in the late '80s and is midway through its activity. The CRIS activity involved four formal working conferences (the proceedings of each were published by North-Holland, with T.W. Olle as the principal editor in each case) and several informal meetings. The ultimate outcome was a tutorial-level book by T. William Olle and others entitled "Information Systems Methodologies: A Framework for Understanding", published by Addison-Wesley as a second edition in 1991. The more recent FRISCO group is attempting to reach a common understanding with regard to information systems concepts. It has published an interim framework which is the subject of current debate. The next WG 8.1 working conference, "Information Systems Concepts: Improving the Understanding", will critically evaluate that interim framework in an attempt to reach a common understanding internationally. Without that common understanding it is difficult to transfer IS technology globally. That conference takes place in Alexandria, Egypt, in April 1992.

WG 8.2 (The interaction of information systems and the organisation) was established in 1977. Its aim is the investigation of the relationships and interactions among four major components: information systems, information technology, organisations and society. The focus is on the interrelationships, not on the components themselves. These components are understood as: information systems, which includes information processing, the design of systems, organisational implementation and the economic ramifications of information; information technology, which includes technological changes such as microcomputers, distributed processing and new methods of communications; organisations, which includes the social group, the individual, decision-making and the design of organisational structures and processes; and society, which includes the economic systems, society's institutions and the values of professional groups.

WG 8.2 has pursued two streams of interest in parallel. It has focused on the question of IS research, holding two conferences on legitimate strategies for IS research and future directions for IS research. The proceedings of these conferences would be of interest to academic researchers. Unfortunately, that focus has led to the group being dominated by academics. As a consequence, its more general conferences (eg on desktop computing) have not been well attended by practitioners. The next WG 8.2 conference will be held in Minneapolis, Minnesota, in June 1992 and has the title "The Impact of Computer Supported Technology on Information Systems Development". The program committee is working hard to make this conference more attractive and useful for practitioners.

WG 8.3 (Decision support systems) was established in 1981. Its scope is the development of approaches for applying information systems technology to increase the effectiveness of decision-makers, particularly in situations where the computer system can support and enhance human judgment in the performance of tasks that have elements which cannot be specified in advance.

Its aim is to improve ways of synthesising and applying relevant work from resource disciplines to practical implementations of systems that enhance decision support capability. The resource disciplines include information technology; artificial intelligence; cognitive psychology; decision theory; organisational theory; and operations research and modelling. WG 8.3 tries to hold a conference every two years. The next event, entitled "Decision Support Systems: Experiences and Expectations", will take place in Fontainebleau, France, in June/July 1992, hosted by the European Institute of Business Administration.

WG 8.4 (Office systems) was established in 1986 and is perhaps the most active of the TC8 working groups, holding conferences each year. The group's preferred style of operation is to run joint conferences with other IFIP working groups. Its next event is a joint WG8.3/WG8.4 working conference entitled "Support Functionality in the Office Environment", in Kent, UK, in September 1991.

WG 8.4's scope encompasses the study and development of information systems for office work. Such systems are concerned with the support of, and communication in connection with, human activities in an organisation. They are characterised by, among other things, variety, informality and irregularity, but often interact strongly with the more orderly, formal and predictable computer-based information systems used in that organisation.

The aims of WG 8.4 are to develop concepts and formalisms applicable to the office system; further design methodology in the field of office systems, covering the entire range from preliminary study to implementation and system evolution, and consider approaches to planning strategy; contribute to the building of automated tools and the adoption of standards of value in the office environment; study methods for evaluating design products and working systems, including the interaction with traditional data processing systems; and exchange experience and disseminate research findings. There is a proposal under consideration to run a joint WG 8.4 and WG 8.5 conference in Australia in September 1993.

WG 8.5 (Information systems in public administration) is the youngest of the TC8 groups, established in 1988. It focuses on information systems in public administration at international, national, regional and local levels. The group's special emphasis is on the relationship between central and local use of information systems and the provision of citizen services, together with the accomplishment of social goals.

The aims of the Working Group are to analyse information processing policies in public administration; discuss specific applications of information systems in public administration; analyse the impacts of information systems on public administration; apply the results of other IFIP Working Groups, and specifically of TC8 Working Groups, to public administration; and improve the quality of information systems in administration. Apart from being aware of the office bearers, I have no information on the group other than this. However, as the group has just formed, now is the time for interested folk to get involved and influence its agenda.

Like all Technical Committees, TC8 works mainly through its working groups, and they in turn use the working conference and the task group as the focus for their activity. TC8 itself does, however, hold large conferences of its own on topics that cross working group boundaries (eg "Government and Municipal Information Systems") or that are shared with other TCs (eg "Artificial Intelligence in Databases and Information Systems", with TC2). Former TC8 representative Cyril Brookes was also successful in persuading TC8 to run joint events with the ACS here in Australia. I hope I can do likewise. The next general TC8 conference incorporating all working groups, entitled "Collaborative Work, Social Communication and Information Systems", will take place in Helsinki, Finland, in August 1991.

Further information on IFIP TC8 and its working groups can be obtained from: Dr Bernard Glasson, School of Information Systems, Curtin University, GPO Box U1987, Perth 6001, Western Australia. Phone (09) 351 7682 or (09) 351 7685. Fax (09) 351 3076 or (09) 351 2378. E-mail: Glasson@BA.curtin.edu.au.


