Interprocessor Communication for Multi-Microcomputer Systems

Paul M. Russo
RCA Laboratories

Background

Since the announcement of the first commercial microprocessor in 1971, integrated CPU's have evolved from mere laboratory curiosities to ubiquitous fundamental system building blocks. Moreover, rapid advances in LSI technology during the early 1970's have resulted in ever larger RAMs and ROMs - RAMs containing 4096 bits on a single chip are already being delivered in quantity and chips containing 16,384 bits are becoming readily available.

The advances made by LSI technology have not been applied solely to microprocessors and memories. Complex but general-purpose logic blocks are being integrated on single chips in increasing numbers. The development of single-chip universal asynchronous receiver transmitters (UAR/T's) for data communications followed closely the announcement of the first microprocessor. Other examples of the pervasiveness of LSI include digital watch, calculator, and automotive ignition-control circuits. More recently, single-chip peripheral interfaces are emerging that will permit the system designer to assemble a complete computer system with but a handful of components - at costs that would have been denounced as impossible a mere decade ago.

The ability to introduce microprocessor control into many systems currently implemented via hard-wired logic will bring to these systems all the attendant advantages of stored program control. These include greatly improved flexibility, reliability, ease of maintenance, and lower cost. Additionally, many systems and system functions that were uneconomical to implement via hard-wired control are now feasible - making possible a host of new products for the home, school, automotive, and industrial markets.

A natural evolution of microprocessor-based system architectures is that of distributed processing, i.e., multi-microcomputer systems. In distributed intelligence systems, intelligent subsystems, dedicated to specific tasks, communicate in an optimal fashion to improve system throughput, increase reliability, and add a new dimension of flexibility.

This paper briefly describes several useful multi-microcomputer structures and then details the architectural features of a general-purpose interprocessor interface for the RCA COSMAC CPU. The flexibility of this interface, which has been built and tested, will permit the development of multi-COSMAC systems for a multitude of applications. Finally, the question of when multi-microcomputer architectures become effective is briefly addressed.

Introduction

Before one can intelligently discuss systems containing a plurality of CPU's, the class of systems to be considered must be clearly defined. Considerable confusion exists in the literature since the terms "multiprocessor systems" and "multicomputer systems" are often used synonymously. Since these two classes of systems are radically different, it is perhaps wise to identify their distinguishing features.

In general, multiprocessor systems are those that operate on a single input stream or work load. A single integrated operating system allocates hardware resources where needed. These systems are used primarily in situations where high reliability is a must. This is achieved either by assembling a fully redundant system or by allowing for rapid system reconfiguration. Multicomputer systems, on the other hand, operate on several input streams and do not have an integrated operating system. Interprocessor communication is primarily at the data level. In more sophisticated situations, however, data may include commands to take action or responses to specific requests.

Viewed in another light, the main function of multicomputer systems is to separate or partition the various tasks to achieve improved system throughput. Examples in larger systems include that of separating "number crunching," done in the main CPU, from "I/O processing" performed in various I/O processors. In multi-microcomputer systems, the primary motivation is to separate tasks that are mostly independent, i.e., ones that require relatively little intercommunication. This partitioning enables the various microcomputers to be far more responsive to their dedicated tasks. In fact, each microcomputer may be controlling a process requiring, for example, rapid responses to interrupts. It would be difficult, perhaps impossible, for a single microprocessor to respond rapidly to a large number of interrupts.

This paper will be restricted to discussing multi-microcomputer systems. The brief exposition of multiprocessor systems that follows is intended to provide enough detail so that the reader can readily identify the two broad classes of systems. Note, however, that it is possible to have systems that fall into both categories.


Multi-Microcomputer structures

As discussed above, a multiprocessor system contains a plurality of processing elements with the following additional constraints:

1. Main memory is accessible by every CPU.
2. I/O channels and controllers can be shared between CPU's as needed.
3. A single integrated operating system coordinates all the hardware/software interplay.

The above requirements do not preclude each CPU from having private RAM associated with it nor certain I/O devices from being accessed only by specific processors. Thus, the primary distinction of a multiprocessor system is that a single workload is applied to it. This single workload is processed by an integrated operating system which assigns hardware resources as required. This process is illustrated in Figure 1.

[Figure 1 shows the hardware (multiple CPU's, multiport memory, shared I/O access, I/O controllers) exchanging control signals and reports with the software (a single integrated operating system with fail-soft reconfigurability).]

Figure 1. A multiprocessor system.

Several categories of multiprocessor systems are in existence. These include common bus, crossbar switch, multimemory/multibus, pipeline, and array processors. The single most important advantage of multiprocessor systems is improved reliability. This is because these systems are either modular in nature and hence readily reconfigurable under software control (graceful degradation), or they are fully duplexed. In a full-duplex system, all the processing is performed in parallel on two identical systems, and the results are constantly compared. However, either portion of the system can carry the full load.

The reader interested in additional information on multiprocessor systems is referred to an excellent treatise on the subject edited by P. H. Enslow, Jr.1

In multi-microcomputer systems, ideally each microcomputer performs a dedicated task. Very little data is interchanged between processors relative to the total system data flow. Each microcomputer can operate relatively independently of the others. The software in each subsystem is optimally tailored to the processing task for which it has responsibility. Should any one CPU become overloaded, it cannot be unburdened by any other processor; however, this should not occur with proper system design.

Multi-microcomputer systems, often called distributed intelligence microcomputer systems (DIMS), offer many advantages to the performance of the total system. The system can be modular in nature where each subsystem is similar in hardware but customized in software. This greatly reduces maintenance and spare-parts inventory problems. Should one subsystem fail, the remainder can usually keep functioning, giving the system some "fail soft" capability, and hence, improved reliability. In DIMS each subsystem is generally I/O oriented, and the total system throughput, with careful design, can approach the sum of the throughputs of the individual processing elements. Additional information on DIMS is available in the literature.2-4

DIMS structures. A myriad of possible DIMS structures exists. At one extreme lies the master-slave, or star, organization. At the other extreme lies the generalized network, or master-master, structure where every CPU has equal status. Intermediary structures and loop structures are also possible and may, in fact, be optimal for some particular application. Before discussing the applicability of any given structure in a microprocessor environment, it is worthwhile to briefly examine some of the extreme structures and identify their salient characteristics. Figure 2 illustrates a master-master structure in which any CPU can communicate with any other CPU. In this organization, all the CPU's must support compatible interprocessor interfaces and I/O instructions. This organization may well be effective for large communications networks, typified by the packet-switched ARPA network. However, it may not be suitable for multi-microcomputer systems where the tasks to be performed by specific CPU's may vary drastically.

The master-slave organization (see Figure 3) offers many advantages to multi-microcomputer systems. In a master-slave organization, all inter-CPU communication must always go through the master. The principal advantage of this structure is that each slave CPU can have unique interprocessor I/O instructions, optimally tailored to its task. Hence, interface costs are reduced, and extremely efficient use of I/O code results.

Figure 2. Master-master multicomputer organization.

Figure 3. Master-slave multicomputer organization.

Figure 4. Multi-microcomputer ring structure.

Note, however, that if all the processing elements are identical, the above does not imply different interprocessor interface hardware. The hardware may indeed be identical with the customizing achieved in software, as illustrated in a specific implementation discussed below.

Yet another possible formation is the ring structure illustrated in Figure 4. Note, however, that if the information bus is also needed by the individual CPU's for their own processing, severe contention problems will occur with a resulting degradation in the performance of the overall system.

The master-slave organization appears to be the most suitable for the bulk of multi-microcomputer applications envisioned, i.e., ones where a main CPU controls and supervises a multitude of intelligent (microprocessor-based) subsystems, each dedicated to a specific task.

Shared Memory. The distributed intelligence concept always raises the question of shared main memory. It is usually advantageous for each processing element to have at least some local RAM, but there is no clear-cut advantage in always having a shareable main memory. With certain system architectures it is almost a must, but with others it may well detract from the system's performance.

It is clear that, as the number of processing elements increases beyond two or three, severe contention problems will arise if each CPU must access common memory for a substantial fraction of its cycles. Thus, it is imperative, if a common memory is used, that references to it by the individual CPU's be minimized - hence, some local RAM is highly desirable. In fact, insofar as multi-microcomputer systems are concerned, the primary use of a common memory would be to act as a message center, where each CPU can leave messages for other CPU's and pick up messages intended for it. Such an organization is illustrated in Figure 5. Note that if there are N processing systems, N mailboxes are required, each one having N-1 compartments. Basically, if the ith CPU wishes to leave a message to the jth CPU, it simply stores the appropriate information in the appropriate compartment of the jth CPU's mailbox. Any CPU can scan its mailbox to establish if there are any messages for it.
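As a rough illustration of the N-mailbox, (N-1)-compartment bookkeeping just described, the following C sketch models the message center in software. The array layout and the names leave_message and scan_mailbox are illustrative assumptions, not part of the original design.

/* Sketch of the shared-memory "mailbox" message center described above:
 * N mailboxes, each with N-1 one-byte compartments (one per possible sender). */
#include <stdio.h>

#define N_CPUS 4                                 /* assumed system size        */

static unsigned char mailbox[N_CPUS][N_CPUS - 1]; /* mailbox[j]: messages for CPU j */
static unsigned char full[N_CPUS][N_CPUS - 1];    /* 1 = message waiting            */

/* Map sender i into a compartment index of mailbox j (skipping i == j). */
static int compartment(int i, int j) { return (i < j) ? i : i - 1; }

/* CPU i leaves a message byte for CPU j. */
void leave_message(int i, int j, unsigned char msg)
{
    int c = compartment(i, j);
    mailbox[j][c] = msg;
    full[j][c] = 1;
}

/* CPU j scans its own mailbox and consumes any waiting messages. */
void scan_mailbox(int j)
{
    for (int i = 0; i < N_CPUS; i++) {
        if (i == j) continue;
        int c = compartment(i, j);
        if (full[j][c]) {
            printf("CPU %d received 0x%02X from CPU %d\n", j, mailbox[j][c], i);
            full[j][c] = 0;
        }
    }
}

int main(void)
{
    leave_message(0, 2, 0x5A);   /* CPU 0 leaves a byte for CPU 2 */
    scan_mailbox(2);             /* CPU 2 picks it up             */
    return 0;
}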

Shared memory is a useful concept in the master-master DIMS configuration, since this configuration has a large number of interprocessor paths and therefore needs a large message center. In a master-slave organization, its usefulness is far more limited, since the number of paths is now equal to the number of slave processors and hence is relatively small.


Figure 5. Shared-memory "mailbox" organization.


A preferable approach in this case is to allow block transfers of data between the master and slave microcomputers via direct memory access (DMA) channels. The CPU's may communicate at lower data rates under program control, but data blocks will be transferred asynchronously with normal processing, thereby improving system throughput. This is the approach taken in the interprocessor interface for the RCA COSMAC microprocessor discussed below.

The COSMAC Microprocessor

The COSMAC microprocessor5-7 is implemented on a single 40-pin C2L MOS/LSI chip.* It provides a flexible, powerful building block for a variety of stored program products, including device controllers, terminals, and computers. Special features such as an on-chip DMA channel minimize added circuits for complete systems. A parallel internal structure using static circuits provides maximum reliability, speed, testability, and application flexibility. Proprietary architecture, utilizing a one-byte instruction format, minimizes program memory requirements.

Figure 6 illustrates the microprocessor architecture. R represents an array of sixteen 16-bit general purpose registers. (This is essentially a 16 x 16-bit RAM.)

Figure 6. COSMAC architecture.

P, X, and N are three 4-bit registers. The contents of P, X, or N select one of the 16 R registers. R(N) will be used to denote the specified R register selected by the 4-bit hex digit contained in the N register. R0(N) denotes the low-order 8 bits (byte) of the R register selected by N. R1(N) denotes the high-order byte. The contents of a selected R register (2 bytes) can be transferred to the A register. The 16 bits in A are used to address an external memory byte via an 8-bit multiplexed memory address bus. The 16-bit word in A can be incremented or decremented by 1 and written back into a selected R register.

M(R(N)) refers to a one-byte memory location addressed by the contents of R(N). This indirect addressing system is basic to the simplicity and flexibility of the architecture. D is an 8-bit register that functions as an accumulator.
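The register set just described can be summarized in a small C model, which also makes the M(R(N)) indirect-addressing notation concrete. This is a reader's sketch of the programming model only, not RCA's implementation.

/* Illustrative C model of the COSMAC register set described above. */
#include <stdint.h>

typedef struct {
    uint16_t R[16];        /* sixteen 16-bit general purpose registers      */
    uint16_t A;            /* 16-bit memory address register                */
    uint8_t  P, X, N;      /* 4-bit registers, each selecting one of the R's */
    uint8_t  D;            /* 8-bit accumulator                             */
    uint8_t  I;            /* 4-bit instruction register                    */
    uint8_t  M[65536];     /* up to 65K bytes of external memory            */
} Cosmac;

/* M(R(N)): the memory byte addressed indirectly through R(N). */
static uint8_t m_r_n(const Cosmac *c)  { return c->M[c->R[c->N]]; }

/* R0(N) and R1(N): low- and high-order bytes of the selected register. */
static uint8_t r0_n(const Cosmac *c)   { return (uint8_t)(c->R[c->N] & 0xFF); }
static uint8_t r1_n(const Cosmac *c)   { return (uint8_t)(c->R[c->N] >> 8); }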

The ALU is an 8-bit logic network; I is a 4-bit instruction register. Bytes can be read onto the common data bus from any of the registers, external memory, or input/output devices. A data-bus byte can, in turn, be transferred to a register, memory, or input/output device.

The operation of the microcomputer is best described in terms of its instruction set. As shown in Figure 7, a one-byte instruction format is used. The instructions are summarized below, where [XX] contains two hex digits and represents the instruction byte; see Reference 7 for details.

Figure 7. COSMAC one-byte instruction format (the most significant hex digit is the operation code loaded into I; the least significant digit is loaded into N).

* C2L is the RCA designation for Closed COS/MOS Logic.

Register Operations

[1N]  Increment R(N)
[2N]  Decrement R(N)
[8N]  Transfer R0(N) to D
[9N]  Transfer R1(N) to D
[AN]  Transfer D to R0(N)
[BN]  Transfer D to R1(N)
[60]  Increment R(X)

Memory Operations

[4N]  Load D from M(R(N)) and increment R(N)
[5N]  Store D in M(R(N))
[0N (N≠0), 72, 73, F0, F8]  Additional memory reference instructions

I/O Instructions

[6N]  I/O byte transfer (N ≠ 0, 8)
[7B]  Set Q
[7A]  Reset Q

Miscellaneous Operations

[00]  Idle
[3N]  Short branch and skip
[CN]  Long branch and skip
[70, 71, 78, 79]  Interrupt and subroutine control
[DN]  Set P to value in N
[EN]  Set X to value in N
[F1-F7, F9-FF, 74-77, 7C-7F]  ALU operations: Operand 1 - contents of D; Operand 2 - M(R(X)) or M(R(P))
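To make the one-byte format concrete, the following C sketch splits an instruction byte into its two hex digits and models a handful of the operations listed above. It is illustrative only and covers only a tiny fraction of the set.

/* Sketch of how the one-byte instruction splits into the hex digits I and N. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint16_t R[16]; uint8_t P, X, N, D, I; uint8_t M[65536]; } Cosmac;

void execute(Cosmac *c, uint8_t opcode)
{
    c->I = opcode >> 4;         /* most significant hex digit  */
    c->N = opcode & 0x0F;       /* least significant hex digit */

    switch (c->I) {
    case 0x1: c->R[c->N]++;               break;   /* [1N] increment R(N)     */
    case 0x2: c->R[c->N]--;               break;   /* [2N] decrement R(N)     */
    case 0x8: c->D = c->R[c->N] & 0xFF;   break;   /* [8N] transfer R0(N) to D */
    case 0x5: c->M[c->R[c->N]] = c->D;    break;   /* [5N] store D in M(R(N)) */
    case 0xD: c->P = c->N;                break;   /* [DN] set P to value in N */
    default:  printf("opcode %02X not modeled in this sketch\n", opcode); break;
    }
}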

For the miscellaneous operations, N no longer selects one of the R registers, but is decoded as needed. For example, for instruction 3, N selects the type of branch instruction desired.

Of significant interest to all system architectures - and, in particular, to multi-microcomputer architectures - is COSMAC's elegant and flexible I/O interface. The interface, illustrated in Figure 8, consists of a bidirectional 8-bit data bus and a control bus composed of ingoing and outgoing signal lines. The 3-bit N code, along with MRD, indicates to the external device which of seven input or seven output instructions is being executed. Two external timing pulses are provided to strobe data onto and off of the data bus, and to synchronize the processor and external devices. A 2-bit state code informs the device which state the processor is in: fetch, execute, interrupt, or DMA. A software settable and testable level Q can be used as a serial output port. Four software-testable external flags, EF1-EF4, can be raised by external devices to signal various conditions to the processor. Finally, the interrupt and DMA lines are used by devices to gain immediate access to an interrupt routine and a direct data path to/from memory, respectively.

Figure 8. COSMAC I/O interface.

Instruction 6 permits byte transfers between memory and input/output devices via the common byte bus under program control. The value of N specifies the direction of the byte transfer. N=1-7 implies an output instruction whereas N=9-F implies an input instruction. M(R(X)) can be sent to an input/output device, or an input/output byte can be stored at M(R(X)). Digits N0-N2 are made available externally during execution of the input/output byte transfer instruction. This digit code can be used by external I/O device logic to interpret the common bus byte. For example, specific N codes might specify that an output be interpreted as an I/O device selection code, a control code, or a data byte. Other N codes might cause status or data bytes to be supplied by an I/O device.

In general, as system complexity increases, the limited number of I/O instructions becomes insufficient. This limitation is resolved by the use of a two-level I/O scheme in which the entire COSMAC I/O interface is made available to any of up to 256 I/O devices, one at a time. This is accomplished by dedicating one output instruction as a SELECT instruction, and assigning a unique device number to each external device. When a SELECT instruction is executed, M(R(X)) must contain the appropriate device number. All devices recognize the SELECT instruction. The selected device enables its I/O interface whereas all the other I/O devices become deselected. In this way, the processor can communicate, with its full complement of I/O instructions (except the code given up to SELECT) and external flags, with any peripheral device. However, it may only communicate with one device at a time. Note that all the devices can be simultaneously on and communicating with the outside world, but only the selected device can communicate with the CPU under program control. Note further that the I/O device communicating with the processor via the DMA channel need not be selected, nor is this usually desirable.
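Seen from the device side, the two-level I/O scheme amounts to a compare against the byte broadcast by the SELECT instruction. The C sketch below assumes, for illustration, that output instruction 61 is the dedicated SELECT code; the structure and function names are hypothetical.

/* Device-side sketch of the two-level I/O scheme: every device decodes the
 * dedicated SELECT output, and only the device whose number matches the
 * byte on the bus keeps its I/O interface enabled.                         */
#include <stdint.h>

#define SELECT_N  1            /* output instruction 61 assumed to be SELECT */

typedef struct {
    uint8_t device_number;     /* unique number assigned to this device (0-255) */
    uint8_t selected;          /* 1 = this device's I/O interface is enabled    */
} IoDevice;

/* Called in every device whenever the CPU executes an output instruction 6N. */
void on_output(IoDevice *dev, uint8_t n_code, uint8_t bus_byte)
{
    if (n_code == SELECT_N) {
        /* All devices recognize SELECT; exactly one ends up selected. */
        dev->selected = (bus_byte == dev->device_number);
    } else if (dev->selected) {
        /* Only the selected device interprets the remaining I/O codes.   */
        /* ... device-specific handling of n_code and bus_byte goes here. */
    }
}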

COSMAC can directly address up to 65K bytes of RAM or ROM, has a program execution speed of up to 400,000 instructions/second, and can achieve a DMA burst transfer rate of up to 800,000 bytes/second. Additional and more detailed information on the COSMAC microprocessor architecture and its instruction set is available in Reference 7.

Our experience, to date, indicates that COSMAC is indeed very well suited to the types of processing required in data communications, process control, home, school, and automotive systems (table look-up, interrupt driven software, data management). Multiply and divide must be done in software, but these operations are uncommon in the above categories of systems.

An interprocessor interface for COSMAC

Notation. When multi-microprocessor systems are discussed, confusion often arises as to which CPU, external flag, RAM, etc. is being referred to. To alleviate this confusion, the following notation will be used throughout:


A subscript i (e.g., M(R(X))i) or an (i) following any term (e.g., EF1(i)) implies that the signals and/or components of the ith CPU are being referred to. For example, CPU(2) refers to CPU #2. EF3(1) refers to external flag 3 associated with the I/O control bus of CPU(1). RAM(2) refers to the RAM addressed by CPU(2). M(R(X))1 refers to the contents of RAM(1) addressed by the X register of CPU(1), etc.

Interface philosophy. The philosophy behind the current hardware design is as follows: a master-slave organization will be used with one master and an arbitrary (up to 256) number of slave processors. No common memory will be provided and each CPU will have sufficient RAM to handle its dedicated workload. The master CPU will have sufficient RAM to buffer all information to be exchanged between it and all the slave processors. Interprocessor communication will be either via programmed-mode I/O (type 6 instructions) or via DMA for block transfers. The external flags will be used for handshaking and status information.* Finally, the master can interrupt any slave, but no slave can interrupt the master processor.

The system organization is such that each slave CPU will have associated with it a dedicated interprocessor interface. Each interprocessor interface will be identical in hardware, with the exception of its device number. The master CPU is expected to operate in a two-level I/O environment with each slave CPU viewed as a peripheral with a unique device number.

* The interface design was originally formulated for the CDP1801 two-chip COSMAC, which did not sport the serial output port Q.

Figure 9. Master-slave multi-microcomputer organization with two slave CPU's.

Slave CPU's may or may not use two-level I/O, but they too will view the master as a peripheral device.

Figure 9 illustrates a three-CPU system consisting of a master and two slave CPU's, and hence two interprocessor interfaces. Note that the master CPU I/O and data busses are simultaneously available to all the slave CPU's but that information exchange between slave CPU's can only occur via the master.

Figure 10 presents a high-level view of the interface organization. Basically, two 8-bit buffers provide asynchronous communication between the master CPU and slave CPU data busses. Note that the two CPU's need not be synchronized, nor need they operate at identical clock rates. Besides the two 8-bit buffers, latches are provided for the external flags (EF1-4) of both CPU's, the DMA(1) requests, and for the CPU(2) interrupt line. Not shown is the control logic to provide timing and clocking signals where needed.

The basic hardware organization allows the exchange of information between the master and slave CPU's under programmed mode I/O (I=6 instructions) via the two buffers and external flag decodes. The master is allowed to interrupt the slave, but not vice versa. High-speed block transfers between RAM(1) and RAM(2) (or RAM(1) and the RAM of any slave CPU) run via the master's DMA channel, which must be multiplexed between all the slave CPU's. This approach frees the slave CPU DMA channels to be used in conjunction with the subsystems to which they are dedicated (e.g., disk control). Each CPU, in effect, views the CPU it is communicating with as a peripheral device with which it can interact in an asynchronous fashion. However, the master and slave CPU protocols are different.

Interprocessor Communication. The following tables list the basic CPU(1) and CPU(2) I/O instructions and flag assignments required to achieve effective interprocessor communication.

Figure 10. Interprocessor interface organization. (MAB: memory address bus.)

It is assumed that all CPU's will employ two-level I/O structures; that is, each will dedicate one output instruction (usually 61) as the SELECT instruction. In Tables 1 and 3, the choice of I/O instruction is arbitrary. Output instructions are 61-67 (excluding the SELECT code) and input instructions are 69-6F.

CPU(1) instructions. The CPU(1) (master) instructions and flag assignments are presented in Tables 1 and 2, respectively.

KEY. The KEY instruction informs CPU(2) of the meaning of the data byte that it will soon receive via a DATA instruction. The contents of BUFFER(1) (M(R(X))1) are read and decoded by CPU(2). The interpretation of the bit pattern transferred to CPU(2) is entirely up to the system designer and is specified by the software.

TASK. The TASK instruction commands CPU(2) to execute a specific procedure. BUFFER(1) is read and decoded by CPU(2). The bit assignments are specified by the system designer in software.

DATA. The DATA instruction is used to transmit a data byte to CPU(2), the meaning of which has been previously established via a KEY instruction.

INTRPT. The INTRPT instruction interrupts CPU(2). The contents of BUFFER(1) may then be read and decoded by CPU(2). The bit assignments are specified in software by the system designer.

READ STATUS. A status byte previously stored in BUFFER(2) by CPU(2) is clocked into M(R(X))1, and CPU(2) is informed that BUFFER(2) is now free.
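Taken together with Tables 1 and 2 below, these instructions suggest a simple master-side handshake: after each KEY, TASK, or DATA, CPU(1) waits for EF4(1) to drop before proceeding, since EF4(1) = 1 means the slave has not yet responded. The C sketch below illustrates one such sequence; issue_key, issue_data, and ef4_1 are hypothetical stand-ins for the actual output/input instructions and flag test, not part of the interface itself.

/* Sketch of a master (CPU(1)) sequence that sends a KEY and then the
 * associated DATA byte, pacing itself on EF4(1).                       */
#include <stdint.h>

extern void issue_key(uint8_t key);     /* KEY:  M(R(X))1 -> BUFFER(1)    */
extern void issue_data(uint8_t byte);   /* DATA: M(R(X))1 -> BUFFER(1)    */
extern int  ef4_1(void);                /* current state of flag EF4(1)   */

/* EF4(1) = 1 means CPU(2) has not yet responded to the last CPU(1)
 * instruction (Table 2), so the master simply polls it between steps.  */
static void wait_for_slave(void) { while (ef4_1()) { /* spin */ } }

void send_keyed_byte(uint8_t key, uint8_t value)
{
    wait_for_slave();     /* make sure BUFFER(1) is free              */
    issue_key(key);       /* tell CPU(2) how to interpret the byte    */
    wait_for_slave();     /* CPU(2)'s READ resets EF4(1)              */
    issue_data(value);    /* now ship the data byte itself            */
    wait_for_slave();
}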

Table 1. CPU(1) interprocessor instructions.

INSTRUCTION   TYPE     FUNCTION                                                     APPROPRIATE CPU(2) RESPONSE
KEY           OUTPUT   M(R(X))1 -> BUFFER(1); SET EF3(2), EF4(2), EF4(1)            READ
TASK          OUTPUT   M(R(X))1 -> BUFFER(1); SET EF4(2), EF2(2), EF4(1)            READ
DATA          OUTPUT   M(R(X))1 -> BUFFER(1); SET EF4(2), EF3(2), EF2(2), EF4(1)    READ
INTRPT        OUTPUT   M(R(X))1 -> BUFFER(1); INTERRUPT CPU(2); SET EF2(2),         READ (in interrupt routine)
                       EF4(1); RESET EF4(2) and EF3(2)
READ STATUS   INPUT    BUFFER(2) -> M(R(X))1; RESET EF3(1); SET EF4(2), EF4(1)      Dummy READ to reset EF4(2)

Table 2. CPU(1) EF assignments.

EF1(1) = "1"  RAM(2)/RAM(1) block transfer completed.
EF2(1) = "1"  CPU(2) ready to initiate RAM(1)/RAM(2) block transfer.
EF3(1) = "1"  Status byte from CPU(2) stored in BUFFER(2).
EF4(1) = "1"  CPU(2) has not yet responded to last CPU(1) instruction.

CPU(2) instructions. The CPU(2) (slave) instructions and flag assignments are presented in Tables 3 and 4, respectively.

Table 3. CPU(2) interprocessor instructions.

INSTRUCTION   TYPE     FUNCTION                                                     APPROPRIATE CPU(1) RESPONSE
WRITE STATUS  OUTPUT   M(R(X))2 -> BUFFER(2); SET EF3(1)                            READ STATUS
OUT BYTE      OUTPUT   M(R(X))2 -> BUFFER(2); SET IN REQ(1); SET EF1(2)             DMA(1) cycle steal resets EF1(2) and clocks BUFFER(2) -> M(R(0))1
SET           OUTPUT   Contents of M(R(X))2 set/reset the EF1(1), EF2(1), and       NONE
                       RUN flip-flops
RESET         OUTPUT   Contents of M(R(X))2 reset the EF1(2) and INT(2) flip-flops  NONE
READ          INPUT    BUFFER(1) -> M(R(X))2; RESET EF4(2), EF3(2), EF2(2), EF4(1)  NONE
IN BYTE       INPUT    SET OUT REQ(1) flip-flop; SET EF1(2)                         DMA(1) cycle steal resets EF1(2) and clocks M(R(0))1 -> BUFFER(1)

Table 4. CPU(2) EF assignments.

EF4(2)  EF3(2)  EF2(2)
  1       0       1     Identifies BUFFER(1) contents as TASK instruction.
  1       1       0     Identifies BUFFER(1) contents as KEY instruction.
  1       1       1     Identifies BUFFER(1) contents as data byte associated with previous KEY instruction.
  1       0       0     Indicates to CPU(2) that CPU(1) has received status byte.
  0       0       1     Identifies CPU(1) as interrupting device.

EF1(2): Used in block transfer. EF1(2) = 1 implies that a byte is ready to be transferred between RAM(1) and BUFFER(1)/BUFFER(2) via DMA(1). EF1(2) is reset by a DMA(1) cycle steal.
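The flag patterns of Table 4 amount to a three-bit message type that the slave decodes in software. A minimal C sketch of that decode, with the flag values passed in as plain integers, might look as follows.

/* Slave-side decode implied by Table 4: CPU(2) classifies what is waiting
 * in BUFFER(1) from the pattern on EF4(2), EF3(2), EF2(2).                 */

typedef enum {
    MSG_NONE,            /* no recognizable pattern                        */
    MSG_TASK,            /* 1 0 1: BUFFER(1) holds a TASK instruction      */
    MSG_KEY,             /* 1 1 0: BUFFER(1) holds a KEY instruction       */
    MSG_DATA,            /* 1 1 1: data byte tied to the previous KEY      */
    MSG_STATUS_TAKEN,    /* 1 0 0: CPU(1) has received the status byte     */
    MSG_INTERRUPT        /* 0 0 1: CPU(1) is the interrupting device       */
} SlaveMessage;

SlaveMessage decode_flags(int ef4, int ef3, int ef2)
{
    if ( ef4 && !ef3 &&  ef2) return MSG_TASK;
    if ( ef4 &&  ef3 && !ef2) return MSG_KEY;
    if ( ef4 &&  ef3 &&  ef2) return MSG_DATA;
    if ( ef4 && !ef3 && !ef2) return MSG_STATUS_TAKEN;
    if (!ef4 && !ef3 &&  ef2) return MSG_INTERRUPT;
    return MSG_NONE;
}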

WRITE STATUS. A status byte, defined by the system designer in software, is clocked into BUFFER(2) and CPU(1) is so informed.

OUT BYTE. A data byte to be transferred via DMA(1) is clocked into BUFFER(2) and the IN REQUEST line of CPU(1) is activated. The CPU(1) cycle steal resets EF1(2), informing CPU(2) that BUFFER(2) is free.

SET. The SET instruction is used to modify the states of EF1(1), EF2(1), and RUN. The RUN flip-flop is set when a block transfer is taking place. Table 5 lists the bit assignments. Note that bits 1, 3, and 5 are, essentially, masks which allow the SET instruction to modify the states of the desired flip-flops without affecting the remaining flip-flops. Note that additional SET instructions may be readily added by the use of unused I/O codes.

RESET. The RESET instruction is used to reset the EF1(2) and INT(2) flip-flops in the interface as per Table 6. There are six unassigned bit positions.


READ. The contents of BUFFER(1) (previously loaded by CPU(1)) are clocked into M(R(X))2, and CPU(1) is informed that BUFFER(1) is now free (EF4(1) is reset).

IN BYTE. The OUT REQUEST line of CPU(1) and EF1(2) are set. The CPU(1) cycle steal will store a data byte in BUFFER(1) and reset EF1(2). CPU(2) will detect EF1(2) and issue a READ instruction to clock the data byte into M(R(X))2.

Table 5. SET instruction bit assignments.

BIT 7  BIT 6  BIT 5  BIT 4  BIT 3  BIT 2  BIT 1  BIT 0
 Y4     X4     Y3     X3     Y2     X2     Y1     X1
(bits 7 and 6 are unassigned)

If Yi = "0", flip-flop i is unchanged.
If Yi = "1", Xi is loaded into flip-flop i.

FLIP-FLOP   SIGNAL
    1       EF1(1)
    2       EF2(1)
    3       RUN

Table 6. RESET instruction bit assignments.

Bit position of M(R(X))2:
0 - EF1(2)
1 - INT(2)
2-7 - unassigned

The interprocessor interface instructions are both general and open-ended. The system designer is free to assign and interpret bits as he sees fit. The interface simply passes bytes from CPU to CPU, and the bit decoding is accomplished in software.

To illustrate the versatility and flexibility of the interprocessor interface architecture, a demonstration software package was developed. This software runs on two COSMAC Development Systems and illustrates one possible set of bit assignments. Both of the systems support two-level I/O and one of the systems supports a dot-matrix TV display. In this demonstration, the slave CPU (CPU(2)) simply responds to the master CPU's (CPU(1)) requests and refreshes the TV display. In parallel with the above, CPU(1) obtains the current TV frame from CPU(2), processes it, and ships it over to CPU(2) so that the TV display can be updated. This process is continually repeated, resulting in a dynamically changing TV display. More detail on the demonstration program is available in Appendix A.

Using the interprocessor interface. A few simple examples illustrate the use of the interprocessor interface.

Example 1:
CPU(1) (master) wants to interrupt CPU(2) (slave) and have it perform some task.

* CPU(1) issues an INTRPT instruction and simultaneously loads BUFFER(1) with the task code.
* CPU(2), in its interrupt routine, determines that CPU(1) is interrupting it (from flags), performs a READ, and decodes BUFFER(1) to see what is to be done.

Example 2:
CPU(2) has just completed a block transfer and wants to so inform CPU(1).

* CPU(2) issues a SET instruction with M(R(X))2 = 1000 0011 to reset the RUN FF and set EF1(1).
* CPU(1) periodically polls the interface. Once it detects EF1(1) = 1, it knows that a block transfer has been completed.
* If desired, CPU(1) may then issue a TASK instruction to CPU(2) telling it to reset EF1(1).


Example 3:

Block transfers - all block transfers between RAM(1) and RAM(2) occur via DMA(1). That is, CPU(1) can be processing other tasks while the transfer takes place. CPU(2) programmed mode instructions control DMA(1). In this way DMA(2) is free to be used in conjunction with any peripheral device controlled by the slave processor.

RAM(2) to RAM(1) - Transfers occur as follows (assume EF2(1) has been previously set, indicating that CPU(2) is ready to perform a block transfer):

* CPU(1) points R0(1) to the desired RAM(1) location and issues a TASK instruction informing CPU(2) to begin block transfer.
* CPU(2), in a loop, successively issues OUT BYTE instructions which clock M(R(X))2 into BUFFER(2) and set CPU(1)'s IN REQ DMA line and EF1(2). The DMA(1) cycle steal will reset EF1(2), informing CPU(2) that a cycle steal has taken place. The above process is repeated until the entire block transfer is completed.
* CPU(2) then informs CPU(1) that the block transfer is complete (see Example 2).
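A slave-side sketch of this RAM(2) to RAM(1) loop is given below in C. The helpers out_byte, ef1_2, and signal_done are hypothetical stand-ins for the OUT BYTE instruction, the EF1(2) flag test, and the Example 2 completion signal.

/* Sketch of the RAM(2) -> RAM(1) transfer loop run by the slave. */
#include <stdint.h>
#include <stddef.h>

extern void out_byte(uint8_t b);   /* OUT BYTE: M(R(X))2 -> BUFFER(2), raises IN REQ(1), sets EF1(2) */
extern int  ef1_2(void);           /* EF1(2): 1 until the DMA(1) cycle steal empties BUFFER(2)       */
extern void signal_done(void);     /* e.g. SET with RUN reset and EF1(1) set (see Example 2)         */

void block_out(const uint8_t *block, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        out_byte(block[i]);        /* hand one byte to the interface            */
        while (ef1_2()) { }        /* wait for CPU(1)'s DMA(1) cycle steal      */
    }
    signal_done();                 /* tell CPU(1) the block transfer is complete */
}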

RAM(1) to RAM(2) - RAM(1) to RAM(2) transfers are slightly more complex than RAM(2) to RAM(1) transfers. This is because a two-step process is involved. CPU(2) must first initiate a DMA(1) OUT REQUEST to cause a CPU(1) cycle steal which will load the desired byte in BUFFER(1). It must subsequently issue a READ instruction to clock the byte into RAM(2). The procedure is as follows (assume CPU(2) has previously set EF2(1), informing CPU(1) that it (CPU(2)) is ready to initiate a block transfer):

* CPU(1) points R0(1) at the desired RAM(1) location and issues a TASK instruction informing CPU(2) that block transfer can begin.
* CPU(2), in a loop, successively issues an IN BYTE (6A) instruction which sets EF1(2) and the OUT REQ line of CPU(1)'s DMA channel. When the CPU(1) cycle steal occurs, EF1(2) is reset and the byte is clocked into BUFFER(1) by the hardware. CPU(2) then issues a READ instruction to clock the byte into RAM(2). The above is repeated as often as needed.
* CPU(2) informs CPU(1) that the block transfer is complete (see Example 2).
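The corresponding RAM(1) to RAM(2) loop, again as a hedged C sketch with hypothetical stand-ins for the IN BYTE (6A) and READ instructions and the EF1(2) flag test, is shown below.

/* Sketch of the two-step RAM(1) -> RAM(2) loop described above: IN BYTE
 * requests a CPU(1) cycle steal that fills BUFFER(1), then READ clocks the
 * byte into RAM(2).                                                        */
#include <stdint.h>
#include <stddef.h>

extern void    in_byte(void);        /* IN BYTE (6A): sets EF1(2) and CPU(1)'s OUT REQ       */
extern uint8_t read_buffer1(void);   /* READ: BUFFER(1) -> M(R(X))2                          */
extern int     ef1_2(void);          /* 1 until the CPU(1) cycle steal has loaded BUFFER(1)  */
extern void    signal_done(void);    /* as in Example 2                                      */

void block_in(uint8_t *block, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        in_byte();                   /* ask CPU(1)'s DMA channel for the next byte */
        while (ef1_2()) { }          /* wait until the cycle steal fills BUFFER(1) */
        block[i] = read_buffer1();   /* clock the byte into RAM(2)                 */
    }
    signal_done();
}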

Advantages. The advantages of the above interprocessor interface organization are numerous. Foremost is the complete lack of contention, since each CPU can make full use of its cycles.


Each slave CPU is free to use its DMA channel to communicate with the outside world. The master's DMA channel is time-multiplexed between the various slave CPU's under its control. The master can interrupt any slave, but not vice versa. The architecture is open-ended since most of the bit assignments are user specified via software. Hence, each interface can be customized in software while the basic hardware remains unchanged. Finally, the communication protocol is such that every processor can operate, asynchronously, with a different clock rate. Thus, no overall system synchronization is required.

When to use multi-microcomputer architectures

The minicomputer revolution has made possible the economical decentralization of computing power. This has been achieved not necessarily by making systems with improved price/performance, but by making computer control of many functions economical and practical, relative to their previous implementations. The cost per unit of computing performed, including input and output, must still be lower for large centralized systems - otherwise these would have been replaced by banks of minis long ago.

A similar argument can be used in the microcomputer arena. The microprocessor has made it economically possible to introduce computer control to a host of new applications. However, it should be noted that the performance of CPU's, on a system level, increases more rapidly than cost. This is illustrated in Figure 11. The two points are approximately correct for the IBM 370 line. Hence, the use of multi-microcomputer structures should not, in general, be considered when all that is needed is simply more raw CPU power. Multi-microprocessor structures are effective only in situations where the tasks to be performed can be effectively and efficiently partitioned. This will give rise to improved I/O processing capability, improved reliability, and a fail-soft feature where the bulk of the system can keep operating should any subsystem fail. Improved I/O processing results from creation of a plurality of parallel I/O interfaces. For example, a multi-COSMAC system with n CPU's will now sport n DMA channels, 4n external flags, n independent interrupt lines, n serial output ports, and n times more I/O instructions.

An additional benefit resulting from the effective partitioning of tasks in a multi-CPU system is that the software, by being partitioned into several relatively independent packages, is much simpler and runs more efficiently.

Figure 11. Uniprocessor system performance vs. cost. (The two plotted points are the basic IBM 370/155 and 370/165.)

This is especially true in a system supporting many interrupting devices, where, in a single-CPU implementation, much of the CPU time would be spent resolving and servicing interrupts.

Conclusion

As discussed above, there are fundamental differences between multiprocessor and multicomputer systems. The chief virtue of the former is improved reliability; and, in general, the system performance will improve less than linearly as the number of processors is increased. Multicomputer systems, on the other hand, greatly improve the system's ability to communicate with the outside world by providing more efficient hardware and software in each subsystem tailored to the task at hand. Of course, overall system reliability is also improved. The interprocessor interface described above permits the interconnection of many COSMAC CPU's in a master-slave arrangement. Each slave CPU will have associated with it one interface, identical in hardware. However, the I/O instructions used for master-slave communication can be optimally customized to the task to be performed by each slave processor. This implementation philosophy can be applied equally well to any other existing microprocessor.

The powerful COSMAC I/O structure was basic to the flexibility and ease of use of the interprocessor interface. In particular, the availability of CPU testable external flags (EF1-4(1), EF1-4(2)) made the decoding of interprocessor instructions a simple software function and has enabled the hardware to remain relatively simple (fewer than 40 standard COS/MOS packages).

The use of DMA(1) under the control of the slave processor is another key feature of the interface. Since the master CPU has only one DMA channel, only one device at a time can communicate with it in the "cycle stealing" mode. Hence, the above decision simply allows DMA(1) to be time-multiplexed between the various slave CPU's, freeing the slave CPU's respective DMA channels to handle the devices or processes to which they are dedicated.

Note that the master-slave multi-microcomputer structure requires each CPU to have RAM associated with it. It is strongly felt that the shared RAM concept is not viable for most multi-microcomputer systems. This is primarily because, even for just a few processors, memory contention problems become severe (especially with low cost, slow memories). Furthermore, the hardware required to support a multiport memory, and to resolve contention, is more complex and more expensive than the additional memory required to give each CPU at least a minimal RAM. Chip costs for static 256 x 4-bit RAMs are already below $3.00, and NMOS RAMs are projected to cost about 0.06 cents/bit by 1980.

Multi-microcomputer systems will most likely find greatest use in process control applications. Each slave could be dedicated to a specific task, say, testing a particular module, assembly, or system. The master CPU could then be used to coordinate the testing, monitor the test results, and do data logging as required. Applications also exist in the data communications area. Appendix B summarizes a possible two-COSMAC store-and-forward system.

As a final note, the viability of the master-slave multi-microcomputer philosophy is being enhanced by the introduction of new sophisticated devices optimized for that task. An initial such offering is the Intel 8041/8741 Universal Peripheral Controller, which is based on a self-contained microcomputer consisting of over 18,000 devices and can be readily slaved to the standard Intel line of 8-bit microprocessors.10


Acknowledgment

The author is greatly indebted to A. R. Marcantonio, RCA Laboratories, for his verification of the interface architecture and for programming the two-COSMAC demonstration system.

Paul M. Russo is a member of the Technical Staff, RCA Laboratories, Princeton, New Jersey. He is currently involved in microprocessor-related activities spanning the areas of automation, data communications, and consumer products. He joined RCA Laboratories in 1970 and has done research in various computer-related areas including architecture, performance evaluation, and program behavior.

He has been actively involved with microprocessors since their inception in 1971. He has taught microprocessor-related courses at RCA, Columbia University, and the University of Mexico, and has organized conference sessions and published several articles on the subject. He has one patent with others pending.

Dr. Russo received the Bachelor of Engineering Physics from McGill University, Canada, in 1965, and the MS and PhD degrees in EECS from the University of California, Berkeley, in 1966 and 1970, respectively. During the 1969-70 academic year, he was on the Berkeley faculty where he taught courses in circuit theory and circuit optimization, the subjects of his PhD dissertation.

Dr. Russo is a member of the IEEE, Eta Kappa Nu, Sigma Xi, the IEEE Committee on Social Implications of Technology, and the IEEE Energy Committee.

References

1. P. H. Enslow, Jr., ed., Multiprocessors and Parallel Processing, John Wiley and Sons, Inc., New York, 1974.

2. H. A. Raphael, "Join Micros Into Intelligent Networks," Electronic Design, March 1, 1975, pp. 52-57.

3. L. H. Anderson, "The Microcomputer as Distributed Intelligence," Proceedings of the 1975 International Symposium on Circuits and Systems, Boston, Massachusetts, April 1975, pp. 337-340.

4. A. J. Weisburger, "Distributed Function Microprocessor Architecture," Computer Design, November 1974, pp. 77-83.

5. J. A. Weisbecker, "A Simplified Microprocessor Architecture," Computer, March 1974, pp. 41-47.

6. A. Young, "Getting to Know COSMAC," Electronic Design, October 25, 1976, pp. 136-145.

7. MPM-201A, User Manual for the CDP1802 COSMAC Microprocessor, RCA Solid State Division, Somerville, New Jersey.

8. P. M. Russo and M. D. Lippman, "A Microprocessor Implementation of a Dedicated Store-and-Forward Data Communications System," Proceedings of the National Computer Conference, Chicago, Illinois, May 1974, pp. 439-445.

9. P. M. Russo and M. D. Lippman, "Case History: Store and Forward," Spectrum, September 1974, pp. 60-67.

10. J. M. Murray, "A Universal Peripheral Interface Device," Proceedings of the Industrial Electronics and Control Instrumentation Conference, Philadelphia, Pennsylvania, March 21-23, 1977, pp. 52-57.

Appendix A: Interprocessor demonstration

The demonstration system, shown in Figure A-1, consists of two RCA COSMAC Development Systems. Both systems support two-level I/O and have been modified as follows: the master system contains a TV dot matrix interface, whereas the slave contains the interprocessor hardware.

Figure A-1. Two-CPU demonstration system. The word "COSMAC" rotates and wraps around left to right.

Briefly, the demonstration programs display the letters "COSMAC" and rotate them across the TV screen, wrapping around from right to left. CPU(1), the master, initiates all the data transfers, shifts the display data, and generally controls the sequence of events. CPU(2), the slave, refreshes the TV display and responds to commands from CPU(1) by decoding its external flags and performing the assigned tasks. The instruction assignments are as follows:

CPU(1)

Instruction   M(R(X)) Decode
KEY           0 - address, lower 8 bits
              1 - address, upper 8 bits
              2 - # blocks to be transferred
              3 - block length, lower 8 bits
              4 - block length, upper 8 bits
              5 through 255 - spare
TASK          2^0 bit - start RAM(1) -> RAM(2)
              2^1 bit - start RAM(2) -> RAM(1)
              2^2 - 2^7 bits - unassigned
INTRPT        2^0 bit - load status byte
              2^1 - 2^7 bits - unassigned

CPU(2)

Instruction    M(R(X)) Decode
WRITE STATUS   2^0 bit - task #1 status
               2^1 bit - task #2 status
               2^2 - 2^7 bits - unassigned
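One plausible way for CPU(1) to use the KEY decode values 0-4 above is sketched below in C: the master ships the start address, block count, and block length one keyed byte at a time, then issues the TASK that starts the transfer. send_keyed_byte is the hypothetical KEY-plus-DATA helper sketched earlier in the text; issue_task likewise stands in for the TASK output instruction.

/* Sketch of CPU(1) setting up a RAM(1) -> RAM(2) transfer using the KEY
 * decode values listed above.  Helper names are hypothetical.             */
#include <stdint.h>

extern void send_keyed_byte(uint8_t key, uint8_t value);  /* KEY, then DATA            */
extern void issue_task(uint8_t task_bits);                /* TASK: M(R(X))1 -> BUFFER(1) */

void start_ram1_to_ram2(uint16_t addr, uint8_t nblocks, uint16_t blocklen)
{
    send_keyed_byte(0, (uint8_t)(addr & 0xFF));       /* key 0: address, lower 8 bits      */
    send_keyed_byte(1, (uint8_t)(addr >> 8));         /* key 1: address, upper 8 bits      */
    send_keyed_byte(2, nblocks);                      /* key 2: # blocks to be transferred */
    send_keyed_byte(3, (uint8_t)(blocklen & 0xFF));   /* key 3: block length, lower 8 bits */
    send_keyed_byte(4, (uint8_t)(blocklen >> 8));     /* key 4: block length, upper 8 bits */
    issue_task(0x01);                                 /* TASK 2^0 bit: start RAM(1) -> RAM(2) */
}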

The I/O instructions not listed above are used as described in the text. The sequence of events is as follows: CPU(2) refreshes the TV display using DMA channel (2) while waiting for commands from CPU(1). CPU(1) prompts CPU(2) for the current display data block. CPU(2) interprets the TASK byte received and informs CPU(1), through CPU(1)'s external flags, that it is starting a block transfer. Upon completion of the block transfer, CPU(2) informs CPU(1), again through flags, and awaits further commands. CPU(1) now calls appropriate subroutines to shift the display block 1 bit right, and then prompts CPU(2) to prepare for receiving the shifted data block, again via CPU(1)'s DMA channel.

When the block transfer is finished, CPU(1) is again so informed by CPU(2), through the use of CPU(1)'s external flags. CPU(1) keeps count of the block transfers, and when a full screen has been shifted, it complements the display block and starts the entire sequence over again.


Appendix B: A data communications system using one or two CPU's


Figure B-1. System with a single CPU.

The COSMAC-based prototype leased-channel controller has been adequately documented.8,9 The primary throughput bottleneck in this system is the floppy disk because of its relatively long access time (about 1/2 second average). Let's address the floppy disk interface. In Figure B-1 we illustrate a single-CPU system. The complexity of the disk control electronics is directly related to the amount of CPU time you are willing to give up to the disk control process. If you want the CPU to use a few simple commands and leave all the work to the interface, it may become quite complex, perhaps in excess of 100 standard IC packages. On the other hand, if you do the bulk of the disk control, such as head control, sectoring, and synchronization, in software, the interface may be down to 20-30 standard IC packages. But you have to give up a lot of CPU cycles.

For a low performance application, we adopt the philosophy of doing as much as possible in software. This will really give us a minimum-cost system. Then, if more performance is required, we add a second CPU dedicated to the disk control function (Figure B-2). Let CPU(1) be the master CPU and CPU(2) be the slave dedicated to the disk control electronics. In this organization, the disk control electronics is identical to the single-CPU case. Extra logic, the interprocessor interface, is added to permit interprocessor communication. When disk I/O is required, CPU(1) can now issue very simple commands to CPU(2), and then it can go and perform its appointed task. CPU(2) will then control the disk and perform the required I/O. By using a two-level I/O scheme with both CPU's, CPU(1) and CPU(2) use each other as peripheral devices, and entirely different meanings can be attached to their I/O instructions. Note that high-speed data block transfers between RAM(1) and RAM(2) are performed via DMA(1). DMA(2) is used to transfer high-speed data between the disk and RAM(2). CPU(2) may use its spare cycles to preprocess the data or to implement sophisticated disk management schemes.
