
MVS Systems Programming: Chapter 3d - MVS Internals

The Web Version of this chapter is split into 4 pages - this is page 4 - page contents are as follows:

Section 3.8 - Catalogs
3.8.1 Logical Catalog Structure
3.8.2 Physical Catalog Structure
3.8.3 The Catalog Address Space
3.9 Inter-Address Space Communication
3.9.1 Common Storage and SRB's
3.9.2 Cross Memory Services
3.9.3 Dataspaces and Hiperspaces
Bibliography


3.8 Catalogs

Catalogs are used by MVS to locate datasets when a task attempts to allocate them without supplying their volume serial number; they hold records of the volume(s) on which each cataloged dataset exists. All allocations of VSAM datasets and SMS-managed datasets must go through the relevant catalog, and in this case the catalog also holds other information - for VSAM datasets, for example, this includes the physical location of each extent of the dataset on the disk, DCB-type information, and much more (this information is held in the VTOC entry, also known as the DSCB, for non-VSAM DASD datasets). This section looks briefly at the logical and physical structure of catalogs and how the catalog management process works.

Over the course of MVS's history, catalogs have gone through a number of different structures. The current flavour is known as ICF (Integrated Catalog Facility), and I shall concentrate on ICF catalogs here, though I will make a few comments on the earlier flavours in passing (the predecessors to ICF catalogs were known as VSAM catalogs, and the previous generation as CVOLs).

3.8.1 Logical Catalog Structure

Each MVS system must have a master catalog, and there will usually also be a number of lower-level catalogs known as user catalogs. All catalog searches begin by searching the master catalog, and many major system datasets must be cataloged in it.

Prior to MVS Version 4, the master catalog was described in the SYSCATxx member of SYS1.NUCLEUS, where the suffix xx was specified in response to the message "IEA347A SPECIFY MASTER CATALOG PARAMETER" at IPL time, and defaulted to "LG". This is still a valid option on MVS Version 4 systems, but it is also now possible (and simpler) to specify the master catalog parameters in the LOADxx member of SYS1.PARMLIB. The use of LOADxx is discussed in more detail in chapter 7.
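For illustration, a LOADxx entry pointing at the master catalog might look something like the sketch below. The catalog name and volume serial are invented, and the SYSCAT statement is column-sensitive, so the spacing shown here is schematic - check the exact column positions in the Initialisation and Tuning documentation before coding one:

```
SYSCAT MCATV1 CATALOG.MASTER.V1
```

Here MCATV1 stands for the volume serial of the disk holding the master catalog, and CATALOG.MASTER.V1 for the catalog's name.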

MVS obtains the name of the master catalog and the VOLSER of the disk it is on from the SYSCATxx or LOADxx member, and then opens the master catalog, which it uses to find the other datasets it requires for the early stages of the IPL process. At this stage, the master catalog is the only catalog which is open and usable by the system, which is why any system datasets opened during the IPL process must be cataloged in it. Later in the IPL process, catalog management services are fully initialised, allowing the use of user catalogs.

The physical structure of the master catalog is identical to that of a user catalog - they are defined with the same IDCAMS command, and any catalog can be used as either a master catalog or a user catalog. So it is only the selection of a particular catalog at IPL time that identifies it as the master catalog for the MVS system concerned.

However, once you have established which catalog is your master catalog, it is necessary to define its relationships with its user catalogs, and in practice this will make the entries in a master catalog very different from those in a user catalog. Whenever MVS initiates a catalog search in order to locate a dataset, it starts by looking at the master catalog. The dataset to be located may be cataloged in the master catalog, but more commonly it will be cataloged in a user catalog. If the search process is to find it, that user catalog must be defined as a user catalog in the master catalog, and there must be an "alias" entry in the master catalog which relates the high-level qualifier of the dataset's name to the user catalog. When catalog management finds such an alias entry, it interprets this as an instruction to look in the specified user catalog for the catalog entries of all datasets with this high-level qualifier.
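To make this concrete, a user catalog and its alias might be set up with IDCAMS statements along the following lines - a sketch only, with the catalog name UCAT.PROD, volume PRD001, space figures, and high-level qualifier PAYROLL all invented for the example:

```
  DEFINE USERCATALOG -
         (NAME(UCAT.PROD) -
          VOLUME(PRD001) -
          CYLINDERS(10 5) -
          ICFCATALOG)
  DEFINE ALIAS -
         (NAME(PAYROLL) -
          RELATE(UCAT.PROD))
```

Run against the master catalog, the first DEFINE creates the user catalog and its connector entry in the master catalog, and the second directs any search for a dataset whose high-level qualifier is PAYROLL into UCAT.PROD.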

1 of 7 11/2/2001 1:48 PM

MVS Systems Programming: Chapter 3d - MVS Internals http://www.mvsbook.fsnet.co.uk/chap03d.htm


Thus there is a two-level hierarchy of catalogs, with most user datasets cataloged in the user catalogs, and the master catalog containing alias entries pointing to the user catalogs, plus catalog entries for system datasets required at IPL time. This is illustrated in Figure 3.8 below. It is generally the systems programmer's responsibility to design and enforce this hierarchy.

**** insert Fig 3.8 ****


3.8.2 Physical Catalog Structure

Physically, the ICF catalog structure consists of two types of datasets: the BCS (Basic Catalog Structure) and VVDS (VSAM Volume Data Set). When we define a master catalog or a user catalog (using IDCAMS), we are creating a BCS. The BCS is itself a VSAM KSDS, whose cluster name is 44 bytes of binary zeros (as this is used as the key to the dataset, this ensures that the first entry in the data component of the BCS is always its own self-describing entry). The name of the data component is the name you assign in your IDCAMS DEFINE command, and the name of the index component, assigned by IDCAMS, always begins "CATINDEX" and continues with a timestamp.

The BCS contains entries of various types, such as ALIAS, NONVSAM, USERCATALOG, and CLUSTER, describing the various types of entity which may be searched for by catalog management:

* ALIAS entries were discussed in the previous section, and redirect a catalog search from the master catalog to a user catalog for a given high-level qualifier. Note that DFP Version 3 permits multi-level alias structures, but these are still uncommon.

* USERCATALOG entries define user catalogs.

* NONVSAM entries describe non-VSAM datasets, and for non-SMS-managed datasets include simply their name and the device type and volume serial number of the volume on which they are cataloged - further information, e.g. on the DCB and physical location of the dataset on the volume, is found in the VTOC (Volume Table of Contents) for DASD datasets or in the dataset labels for tape datasets.

* CLUSTER entries describe VSAM datasets, and point in turn to DATA entries describing the data component of the cluster, and (for KSDS's) to INDEX entries describing the index component. This is where ICF catalogs differ most markedly from their predecessors.
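One way of looking at these entries is the IDCAMS LISTCAT command; for example (the catalog name is again an invented one):

```
  LISTCAT CATALOG(UCAT.PROD) -
          ALIAS NONVSAM CLUSTER -
          NAMES
```

This lists the names of the alias, non-VSAM, and cluster entries in the catalog; replacing NAMES with ALL prints the full contents of each entry.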

In an ICF catalog, the entry in a BCS for a physical component of a VSAM cluster works in a similar way to a NONVSAM entry. The BCS entry contains only minimal information about the component, such as the name, the device type, and the volume serial number, while all the physical details, such as the location of extents, CISIZE, etc, are held on the same volume as the dataset itself, to simplify recovery and space management. For VSAM components, however, this information is not held in the VTOC entry. Instead it is held in the second component of the ICF catalog structure - the VSAM Volume Data Set (VVDS).

The VVDS is a special type of ESDS. It is created automatically whenever a VSAM component (including a BCS) is allocated on a volume which does not yet have a VVDS. The VVDS is always called SYS1.VVDS.Vvolser, where volser is the volume serial number of the volume, and you can preallocate a cluster with this name on the volume if you wish to override any of the defaults for VVDS allocation (or control its physical location on the volume).
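A preallocation might look like the following sketch - the volume serial and space figures are invented, and the RECORDSIZE shown is the value conventionally quoted in the DFP manuals for a VVDS, so verify it against your own DFP level before use:

```
  DEFINE CLUSTER -
         (NAME(SYS1.VVDS.VPRD001) -
          VOLUME(PRD001) -
          NONINDEXED -
          RECORDSIZE(4086 32600) -
          TRACKS(10 10))
```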

The first record in every VVDS is known as the VSAM Volume Control Record (VVCR), and consists of two parts. The first of these lists the BCS catalogs which own (or have owned) VSAM datasets cataloged on this volume - these entries are known as "back pointers". The second part maps free space within the VVDS itself, and allows reuse of space within the VVDS (this is the main way in which it differs from a normal ESDS). The second record in the VVDS is a self-describing record, and the rest of the records are either VSAM Volume Records (VVRs) or Non-VSAM Volume Records (NVRs - a new record type introduced with DFSMS).

There is at least one VVR for each VSAM component on the volume, describing its physical extents, key and record sizes, CI/CA sizes, etc. The physical extent information duplicates information in the VTOC, but the VTOC is only used when the component is being defined, deleted, and extended, while normal read/write accesses to the component use the extent information in the VVR.

It should be clear that in this structure, there is a many-to-many relationship between BCS and VVDS datasets. That is, each BCS can own VSAM components on multiple volumes (and therefore uses multiple VVDS's), while each VVDS can contain entries for VSAM components owned by multiple BCS's. Just as the BCS entry for each component contains a pointer to the volume it exists on (and therefore to the VVDS), each VVR contains an indicator which connects it to the back pointer in the VVCR which corresponds to its owning BCS.
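Because the BCS and the VVDS hold pointers to each other, the two can get out of step - after a volume restore, for instance. The IDCAMS DIAGNOSE command checks one side of the structure against the other; as a sketch (catalog name invented):

```
  DIAGNOSE ICFCATALOG -
           INDATASET(UCAT.PROD)
```

The corresponding DIAGNOSE VVDS form checks a VVDS against the BCS's which own entries in it.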

To complicate the matter just a little more, the introduction of DFSMS has extended the function of the VVDS. Non-VSAM datasets before the days of DFSMS were only represented in the catalog structure by an entry in a BCS, while all other dataset information was


held in the dataset's entry in the VTOC. SMS-controlled non-VSAM datasets, however, do also have an entry (an NVR) in the VVDS of the volume on which they are located.

Before we leave catalog structures, let us briefly mention the differences between ICF catalogs and their predecessors, VSAM catalogs, which you may occasionally come across (if you do, I strongly recommend getting rid of them as quickly as possible!). The main differences were:

* VSAM catalogs did not use VVDS's - all information was held in the main catalog structure, which was simply known as a VSAM catalog.

* Because this would have made recovery of volumes with VSAM components belonging to different catalogs extremely difficult, there was a concept of "VSAM volume ownership" - each volume was "owned" by a VSAM catalog, and VSAM datasets could only be allocated on the volume if they were cataloged in the VSAM catalog which owned it.

* Under ICF catalogs, every VSAM component corresponds to a dataset which has space allocated to it using normal DADSM (DASD Space Management) routines and which appears in the VTOC with the same name as the VSAM component. VSAM catalogs, however, allowed the user to allocate a "dataspace" (no connection to the MVS/ESA dataspace concept discussed below!) which was then available for VSAM to suballocate to VSAM components without telling DADSM or the VTOC.


3.8.3 The Catalog Address Space

With the introduction of MVS/XA, a catalog address space known as CAS was created, which is started up at every IPL. This is used by the DFP catalog management function to hold most of its program modules and control blocks, which were previously held in the PLPA and CSA respectively. (DFP, or Data Facility Product, like JES, is a product which for all intents and purposes is part of MVS, though IBM markets it as a separate product.)

Catalog management routines also use CAS to "cache" records from catalogs - in other words, frequently referenced catalog records, including ALIAS entries from the master catalog, will be held in virtual storage in the CAS address space to avoid the need to repeatedly perform real I/O to disk for them. Unfortunately, in pre-ESA versions of DFP, records from catalogs shared with other MVS systems are flushed out when the system attempts to re-use them.

Each request for a catalog management service is handled by a "CAS task", which is assigned a task ID, and the status of these tasks can be monitored using the MODIFY CATALOG operator command. The LIST subcommand lists out CAS tasks, showing their task ID, the catalog they are trying to access, and the job on whose behalf they are trying to access it. The END or ABEND subcommand can then be used to terminate the CAS task if necessary.

MVS/ESA versions of DFP also provide commands which allow you to allocate and deallocate user catalogs, enabling certain maintenance functions to be performed more easily.
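As a sketch of how this looks at the console (the task ID and catalog name are placeholders, and operand syntax varies somewhat between DFP levels):

```
F CATALOG,LIST
F CATALOG,END(23)
F CATALOG,ABEND(23)
F CATALOG,ALLOCATE(UCAT.PROD)
F CATALOG,UNALLOCATE(UCAT.PROD)
```

The first command lists the active CAS tasks; the next two end (or, if END fails, abnormally end) the CAS task whose ID is 23; the last two allocate and deallocate a user catalog, for example around a maintenance window.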


3.9 Inter-Address Space Communication

The way in which the MVS addressing scheme works is ideal for protection of users from one another. Separate address spaces are unable to address each other's private virtual storage because they cannot access the page and segment tables which tell them where the real page frames are which back the virtual storage of another address space. In other words, the architecture itself protects the private virtual storage of each address space from tasks executing in other address spaces.

However, this "advantage" turns into an enormous obstacle when you have a genuine reason for wishing to communicate between two address spaces - and this is an increasingly common requirement in the era of online/real-time computing. Originally, MVS only had one mechanism for communicating between address spaces - the use of common storage. True, the system could schedule an SRB to run in one address space from another one, but this still had to use common storage for any data to be passed between them.

But there are disadvantages in using common storage for inter-address space communication:

* there is a limited amount of common storage available, particularly below the 16 Mb line, so using it for frequent tasks passing significant amounts of data can be impractical - indeed, if you try to use more CSA than was allocated at IPL time, your MVS system will fall over!

* by definition, any address space can "see" what is in common storage, and update it if it is not page-protected, so sensitive or


system-critical data is less secure there than in an area of private virtual storage.

So IBM have developed services which allow us to communicate between address spaces without using common storage - Cross Memory Services, introduced with MVS/XA, and Dataspaces and Hiperspaces, introduced in MVS/ESA.

Each of these mechanisms for inter-address space communication is discussed in a little more detail in the sections that follow. Each of them is complex to use, and it seems unlikely that many users or application programmers will use any of them directly, but they are already in widespread use by writers of packaged software inside and outside IBM.

3.9.1 Common Storage and SRB's

We have already covered common storage in the section on Storage Management above. Some areas, such as the PLPA, are page-protected, and others, such as the nucleus, have specialised system uses. Others, however, are available for tasks running with storage protect key zero to GETMAIN, update, read, and FREEMAIN as required. In particular, the Common Storage Area (CSA/ECSA) and System Queue Area (SQA/ESQA) are available for the use of tasks wishing to pass data between address spaces.

Indeed, there are several common system tasks which frequently use CSA for this purpose. The main user of CSA on many systems is VTAM, which puts its message buffers there. Thus, when VTAM has received a message from a terminal via an interrupt from the I/O subsystem, it places it in the CSA, determines which address space it is intended for, then schedules an SRB into that address space to inform it that a message has been received and where in virtual storage it can be found. If VTAM were to place the message in its own private area, the receiving address space could not access it; and the architecture prevents it from placing it directly into the receiving address space's private area.

SRB's were mentioned above in the section on Task Management. They are high-priority dispatchable units of work created with the SCHEDULE macro by functions running in supervisor state and storage key zero. Their relevance to this section is that they can be scheduled to run in any specified address space, whichever address space the function which schedules them is running in. Thus, a function in one address space can schedule an SRB to prompt a function in another address space to process some data placed in common storage by the first function.

The use of this mechanism led to increasing requirements for CSA in older levels of MVS. However, since the introduction of the new methods of inter-address space communication, there has been a tendency to reduce the usage of CSA in favour of the new methods, and most installations should now see a decline in the use of CSA.


3.9.2 Cross Memory Services

The term "Cross Memory Services" refers to a set of machine instructions, and the facilities they invoke, which were introduced in MVS/XA. These instructions provide the ability to address data in two address spaces simultaneously, to move data between these two address spaces, and to invoke programs in one address space from another.

Unlike SRB's, Cross Memory Services allow synchronous use of data and programs in different address spaces - a function which switches itself into cross memory mode may access resources in another address space without suffering any interrupts in the process, and without the requirement to initiate another unit of work to access the second address space on its behalf. This is clearly much more flexible, and potentially faster and more efficient, than asynchronous communication using SRB's.

At the heart of Cross Memory Services is the concept of primary and secondary address spaces. Normally the primary address space will be the "home" address space of the unit of work which initially invoked Cross Memory Services, and the secondary address space will be another address space with which communication is required. There are control registers which are used to hold the real addresses of the segment tables for the primary and secondary address spaces, and this enables a modified version of Dynamic Address Translation to be used to resolve the real storage locations of data in both address spaces for a single cross-memory instruction.

The main facilities supplied by Cross Memory Services are implemented as follows:

* data movement between address spaces - several instructions are provided which can perform this function. MVCP moves data from the secondary address space to the primary; MVCS moves data in the opposite direction; and MVCK performs a similar function between areas with different storage protect keys. Note that these instructions require that the function knows the addresses of the data areas it wishes to use in both address spaces - finding these can be complicated!

* data sharing between address spaces - address spaces can be set up which act as data servers for a number of other address spaces, and one of several methods can then be used to access the data from the "clients", without actually moving it into the client address space. For example, programs running in common storage can switch into secondary mode to access data in the shared address space (in primary


mode, ordinary machine instructions are taken to refer to storage in the primary address space, but in secondary mode they are taken to refer to storage in the secondary address space). Alternatively, the program call mechanism (see next bullet point) can be used to invoke a "server" program in the shared address space to access data and return a result to the client.

* program sharing - the PC (program call) instruction can be used to transfer control to a program in another address space, and the PT (program transfer) or PR (program return) instructions used to return. This requires the function running in the original address space to know how to find the program to be called, and Cross Memory Services provides a control block structure containing "linkage tables" and "entry tables" to hold this information.

The linkage and entry tables are held in the PCAUTH address space. There is one linkage table for each address space, containing an entry for each PC service the address space is authorised to access. Whenever an address space wishes to offer a PC service, it creates an entry for the service in the linkage table of each address space it wishes to offer the service to. The linkage table entries point to entry tables, which contain more information about the program(s) which can be called as part of this service, including their entry point address and the address space in which they will execute. Thus, any given address space can only invoke PC services which the supplying address spaces have deliberately given it access to.
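As a flavour of what the data movement side looks like in practice, the Assembler fragment below sketches an MVCP move of a small area from the secondary address space into the primary. It is deliberately incomplete - establishing the secondary address space and the authority to issue MVCP is a substantial exercise involving the PC/authorisation machinery described above - and the register usage and labels are invented:

```
* Assumes the secondary ASN is already set (e.g. via SSAR) and that
* this program is authorised to issue MVCP.
         LA    R2,L'BUFFER        LENGTH TO MOVE (UP TO 256 BYTES)
         LA    R4,BUFFER          TARGET AREA IN THE PRIMARY SPACE
         L     R5,THEIRADR        SOURCE ADDRESS IN THE SECONDARY SPACE
         SR    R6,R6              ACCESS KEY FOR THE MOVE (KEY 0 HERE)
         MVCP  0(R2,R4),0(R5),R6  MOVE SECONDARY -> PRIMARY
```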

There are a number of restrictions on programs running in cross memory mode. Some of the most important are:

* they cannot issue any SVC's except ABEND

* cross memory access to swapped-out address spaces is not supported, and any attempt to do so results in an ABEND

* only one step of any given job can set up a cross-memory environment

There is also a separate authorisation structure which restricts the address spaces which any one address space can access using cross memory facilities. This, and many other aspects of Cross Memory Services, is well documented in "MVS/ESA SPL: Application Development - Extended Addressability" (or "SPL: System Macros and Facilities Volume 1" for XA systems).

It should be clear that use of Cross Memory Services is highly complex, and will usually only be attempted within sophisticated software products. IBM themselves warn that improper use of Cross Memory Services can cause severe system problems, so most of us would be well advised to leave it to the software suppliers!

A typical use of Cross Memory Services is in the implementation of Multi-Region Operation in CICS. Users log on to a "terminal-owning region" (TOR), but when they enter a transaction identifier to select the application they wish to use, the TOR uses Cross Memory Services to "ship" the processing request to the "application-owning region" (AOR) corresponding to the application selected. The AOR processes the transaction, then when it has a message to return to the user, it uses Cross Memory Services to pass this back to the TOR. The TOR is then responsible for sending the response back to the user. Interestingly, CICS is able to run MRO using either Cross Memory Services or SRB's, and it is at the discretion of the CICS systems programmer which is used.


3.9.3 Dataspaces and Hiperspaces

Dataspaces and Hiperspaces, introduced with MVS/ESA, are MVS's latest mechanism for inter-address space communication.

A dataspace is a special type of address space which does not include the common areas in its virtual storage (i.e. it has private areas at those addresses which are used for common areas in a normal address space), and from which machine instructions cannot be executed. These restrictions make dataspaces unsuitable for any use other than as a repository for data, hence their name. Although they were ostensibly introduced in order to allow a single address space to address more than the 2 Gigabytes of virtual storage allowed by a 31-bit addressing scheme, it seems unlikely that there are many genuine applications yet available which are restricted by this limit. A more important reason for their use is to make it possible to bring large data files into virtual storage in such a way that they are shareable between multiple address spaces.

Dataspaces are much easier to use than previous methods of inter-address space communication, due to a new way of addressing the data in them, called "Access Register Addressing Mode" (AR mode) or the "Advanced Address Space Facility" (AASF). This uses a new set of 16 access registers which are part of the ESA architecture, and a series of changes to machine instructions to take advantage of them. A function can switch itself into AR mode with a single instruction; once it has done this, familiar instructions such as MVC take on new ways of working.

The MVC instruction, for example, normally uses an offset from a value in a general purpose register as a "from" address, and another similarly constructed value as a "to" address. It moves an area of data from the "from" address in the virtual storage of the current address space to the "to" address in the virtual storage of the current address space. In AR mode, however, each general purpose register has an associated "access register". This can contain a "token" which uniquely describes a dataspace or address space, and if


so, the corresponding address will be treated as being in the virtual storage of the dataspace or address space concerned. Thus, for example, a function can move data from the dataspace into its own address space by first switching into AR mode and then executing an MVC instruction with the token of a dataspace in the access register for the "from" address. By using two different tokens in the access registers for the "from" and "to" address spaces, it can move data between two dataspaces.

These tokens provide the processor with a route to the segment tables of the address spaces or dataspaces concerned. As long as an address space knows the token values for the other address spaces it wishes to communicate with, AR mode provides a much simpler and more efficient mode for moving data between two address spaces than Cross Memory Services. In addition, the ability to have tokens for up to 15 different address spaces simultaneously available in different access registers means that communication involving more than two address spaces becomes feasible without painfully long path lengths. The "Extended Addressability" manual also covers programming with access registers and dataspaces.
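In outline, an AR-mode copy out of a dataspace might look like the Assembler fragment below. Everything here is invented for illustration - in real code the ALET (the "token" discussed above) would come from the DSPSERV and ALESERV macros used to create the dataspace and add it to the program's access list:

```
* Assumes DSPALET holds an ALET for the dataspace, obtained via ALESERV.
         SAC   512                SWITCH INTO AR MODE
         LAM   AR5,AR5,DSPALET    ACCESS REGISTER 5 -> THE DATASPACE
         SR    R5,R5              OFFSET 0 WITHIN THE DATASPACE
         MVC   TARGET(80),0(R5)   COPY 80 BYTES OUT OF THE DATASPACE
         SAC   0                  BACK TO PRIMARY MODE
```

In AR mode every storage operand is resolved through the access register paired with its base register, and an ALET of zero in that access register means the primary address space - which is why TARGET still refers to the program's own storage here, provided the access register paired with its base register holds zero.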

A hiperspace is a special type of dataspace ("high performance dataspace"), which cannot exist in central storage. There are two types, "standard", which is created in expanded storage, and paged out to auxiliary storage as necessary, and "expanded storage only", which can only exist in expanded storage. The latter type is used by a number of MVS/ESA facilities to buffer data in expanded storage which users would otherwise need to perform real I/O to obtain - hence "high performance". In both cases, it is necessary to move the data required from the hiperspace to the user's address space (a page at a time) before it can be used, because of the restriction preventing storage belonging to a hiperspace from moving into central storage. A new set of Assembler macros has been provided with ESA to read pages from hiperspaces or write them out to them.

Hiperspaces seem to be intended more for system usage than application usage, and IBM rapidly introduced a series of new facilities which exploit hiperspaces. These include:

* Virtual Lookaside Facility and Library Lookaside - discussed in the Program Management section above;

* "Batch LSR" - a hiperspace for VSAM buffering;

* "HIPERSORT" - a hiperspace for sort work areas;

* "Hiperbatch" - which keeps a single copy of heavily-used sequentially-read files in a hiperspace to speed up batch job execution.

All of these are intended to improve system performance by eliminating I/O for frequently used data. In essence they use expanded storage as a clever caching device, and the use of expanded storage to cache large quantities of data seems certain to be a major feature of future MVS systems, since this eliminates the number one bottleneck on most commercial systems today - DASD I/O.


Bibliography

MVS/XA Overview GC28-1348 - a very readable overview of some aspects of MVS internals

Storage Management

MVS/ESA Initialisation and Tuning Guide GC28-1634 - detailed discussions of MVS storage management and how SRM controls swapping (this is the Version 4 manual - earlier versions of MVS combined the "Init & Tuning Guide" and the "Init & Tuning Reference", which covers SYS1.PARMLIB members, in a single manual)

"The Age of a Page" Mark Friedman - Mainframe Journal, September 1989 - discusses virtual storage and the paging process

"MVS Expanded Storage - Big Bang or Bust?" Mark Friedman - Enterprise Systems Journal, May 1991 - discusses how expanded storage works and how to monitor usage of it

"Common Storage Advantages and Problems" Eric Jackson and Paul Klein - Candle Computer Report, October 1989

Task Management

"The Service Request Block" Brett Jason Sinclair - MVS Update, June 1987

I/O Management

370/XA Principles of Operation SA22-7085 - includes several chapters on fundamental I/O processing



"MVS/XA I/O Subsystem and DASD Event Traces" William R Fairchild - Mainframe Journal, March 1990 - traces a DASD I/O through all the stages of MVS I/O processing

Job Management

"Understanding MVS Converter/Interpreter Processing" Bruce Bordonaro - Mainframe Journal, July 1990

Serialisation

MVS/ESA SPL: Application Development Guide GC28-1852 - detailed programming considerations for system software, including serialisation, program management, etc

System Macros and Facilities, Volume 1 GC28-1150 - the MVS/XA equivalent of the "ESA SPL: Application Development Guide"

MVS/ESA Diagnosis: System Reference LY28-1011 - includes a chapter on the lock hierarchy and ENQ names used by MVS components, plus coverage of storage subpools and storage protection

"Serialisation in MVS/XA and MVS/ESA" Alan Feinstein - Candle Computer Report, November 1989, December 1989, and January 1990 - an excellent explanation of ENQ's, locks, and intersects

Program Management

MVS/ESA Linkage Editor and Loader User's Guide SC26-4510 - documents program attributes and addressing modes and how to assign them

"MVS Program Management" Robert H Johnson - Mainframe Journal, December 1989

"Utilizing MVS/ESA's Library Lookaside" Timothy A Brunner - UKCMG 1990 Proceedings (also USCMG 1989 Proceedings)

"Getting the most out of Contents Supervision" Steve Goldin - Candle Computer Report, August 1989 - explains reusability and re-entrancy

Catalogs

MVS/ESA Catalog Administration Guide SC26-4502

"VSAM Catalogs Concepts" James W Clark - Mainframe Journal, July 1989 - a very useful discussion of the structure of ICF catalogs, VVDS's, BCS's, and the old VSAM catalogs

Inter Address Space Communication

MVS/ESA SPL: Application Development - Extended Addressability GC28-1854 - includes a chapter on cross-memory programming

System Macros and Facilities Volume 1 GC28-1150 - the XA manual to look at for cross-memory programming techniques

"Writing a Cross Memory Program" Scott Dimond - MVS Update, October 1987

"Inter-Address Space Communications using Cross Memory Services" Rickland D Hollar - Technical Support, March 1991

"Through the Data Window" Michael Haupt - Mainframe Journal, March 1989 - explains dataspaces and hiperspaces

"IBM Conquers Space" Michael Haupt - Mainframe Journal, June 1989 - discusses VLF and other ESA facilities


This page last changed: 5 July 1998. All text on this site © David Elder-Vass. Please see conditions of use. E-mail comments to: [email protected] (Please check the FAQ's page first!)

None of the statements or examples on this site are guaranteed to work on your particular MVS system. It is your responsibility to verify any information found here before using it on a live system.
