This chapter describes the facilities provided by MTO for Intersystem Communication and how to set up MTO to use these facilities.
Intersystem Communication (ISC) provides communications between two MTO systems, between CICS Option in Mainframe Express and MTO in Enterprise Server, or between MTO and an IBM-compatible CICS system using Mainframe Access. It enables distributed processing, in which the processing of CICS transactions is split between several systems. The distributed processing facilities supported are:
You need to do some configuration to get these facilities working. Some of this is common to all distributed processing facilities, while some is specific to one facility.
To enable any of the distributed processing facilities, you need to do the following:
Function shipping enables a transaction executing on a local MTO system to access remote resources (files, temporary storage queues and transient data queues).
The operation of function shipping is illustrated in Figure 8-1.
Outbound function shipping is what happens when your local transaction programs ship functions to remote MTO systems or to IBM-compatible CICS systems.
Inbound function shipping is what happens when remote MTO systems ship functions to your MTO system.
The following restrictions apply to function shipping as implemented in MTO:
This is advisable because function shipping only incorporates sync level 1 logic. Whether you have single or multiple connected systems, a sync request can be propagated through all the connected systems.
This section describes any configuration required that is specific to function shipping. See the section Common Configuration Tasks for configuration that is required for all types of intersystem communication.
You might need to do some specific configuration for outbound function shipping. There are two ways of ensuring that functions get shipped to a remote system:
EXEC CICS READ FILE(CUSTFILE) SYSID(HS01)
reads the file CUSTFILE which belongs to the remote system HS01.
EXEC CICS WRITEQ TD QUEUE(QUEUE01) SYSID(HS01)
writes to the transient-data queue QUEUE01 which belongs to the remote system HS01.
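The same mechanism applies to temporary storage. As a sketch (the queue name TSQ01 and the data area WS-REC are illustrative, not taken from this manual):

EXEC CICS WRITEQ TS QUEUE(TSQ01) FROM(WS-REC) LENGTH(80) SYSID(HS01)

writes to the temporary-storage queue TSQ01 which belongs to the remote system HS01.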
| Table | Required Entry |
|---|---|
| FCT | The file to be accessed, specifying: |
| DCT | The TD queue that can be accessed by local transactions, specifying: |
| TST | The remote TS queue that can be accessed by local transactions, specifying: |
Note: If local transactions that use the ANSI character set ship functions to an IBM-compatible EBCDIC CICS system, you must set up data conversion facilities on the remote system.
Asynchronous processing enables a local transaction program to initiate a transaction program on a remote system by means of the EXEC CICS START command. The remote transaction does not attempt to send a response back to the initiating local transaction program; this is what makes it asynchronous.
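For example, a local program could issue a start such as the following (the transaction name RTRN and the system name HS01 are illustrative):

EXEC CICS START TRANSID(RTRN) SYSID(HS01)

starts the transaction RTRN on the remote system HS01; the local program continues without waiting for a response.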
There are two ways of doing outbound asynchronous processing with MTO:
If you use the first way, you need to do some specific configuration: define the fields in the PCT as follows:
| Required Entry |
|---|
| Each remote transaction that can be initiated (using ATI) by a local transaction, specifying: |
No specific configuration is required for inbound asynchronous processing.
See the section Common Configuration Tasks for configuration that is required for all types of intersystem communication.
Transaction routing enables an MTO system to invoke transactions on a remote system. The terminal attached to the local system is treated as if it were attached to the remote system.
The operation of transaction routing is illustrated in Figure 8-2.
Figure 8-2: Transaction Routing
You can implement transaction routing in MTO in either of two ways:
You can use the supplied transaction CRTE to start a routing session on a remote system. When the routing session has been established, your local system appears as a remote terminal to the remote system. All transactions invoked during this session are executed on the remote system.
The use of CRTE to implement transaction routing is illustrated in Figure 8-3.
Figure 8-3: Transaction Routing via CRTE
No configuration of the local or remote system (other than the link itself) is required when using CRTE. You should use CRTE, instead of making PCT definitions, in the following cases:
Note that, because the routing session has to be explicitly established and canceled, you might have to perform additional signon operations.
CRTE supports both conversational and pseudo-conversational transactions. Transactions invoked by PA or PF keys cannot be invoked through CRTE.
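As a sketch (the system name HS01 is illustrative, and the exact operands accepted depend on your CICS level), you start a routing session by entering the transaction at a terminal:

CRTE SYSID=HS01

All transactions you enter thereafter run on HS01, until you end the session by entering CANCEL.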
The remote system needs a definition of the terminal from which the transaction is issued. You can either:
3270 terminals and printers default to the shipping method, but you can duplicate the definitions if you choose.
With either method, each terminal name must be unique across the entire network.
Shipping definitions has some advantages over duplicating definitions:
Several restrictions apply to transaction routing as implemented in MTO:
This section describes any configuration required that is specific to transaction routing. See the section Common Configuration Tasks for configuration that is required for all types of intersystem communication.
You might need to do some specific configuration for outbound transaction routing. There are two ways of ensuring that transactions get routed to a remote system:
EXEC CICS START TRANSID(CWEN) SYSID(HS01)
starts the transaction CWEN on the remote system HS01.
| Table | Required Entry |
|---|---|
| PCT | Each remote transaction that can be initiated (using ATI) by a local transaction, specifying: |
No specific configuration is required for inbound transaction routing.
Distributed program linking (DPL) enables a program running on MTO to link to a program running on a remote system. This provides an easy way to access DL/I (IMS) and SQL databases and BDAM files on a remote CICS system.
There are certain restrictions on the types of program to which you can link using the distributed program linking feature of MTO.
You cannot link to programs that:
In addition to these restrictions, you should also be aware that when you are accessing DB2 from a transaction running on MTO, security access to the DB2 application plan is based on the transaction ID under which the program runs. The default transaction ID is one of the mirror transactions CPMI, CSMI and CVMI. To enable greater selectivity for DB2 application plans, you can specify an alternative transaction ID in either the PCT entry or the EXEC CICS LINK command. This transaction must be defined in the target system and must point to the mirror program DFHMIRS. In all other respects (such as security attributes and task priority), the mirror transaction operates normally.
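As a sketch (the names DB2PROG, MIR1 and HS01 are illustrative), a LINK that names an alternative mirror transaction might look like this:

EXEC CICS LINK PROGRAM(DB2PROG) SYSID(HS01) TRANSID(MIR1)

Here MIR1 must be defined on the target system and must point to the mirror program DFHMIRS; DB2 plan authorization on the target is then based on MIR1 rather than on the default mirror transaction.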
If the remote program terminates abnormally, the remote system's mirror program returns the last abend code to MTO, using the ASSIGN ABCODE command. The remote program might have handled other abends before terminating.
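For example (WS-ABCODE is an illustrative working-storage field), the local program can retrieve that abend code after the LINK returns:

EXEC CICS ASSIGN ABCODE(WS-ABCODE)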
This section describes any configuration required that is specific to distributed program linking. See the section Common Configuration Tasks for configuration that is required for all types of intersystem communication.
You might need to do some specific configuration for outbound distributed program linking. There are two ways of linking to a program on a remote system:
EXEC CICS LINK PROGRAM(PPQQ) SYSID(HS01)
links to the program PPQQ, which belongs to the remote system HS01.
| Table | Required Entry |
|---|---|
| PPT | Each remote program to which a local program can link, specifying: |
Note: If a local program and the remote program to which it links use different character sets (ANSI and EBCDIC), you might need to set up data conversion facilities on the remote system.
Distributed Transaction Processing (DTP) allows a CICS transaction running in one system to communicate with a transaction running in another system. The transactions are written and designed to communicate with each other over the intersystem communication link. DTP provides facilities so that two programs can communicate with each other, signal each other with regard to their status, and jointly commit modifications to protected resources.
DTP takes place between programs that are designed to be a matched pair. Their activities, command utilization, error protocols, and message formats must all be coordinated. MTO enables running transactions to initiate and communicate synchronously with transactions in remote systems that support the APPC protocols.
The following restrictions apply to Distributed Transaction Processing:
The links to any interconnected system must be defined according to the examples provided in the section Network Links. You can either define any programs and transactions that are to participate in a DTP conversation through resource definition table entries on both the local and remote system or specify them in the CONNECT PROCESS and EXTRACT PROCESS commands.
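As a sketch of the front end of such a pair (the field WS-CONVID and the process name BACKTRAN are illustrative), the local transaction might start the conversation with:

EXEC CICS CONNECT PROCESS CONVID(WS-CONVID) PROCNAME('BACKTRAN') PROCLENGTH(8) SYNCLEVEL(0)

where WS-CONVID holds the conversation identifier obtained from a previous ALLOCATE, and the back-end transaction can retrieve the process name with EXEC CICS EXTRACT PROCESS.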
The distributed processing facilities described above require that a network link is set up between the client and server systems. This section describes how to set up these links.
This section describes the setting up of network links for the following types of communications network:
These are the network links that you can use for function shipping, asynchronous processing, transaction routing, distributed program linking and distributed transaction processing.
You can use TCP/IP network links for all types of Intersystem Communication. When you use TCP/IP to connect to the mainframe, you also need Micro Focus Mainframe Access Version 3.0 or above. Configuring MTO to use TCP/IP network links is described below.
This section describes the definitions you need to make to enable communications. To create the necessary definitions you use the SysC page in ESMAC: from the Server page, click the Services dropdown in the Resources group in the left-hand menu, click By type, then click SysC. You see the list of existing SysC definitions. Click SysC in the New panel of buttons at the bottom of the page.
You must set the following fields:
| Field | Description |
|---|---|
| Name | The name of the target system; this is also the value you enter in SYSID when you define a remote transaction |
| MF Node | The text host name or dotted-decimal address of the target server |
| MF Port | The decimal value of the port on which the target server listens |
| Net Name | The name of the target enterprise server or CICS region |
| Session Max | The maximum number of send sessions |
You need a connection definition for the enterprise server that is initiating the conversation. If you create a connection definition in both enterprise servers and both servers try to initiate the connection, MTO resolves the contention.
If you specify the home enterprise server as the target enterprise server (that is, Net name is the same as the name of the current enterprise server), MTO ignores the entry. This is useful because it enables a number of servers to share a single group that specifies the connections for all the regions.
You also need to add a listener for Intersystem Communication to each enterprise server that uses ISC. For more information see the section Common Configuration Tasks.
In this example an MTO-enabled enterprise server is to be connected to a CICS region on a mainframe using TCP/IP and Mainframe Access. The connection definition is shown in Figure 8-4.
Figure 8-4: Example of MTO to Mainframe Connection Definition
The name of the definition, RSYS in this example, must match the server name as defined in the Mainframe Access server definition, which in turn points to the target CICS region on the mainframe. This example would have a Mainframe Access server definition similar to the following:
<MCOID=RSYS>
LUNAME=CICSTSR3
TPNAME=*
MODENAME=#INTER
SYNCLEVEL=0
SECURITY=NONE
<END>
For more information see the section Target Server Parameters for CICS Option in the chapter Configuration in your Mainframe Access Administrator's Guide.
You must configure the Mainframe Access Server to enable ES/MTO outbound communications and provide it with definitions for the target CICS and target ES/MTO systems. You can find step by step instructions for this in the appendix Enterprise Server/Mainframe Transaction Option Support of your Mainframe Access Administrator's Guide.
You must also create an appropriate connection definition for your enterprise server. The connection definition is shown in Figure 8-5.
Figure 8-5: Example of MTO to Mainframe Connection Definition
As with the previous example, the name of the definition, RSYS in this example, must match the server name as defined in the Mainframe Access server definition, which in turn points to the target CICS region on the mainframe. This example would have a Mainframe Access server definition similar to the following:
<MCOID=RSYS>
LUNAME=CICSTSR3
TPNAME=*
MODENAME=#INTER
SYNCLEVEL=0
SECURITY=NONE
<END>
Note: If your enterprise server is not going to initiate the communications, you can omit the connection definition.
Two MTO-enabled enterprise servers, PEERSRV1 and PEERSRV2, are to be connected using CCITCP. Two connection definitions are shown, although only one is required.
The connection definition for the first region is shown in Figure 8-6.
Figure 8-6: Example Connection Definition for PEERSRV1
The connection definition for the second region is shown in Figure 8-7.
Figure 8-7: Example Connection Definition for PEERSRV2
Copyright © 2006 Micro Focus (IP) Ltd. All rights reserved.