A look at how some z/OS shops have closed their batch windows by pushing batch I/O through the owning CICS region.
Most CICS shops I have worked in carry the same operational habit: keep a nightly window open so that batch can update VSAM files without colliding with CICS. The reasoning behind that habit is sound. CICS opens VSAM files for update with a sharing posture that does not allow a separate batch address space to safely modify the same dataset, because the integrity controls — record locking, journaling, backout, task accounting — live inside CICS. If a batch job updates the file directly, those controls are not in play, and recovery becomes a problem you do not want to be solving at 7:00 AM.
The interesting question, which gets less attention than it deserves, is whether the nightly window is the only way to satisfy that constraint. It is a coordination strategy, not a property of VSAM. Once you frame it that way, a different design becomes possible: leave the file open to CICS and arrange for batch work to be performed under CICS control rather than around it. That is the model I want to walk through here, using SYSB-II as a concrete example because it is the implementation I know best.
The basic mechanism
In a SYSB-II configuration, batch jobs continue to execute in the batch address space, exactly as they always have. What changes is the I/O path for VSAM datasets that have been declared as SYSB-II-managed. Instead of the batch step issuing native VSAM requests against the dataset, those requests are intercepted by an MVS subsystem and forwarded to the CICS region that owns the file. CICS performs the actual I/O as it would for any online transaction, applying its normal record locking, journaling, and backout. The result is returned to the batch step, which continues running.
The architectural payoff is that batch and online traffic are no longer two different integrity models contending for the same dataset. CICS becomes the only integrity model, and batch becomes another consumer of it. From the file’s point of view, every update is a CICS update.
How the pieces fit together
SYSB-II is implemented as a standard MVS subsystem with a started task on the batch side and a command-level transaction on the CICS side. Communication between the two uses cross-memory services when the batch job and the CICS region share an LPAR, or VTAM or TCP/IP when they sit on different systems in a sysplex. Nothing in the architecture modifies VSAM internals, and nothing requires application source changes; the interception happens at the subsystem boundary. RACF authorization, SMF recording, and CICS transaction security continue to apply, because the work is genuinely running as CICS work.
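The subsystem side of this is ordinary MVS plumbing. A sketch of the parmlib definition follows; the subsystem name SYSB is an assumption for illustration, and the product's install material supplies the real name and any initialization routine:

```
*  IEFSSNxx -- define the subsystem (keyword format).
*  SUBNAME value is illustrative, not the product's actual name.
SUBSYS SUBNAME(SYSB)

*  It can also be added without an IPL from the console:
*    SETSSI ADD,SUBNAME=SYSB
*  and verified with DISPLAY SSI (exact operands vary by z/OS level):
*    D SSI,LIST
```

The point of showing this is how small it is: the subsystem definition is a one-line parmlib entry, and everything interesting happens in the started task and the CICS-side transaction it talks to.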
Where the tradeoffs live
There is no free lunch in this design, and it is worth being explicit about the cost. A native batch VSAM I/O completes faster than a routed one. Adding interception, transport to the CICS region, execution under a CICS transaction, and the return trip costs measurable time per I/O. If the metric you care about is microseconds per record, this model will look slower.
If the metric you care about is when the work finishes and how much of the day it consumes, the picture changes. Workloads that previously had to be queued into an overnight window can be spread across the day, which usually compresses total elapsed time even when individual I/Os are longer. The relevant comparison is total business-hours availability, not single-I/O latency. That distinction is what makes the model worth considering for some workloads and a poor fit for others; a tight inner-loop batch process where per-I/O cost dominates may be better off with a different approach.
Locking and online response time
The concern I hear most often from operations teams is that concurrent batch will degrade online response time through lock contention. SYSB-II addresses this with configurable sync-point intervals, which control how frequently a batch unit of work commits. Shorter intervals reduce lock hold times and shrink the recovery scope after an abend, at the cost of more commit overhead. Longer intervals do the opposite. In practice, finding the right interval for a given workload is most of the tuning effort, and it is the lever that decides whether online users notice batch is running.
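What that lever looks like in practice is a batch-side control parameter. The statement below is a hypothetical sketch, with the keyword names mine rather than the product's, and the arithmetic that drives the choice spelled out in comments:

```
*  Hypothetical batch control statement -- the SYNCPOINT/INTERVAL
*  keywords are illustrative, not taken from the SYSB-II manual.
*
*  Commit every 1,000 updates. If a routed update costs roughly
*  0.5 ms, a unit of work holds its oldest lock for at most about
*  half a second, and an abend backs out at most 1,000 records.
*  Halving the interval halves both numbers but doubles the
*  number of commits the run pays for.
SYNCPOINT INTERVAL(1000)
```

The shape of the tradeoff is the durable part: lock hold time and backout scope both scale linearly with the interval, while commit overhead scales inversely, so tuning is a matter of finding where online response time stops being sensitive to further reduction.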
What installation actually looks like
Bringing SYSB-II up in a new environment is bounded systems work. The steps are: define the subsystem, add the started task, install the CICS program and transaction definitions, update the relevant startup parameters, identify the datasets to be managed, and authorize the load library through RACF. Existing batch JCL does not need to change to take advantage of the routed I/O path for managed datasets. Application code does not need to be touched.
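The steps above can be sketched as the artifacts involved. Every name here (task, program, transaction, user ID, library) is an assumption for illustration; the product documentation supplies the real ones:

```
//*  Started task JCL for the batch-side address space.
//*  Program and dataset names are illustrative.
//SYSBTASK PROC
//SYSB     EXEC PGM=SYSBMAIN
//STEPLIB  DD   DISP=SHR,DSN=SYSB.LOADLIB

*  CICS-side definitions in the owning region (names illustrative):
*    CEDA DEFINE PROGRAM(SYSBPGM) GROUP(SYSBGRP) LANGUAGE(ASSEMBLER)
*    CEDA DEFINE TRANSACTION(SYSB) GROUP(SYSBGRP) PROGRAM(SYSBPGM)

*  RACF: give the started task an identity and access to the library:
*    RDEFINE STARTED SYSBTASK.* STDATA(USER(SYSBUSR))
*    PERMIT 'SYSB.LOADLIB' ID(SYSBUSR) ACCESS(READ)
*    SETROPTS RACLIST(STARTED) REFRESH
```

Note what is absent: there is no change to the application's JCL or source anywhere in the list, which is consistent with the interception living at the subsystem boundary.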
How to evaluate it for your shop
The honest way to evaluate any model like this is to scope it narrowly. Pick a batch workload that already conflicts with business hours (billing, say, or a heavy reconciliation) and run it in a non-production LPAR. Configure SYSB-II for the datasets that workload touches, run the job concurrently with simulated online traffic, and measure two things: online response time during the run, and elapsed batch time end to end. Taken together, the two numbers tell you whether the routing overhead is worth the availability you get back. Either answer is useful information.
The wider point
Whether or not SYSB-II is the right tool for a particular environment, the underlying idea is worth holding onto. The CICS-batch coordination model that most shops inherited was built for an era when overnight windows were cheap and online hours were short. Both of those assumptions have eroded. It is increasingly worth a systems programmer’s time to ask whether the integrity constraint really requires the window, or whether it can be satisfied by a different way of routing the work.







