
For more information refer to "Summary of Changes" in z/OS DFSMS Migration, GC26-7398.

4.1.11 DFSMShsm Common Recall Queue

Prior to z/OS 1.3, every HSM maintained its own in-storage queue of data set recall requests. This meant that if HSM, or the system it is running on, goes down before a recall request has been processed, the request is lost and must be reissued when HSM is restarted. Also, because each HSM could only process its own requests, each HSM needed physical access to any resources required by the recall request. For example, if data set A.B was migrated to tape X12345, the system recalling that data set would need access to both an appropriate tape drive and that specific tape.

In z/OS 1.3, HSM introduced the option of having a single HSM recall queue that can be shared between multiple HSMs. This means that if tape X12345 is currently mounted for a recall on SYSB, and a user on SYSA issues a recall for a data set on that tape, the recall request can be processed by SYSB, avoiding the need for SYSA to have access to any tape resource and potentially eliminating an additional tape mount. More importantly, because there are now two copies of each recall queue (one in the local storage of the requesting HSM, and the other in the Coupling Facility structure), it is possible to restart an HSM, or the system it is running on, without losing any queued requests. In fact, you could even do a complete sysplex IPL, and as long as the CF is not restarted, the queued requests will still be there waiting to be processed when the HSMs are restarted. There are other performance and load balancing benefits of Common Recall Queue, but
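To make the mechanics concrete, here is a minimal sketch (not actual DFSMShsm code; all class and method names are hypothetical) modeling the two properties described above: any host can select work from the shared queue, preferring requests for a tape it already has mounted, and queued requests survive a host failure because the copy in the CF structure persists.

```python
class CouplingFacility:
    """Stands in for the shared CF list structure that holds the
    Common Recall Queue, visible to every HSM host in the sysplex."""
    def __init__(self):
        self.queue = []  # recall requests shared across all hosts


class HsmHost:
    """Hypothetical model of one HSM instance."""
    def __init__(self, name, cf):
        self.name = name
        self.cf = cf
        self.local = []          # in-storage copy of this host's requests
        self.mounted_tape = None # tape volume currently mounted, if any

    def request_recall(self, dataset, tape):
        # Each request is recorded twice: locally and in the CF structure.
        req = {"dataset": dataset, "tape": tape, "origin": self.name}
        self.local.append(req)
        self.cf.queue.append(req)

    def crash(self):
        # A failure (or IPL) wipes the in-storage queue only;
        # the CF copy of each request remains.
        self.local.clear()

    def select_work(self):
        """Prefer a request for the tape this host already has mounted,
        avoiding an extra mount; otherwise take the oldest request."""
        for req in self.cf.queue:
            if req["tape"] == self.mounted_tape:
                self.cf.queue.remove(req)
                return req
        return self.cf.queue.pop(0) if self.cf.queue else None


# Usage: SYSA queues a recall for a data set on tape X12345,
# which SYSB happens to have mounted, so SYSB picks up the work.
cf = CouplingFacility()
sysa = HsmHost("SYSA", cf)
sysb = HsmHost("SYSB", cf)
sysb.mounted_tape = "X12345"
sysa.request_recall("A.B", "X12345")
sysa.request_recall("C.D", "Y54321")
work = sysb.select_work()   # SYSB services SYSA's request for A.B
sysa.crash()                # SYSA's in-storage queue is lost...
remaining = len(cf.queue)   # ...but the C.D request survives in the CF
```

The design choice this models is that the CF structure, not any one host's storage, is the authoritative copy of the queue, which is why even a full sysplex IPL (with the CF left up) preserves queued recalls.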