This past week I received a note from Claus Mikkelsen, who was concerned about the growing inefficiency in the use of mainframe storage. That may come as a shock to many of us who have looked to mainframe storage as the aspirational model for open systems storage.
Mainframe storage was designed for sharing from the very beginning, and most new storage advances showed up first on mainframe systems. Through early Job Control Language (JCL), you could thin provision volumes by releasing unused tracks. Mainframe storage was the first to use cache to mask the latencies of disks, use RAID for data protection, and replicate across distances for business continuity. Mainframes introduced system-managed storage over 30 years ago, where you could define a data class and a storage group and manage the tiering of storage with the Hierarchical Storage Manager (HSM). Mainframes are considered bulletproof, with high performance I/O engines called channels, which continue to make them desirable for high performance, mission critical applications. The history of open systems storage has been one of playing catch-up to mainframe storage. Although that is still true, hear what Claus has to say about the situation.
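As a reminder of how early that track-release capability arrived, here is a minimal REXX sketch of the TSO equivalent of coding RLSE on a JCL SPACE parameter. The data set name and attributes are illustrative, not from any particular shop:

```
/* REXX: allocate a data set and release unused tracks at        */
/* deallocation - the TSO ALLOCATE equivalent of coding          */
/* SPACE=(CYL,(100,10),RLSE) on a JCL DD statement.              */
/* The data set name and attributes are illustrative.            */
ADDRESS TSO
"ALLOCATE DATASET('HLQ.SAMPLE.DATA') NEW CATALOG" ,
  "SPACE(100,10) CYLINDERS RELEASE" ,
  "RECFM(F B) LRECL(80) BLKSIZE(27920)"
IF rc = 0 THEN
  SAY 'Allocated; unused tracks will be released when freed.'
ELSE
  SAY 'ALLOCATE failed, return code' rc
```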
What has happened to make Claus think the situation may have reversed, and that mainframe storage is now slipping? He explains it this way:
- Like old soldiers, many mainframe systems programmers are fading away. While open systems skills transfer readily, even between Windows and Unix, mainframe skills are very different and require a real level of experience.
- The combination of ACS routines, JCL, REXX execs, and CLISTs creates a very complex, intertwined environment that fewer people are willing to touch. “Bob the expert” who wrote these scripts retired 10 years ago, and although some universities have trained graduates in mainframe concepts and skills, those graduates do not want to touch the scripts for fear of causing problems for core applications, which are the main reason mainframes are retained in the first place. IBM claims over 20,000 new systems programmers have been trained over the years, but teaching a skill and teaching “experience” are two different things, as we all know. Proof point: last year I visited some of our largest mainframe customers in the US and Germany, and few of them claimed to still have any serious in-house skills in REXX, the interpreted language needed to script this automation. (A minimal sketch of the kind of exec involved follows this list.)
- Many smaller and mid-sized mainframe customers are offshoring these skills, which exacerbates the situation, since offshore staff have little incentive to fine-tune ACS routines and the like. What would happen if they screwed up?
- Net net: the mainframe customer base has not necessarily made the changes needed to take advantage of new storage technologies like storage virtualization, wide striping, and dynamic tiering, and may not be running as efficiently as storage systems on the open systems side. Much of this is due to the complexity of the environments and a natural reluctance to adopt newer technology; mainframe customers are, by their very nature, more conservative than their open systems counterparts.
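To make Claus's point concrete, here is a hypothetical sketch of the kind of glue automation he is describing: a REXX exec that generates JCL on the fly and submits it through the internal reader. Everything here, from the job name down, is illustrative; it is meant only to show how REXX and JCL intertwine, not any particular shop's script:

```
/* REXX: hypothetical glue automation - a REXX exec that builds  */
/* JCL on the fly and submits it through the internal reader.    */
/* Job, program, and data set names are illustrative.            */
QUEUE "//BOBJOB  JOB (ACCT),'HOUSEKEEPING',CLASS=A"
QUEUE "//STEP1   EXEC PGM=IEFBR14"
QUEUE "//DD1     DD DSN=PROD.WORK.TEMP,DISP=(MOD,DELETE),"
QUEUE "//           SPACE=(TRK,(1,1)),UNIT=SYSDA"
ADDRESS TSO
"ALLOC F(JCLOUT) SYSOUT(A) WRITER(INTRDR)"    /* internal reader */
"EXECIO" QUEUED() "DISKW JCLOUT (FINIS"       /* write the JCL   */
SAY 'EXECIO rc='rc
"FREE F(JCLOUT)"                              /* submits the job */
```

When the author of a few hundred of these retires, every change to storage classes, naming conventions, or device types becomes a risk.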
Over the last ten years a great deal of technology has been added to storage systems. In 2004 we introduced the USP, which provided storage virtualization of external storage and the ability to tier across internal and external storage. This enabled mainframe systems, which require FICON connectivity and the ECKD format, to connect through the FICON ports of a USP and utilize lower cost SCSI and SATA storage contained within the USP, or external storage that did not have FICON connectivity. This was also an important step in making HSM more efficient through tiering. HSM was created to reduce the consumption of expensive ECKD storage capacity by using mainframe CPU cycles to compress less active data and move it to a lower migration level, which was still on expensive ECKD storage. When that data needed to be retrieved, more CPU cycles were needed to decompress it and move it back to the original production level. With tiering in the USP, less active data could be moved to less expensive open systems storage like SATA and accessed directly from that lower tier, not only saving storage and CPU cycles, but also simplifying operational tasks.
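For readers who have not touched DFSMShsm in a while, that round trip looks roughly like this from a REXX exec. HMIGRATE and HRECALL are the standard DFSMShsm end-user TSO commands; the data set name is illustrative:

```
/* REXX: the HSM round trip described above. Migration           */
/* compresses the data set and moves it to migration level 2;    */
/* recall decompresses it and moves it back - both directions    */
/* consume host CPU cycles. The data set name is illustrative.   */
dsn = "'PROD.CLAIMS.HISTORY'"
ADDRESS TSO
"HMIGRATE" dsn "MIGRATIONLEVEL2"   /* compress and move out      */
SAY 'migrate rc='rc
"HRECALL" dsn "WAIT"               /* decompress and move back   */
SAY 'recall  rc='rc
```

Direct access from a lower tier removes both legs of that trip.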
In subsequent generations, the USP V and now the VSP, Hitachi has added new technologies like Hitachi Dynamic Provisioning, where the storage controller thin provisions automatically, without the need for JCL, and provides wide striping for higher random performance for demanding databases. In the VSP we added Hitachi Dynamic Tiering (HDT), which can automatically move less active pages within a volume to lower cost internal or external storage and reclaim unused or deleted tracks. HDT also supports mainframe features like Extended Address Volumes (EAV) and Dynamic Volume Expansion to relieve the constraints of 3390 volumes, which were defined in the 1980s. Business continuity has also been enhanced with FlashCopy, Hitachi Universal Replicator, 3DC configurations, and Hitachi High Availability Manager, and the problem of device migration has been solved with non-disruptive migration.
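To see why those 1980s-era 3390 constraints needed relieving, a back-of-the-envelope calculation helps, using standard 3390 geometry (56,664 bytes per track, 15 tracks per cylinder) and the traditional 65,520-cylinder volume ceiling:

```
/* REXX: back-of-the-envelope arithmetic on the traditional      */
/* 3390 volume size ceiling that EAV and Dynamic Volume          */
/* Expansion relieve.                                            */
NUMERIC DIGITS 12
bytesPerTrack = 56664     /* 3390 track capacity in bytes        */
tracksPerCyl  = 15
maxCylinders  = 65520     /* traditional (non-EAV) ceiling       */
bytes = bytesPerTrack * tracksPerCyl * maxCylinders
SAY 'Largest traditional 3390 volume:' ,
    FORMAT(bytes / (1024 ** 3), , 1) 'GiB'   /* about 51.9 GiB   */
```

At roughly 52 GiB per volume, large databases quickly sprawl across many volumes, which is exactly the constraint EAV and Dynamic Volume Expansion address.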
Additional performance can be gained from Hitachi Flash Modules and VSP flash acceleration in a Dynamic Tiering pool, with the added protection of flash encryption and shredding.
All of these new functions are being rapidly adopted in open systems, and customers are seeing TCO reductions of 40% or more. However, as Claus points out, using these functions on the mainframe requires some re-scripting, which in turn requires skill and experience. If you have mainframes and have not updated your scripting to take advantage of these new functions, you are leaving money on the table. If you would like more information on how to increase the return on your mainframe storage investment, contact your Hitachi Data Systems representative or partner reseller.
Also, Claus tells me that in the weeks to come, he will elaborate on these topics by blogging on each of these functions and expanding the scope of what we are now calling Mainframe Storage Economics.