In our October 2020 webinar with IBM Systems Magazine, Reg Harbeck, chief strategist of Mainframe Analytics, Ltd., explained that innovative IT departments will be looking to the mainframe for inspiration as Moore’s Law and the reliable hardware gains it predicted run out on them.
“It’s sort of funny that we as the mainframe organizations can say ‘welcome back’ to the rest of the world to frugality. We’ve always had more demand for our mainframe than dollars to pay for them and so everything has always been optimized to make sure we got the maximum value from our mainframe investment… Other platforms have sort of been riding Moore’s Law for the last 50 years.”
And as we are increasingly seeing, that wild ride has come to an end. Harbeck continues, “Suddenly we have people such as the CEO of Nvidia proclaiming that Moore’s Law is dead, and we see that evidenced in things such as CPU speeds. Processor speeds have stabilized basically around 5 GHz for over a decade now. What that means is relying on hardware to pick up the slack for sloppy or inefficient code design is no longer an option.”
In a paper by Neil Thompson, Innovation Scholar at MIT’s Computer Science and Artificial Intelligence Lab and the Initiative on the Digital Economy, the economist argues that slimming down software bloat is perhaps the most promising way to eke new performance gains out of existing chip technology. To prove the point, Thompson and his colleagues demonstrated that a “computationally intensive calculation” ran about 47 times faster when rewritten in C instead of Python, and that further optimizations tailored to an 18-core chip yielded still greater speedups. Eventually, the researchers produced in 0.41 seconds a result that had taken seven hours in Python.
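The calculation behind those numbers was a large dense matrix multiplication, and the gap is easy to reproduce in miniature. The sketch below is illustrative rather than the researchers’ actual benchmark harness: it times the same naive triple loop in C that an interpreted Python version would run, with N kept well below the paper’s 4,096 so the naive loop finishes quickly.

    /* Illustrative sketch (not the paper's harness): time a naive dense
     * matrix multiply in C. N is kept modest so the naive loop finishes
     * quickly. Build: cc -O3 matmul.c -o matmul */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 1024

    int main(void) {
        /* Heap allocation: N*N doubles would overflow the stack. */
        double *a = malloc((size_t)N * N * sizeof(double));
        double *b = malloc((size_t)N * N * sizeof(double));
        double *c = calloc((size_t)N * N, sizeof(double));
        if (!a || !b || !c) { fprintf(stderr, "out of memory\n"); return 1; }

        for (size_t i = 0; i < (size_t)N * N; i++) {
            a[i] = (double)rand() / RAND_MAX;
            b[i] = (double)rand() / RAND_MAX;
        }

        clock_t start = clock();
        /* The same triple loop a straightforward Python version would run;
         * simply being compiled accounts for the roughly 47x the paper
         * attributes to C alone. The i-k-j order keeps the innermost
         * accesses sequential for better cache behavior. */
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++)
                for (int j = 0; j < N; j++)
                    c[(size_t)i * N + j] += a[(size_t)i * N + k] * b[(size_t)k * N + j];
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("%dx%d multiply took %.2f s (spot check: c[0]=%.3f)\n",
               N, N, secs, c[0]);
        free(a); free(b); free(c);
        return 0;
    }

Compiled with optimizations, this finishes in seconds on a modern desktop, while the equivalent pure-Python loops at the same size run orders of magnitude longer. That is the gap from rewriting alone; the researchers widened it further with parallelism across cores and vector instructions.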
Making the Most of the Mainframe
In the early days of the mainframe, letting software devour valuable computing resources was never an option. That’s not necessarily because mainframers themselves were more meticulous about their code than the average programmer—there simply was never enough computing capacity to warrant wasting it. Harbeck furthered the point in the webinar: “Average address space in the 1960s was three bytes or 24 bits worth. None of the original mainframes had that much memory because it wasn’t available yet. The need then was to be very frugal with all the [programming] resources that were available. So that became part of the culture right from the very beginning—the culture of both the mainframe people and of COBOL programmers.”
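(For scale: a 24-bit address can reach at most 2^24 bytes, about 16 MB, and real System/360 machines shipped with only a small fraction of that, so every byte counted.)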
Even as hardware improved by leaps and bounds over early mainframes like the IBM System/360, Big Iron was still heavily scrutinized for maximum efficiency. As Harbeck explains, “There has never been a moment when the mainframe reached a point when it could take it easy because it had more capacity than anyone could use: businesses pay through the nose for every mainframe byte and CPU cycle, and the fact that every cost of the mainframe from hardware to applications to attached devices to support personnel is visible as a single, giant number to management means it never gets cut any slack.”
Harbeck’s argument—that Moore’s Law has been used as a crutch for years of bloated code—isn’t a new one, and Niklaus Wirth made the case eloquently enough that the idea of software persistently eating up hardware’s gains came to be known as Wirth’s Law. As Moore’s Law runs out, we can only hope that Wirth’s Law will meet a similar end, giving way to a bright future of increasingly efficient code regardless of platform.
To get the bigger picture of what Harbeck is talking about, along with additional best practices for managing the z/OS resources that affect your budget, register here and watch the entire webinar, “If It Ain’t Broke, Why Recompile COBOL?”