By Cynthia Unwin

Episode Two: What Makes Mainframes Different?

If you started this journey with me in "Episode One", you will have read that mainframe computers are fast, resilient, secure and backwards compatible. In this episode I'd like to dig a bit deeper into the fundamental differences between mainframes and standard commodity servers, and into what those differences mean for an agile, digitally progressive company with a mainframe in its data centre.


Mainframes are different from the commodity servers that fill corporate data centres, and they operate at a fundamentally different scale. It is true that you can achieve large-scale, redundant compute power with rafts of small commodity servers working in tandem, and I'll admit that this intuitively feels more flexible and agile than depending on a custom-configured mainframe behemoth. While researching this episode, I found that the internet likes to invoke images of AWS, Google or Facebook filling huge data centres with accessible commodity computers to provide mammoth compute power, swapping out failed units seamlessly and providing processing power for a myriad of tasks. This image somehow feels egalitarian and accessible; it feels like something we already know how to do, just scaled up.

Unfortunately, this image is somewhat misleading. Google, and to a much greater extent AWS, design and build at least some of their own hardware and network equipment, and they do it for the same reasons that mainframe suppliers do. Replicated standard-configuration servers, the ones that fill data centres all over the world, cannot even begin to compete with the efficiency, flexibility and resilience of mainframes. These servers can do almost anything, but they rarely do anything exceptionally well.

For the moment, let us consider just efficiency. Where a standard server can start to lose efficiency at just over 20% of capacity, mainframes can run at 90% capacity for years at a time without performance ever suffering. They do this through component failures, OS upgrades and massive changes in workload, and they do it almost entirely without needing to be shut down. The question, then, is how they accomplish this efficiency.


There are a variety of things that make a mainframe fundamentally more efficient than an equivalent cluster of commodity servers. Here are three that stand out to me:


1. Starting from the bottom, mainframes use built-for-purpose hardware that is self-managed rather than relying on software drivers.


A mainframe uses task-specific components. For example, it isn't just configured with many efficient processors; it is configured with processors of five or six different types, each designed and optimised for a specific class of processing task.

In addition, mainframes connect to peripheral hardware devices that manage tasks such as I/O, networking and encryption, rather than running software drivers to manage these processes internally. A peripheral device connected to a mainframe has its own processing power and memory to manage its task, its own redundant power and components, and often even its own cooling. These devices know how to do one thing well, and responsibility for that "thing" can be effectively offloaded from the core system.
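The offload idea can be sketched as a simple dispatcher: work is routed by task type to a dedicated handler, the way a mainframe steers encryption to a crypto coprocessor and I/O to channel subsystem hardware. This is a loose analogy only, with hypothetical handler names, not any real mainframe API:

```python
# Toy analogy: each specialised "device" is a handler that does one
# thing well; the dispatcher offloads work to it when one exists.
def handle_crypto(payload):
    return f"encrypted({payload})"   # stands in for a crypto card

def handle_io(payload):
    return f"wrote({payload})"       # stands in for the channel subsystem

def handle_general(payload):
    return f"computed({payload})"    # general-purpose processor

DISPATCH = {
    "crypto": handle_crypto,
    "io": handle_io,
}

def run(task_type, payload):
    # Offload if a specialised handler exists; otherwise fall back
    # to the general-purpose path.
    handler = DISPATCH.get(task_type, handle_general)
    return handler(payload)
```

The point of the pattern is that the core system never spends cycles on work a specialised component can absorb; it only decides where the work goes.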


2. Mainframe operating systems such as z/OS feature exceedingly efficient memory management. In z/OS, each user is assigned a virtual address space that appears to provide access to all the memory resources of the host machine. This allows massive jobs and processes from different users to execute concurrently without resource contention and without forcing system resources to sit idle. It works because even very large programs only run small sections of code, or access (relatively) limited amounts of data, at any given time. The operating system uses a virtual addressing scheme to manage paging and swapping on a very granular scale, keeping each user's currently executing code in real memory and code not currently executing on auxiliary storage. This ensures fast execution of code while maximising resource utilisation.
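The paging idea can be shown in miniature. Here is a toy demand pager (my own illustration, not z/OS's actual algorithm) in which "real memory" holds only a few page frames and the least recently used page is evicted to auxiliary storage whenever a new page is needed:

```python
from collections import OrderedDict

class ToyPager:
    """Minimal LRU demand pager: real memory holds a handful of frames;
    everything else lives in auxiliary storage until it is referenced."""
    def __init__(self, frames):
        self.frames = frames          # number of real-memory frames
        self.real = OrderedDict()     # page -> contents, in LRU order
        self.aux = {}                 # "paged out" auxiliary storage
        self.faults = 0

    def touch(self, page):
        if page in self.real:
            self.real.move_to_end(page)   # mark as recently used
            return "hit"
        self.faults += 1
        if len(self.real) >= self.frames:
            victim, data = self.real.popitem(last=False)  # evict LRU page
            self.aux[victim] = data
        # Bring the page in from auxiliary storage (or create it fresh).
        self.real[page] = self.aux.pop(page, f"data-{page}")
        return "fault"

pager = ToyPager(frames=3)
refs = [1, 2, 3, 1, 4, 1, 2]
results = [pager.touch(p) for p in refs]
```

Even with only three frames, the frequently referenced page 1 stays resident throughout, which is the whole trick: active code stays in real memory while idle pages wait on auxiliary storage.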


3. Finally, mainframes can run multiple operating systems simultaneously without conventional virtualisation. While mainframes can, and regularly do, support more standard virtualisation, they most commonly use logical partitions (LPARs). These partitions divide up system resources and allocate them to different running operating systems without expending resources on a host operating system to manage the running instances. This means you can effectively divide your mainframe into logically independent machines, and even dynamically move resources between them, while incurring minimal management overhead. Additionally, because the partitions exist on the same mainframe, they can all communicate without invoking external network resources, which makes their communication exceedingly fast and secure.
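A rough sketch of the partitioning model, under the simplifying assumption that resources are just counts of processors and gigabytes of memory (the real partitioning firmware is of course far more sophisticated than this):

```python
class Machine:
    """Toy LPAR-style model: the machine's processors and memory are
    carved into logical partitions with no host OS in between, and
    capacity can be shifted between partitions while they run."""
    def __init__(self, cpus, memory_gb):
        self.free = {"cpus": cpus, "memory_gb": memory_gb}
        self.lpars = {}

    def create_lpar(self, name, cpus, memory_gb):
        assert cpus <= self.free["cpus"] and memory_gb <= self.free["memory_gb"]
        self.free["cpus"] -= cpus
        self.free["memory_gb"] -= memory_gb
        self.lpars[name] = {"cpus": cpus, "memory_gb": memory_gb}

    def move(self, src, dst, cpus=0, memory_gb=0):
        """Dynamically shift resources between running partitions."""
        assert self.lpars[src]["cpus"] >= cpus
        assert self.lpars[src]["memory_gb"] >= memory_gb
        self.lpars[src]["cpus"] -= cpus
        self.lpars[dst]["cpus"] += cpus
        self.lpars[src]["memory_gb"] -= memory_gb
        self.lpars[dst]["memory_gb"] += memory_gb

box = Machine(cpus=16, memory_gb=512)
box.create_lpar("PROD", cpus=10, memory_gb=384)
box.create_lpar("TEST", cpus=4, memory_gb=96)
box.move("TEST", "PROD", cpus=2)   # shift capacity toward production
```

The detail worth noticing is what is absent: there is no host operating system object mediating the partitions, which is exactly the overhead LPARs avoid.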


This efficiency makes mainframes, despite their initial price tags, cost-effective to operate over the long term, and makes them appropriate hosts for workloads that must consistently run securely and exceedingly quickly. Combined with their resiliency, superior batch processing capabilities, fine-grained monitoring and overall product maturity, it creates a powerful platform with great potential to complement newer cloud strategies in a hybrid model.


Talk to you again soon,

Cynthia

