Computer technology has always revolved around miniaturization. From the early days of computing, the directive for hardware designers and manufacturers has been to fit more of everything into a smaller package. The fact that the cell phone in your pocket has more compute power than the entirety of NASA had when it put a man on the moon is nearly incomprehensible. To get an idea of the scale involved, consider that the Apollo 11 guidance computer achieved roughly 40,000 instructions per second, weighed 70 lbs., and was entrusted with the lives of three men traveling 250,000 miles through the vacuum of space, hanging out in lunar orbit for a couple of days before making the 250,000-mile trek back to Earth. A modern smartphone can achieve over 3 billion instructions per second, and a newer Intel processor sitting in one of those servers in your data center can deliver about 300 billion, yet somehow I can't go more than a few weeks without needing to reboot one or both.
The massive growth and increased density of computing and storage have freed today's software engineers from the constraints that bound their predecessors: they can write code without accounting for every last bit of memory or dealing with clock speeds measured in kHz. Good thing, too, because as the pace of software development has increased, programmers no longer have the luxury of tearing down and rebuilding their code with every iterative release. The result is code bloat, and even good code written on top of bad code will likely run poorly. So when an application suite like Microsoft Office (estimated by some at 40 million lines of code) runs on top of an operating system like Windows 8 (conservatively another 40 million lines of code), it cannot be surprising that bugs and inefficiencies manifest as user frustration.
In addition to performance considerations, there are serious security implications to all of this. Common sense tells you that as a code base gets bigger, the potential for vulnerabilities increases. This is in no way a criticism of Windows, Office, or any other software; it is a modern miracle that this code can be maintained and remain functional as hundreds of engineers make changes with each version, update, patch, and hotfix, all while preserving compatibility with a wide range of hardware and maintaining the APIs that let third parties write even more code to run on top of the stack. In the end, though, performance is performance, and an end user who watches an application crash doesn't want to hear how many lines of code got them to the point of not being able to check their email, or how many hours were spent unsuccessfully trying to keep their credit card data from being stolen.
So how to do things better?
While there is no silver bullet for this problem, there is a change coming to the Windows Server ecosystem that may be the biggest the server OS has seen to date. With Windows Server 2016, Microsoft will introduce Nano Server, which it hopes will become the dominant cloud OS.
To understand the progression to Nano Server, one needs only to look back at one of Microsoft's most underused deployment options, Server Core. First available in Windows Server 2008, Server Core provides a reduced installation footprint by removing large parts of the OS that aren't needed to run the server roles for which Server Core was designed. Most notably, Server Core has no GUI, and this is a wonderful thing. However, old habits die hard, and people have remained anchored to managing systems from the GUI. To this day, I find it hard to fathom a reason not to deploy every Domain Controller on Server Core. There is no need to RDP to a server in order to manage it; all of the tools you need should be installed on a remote machine used for management. Better still, avoid the GUI tools altogether and manage everything with a remote PowerShell session. Your server has enough to do without having to deal with your local logon activity. Shedding the millions of lines of code, and the CPU and RAM dedicated to rendering the GUI, can only help to make better use of those resources, and removing the bits for services that will never be used reduces your attack surface, your installation footprint, and the number of updates you need to apply on the second Tuesday of each month.
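To make that concrete, here is a minimal sketch of GUI-free management over PowerShell remoting, assuming remoting is enabled and using a hypothetical Server Core machine named DC01:

# From your management workstation, open a remote session instead of RDPing to the server
$session = New-PSSession -ComputerName DC01 -Credential (Get-Credential)

# Run management tasks remotely; nothing has to be installed or rendered on the server itself
Invoke-Command -Session $session -ScriptBlock { Get-WindowsFeature | Where-Object Installed }
Invoke-Command -Session $session -ScriptBlock { Get-Service | Where-Object Status -eq 'Running' }

# Or drop into an interactive prompt on the remote machine
Enter-PSSession -Session $session

# Clean up when finished
Remove-PSSession $session

The point is that the tooling lives on your workstation; the server does nothing but serve.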
Server Core continued in Windows Server 2012. In an effort to offer flexibility and increase adoption, however, Microsoft provided a way to switch between Server Core and a full server installation with only a reboot. Whereas previously a complete rebuild of the server was required, the GUI was now simply a feature on the server that could be added and removed like any other. In some ways this was a genius move, relegating the GUI to what it should be – a tool to be installed only when and where appropriate, rather than a platform on which all applications are accessed – but in another way it was a step back from 2008. The very fact that the bits for the GUI remained on the server meant the footprint and attack surface could have been reduced even further by omitting them entirely, as the 2008 version of Server Core did.
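That switch is itself just a feature operation. A quick sketch, assuming the default feature names in Windows Server 2012 and a console with the ServerManager cmdlets available:

# Convert a full installation to Server Core by removing the GUI features, then reboot
Uninstall-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart

# If you later decide you need the GUI back, add it the same way
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart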
Nano Server, as part of the Server 2016 ecosystem, makes no such concessions. While conversations about most product releases begin with all the bells and whistles, the dialogue for Nano Server starts with what it doesn't have. No GUI. No 32-bit support. No MSI. No local logon. No RDP support. What this all adds up to (or subtracts down to?) is no bloat. Compared to the full server installation option, Nano Server boasts a 93% smaller VHD, 92% fewer critical patches, and 80% fewer reboots. Nano Server employs a "Zero Footprint" model: the base OS should always be the same upon deployment, and all binaries for roles or features to be installed live outside of the server.
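In practice, that model means you compose the roles you need into the image before you deploy it rather than installing them on a running server afterward. The sketch below assumes the NanoServerImageGenerator PowerShell module shipped on the Windows Server 2016 media; the paths, the computer name Nano01, and the exact parameter set are illustrative and may differ between builds:

# Load the image-building module from the installation media (path is an assumption)
Import-Module .\NanoServerImageGenerator\NanoServerImageGenerator.psd1

# Build a Nano Server VHDX for a Hyper-V guest, injecting the Hyper-V and clustering
# packages from the media at image-creation time instead of onto a live server
New-NanoServerImage -Edition Standard -DeploymentType Guest `
    -MediaPath D:\ -BasePath .\Base -TargetPath .\Nano01.vhdx `
    -ComputerName Nano01 -Compute -Clustering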
Initially, Nano Server will serve as a platform for the frameworks behind cloud-based applications and for infrastructure – Hyper-V clusters and Scale-Out File Server clusters. This is the logical place for Microsoft to start. As cloud platforms like Azure and Office 365 continue to grow, Microsoft needs a lean platform that, at scale, will save it countless computing resources and shrink its attack surface. So although the largest environments initially stand to gain the most from Nano Server, Microsoft has said that this is the first step in a deep refactoring of the Server family. In the future we can expect to see additional roles become available so that, hopefully, Nano Server can serve as the platform for your Active Directory services and print servers, and, who knows, maybe my dream of Exchange on a headless, GUI-free OS will become a reality.
As you look to squeeze one more VM onto your already overcommitted hypervisor, relief might be just around the corner.