Central processing units (CPUs) are usually classified according to their architecture.
Historically, desktop computers (especially non-Apple computers) were almost entirely based on Intel's 32-bit x86 architecture, with more recent machines supporting AMD's 64-bit extensions. The x86 architecture has never dominated because it was the best design for the future, but merely because of compatibility: software written for one architecture won't run on another.
The one company that pushed more aggressively for new architectures was Apple, which has moved its Macintosh line through four major CPU architectures: Motorola 68k, PowerPC, Intel x86, and now Arm. Not only that, its earlier Apple II series ran yet another CPU architecture, the 6502. Because Apple has made such transitions so many times, it has gotten quite good at them. Its Rosetta technology makes each transition essentially transparent to users, translating code from the old architecture to the new one on the fly, at a small performance cost but with a seamless experience.
For non-Apple users, the two computer-like devices most likely to use non-x86 instruction sets were gaming consoles and cellular phones. While gaming consoles have used quite a variety of CPU architectures over the years, cellular phones have generally stuck with the Arm architecture because it combines performance with power efficiency, which is essential for such portable devices. Arm's efficiency advantage comes from two sources. First, x86's primary goal has always been maintaining backwards compatibility. As features are added, the chips become increasingly complex because they must support both the new and the old ways of working. Second, the architecture itself: Arm follows RISC principles, which aim to remove complexity from the chip and offload it onto the software. Essentially, the compiler (the program that translates human-readable code into code the computer can execute) does more work so the processor can do less.
The Arm architecture itself adds further power savings. Arm chips typically combine cores with different power profiles: energy-efficient cores for everyday tasks and high-performance cores for demanding ones (Arm calls this arrangement big.LITTLE). So, if the user is engaged in an intensive task (such as a game on their smartphone), the high-performance cores do the work; otherwise, the energy-efficient cores take over. By 2010, Arm had over 90% of the cell phone market, including both iPhone and Android phones. Apple has continually pushed to make its iPhones more and more powerful, and in doing so has pushed the bounds of what the Arm architecture can do.
During this time, another industry realized it could greatly benefit from reduced power consumption: the server market. Two of the biggest costs of running large data centers are power and cooling. More power-efficient chips help with both, because cooling is needed to remove heat, and heat is largely the byproduct of hardware inefficiency. Make the hardware more efficient and you reduce both power consumption and heat.
In 2018, Amazon Web Services (AWS) released its Graviton processors, a new line of Arm-based chips for its servers. While these were less powerful than their x86 equivalents, AWS also offered them at a discount. In 2020, AWS released an upgrade, the Graviton 2, which packs 64 cores on a chip. The Graviton 2 is now more powerful than the equivalent x86 processors, and yet AWS still offers Graviton 2 instances at a discount to the x86 ones, presumably because the chips benefit AWS in terms of power, heat, and cost. AWS has even released the next generation, the Graviton 3, on a preview basis; it can handle double the amount of data per clock cycle.
AWS is not the only cloud provider seeing the benefits of Arm processors. Azure started its rollout of Arm-based processors in 2017, with a more general rollout (including Linux servers) in June of 2020. Last month, Azure began its preview of a new Arm processor, the Ampere Altra.
The rise of container technology has made switching to Arm in the datacenter much more straightforward. Rather than having to take entire servers into account in their deployments, developers ship minimalist containers that package only the absolute necessities. The server infrastructure then decides how those containers actually get scheduled onto real hardware.
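To make the idea of a "minimalist container" concrete, here is a hedged sketch of how small such a package can be. The application, image name, and base image are hypothetical choices for illustration, not from the article:

```shell
# Hypothetical sketch: a minimal container image that packages only a
# small base image, one runtime, and one application file.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN apk add --no-cache python3
COPY app.py /app.py
CMD ["python3", "/app.py"]
EOF

# Build the image. The developer never specifies which physical server
# will run it; the cluster's scheduler makes that decision later.
docker build -t myapp:latest .
```

Because the image declares everything the application needs, the scheduler is free to place it on whatever hardware is available, which is exactly the flexibility that eases an architecture switch.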
On AWS, for instance, if you are using its Elastic Container Service (ECS) with Fargate, all that is needed to use the new Arm-based processors is to build your container image for them (set the platform to linux/arm64). AWS will then schedule your container on Arm hardware at a heavily discounted price.
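A hedged sketch of that build step using Docker's buildx tool follows; the registry path and image name are placeholders, and you should check AWS's current documentation for the exact Fargate configuration:

```shell
# Hypothetical sketch: build and push an arm64 image for use with
# AWS Fargate. The account ID, region, and image name are placeholders.
docker buildx build \
  --platform linux/arm64 \
  -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest \
  --push .

# In the ECS task definition, the runtimePlatform field then selects
# the Arm-based (Graviton) processors:
#
#   "runtimePlatform": {
#     "cpuArchitecture": "ARM64",
#     "operatingSystemFamily": "LINUX"
#   }
```

Note the division of labor: the image is built once for the target architecture, and the task definition tells Fargate which kind of hardware to schedule it on.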
In all, the same features that made Arm popular in the cell phone market promise to make it popular in the server room and the cloud. In both of those settings, desktop compatibility is not an issue, so better, more efficient architectures can take hold more easily.
You may also wish to read:
How does a Kubernetes Cluster work? A general overview of the Kubernetes environment. The goal here is to provide you with a broad understanding of the components of Kubernetes.