Server Hardware Explained (Part 9)

In this article, you will learn about CPU sockets, processor cores, and CPU architecture, and how adding CPU cores to a server can boost its processing power.

Recently in this article series, I have spent a considerable amount of time talking about server storage hardware. Now, I want to turn my attention to CPU architecture.

On the surface, server CPU architecture really isn’t all that different from what you might encounter in PCs. Most of the PCs that have been manufactured in the last few years feature multicore processors. A multicore processor is a CPU with two or more separate processors (or to be more technically precise, two or more processing cores) integrated onto a single chip.

Although it is extremely common for PCs to include multicore processors, PCs are almost always limited to using a single physical CPU. Servers, on the other hand, are often designed to accommodate at least two physical CPUs, and sometimes more. The number of physical CPUs that a server can accommodate is often referred to as the number of sockets.

The overwhelming popularity of server virtualization has forced administrators to look closely at the total number of cores available on servers that are being used as virtualization hosts. Even though CPU cores are rarely assigned to virtual machines in a one-to-one ratio, the number of available cores in a server has a direct impact on the number of virtual machines that the server can host. Essentially, the more CPU cores a server has, the more virtual machines it can potentially host. I say potentially because there are many factors other than CPU cores that limit the number of virtual machines a host server can accommodate. The most common limiting factor, for example, is physical memory.
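To put rough numbers behind this idea, here is a minimal Python sketch. The core count, the 4:1 vCPU-to-core overcommit ratio, and the per-VM sizing below are all made-up example values, not recommendations:

```python
# Rough, illustrative estimate of how many VMs a host's cores might support.
# Every number here is a hypothetical example value.
physical_cores = 16       # total CPU cores in the host
overcommit_ratio = 4      # assumed vCPUs per physical core
vcpus_per_vm = 2          # vCPUs assigned to each virtual machine

total_vcpus = physical_cores * overcommit_ratio
max_vms = total_vcpus // vcpus_per_vm
print(f"Rough upper bound: {max_vms} VMs")  # memory usually caps this first
```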

You can determine the total number of CPU cores that exist within the system by multiplying the number of cores per processor by the number of processors in the system. You will notice that I did not say to multiply the number of cores per processor by the total number of sockets in the system. The reason for this is that oftentimes sockets are empty. Many server manufacturers design their system boards with extra sockets as a way of expanding a server’s capabilities, but in the interest of keeping the price affordable they may not always include a CPU within each socket.
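To make the arithmetic concrete, here is a short Python illustration. The per-processor core count and the number of installed processors are example values; os.cpu_count() is included only for comparison and reports logical processors, which may include hyper-threaded cores:

```python
import os

# Total cores = cores per processor x number of installed processors
# (installed processors, not sockets, since some sockets may be empty).
cores_per_processor = 8    # example value from the CPU's spec sheet
installed_processors = 2   # CPUs actually present in the server

total_cores = cores_per_processor * installed_processors
print(f"Total CPU cores: {total_cores}")

# For comparison, the operating system reports logical processors,
# which can include hyper-threaded cores on top of the physical ones.
print(f"Logical processors seen by the OS: {os.cpu_count()}")
```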

This brings up another interesting point. Server system boards can usually accommodate a variety of CPU choices. For example, I have seen system boards that support the use of four or five different processors. The processors that are supported by the system board often vary in terms of the number of cores available and in the overall clock speed.

Generally speaking, processors with higher clock speeds offer higher performance. Of course, this assumes that the CPU architecture remains the same. You can’t judge a CPU’s performance based solely on its clock speed. Some CPU designs are more efficient than others. As such, there are cases in which a CPU with a lower clock speed may outperform a CPU with a higher clock speed. The overall performance boils down to the number of instructions that the CPU is able to process per second. Even though the clock speed does limit the total number of instructions that the CPU can process each second, there are other factors that also play into this. Most modern CPUs are able to execute multiple instructions within each clock cycle.
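A quick back-of-the-envelope calculation shows why clock speed alone can mislead. The clock speeds and instructions-per-cycle (IPC) figures below are invented purely for illustration:

```python
# Illustrative comparison: a lower-clocked CPU with a more efficient design
# (higher instructions per cycle, or IPC) can outrun a higher-clocked one.
cpu_a_ghz, cpu_a_ipc = 3.5, 2   # higher clock, less efficient design
cpu_b_ghz, cpu_b_ipc = 2.8, 4   # lower clock, more efficient design

# Instructions per second = clock cycles per second x instructions per cycle
cpu_a_ips = cpu_a_ghz * 1e9 * cpu_a_ipc
cpu_b_ips = cpu_b_ghz * 1e9 * cpu_b_ipc
print(f"CPU A: {cpu_a_ips:.2e} instructions per second")
print(f"CPU B: {cpu_b_ips:.2e} instructions per second")  # B wins despite its lower clock
```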

Another thing that you need to understand about server CPU architecture is that adding processing power to the server does not deliver truly linear performance increases. To show you what I mean, let’s forget about server virtualization for a moment. Virtualization complicates things, and for right now I would like to keep this example as simple as I can, since this article series is intended for beginners. With that in mind, imagine that you have a server with two sockets but only one physical processor. Let’s also assume that this server is handling a single workload. Perhaps it is running a database application of some sort. Over time the server’s workload increases, and you decide to add a second processor in hopes of giving the server a performance boost.

Logically, it would be easy to assume that adding a second physical processor would double the server’s performance. Unfortunately, this is simply not the case. For one thing, there is a significant degree of overhead associated with the task scheduling process. In other words, the server’s operating system has to actively decide which CPU to assign various processing tasks to. The simple act of juggling resources between CPUs requires a degree of overhead.

Prior to the widespread use of multicore CPUs, the task scheduling process could consume as much as 50% of the total capabilities of the new processor. In other words, adding a second CPU to a server would only deliver about a 50% performance gain, and that is under ideal circumstances. Today, multicore CPUs have done a lot to change the way that task scheduling works. Unfortunately, I have not been able to find any reliable benchmarks detailing the degree of overhead that can be expected from the task scheduling process in modern CPUs.
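To make the idea of diminishing returns concrete, here is a toy model in Python. It is not based on any real benchmark: the 50% yield for the second CPU echoes the rule of thumb above, and the decaying yield applied to additional CPUs is purely an assumption for illustration:

```python
# Toy model of non-linear scaling: each added CPU contributes less than a
# full CPU's worth of work because of scheduling overhead. The decay used
# here is an assumption, not a measured figure.
def effective_cpus(physical_cpus, added_cpu_yield=0.5):
    total = 1.0                         # the first CPU delivers full value
    for n in range(1, physical_cpus):
        total += added_cpu_yield ** n   # each extra CPU yields less (assumed)
    return total

for cpus in (1, 2, 4):
    print(f"{cpus} CPU(s) -> roughly {effective_cpus(cpus):.2f}x performance")
```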

A moment ago I mentioned that prior to the release of multicore CPUs, adding a second physical CPU to a server might result in a 50% performance gain under ideal conditions. The reason I used the phrase “under ideal conditions” is that not every server can benefit from additional CPUs, or additional CPU cores for that matter. The reason for this has to do with threading. A thread is an individual unit of execution. If an application is designed to be single-threaded, then the application cannot be split in a way that allows portions of the workload to be serviced by multiple CPUs or CPU cores. The only way that an application can benefit from having multiple CPUs or CPU cores is if the application is multithreaded. Multithreaded applications can potentially take advantage of multicore systems because each thread can run on a separate CPU core.
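Here is a minimal sketch of splitting a workload into independent units of execution so that multiple cores can share it. It uses Python’s process pool rather than threads, because CPython’s global interpreter lock prevents CPU-bound threads from running in parallel, but the principle of dividing a workload across cores is the same:

```python
from concurrent.futures import ProcessPoolExecutor

# A CPU-bound task that can be run independently on separate chunks of work.
def count_primes(limit):
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

if __name__ == "__main__":
    chunks = [50_000, 50_000, 50_000, 50_000]  # four independent units of work
    # Each chunk can be scheduled on a different core, so throughput scales
    # with core count; a single monolithic call could use only one core.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, chunks))
    print(f"Primes found per chunk: {results}")
```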

CPU architecture

When shopping for server hardware, you will likely see terms such as x86, x64, and Itanium used to describe the CPUs that are available in server hardware. These specifications are what is known as the CPU architecture. Essentially, the CPU architecture defines what type of code the CPU can run. For instance, an x86 processor is not capable of running code that was written for 64-bit or Itanium processors. The exception to the rule is that most 64-bit servers will allow you to run 32-bit code, so long as doing so is supported by the server’s operating system.
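If you are curious which architecture a given machine reports, Python’s standard library can tell you; the exact strings vary by operating system:

```python
import platform

# Report the CPU architecture as the operating system sees it. Typical
# values include 'x86_64' or 'AMD64' for x64, 'i386'/'i686' for 32-bit
# x86, and 'ia64' for Itanium.
print(platform.machine())
```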

Conclusion

In this article series, I have tried to convey the idea that although there are similarities between server hardware and PC hardware, server hardware does have some rather significant differences. The vast majority of these differences center around things like storage, CPU, memory, and management.
