Understanding the Impact of RAM on Overall System Performance
Before I Begin
Before I get started, I want to point out that this article makes a couple of assumptions. First, I am assuming that you are running either Windows 2000, Windows XP, or Windows Server 2003 on a 32-bit computer. The second assumption that I am making is that the computer in question is relatively new, and that it isn’t grossly underpowered. The information in this article may not hold true for other versions of Windows, 64-bit systems, or for computers that are grossly out of date.
Windows and Memory
One of the main reasons why memory is such an important resource has to do with the way that Windows makes use of it. When the first version of Windows was created, memory was tough to come by. Memory was extremely expensive, and even if you could afford it, computers at the time were very limited as to how much memory they would accept. Even as recently as the mid 1990s, memory was still a huge issue. For example, my first Pentium computer came with 8 MB of RAM and only supported a maximum of 64 MB. This may seem ridiculously small by today’s standards, but at the time memory prices of as high as $50 per MB were not uncommon.
The high prices and limited motherboard capacities made memory a scarce commodity to say the least. Because of this, Microsoft has always allowed Windows to rely on virtual memory to some extent. The idea behind virtual memory is that since hard disk space costs so much less per megabyte than physical RAM, Windows could use hard disk space to compensate for shortcomings in the system’s RAM.
Virtual memory seemed like an ideal solution in the early days of Windows, but there were some drawbacks to using virtual memory that still hold true today. One problem with using virtual memory is that the hard disk is much slower than physical memory. In fact, memory access times are measured in nanoseconds, or billionths of a second. Hard disk access on the other hand is measured in milliseconds, or thousandths of a second.
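To put that gap in perspective, here is a rough back-of-the-envelope comparison. The latency figures are illustrative assumptions (roughly 60 ns for a RAM access and 9 ms for a hard disk seek), not measurements from any particular machine:

```python
# Illustrative latencies (order-of-magnitude assumptions, not benchmarks):
ram_access_ns = 60        # ~60 ns for a RAM access (assumed)
disk_access_ms = 9        # ~9 ms average seek for a desktop hard disk (assumed)

# Convert the disk figure to nanoseconds so the units match
disk_access_ns = disk_access_ms * 1_000_000
slowdown = disk_access_ns / ram_access_ns
print(f"Disk access is roughly {slowdown:,.0f}x slower than RAM")
```

With these assumed numbers, a single trip to the hard disk costs about as much time as 150,000 RAM accesses, which is why paging hurts so badly.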
Another problem with virtual memory is that it isn’t directly usable. For example, suppose that a page of memory is written to virtual memory and then later, the computer needs to access that page of data. The computer can’t access the data directly from the hard disk in any meaningful way. Instead, the page of data must be copied to RAM before the computer can work with the data. This process is known as paging.
As you can see, paging slows a system down because the computer has to stop and wait while data is copied from the hard disk into the system's memory. In practice, though, paging is even more inefficient than you might first expect. Here's the problem: as you will recall, the reason we have virtual memory in the first place is that the computer does not have enough RAM to support the operating system's needs. If the system's memory is full, the computer can't simply copy a page of data from the hard disk to RAM, because there is nowhere to put it. Instead, the operating system must first locate a page in RAM that is not currently being used and move that page out to the hard disk, making room for the page that needs to be brought in from the disk.
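The swap-out-then-swap-in dance described above can be sketched with a toy simulator. To be clear, this is not how Windows actually chooses its victims (Windows uses a working-set model); the sketch below uses simple least-recently-used eviction purely to illustrate the mechanics:

```python
from collections import OrderedDict

# Toy sketch of demand paging with eviction. Names and the LRU policy
# are my own simplifications, not Windows internals.
class TinyPager:
    def __init__(self, ram_frames):
        self.ram = OrderedDict()   # pages currently in physical RAM
        self.disk = {}             # pages swapped out to the pagefile
        self.ram_frames = ram_frames
        self.faults = 0

    def access(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)         # mark as recently used
            return
        self.faults += 1                        # page fault: page not in RAM
        if len(self.ram) >= self.ram_frames:
            # RAM is full: evict the least recently used page to disk first
            victim, data = self.ram.popitem(last=False)
            self.disk[victim] = data
        # now bring the requested page into RAM (from disk if it was paged out)
        self.ram[page] = self.disk.pop(page, f"data-{page}")

pager = TinyPager(ram_frames=2)
for p in ["A", "B", "C", "A"]:
    pager.access(p)
print(pager.faults)  # 4: page A faults twice, once after being evicted
```

Notice that the final access to page A is doubly expensive: page B has to be written out before A can be read back in.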
If you think that paging is inefficient, just wait; it gets worse. Paging is a process that must be managed. The computer must use memory to keep track of memory usage. That's right: the system has to dedicate some of its memory to keeping a record of which pages are in RAM and which pages are in virtual memory. Likewise, the system must spend a considerable number of CPU cycles moving data between physical RAM and virtual memory. To put it simply, computers run much more quickly when they do not have to worry about paging.
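To get a feel for that bookkeeping cost, here is a rough calculation using typical values that I am assuming (4 KB pages and 4-byte page table entries, neither of which is specific to any one Windows version):

```python
# Rough page-table bookkeeping cost on a 32-bit system (assumed values):
address_space = 4 * 1024**3   # 4 GB of address space
page_size = 4 * 1024          # 4 KB per page (assumed typical)
pte_size = 4                  # 4 bytes per page table entry (assumed)

pages = address_space // page_size          # number of pages to track
page_table_bytes = pages * pte_size         # memory spent just on tracking
print(pages, page_table_bytes // 1024**2)   # 1048576 pages, 4 MB of tables
```

In other words, with these assumptions, just describing a fully mapped 4 GB address space eats about 4 MB of RAM before a single page of real data is stored.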
OK, time for a reality check. At the beginning of this article, I mentioned that physical memory is often the upgrade that gives you the most bang for the buck. Hopefully by now you understand that the reason for this is that adding physical RAM reduces Windows’ dependency on virtual memory, which in turn frees up CPU and disk resources, making the computer run more efficiently. That being the case, you might assume that the best thing to do is to add lots of RAM to your system and disable virtual memory usage completely.
While this is certainly an option, it’s usually not a good idea to disable Windows’ virtual memory usage. Windows was designed to use virtual memory, and consequently, the operating system expects virtual memory to be available to it. Windows tends not to function as reliably if you disable virtual memory.
You might assume though that the more physical memory you put into your machine, the less virtual memory the machine needs. After all, virtual memory only exists to address RAM shortages, right? Well, this is where things get a little weird. The idea that adding extra RAM to your system reduces the machine’s need for virtual memory is only half true.
Here’s the thing. As a general rule, Microsoft recommends that you configure a machine’s virtual memory based on the amount of physical RAM that’s installed in the machine. More specifically, Microsoft recommends that your machine have 1.5 times as much virtual memory as physical memory. This means that if your machine has 512 MB of RAM, then Windows expects to have access to at least 768 MB of virtual memory.
Now, suppose that you decided that having 512 MB of RAM just wasn’t getting the job done, so you upgraded the machine to a total of 1 GB of RAM. In doing so, you have actually increased Windows’ virtual memory requirements: Windows would now expect the machine to have 1.5 GB of virtual memory available.
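The 1.5x rule of thumb is easy to check with a quick calculation (the helper function name here is my own, purely for illustration):

```python
def recommended_pagefile_mb(physical_ram_mb):
    """Microsoft's classic rule of thumb: pagefile = 1.5 x physical RAM."""
    return int(physical_ram_mb * 1.5)

print(recommended_pagefile_mb(512))   # 768 MB for a 512 MB machine
print(recommended_pagefile_mb(1024))  # 1536 MB (1.5 GB) after the upgrade
```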
All is not what it seems though. Just because you have increased the size of the machine’s pagefile (the file used as virtual memory) does not mean that the machine is using the pagefile more heavily. Usually, the opposite is true: installing more memory makes it less likely that Windows will have to page anything at all. Even if Windows does still use virtual memory to some extent, the extra RAM helps to ensure that pages belonging to the application running in the foreground are not paged out. This keeps that application responsive and gives the user better overall performance.
Is There a Limit?
As you may recall, at the beginning of this article, I mentioned that the information in this article only applied to 32-bit systems, and not necessarily to 64-bit systems. The truth is that even 64-bit systems rely on virtual memory, but 32-bit and 64-bit versions of Windows use completely different memory models.
Because 32-bit systems use 32-bit memory addresses, they can only address up to 4 GB of RAM. A 64-bit system, on the other hand, could theoretically address up to 16 exabytes of RAM (that’s over 17 billion GB). In reality though, there are few, if any, 64-bit systems that support 16 exabytes of RAM. Building a machine that supported that much memory would be extraordinarily expensive. To keep costs down, many manufacturers impose RAM address space limits that fall somewhere between the 4 GB limit of 32-bit machines and the theoretical 16 exabytes that a 64-bit system should be capable of addressing. Most existing 64-bit systems limit physical RAM to somewhere between 8 GB and 256 TB.
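Both of those figures fall straight out of the address width, as a quick calculation shows:

```python
# How much memory can an N-bit address reach? 2^N bytes.
def addressable_bytes(bits):
    return 2 ** bits

gb = 1024**3
print(addressable_bytes(32) // gb)   # 4 GB for a 32-bit address
print(addressable_bytes(64) // gb)   # 17179869184 GB (16 exabytes) for 64-bit
```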
So what does this 4 GB limit mean for 32-bit machines running a Windows operating system? Windows is designed to address a full 4 GB memory space, and it splits that 4 GB of address space into two separate 2 GB halves. The upper 2 GB address space is used by the Windows operating system, and the lower 2 GB address space is used for user mode processes (applications).
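Under that default split, the boundary sits at address 0x80000000 (the 2 GB mark). A tiny sketch of the division:

```python
# Default 32-bit Windows split: addresses below 0x80000000 (2 GB) belong
# to the user mode process; addresses at or above it belong to the kernel.
# (This assumes the default layout, i.e. no /3GB switch.)
KERNEL_BASE = 0x80000000

def is_kernel_address(addr):
    return addr >= KERNEL_BASE

print(is_kernel_address(0x7FFFFFFF))  # False - top of the user address space
print(is_kernel_address(0x80000000))  # True  - start of the kernel's space
```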
As a side note, there is actually a way to change the way that Windows allocates the address spaces. You might have seen the occasional Windows Server deployment in which a /3GB switch was used in the server’s BOOT.INI file. The /3GB switch changes the memory allocation so that Windows is only allocated 1 GB of address space, and user mode processes are allocated 3 GB of address space. Splitting the address space this way helps Windows to better manage high-demand applications such as Exchange Server. However, Windows is configured with a 2 GB operating system address space for a reason: using the /3GB switch can severely impact Windows’ ability to run multiple applications simultaneously. Furthermore, you should never use the /3GB switch on Small Business Server or on a domain controller.
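For reference, a BOOT.INI entry using the switch looks something like the following. The ARC path and the description string are purely illustrative; your machine’s values will differ:

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
```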
So with this in mind, the million dollar question is, how is virtual memory implemented on a system that has 4 GB of physical RAM? Unfortunately, I have not been able to get a straight answer from Microsoft, and I don’t actually own a machine that has 4 GB of RAM, so I am basing this answer on logic. If anybody has a definite answer though, please send me an E-mail message because I would like to know.
It seems to me that if Windows can only allocate 4 GB of address space, then there would be no reason to keep virtual memory enabled on a machine with 4 GB of RAM. Even if some small amount of virtual memory were required though, there is no way that the rule about setting the virtual memory size to 1.5 times the physical memory size would apply. Doing so would mean that the machine would have 6 GB of virtual memory and 4 GB of physical memory for a total of 10 GB of memory space. That might be OK for a 64-bit system, but it wouldn’t really work for a 32-bit system with a 4 GB address space limit.
In this article, I have explained that memory can have a huge impact on a system’s performance. As physical RAM increases, the machine’s dependency on virtual memory decreases. The less a computer has to depend on virtual memory, the more efficiently that machine will run.