Caching and Memory – Too Much of a Good Thing?

In general, when we talk about computing, more is better. More disk space, more memory, more power supplies, more NICs, more servers (or virtual servers), and more power. More, more, more! How can you go wrong with more?

While the old chestnut "more is better" is often true, there is another saying that applies just as often: "you can have too much of a good thing."

This is often the case when assigning physical memory to cache. Check out the figure below:

[Figure: Cache settings dialog showing the "Percentage of free memory to use for caching" option]

The last option, "Percentage of free memory to use for caching", assigns a percentage of physical memory to caching. This is not a dynamically calculated value, as the description might make it sound. It's not really a percentage of "free" memory; it's a percentage of total physical memory, and the amount reserved for caching does not change, even if that memory is needed by other processes running on the firewall.

The default value is 10%, and there's little reason to change it. Even if content has to be delivered from the disk cache, that's still a lot faster than fetching it over the Internet. The problem with increasing this value is that if other processes need memory, they'll have to go to the page file, which will generate a lot of hard page faults, and that slows down the entire firewall.
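To make the arithmetic concrete, here's a minimal sketch (plain Python, not anything from TMG itself; the function name and the 4 GB figure are just illustrative assumptions) of how that fixed reservation works out:

```python
# Rough sketch: the cache reservation is a fixed slice of *total*
# physical RAM, not a dynamically adjusted share of free memory.

def cache_reservation_mb(total_physical_mb, cache_percent=10):
    """Return the number of megabytes reserved for the RAM cache."""
    return total_physical_mb * cache_percent // 100

# Example: a firewall with 4 GB of physical RAM and the default 10% setting
total_mb = 4096
print(f"Reserved for cache at 10%: {cache_reservation_mb(total_mb)} MB")   # ~410 MB

# Raising the setting to 50% reserves 2 GB, and that reservation does not
# shrink when other firewall processes need memory -- they page to disk
# instead, generating hard page faults.
print(f"Reserved for cache at 50%: {cache_reservation_mb(total_mb, 50)} MB")
```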

Yuri Diogenes discusses this subject in more detail on his blog over at:

http://blogs.technet.com/yuridiogenes/archive/2009/05/16/a-failed-attempt-to-optimize-browsing-performance.aspx

HTH,

Tom

Thomas W Shinder, M.D., MCSE
Sr. Consultant / Technical Writer

Prowess Consulting www.prowessconsulting.com

PROWESS CONSULTING | Microsoft Forefront Security Specialist
Email: [email protected]
MVP — Forefront Edge Security (ISA/TMG/IAG)
