In general when talking about computing, more is better. More disk space, more memory, more power supplies, more NICs, more servers (or virtual servers) and more power. More more more! How can you go wrong with more?
While the old chestnut of “more is better” is often true, there is another saying that applies just as often: “you can have too much of a good thing.”
This is often the case when assigning physical memory to cache. Check out the figure below:
The last option, Percentage of free memory to use for caching, assigns a percentage of physical memory to caching. This is not a dynamically calculated figure, as the description might make it sound. It’s not really a percentage of “free” memory; it’s a percentage of total physical memory, and the amount reserved for caching does not change, even if that memory is needed by other processes running on the firewall.
The default value is 10% and there’s little reason to change it. Even if content has to be delivered from disk cache, that’s still a lot faster than trying to get the content over the Internet. The problem with increasing this value is that if other processes need memory, they’ll have to go to the page file, which will generate a lot of hard page faults, and that slows down the entire firewall.
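To make the behavior concrete, here’s a small sketch of the arithmetic (this is illustrative only, not the firewall’s actual implementation): the reservation is a fixed slice of total physical RAM, computed from the percentage setting, and it does not shrink when other processes come under memory pressure.

```python
# Illustrative sketch (not ISA/TMG's actual code): the cache reservation
# is a fixed percentage of *total* physical RAM, computed once -- it is
# not a dynamic share of whatever memory happens to be free.

def cache_reservation_mb(total_physical_mb: int, percent: int = 10) -> int:
    """Return the memory reserved for the web cache, in MB.

    The figure is static: it does not change even when other
    firewall processes need that memory.
    """
    return total_physical_mb * percent // 100

# A firewall with 4 GB of RAM at the default 10% setting
# reserves roughly 409 MB for the cache, permanently:
print(cache_reservation_mb(4096))  # -> 409
```

So doubling the percentage on a busy firewall doubles a permanent reservation, which is exactly what pushes other processes into the page file.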
Yuri Diogenes discusses this subject in more detail on his blog over at:
Thomas W Shinder, M.D., MCSE
Sr. Consultant / Technical Writer