When you have a small office, Server-Based Computing (SBC) technology can be very appealing, especially for home workers or for connecting a new small branch office.
If the concept of SBC is new to you, you’re confronted with the questions of which server to buy and what is needed to get it working. Extensive training on everything often stands in sharp contrast to the price of the solution, so it’s up to your IT administrator to go out there and make the best of it. This article should prevent him/her from misconfiguring the server, or buying an overpriced one.
When shopping for a server, there are only a few vendors out there. (I would not advise self-built hardware, because for smaller offices good hardware support is more important than a few dollars saved.) The main vendors these days are HP and Dell, and in Europe, Fujitsu-Siemens is also a large player.
When buying a server, the first choice to make is whether it’s going to be a 19” rack-mount or a tower server. Rack-mount is more expensive, but can save you quite some room in the long term if your server park is growing.
Keep in mind that a Terminal Server can be a complete desktop replacement for some users, so serious power is needed. In my opinion, powerful CPUs are a must. When looking at Intel, Xeons with the standard level 2 cache are just fine (larger level 2 caches benefit database-like servers more). At the time of writing, the most powerful part is the 3.6GHz, and the price is accordingly higher; going for the 3.2GHz gives you a much better price/performance ratio. These days you can also go for 64-bit processors, but keep in mind that at this point, besides a 64-bit version of the OS and/or Citrix, there is not yet much out there that is 64-bit, so you will not benefit greatly from it.
When thinking about the number of CPUs, it’s easy to say, "Never go for a single-CPU server." Duals are quite affordable these days (and I’m not talking about Intel’s Hyper-Threading, but actual physical processors). Many tests in the past have shown that quad-CPU boxes are not worth the investment in hardware and an enterprise OS versus the extra number of users you get on the box (it does not scale linearly). Right now it’s better to scale out than up, though I see a good future for the new 64-bit processors tackling the limitations we currently have.
Memory is very cheap these days, so there’s not much to think about: just get the fastest type available for your server. With 32-bit servers and OSes there is a 4GB limit on a standard server (the Enterprise edition can handle more, but is not the best investment). That 4GB is split into 2GB of user-mode memory space and 2GB of kernel mode.
By using the /3GB switch in boot.ini you can move 1GB from kernel-mode memory to user mode, allowing more users, but real life has shown that this can compromise server stability more easily. My advice is not to use that switch, and to get the most out of the 2GB user-mode memory space instead.
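For reference, this is roughly what the switch looks like in boot.ini (the ARC path and description are placeholders for a typical single-disk Windows Server 2003 install; yours will differ):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
; default entry: 2GB user-mode / 2GB kernel-mode split
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect
; the /3GB variant (3GB user / 1GB kernel) that this article advises against:
; multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
```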
My advice is to go for a 4 GB total server.
Depending on your company policy on redundancy, there is lots of variation here. Keep in mind that the best-configured terminal servers contain zero user data, making them easy to reinstall if needed (all user data should reside on separate file servers). With that in mind, and to have the fastest redundancy available, I would advise a RAID-1 setup with one extra disk as a hot spare (3 disks in total). Using 15k disks instead of 10k ones can gain you additional performance.
When partitioning the disk, try not to size the OS drive too low. An average Windows install is about 3GB, the page file is about 1.5x the RAM (e.g. a 4GB memory system = a 6GB page file), and if Microsoft ever needs a full memory dump, it HAS to be written to the OS disk and can contain the server’s full memory (another 4GB). That leaves 13GB used without installing a single program. So if you bought a server with 35GB disks, I would advise using at least 20GB of it for the C: drive.
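The sizing arithmetic above can be sketched as a small calculation. The 7GB of headroom is my own rounding to reach the advised 20GB, not a vendor figure:

```python
# Back-of-the-envelope C: drive sizing, using the article's estimates:
# OS install + page file (1.5x RAM) + full memory dump (1x RAM) + headroom.

def min_os_partition_gb(ram_gb: float, os_install_gb: float = 3.0,
                        headroom_gb: float = 7.0) -> float:
    """Minimum C: partition size in GB for a terminal server."""
    page_file_gb = 1.5 * ram_gb   # e.g. 6GB on a 4GB box
    dump_gb = ram_gb              # a full memory dump lands on the OS disk
    return os_install_gb + page_file_gb + dump_gb + headroom_gb

# A 4GB server: 3 + 6 + 4 = 13GB used before any application,
# so 20GB for C: leaves about 7GB for programs and updates.
print(min_os_partition_gb(4))  # 20.0
```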
If the concept of a terminal server is new to you, you can easily be surprised by a minor OS or Citrix update bringing the server to its knees. If that happens you will have to fix it while dozens of people are screaming at your desk, trying to understand why they can’t work, and you’ll wish you could turn back time. Turning back time is actually quite simple: during partitioning, add a simple 5GB FAT32 partition for storing a server image.
Create an image of your server with the tool you prefer (my favorite is Ghost) before running ANY update, and you will find this method saves your a** one day. It’s easier to restore the old environment in a 10-minute job than to spend hours trying to fix the issue with your boss looking over your shoulder.
When using the RAID setup mentioned above, you need to buy a good-quality RAID controller with your server. Get one with at least 128MB of cache, and make sure it has an option to divide that memory between reads and writes. (HP calls this a BBWC (Battery Backed Write Cache) add-on, and it’s NOT standard on an HP RAID controller.) Many people in the past have had issues with controllers that only had read cache enabled: users ended up with frozen sessions that only cleared after the cache emptied, or that required a server reboot to clear.
Setting the cache to 50% read and 50% write has proven to be the optimal setting.
Most servers these days come with a 10/100/1000 auto-sensing network card preinstalled (sometimes even two). You can use teaming software to increase total throughput, but it can also be wise to use the two NICs to split the front end (connected to the users) from the back end (connected to the file or application servers), so the two types of traffic don’t interfere with each other. If your back end is not on gigabit yet, consider that it only takes the investment of a gigabit switch, and some client/server applications (where the client runs on the terminal server) can benefit a lot from this setup.
It’s extremely important to fix the NIC speed on the server (100 or 1000Mb full duplex), and if possible on the switch as well. Afterwards, do some speed testing by copying a large file from a file server to the terminal server and the other way around (to double-check that the switch can handle the speed both ways).
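A minimal sketch of that copy-speed check, timing the copy and reporting throughput. The UNC paths in the usage comments are placeholders for your own file server share and local test file, not real names:

```python
import os
import shutil
import time

def copy_throughput_mbits(src: str, dst: str) -> float:
    """Copy src to dst and return the achieved throughput in megabits/s."""
    start = time.monotonic()
    shutil.copyfile(src, dst)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against a 0s copy
    return 8 * os.path.getsize(dst) / elapsed / 1_000_000

# Test both directions, since a duplex mismatch often hurts only one way:
# copy_throughput_mbits(r"\\fileserver\share\test.bin", r"C:\test.bin")
# copy_throughput_mbits(r"C:\test.bin", r"\\fileserver\share\test.bin")
```

On a correctly fixed 100Mb full-duplex link you would expect a large file to approach (but not reach) 100 megabits/s in both directions.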
Some people starting with terminal servers for the first time might think they need a top-of-the-line video card. Let me pop that bubble right here: the video card in the server is not used at all in the screen-scraping process, so you can get the cheapest one available.
The number of users you can load
This is one of the most often asked questions out there, and it cannot be answered.
It all depends on the applications you plan to load on that server. Some applications are so aggressive that loading 5 sessions takes all the resources, whereas a simple app like Notepad can support over 100 users on a dual-CPU box.
Intensive testing before deploying can give you a more exact number of users you can load before performance degradation kicks in. As a rough average, a simple dual-CPU box with 2GB of memory handles about 30 to 35 concurrent users running well-behaved apps. Some people buy expensive 8-way boxes with 16GB of memory because they want hundreds of users, but often run into hardware limitations, forcing them to lower the number of users.
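As a back-of-the-envelope check, the averages quoted above imply a per-session memory footprint you can plug into a quick estimate. The 45MB-per-user and 512MB OS overhead figures below are assumptions derived from the article’s 30-35 users on a 2GB box, not benchmarks; real testing with your own applications is the only reliable number:

```python
# Rough capacity estimate: how many sessions fit in RAM, given an
# assumed per-user footprint. Memory is only one limit; CPU and the
# behavior of your actual applications matter just as much.

def estimate_max_users(ram_mb: int, os_overhead_mb: int = 512,
                       per_user_mb: int = 45) -> int:
    """Sessions that fit in RAM after subtracting OS overhead."""
    return (ram_mb - os_overhead_mb) // per_user_mb

print(estimate_max_users(2048))  # 34 -- matches the 30-35 users quoted above
```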
Like I mentioned before; with the current technology it’s best to scale out, and not up.
If you’re a first-timer and not really sure what to buy for your first terminal server, the above advice can be very useful. If you’re an experienced terminal server administrator, you have probably developed your own set of rules and preferences around it.
So, to summarize all of the above advice into a nice first terminal server:
- Dual CPU with 4GB RAM,
- RAID 1 disk setup with hot spare,
- Running on a 128MB RAID controller that has read/write cache set at 50%,
- And Gigabit NIC enabled