Let us give some thought to what you should expect from a Web server that draws its content from databases. The list below does not cover every aim and is rather general, so anyone may add particular demands of their own, but let us try to sketch a minimum set of requirements. Firstly, one usually begins with a small project; then the business starts to grow, and the site is expected to grow along with it, so the first thing you should expect is scalability.
Secondly, if you offer and show something to the world, it must work. Interruptions in the presentation of your offer mean that nothing is being offered, and no offer means no money. It doesn't matter whether this is real money collected for each page read or each banner loaded, or virtual, potential money that you could gain by building your product's brand. In general, you should expect your server to be available.
Is that all? Nearly. There are two more points.
Security. Imagine your business running at its best and your products selling well when, suddenly, it comes to light that someone has broken into the security of a revenue-generating server. You will lose your profit and your clients, and your products will lose their brand value. You may even forfeit the trust of those who wanted to become your clients. Nothing but losses. But wait a moment... if your site is not designed to sell anything, then... No, your company's image would still suffer heavily. An insecure Web server may be a gateway into the company. And if that gateway is at the same time a connection to a database... I need not add anything further.
These are the basic aims. There is one more, closely linked to security: cost. You must always be in control of the cash in hand and of expenses; in general, you have to watch the cost of system maintenance. Most systems that serve a growing company tend to get complicated and become uncontrollable. Uncontrollable - that's the word. Such a situation must not be allowed to develop. You have to bear the manageability of the systems in mind all the time.
The Architecture - the first encounter
Fig. 1. The simplest way of connecting to the Internet. A configuration regarded as an unsafe one, which is not recommended.
Figure 1 shows the simplest way to connect to the Internet. As you can see, everyone works in a single network, with comfortable access to database updates (in the SQL database) as well as to the content of the websites. Perhaps there is even a direct connection from the website to the production or warehouse database. You economize on servers and licenses. The solution seems perfect. Actually, you couldn't be more wrong. Such a (mis-)approach should be crossed out with a thick red line so that no one tries to use this architecture for designing active websites.
Where does this negative assessment come from? Here are the reasons:
The concern of scalability. Every increase in the load on the combined SQL/WWW server, whether it comes from static content or from database operations (in both cases the load results from publishing websites), forces you to upgrade the server or buy a new one. You could say, "Oh well, we have more traffic on our sites, so we must increase our expenses." Of course greater traffic costs more, but why spend money without knowing which parts of the system need the additional financing? The memory modules a Web server is equipped with may be cheaper than those used in an SQL server, and an SQL server's hard drives are usually far more expensive than a Web server's. By the way, you can't really expect to handle much traffic with a single Web server, especially when an SQL server loads it further. This means that after having spent so much money, you will realize that you chose the wrong approach from the beginning.
Availability is another goal that this type of network architecture fails to meet. Note that with this architecture the whole system must be halted whenever one of its parts needs to be altered, e.g. when a patch for the MS SQL server is to be installed. If the functions of the SQL and Web servers are separated, there is at least the possibility of displaying a message saying, "The system is being inspected. It will start working in approximately ten minutes." In addition, if one SQL server is used for both the bookkeeping and warehouse software, patching it whenever you wish is impossible, yet this is necessary for the stability of a Web and SQL service. What can one do? Which should be sacrificed?
Security will be completely forgotten if you build a network like the one presented in Figure 1. If one server takes care of publishing websites and is also used as an SQL database, it is exposed to the combined range of dangers typical of both. A potential attacker has a choice: will he first hack into the SQL server and then exploit the IIS features, or will he start by finding the newest IIS bug that gives him free access to the SQL server and the intranet? Only the hacker's imagination decides what you will lose: your company's reputation or its finances. Finally, one more thing should be mentioned. Studies indicate that most hacking attempts come from inside corporate networks, which means that most attackers are also employees. Why make hacking easier for them? Trust is a good thing, of course, but surveillance is even better. Moreover, some attackers may be unaware of what they are doing. A good example is an authorized user running a month-end closing operation on the company's SQL server. A server loaded with this operation, on top of the combined burden of the SQL and Web roles (as a result of hosting the company's website), starts to slow down. Not only does this worsen the quality of the bookkeeping operations, it also makes website publishing even slower. A Denial of Service attack has a very similar effect.
Manageability. The last of the goals seems, at first glance, to have been secured: a single combined server is certainly easier to manage than a network of multiple servers. However, before you consider this matter solved, another fact comes to light. This type of architecture discourages you from testing a new site or from changing the SQL or Web server configuration, because both share the same system and a change suitable for MS SQL won't necessarily suit the Web server. This solution also leaves no room at all for one of the key management processes: the hardening of an internet-accessible system. I can assure you that correctly hardening such an MS SQL server would effectively prevent it from working.
As you can see, this simple and presumably cheap architecture achieves none of the expected objectives, and running it becomes expensive too. It should be altered, and the modification should be something more than a firewall added at the contact point between the corporate network and the Internet. In that case no firewall would manage to improve the company's security. It is true that setting up a firewall would give sufficient control over incoming traffic that a potential hacker would have to start his attempts at the Web server; however, these steps would not differ from those he would probably have taken before the firewall was installed. The firewall would neither separate the server from the corporate network nor achieve any of the targets mentioned earlier. The modifications must go deeper.
The Architecture - the second encounter
Before the final fight for a safe network architecture, you should sketch a scheme of what you need, based on what you have inferred so far.
We already know that the company network should be separated from the internet-connected servers, and that these servers should in turn be shielded from the Internet and from possible attacks. This implies the following division:
- The external network, the outer zone, which connects directly to the Internet. Addresses are translated here, and this zone cuts off every unexpected network penetration.
- The demilitarized zone (DMZ) where you can find filtered traffic from both the inner and the outer network. This zone does not allow any connections directly into the internal network.
- The internal network, which carries the ordinary traffic resulting from the company's needs, such as the traffic caused by preparing data for the servers installed in the demilitarized zone. Only necessary traffic is let through to the DMZ, for example updating server data, or reading the log files and the data entered by visitors to the site.
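The zone rules above can be sketched as a default-deny allow-list. This is only an illustration of the policy, not a real firewall configuration; the zone and service names are invented for the example:

```python
# Illustrative sketch of the three-zone policy: only explicitly
# listed (source, destination, service) flows are permitted.
ALLOWED_FLOWS = {
    ("internet", "dmz", "http"),            # visitors reach the web servers
    ("internal", "dmz", "content_update"),  # staging pushes new content
    ("internal", "dmz", "log_read"),        # administrators pull log files
}

def is_allowed(src_zone, dst_zone, service):
    """Default-deny check: a flow passes only if explicitly listed."""
    return (src_zone, dst_zone, service) in ALLOWED_FLOWS

# The DMZ may never open a connection into the internal network:
assert not is_allowed("dmz", "internal", "sql")
# The Internet never reaches the internal network directly:
assert not is_allowed("internet", "internal", "http")
```

The important property is the default-deny shape: anything not named in the list is refused, so forgetting a rule closes a door rather than opening one.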
Gathering these inferences eventually gives you Fig. 2.
Fig. 2. A secure Internet connection.
The scheme is ready. Now it is time to decide which servers should stand in the trusted zone and which in the limited-trust zone. After dividing the servers into these groups, one has to cope with another problem: managing them.

The internal network is a space where the integrity of the stored data is definitely secure, so it seems a natural place for the primary database server. This raises a doubt, however. If the SQL server were set up in the internal network, a special rule would have to be programmed into the barrier between the internal network and the DMZ so that the web server could update its data whenever needed. That rule would weaken the barrier by allowing connections initiated from the DMZ. One may either accept this weakening of the barrier (which should otherwise cut off any traffic initiated in the DMZ towards the internal zone), or accept the cost of replicating the SQL server into the DMZ. Security experts point out that the second solution is better.

The servers that find their place in the trusted zone are called staging servers. They store the up-to-date, correct content that is used for updating the information held by the servers in the DMZ whenever such an update is needed. Obviously, a server from the inner network initiates the update. It may be triggered by the need to install a new version of the site, or by the destruction of the contents of one of the DMZ servers, e.g. because of a crash. A similar process should be used when updating the databases; in this case, however, the data flows the other way: the databases updated through interaction with the website users should be pulled into the internal network. The internal network should also have its own domain controller so that authorizations can be managed efficiently inside it. The demilitarized zone is the area where the data-processing machines, i.e. the web servers and the helper servers, should be installed.
The helper servers are: a DNS server, a domain controller for the web servers, and the SQL server (or servers). Further on in this article I will explain how to connect the web and SQL servers. I should also emphasize that placing a domain controller in the DMZ is not a mistake; there is method to my madness.
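The update flow through the staging servers can be sketched as follows. This is a minimal illustration of the direction-of-initiation rule only; the class names are hypothetical:

```python
# Sketch of the staging update flow: the internal staging server always
# initiates the transfer, and DMZ servers only receive. This mirrors the
# rule that no connection may be initiated from the DMZ inwards.
class StagingServer:
    def __init__(self, content_version):
        self.content_version = content_version

    def push_update(self, dmz_server):
        # The connection is opened from the trusted zone, never from the DMZ.
        dmz_server.receive(self.content_version)

class DmzWebServer:
    def __init__(self):
        self.content_version = None   # empty until staging pushes content

    def receive(self, version):
        self.content_version = version

staging = StagingServer(content_version=42)
web = DmzWebServer()
staging.push_update(web)   # restore after a crash, or deploy a new site
assert web.content_version == 42
```

Note that `DmzWebServer` deliberately has no method for contacting the staging server: the DMZ side is purely passive.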
The external network should contain no servers at all. It should be composed only of indispensable network devices and (depending on the security policy you have chosen) the sensors of an IDS system.
These conclusions allow us to develop the sketch from Figure 2 further.
Fig. 3. Simplified network topology for secure hosting of dynamic websites.
To keep things simple, this figure does not show the duplicated elements of the network (the switches and the fail-over firewalls). The domain controller of the secure internal network has been intentionally omitted, as has the connection with the corporate network, which should be linked to the secure internal network's switch. That traffic should be filtered by a firewall and should pass through an encrypted VPN channel unless it is a local area connection. The corporate network filter rules should define which users on the company's network are authorized to access particular servers (if possible, try to have a separate access server for the webmasters). Workstations on the corporate network should be cut off from direct access to the secure SQL server and the staging server (this should be a rule!). It must also be said that these servers accept traffic only from other servers, in this case the filtered traffic from chosen servers on the corporate network. Requiring that connections be initiated by the servers in the secure internal network, and refusing traffic initiated by any server from the corporate network, may serve as another safeguard.
Does such a network fulfill all the tasks described at the beginning of this text?
Let's check it out:
The scalability task. This has been fulfilled: you can increase the capacity of the system related to functions with higher computational demands relatively cheaply (for only the price of the hardware involved). Upgrading the web servers will allow new, more memory-demanding applications to be run. Adding new web servers to the NLB cluster will let you handle the growing traffic. Higher demands on the SQL server may be handled by duplicating it (with the use of a fault-tolerant cluster, i.e. Microsoft Cluster Service). You can also partition its data, dividing it among many databases or SQL servers.
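The idea of partitioning data among several SQL servers can be sketched with a simple stable-hash scheme. The server names are hypothetical, and a real deployment would use the partitioning features of the database itself; the sketch only shows why a stable mapping keeps lookups cheap:

```python
# Illustrative sketch of partitioning: a stable hash of the key decides
# which SQL server holds the row, so every lookup goes straight to it.
import hashlib

SQL_SERVERS = ["sql-0", "sql-1", "sql-2"]   # hypothetical server names

def server_for(key: str) -> str:
    # md5 is used here only as a cheap, stable hash, not for security.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SQL_SERVERS[int(digest, 16) % len(SQL_SERVERS)]

# The same key always maps to the same server:
assert server_for("customer:1001") == server_for("customer:1001")
```

The trade-off of such a fixed modulo scheme is that adding a server remaps most keys, which is why production systems prefer the database's own partitioning or consistent hashing.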
Availability. This task has been fulfilled. With this architecture, every element of the network may be duplicated in such a way that a crash will not affect the website's users in any noticeable manner. With such duplication one can easily perform maintenance operations even in prime time: to start installing updates or patches, you only have to switch the traffic from one server to another. Systems maintained in this way have a chance of being as safe as the current state of knowledge allows.
Security. By taking advantage of the possibility of defining strict rules of communication inside the network, one can achieve a satisfactory level of security. This involves a suitable distribution of roles among the systems. In this case, hacking through one of the barriers will not compromise the whole network; the damage will be limited to several devices or systems. For instance, breaking through the first firewall will not cause a catastrophe: the hacker would only gain access to the outward-facing network cards of the web servers, which have been protected as well as possible. Here the servers' protection may be pushed to the limit, because they hold only the published site content, so the hardening process can be thorough. In effect, the only threat to the system as a whole is flooding the published network cards of the web servers with unfiltered TCP/UDP/ICMP packets. Of course, despite the firewall barrier, an attack may be aimed directly at the web servers using the newest (or most fashionable) hole in IIS security. This case is a bit more dangerous, but capturing a single server is no victory for the hacker, because all of the server-database connections run in trusted mode. This means there is no ASP page that could give the attacker a password or account enabling him to penetrate the system further. On the other hand, there is a good chance that another server from the NLB cluster will answer the attacker's next query, causing slight confusion. The attacker cannot even dream of going further, to any of the DMZ servers that work only in the internal network (SQL, DC, etc.). A correct hardening of the external systems makes sending their packets into that network impossible. Please notice how far the aggressor still is from the trusted internal network, the true heart of the system.
Attack attempts from the corporate network are by no means certain to succeed, because a good set of rules does not allow direct communication with the servers. Moreover, even if the firewall is broken from the corporate network's side, all communication should stop at once, since the servers will demand appropriately secured IPSec packets, and these will not be available because the attacker has just disabled their source. Well, it seems we have just designed a perfect system. This may be true, but you should never fall into a routine and become overconfident: the history of successful hacking attempts shows that any system can be hacked given the right amount of expertise and funding.
Manageability. The system in Figure 3 seems to fulfil this task as well. Every element of the system is accessible and can be analyzed. This type of architecture makes it relatively easy to measure current and future loads, to identify their sources, and therefore to determine the expenditure necessary to keep the system in equilibrium. So, an architectural scheme for the safe co-existence of SQL and web servers seems to have been worked out. Is that all? Unfortunately not. Something should now be said about matters which are beyond the average administrator's influence, but which may efficiently annihilate all of his work on the server's safeguards.
Writing safe code
As discussed earlier, there is one element capable of annihilating all of the expenditure on safe architecture: the code of the ASP pages and of the components they call while the website is being processed. It seems that the best way to create active websites is to divide the whole Web application into layers: the presentation layer, the business logic layer, and the data layer. On the one hand, this division allows quick changes in any of the layers without rebuilding the whole application; on the other, it is a good basis for constructing safe Web systems and adjusting them to varying data loads.

The presentation layer resides on the web servers themselves. It consists of the ASP code responsible for presenting the data; this code is obviously subject to the Webmaster's needs and his demands concerning the look of the website, and at the same time it is safe from the administrator's point of view. This layer is also responsible for extracting the data from forms. Further processing takes place in the business logic layer. Usually this layer's elements work as separate components (registered with Component Services), called from the website just like any other component, e.g. the FileSystemObject. As I have mentioned, the business logic components are registered as separate elements; each of them may run under an administrator-defined account (domain or local), and each may use its own, account-attributed rights, but nothing more. Please note that an ASP page runs as IWAM_machine or IUSR_machine (depending on the executed fragment of the page's code), with the standard, low rights these accounts have in the system. Hacking the server will therefore give the attacker only guest rights.
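The three-layer split can be sketched in miniature as follows. The function names and the tiny in-memory "database" are invented for the illustration; in the architecture described here the layers would be ASP pages, Component Services components, and the SQL server:

```python
# Minimal sketch of the three-layer split. Each layer only calls the
# one directly below it, and only the data layer touches the "database".

# Data layer: the only code that talks to the database, over a trusted
# connection; it never sees raw user-supplied input.
def data_layer_fetch_price(product_id: int) -> float:
    catalog = {1: 9.99, 2: 19.99}        # stand-in for the SQL database
    return catalog[product_id]

# Business logic layer: a separately registered component with its own
# account and rights; it validates input before touching the data layer.
def business_layer_get_price(raw_id: str) -> float:
    if not raw_id.isdigit():
        raise ValueError("product id must be numeric")
    return data_layer_fetch_price(int(raw_id))

# Presentation layer: the ASP page's job, reduced here to formatting.
def presentation_layer_render(raw_id: str) -> str:
    return "Price: %.2f" % business_layer_get_price(raw_id)

print(presentation_layer_render("1"))    # Price: 9.99
```

Because validation lives in the middle layer, a hostile string such as `"1; DROP TABLE"` is rejected before any database code runs, which is exactly the property the layered design is meant to guarantee.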
However, the system has been well protected, so the guest will not be able to do much damage and will look for a more hospitable host. How is it possible, then, that at such a low level of authorization you can open a connection to the database to download data for presentation? The ASP code does not have to contain the SQL account and password. It only needs to call the right component, which likewise knows no password or account; using a trusted connection to the database, and working in the context of the account defined in Component Services, the data layer will receive the necessary information from the database. This is a clean and clear data flow, fully controllable by the administrator, leaving unequivocal fingerprints on the system; unequivocal because they come from a single account along the whole data path. It is important to watch the SQL server's mode of work: it must accept trusted connections only. The data should be carefully verified and searched for elements dangerous to the SQL server, and this should happen before the data enters the data layer. Unfortunately, it is very easy to smuggle your own commands onto even a protected SQL server, with all the risks involved, and not even the most complicated and most expensive architecture of systems and applications will protect you from such mistakes. Everything that comes from the network must be verified, and one must not assume that because something has never happened yet, it never will. Someone will eventually show up with the particular combination of characters that lets him in; someone will finally get something he shouldn't have.
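The command-smuggling danger described above is what is now called SQL injection. A minimal sketch, using Python's built-in sqlite3 module as a stand-in for the SQL server, shows why user input must reach the database only as a parameter, never concatenated into the command text:

```python
import sqlite3

# A throwaway in-memory database standing in for the real SQL server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def unsafe_lookup(name):
    # String concatenation: the input becomes part of the SQL command.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def safe_lookup(name):
    # Parameterized query: the input is always treated as pure data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

hostile = "nobody' OR '1'='1"
assert unsafe_lookup(hostile) == [("s3cret",)]   # injection succeeds
assert safe_lookup(hostile) == []                # injection fails
```

The hostile string turns the unsafe query into `... WHERE name = 'nobody' OR '1'='1'`, a condition that is always true, while the parameterized query simply looks for a user with that literal name and finds nothing.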
Those of you who are interested in pursuing the question of creating safe web server applications in depth should read the following books: Designing Secure Web-based Applications for Microsoft Windows by Michael Howard, and Writing Secure Code by Michael Howard and David LeBlanc.