It’s undeniable that the IT landscape has shifted significantly over the last several years – in computing, network design, data center architecture and even how lines of business consume resources. The majority of these changes have been driven by an increased focus on applications and on improving their efficiency, scalability and security.
It’s no wonder that many organizations rely on technology more and more to drive their business. In the banking industry, IT was traditionally viewed as ‘non-critical production’ unless it was directly related to trading; it’s now viewed as a vital asset that helps trading firms and investment banks remain relevant in a competitive market. With today’s workforce more mobile and dispersed than ever before, internally deployed applications have to be available and high performing for clients in a variety of locations and on a variety of devices. For example, I’m based in NYC, but am writing this article in a London café, will likely send the final draft for editing from an airport in Spain, and have used three different devices to draft it. This is a prime example of why IT organizations are being tasked with deploying applications and exposing data on the most dangerous place on earth – the Internet.
Enterprise Applications – Hardened Against What?
While the majority of enterprise software vendors report that their applications and operating systems are ‘hardened’, the truth of the matter is that they are only hardened against known threats. Today’s hackers are busy making sure that the supply of unknown threats keeps pace. In a recent report based on statistics compiled from the National Vulnerability Database, nearly 5,000 software vulnerabilities were discovered last year – an average of 13 per day. Of these, about 33% were classified as ‘high severity’ and, not surprisingly, 75% of those were exploited against high-profile target organizations.
Figure 1: Reported Vulnerabilities
With such drivers and risks, numerous technologies have either been developed or recently spotlighted due to the fact that they ultimately support and secure the applications that help companies run their business. One such technology is reverse proxy.
What’s in a Name?
Over the last two years, the reverse proxy has gained a great deal of attention, especially in the Microsoft space. Many customers leveraged Microsoft Forefront Threat Management Gateway (TMG) to securely publish applications used for collaboration and productivity. In December 2012, Microsoft brought the product to end of sale, and mainstream support will also come to a close in just a matter of months. The install base has therefore started looking into alternatives to TMG. But before those alternatives can be explored, it’s important to know what a reverse proxy is in the first place.
The best way to understand a whole is typically to first understand its individual parts. In the case of a reverse proxy, understanding what a proxy is would be a good first step. In the same way that administrative assistants typically serve as proxies for executives, in a network infrastructure a proxy is a network service that mediates, or brokers, connections between two or more systems. In many cases this will be between a client and a server, and in some cases the client is itself also a server. In most cases one participant sits in a more trusted network segment while the other sits in a less trusted one. When the proxy receives a request, it forwards it to the intended destination, perhaps after applying filtering rules. Once a response is received, it also traverses the proxy and is sent back to the initiating party.
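The mediation just described – receive a request, apply filtering rules, forward to the destination, relay the response – can be sketched in a few lines of Python. This is a conceptual illustration only; the request shape, the rule set and the `demo_backend` stand-in are all hypothetical, not a real proxy implementation.

```python
def apply_rules(request, blocked_paths):
    """Illustrative filter: pass the request unless its path is blocked."""
    return request["path"] not in blocked_paths

def proxy_request(request, backend, blocked_paths):
    """Broker a request between an initiator and a destination."""
    if not apply_rules(request, blocked_paths):
        # The proxy answers on the destination's behalf without forwarding.
        return {"status": 403, "body": "Forbidden by proxy policy"}
    # Forward to the intended destination and relay its response back.
    return backend(request)

def demo_backend(request):
    """Stand-in for the destination server behind the proxy."""
    return {"status": 200, "body": "served " + request["path"]}

allowed = proxy_request({"path": "/app"}, demo_backend, {"/admin"})
blocked = proxy_request({"path": "/admin"}, demo_backend, {"/admin"})
```

Note that the blocked request never reaches the backend at all – the proxy terminates it, which is exactly what makes the proxy a useful policy enforcement point.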
So where does the ‘reverse’ part come in? At its simplest, whether a network appliance acts as a reverse proxy or a proxy (a.k.a. forward proxy) comes down to source versus destination. Forward proxies normally handle requests from known sources in a controlled environment (your company’s local area network) for resources at unknown destinations (the Internet). Often they are deployed to control internal clients’ Internet access behavior and to prevent malicious attacks via that commonly used vector, the web browser. Reverse proxies perform the inverse operation: they handle requests from unknown remote entities (Internet-based clients or systems) that are attempting to access known resources (web front ends in the company DMZ). The reverse proxy acts as an intermediary, accessing resources on behalf of the requestor – assuming the request is allowed by policy. This is a necessary component of a comprehensive, multi-layer approach to security.
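The “unknown clients to known resources” direction can be pictured as a publishing table: the reverse proxy only knows how to reach the internal front ends it has been told to publish, and refuses everything else. The hostnames and internal addresses below are made-up examples.

```python
# Published services: external hostname -> known internal front end (DMZ).
# All values here are hypothetical illustrations.
ROUTES = {
    "mail.example.com": "10.0.1.10:443",
    "lync.example.com": "10.0.1.20:443",
}

def route(host):
    """Return the internal backend for a published host,
    or None when the host is not published (request refused)."""
    return ROUTES.get(host)

published = route("mail.example.com")
unpublished = route("intranet.example.com")
```

Because anything absent from the table is simply unreachable, the internal network’s layout is never exposed to the unknown clients on the outside.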
While some reverse proxies simply forward traffic blindly, intelligent reverse proxies treat clients and servers as fully independent at the network level, and can thereby inspect, rewrite and inject into the traffic stream as needed to help mitigate potential attacks. Reverse proxies also typically offload processor-intensive services from application servers, such as SSL handling, caching and compression. A network appliance acting as a reverse proxy that takes on these additional responsibilities improves both server performance and the client experience.
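The inspect-and-rewrite behavior is easy to illustrate with response headers: a common pattern is stripping headers that leak internal implementation details and injecting hardening headers on the way out. The header names below are standard HTTP headers, but the specific policy is a hypothetical example, not any particular product’s behavior.

```python
def scrub_response_headers(headers):
    """Sketch of an intelligent reverse proxy rewriting a server
    response before relaying it to the client."""
    out = dict(headers)
    # Strip headers that reveal what is running behind the proxy.
    out.pop("Server", None)
    out.pop("X-Powered-By", None)
    # Inject client-side hardening headers.
    out["X-Content-Type-Options"] = "nosniff"
    out["X-Frame-Options"] = "DENY"
    return out

cleaned = scrub_response_headers(
    {"Server": "IIS/8.0", "Content-Type": "text/html"}
)
```

The client only ever sees the rewritten response; from its perspective, the reverse proxy *is* the server, which is what makes this kind of mitigation possible.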
What’s interesting to note is that a proxy of any kind really is a network service, much in the same way that switching, routing or even firewalling are network services. This explains why proxying can be delivered by a number of different types of software solutions and appliances.
Load Balancers – A Tale of Multiple Services
In the case of customers with Microsoft deployments, a load balancer is typically also deployed to provide high availability and traffic optimization for their applications. While a load balancer’s primary function is to distribute traffic across a pool of servers, by definition and by design a load balancer is also a reverse proxy. For Microsoft Lync, for example, simply configuring the published services appropriately on the load balancer can satisfy the requirements previously met by TMG for internal and external pools.
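Why is a load balancer inherently a reverse proxy? Because it accepts the client’s request itself and then chooses which pooled server answers it – the client never addresses a pool member directly. A minimal round-robin sketch makes the point; the pool member names are hypothetical.

```python
import itertools

class RoundRobinPool:
    """Sketch of a load balancer's server-selection step: each incoming
    request is brokered to the next member of the pool in turn."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        """Choose the pool member that will serve the next request."""
        return next(self._cycle)

pool = RoundRobinPool(["fe1.internal", "fe2.internal", "fe3.internal"])
```

Real load balancers layer health checks, persistence and weighted algorithms on top of this, but the brokering role – one known virtual address in front, many servers behind – is the same reverse proxy pattern described above.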
This is good news, since it means the majority of customers already have the technology they need to offer reverse proxy services as part of their secure application publishing strategy. When coupled with the security services also commonly found on load balancers – pre-authentication, single sign-on, and domain/URL filtering – customers can often consolidate the number of appliances deployed in their infrastructure, reducing both administration effort and the cost of management and maintenance.
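Two of those consolidated services – pre-authentication and URL filtering – amount to a gate applied before any request reaches the application servers. A hedged sketch, where the published path prefixes and the request shape are illustrative assumptions rather than any vendor’s configuration:

```python
# Hypothetical set of published URL prefixes on the load balancer.
ALLOWED_PREFIXES = ("/owa", "/meet")

def admit(path, authenticated):
    """Combine pre-authentication with domain/URL filtering:
    a request is forwarded only if the user already authenticated
    at the edge AND the URL belongs to a published service."""
    if not authenticated:
        return False  # pre-authentication failed; never touch the backend
    return path.startswith(ALLOWED_PREFIXES)

ok = admit("/owa/inbox", True)
anon = admit("/owa/inbox", False)
unpublished = admit("/admin", True)
```

Performing both checks on one appliance at the edge is precisely the consolidation the paragraph describes: fewer boxes, and unauthenticated or out-of-policy traffic is dropped before it consumes application server resources.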
The changes that have occurred in the IT landscape will continue to evolve, creating new opportunities and challenges. Applications will remain critical to business longevity, and as long as they do, IT organizations will have to take a fresh look at both existing and new technologies to meet the requirements.