Organizational considerations in the cloud computing age

Introduction

As organizations consider more and more cloud-based services, the impact hits more than just the infrastructure. The organization itself needs to make adjustments in order to adapt and react to the new reality. In this article, I’ll discuss some of these considerations. I’ll start with some of the historical events that got us to where we are today and then move on to identifying organizational challenges that need to be addressed.

I’ve tended to link the various “IT eras” directly to bandwidth. As bandwidth has grown, so has the distance between the user and the services he consumes, and so have his choices for device type and location. Back in the mainframe era, users could work from across the world, but there was significant tethering: users were pretty much tied to their terminals.

The PC age begins

Now, let’s consider the rise of the PC era. Early on, users were tied to a single PC. File sharing was unheard of unless users shared physical media with one another (aka “sneaker net”). With the advent of the affordable local area network, file servers began to take root in organizations and users started loosely collaborating with one another over this emerging medium. Further, organizations began to enjoy greater economies of scale as expensive devices, such as printers, could be shared.

Casting a wide net

Next, along came the Internet and IT-land simply exploded. Once the Internet came along, implementing networks was no longer optional; it became a business requirement in order to stay competitive. IT was raised to executive-level status as companies started hiring CIOs to unlock all of the emerging potential. With the Internet, users began collaborating more and more, mostly via email, with users outside the organization. Business changed as companies placed massive emphasis on web presence. But the traditional operational model from the early LAN era, a PC communicating with a physical server for internal business process fulfillment, still dominated the internal landscape, even as growing IT departments kept adding server after server to meet new and growing business requirements.

Let’s all go virtual

Then, VMware came on the scene with a vision to redefine the computing landscape by enabling organizations to consolidate these growing and increasingly inefficient infrastructures. As the dot-com bubble burst and economic conditions grew challenging, this newfangled efficiency boost was quickly embraced by a number of forward-looking companies. In fact, virtualization’s “first wave” was very much geared toward server consolidation.

But then something happened. The data center began morphing from a big room with lots of physical servers, each hosting a single application, into “resource pools” that could grow as necessary. With virtualization, organizations simply assign resources to new services and deploy them. Obviously, there’s a bit more to it than that, but for discussion’s sake, let’s leave it there.

At this stage, organizations were leveraging massive bandwidth between clients and servers and between servers in the data center. In fact, you’d be hard-pressed to find a modern data center that didn’t make heavy use of both gigabit and 10 gigabit Ethernet. Prior to this era, users could know exactly where their files were stored and how they were stored. But as bandwidth has grown, new capabilities have come along. Now, we can simply deploy services that migrate around the infrastructure as conditions change.

Bandwidth makes it all happen. If we were still operating 56K data links, none of this would be possible.

Under this modern architecture, companies can deploy services into these new “elastic” data centers with ease. As a result, IT processes and procedures have changed drastically over time. No longer do we provision new physical servers en masse, with the exception of virtualization hosts, of course. No longer do we need to wait weeks or months for hardware orders to be fulfilled before deploying new services. Now, we can deploy new services almost immediately, at least relative to the old days.

Now, we’re in the early days of another bandwidth-fueled transition: the transition to cloud computing. I’ll be the first to say that I truly despise the term “cloud” because I believe it’s lost a lot of meaning. When I use the term in this article, I do so only because it’s a well-known term for services that used to be “outsourced” or “hosted”. So, I’ll use the term, but begrudgingly.

This paradigm shift, however, is happening largely outside the organization. Companies such as Microsoft, Google and Apple are building massive data centers with unimaginable connectivity (bandwidth) allowing these companies to run services on behalf of their customers. Obviously, there are a whole lot more “cloud service” providers out there as well.

What’s making possible the meteoric rise of cloud-based services? Bandwidth.

Bandwidth growth has become standard operating procedure for many organizations. At Westminster College, for example, we’ve grown from a 10 Mbps Internet connection in 2006 to a 100 Mbps connection today. That increase has been driven largely by consumer-oriented services, such as YouTube and Netflix, but connections at 100 Mbps and faster begin to unlock some additional business possibilities. Now, instead of internally deploying a complex new service, the fast (and upgradable) connection to the Internet allows us to more easily consider outsourcing individual services.

This growing ability to push services and applications beyond the organizational border creates major opportunities but can also pose significant challenges. In each era I’ve described, similar organizational challenges arose and had to be overcome.

Cloud computing challenges

Cloud computing brings with it great opportunities but also great challenges.

Bandwidth

Obviously, if Internet bandwidth was important before, it’s mission-critical once services are deployed to the cloud. Rather than simply keeping up with bandwidth demand, organizations will need to stay ahead of the curve to avoid disrupting key business applications and processes that now live outside the firewall.

Service provisioning and ownership

When the internal IT department specified, provisioned, and maintained all of the services run by the organization, there were clear duties and responsibilities, and not just within the IT department. Everyone in the organization knew where to go when it came to managing these services. Now, as service vendors begin negotiating directly with non-IT users, things start to get sticky. While it’s the job of IT to provide excellent support, it’s also critical that IT be involved in service provisioning in order to keep “service creep” from damaging the organization. And if a vendor ever tells one of your users that “their product can be implemented without any help from IT,” that vendor is lying… no exceptions.

As hosted services become more common, companies need to take proactive steps to develop policies around the acquisition and implementation of these kinds of services. The IT department needs to remain the steward of technology-based services in order to ensure that there is a standard process by which services are procured and that all service providers meet organizational requirements, if any are in place. Moreover, departments simply can’t be allowed to buy a service without weighing any necessary implementation work against other projects. Too often, I’ve seen organizations allow individual units to simply buy services and, after the fact, frantically call IT because it’s critical that the service be implemented “right now”.

At Westminster College, where I am CIO, we have a policy in place requiring my sign-off as technology-based requisitions make their way through the Business Office. Further, IT is supposed to be (and usually is) brought into the planning process early on in most projects so that we have appropriate input on the process and implementation.

Data integration

One item that is often forgotten, but that becomes critically important as a company’s use of business intelligence and other analytical tools grows, is data integration as it relates to outside service providers. Oftentimes, these providers hold data that can be invaluable in analysis efforts. As a company acquires disparate services, data becomes more fragmented, which can lead to inconsistent information between systems, unreliable data for analysis, and more. Further, outside systems can wreak havoc on single sign-on and other systems integration mechanisms that might be in place. As a part of the procurement process for any new system, whether it’s hosted internally or, more importantly, externally, the following matters must be resolved:

  • Determine which system is authoritative for a particular kind of data and then implement processes that synchronize this authoritative information to the other data systems that consume it (see the sketch after this list). Only by doing this can an organization hope to maintain a “single version of the truth” in its data systems. The question for the service provider is simple: How much direct access do you have to your hosted information? If the answer is “none,” you might want to consider a different partner. Check also whether the vendor has existing integration tools that can be leveraged.
  • Study what operational processes might be affected by the implementation of the new system and work with the provider to make sure that processes remain efficient. For example, if you’ve implemented a single sign-on system, make sure that your provider can use it. If they can’t, either move to a provider that can or figure out a workaround.
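
To make the synchronization point above a bit more concrete, here is a minimal sketch of what such a job might look like, assuming a hypothetical hosted provider that exposes its records over a simple REST API and an internal system with its own update endpoint. The URLs, field names, and functions are illustrative only; in practice you would lean on whatever integration tools your vendor actually offers.

    import requests  # assumes the hypothetical provider exposes a simple REST API

    # Placeholder endpoints -- substitute your provider's and your own systems' real interfaces.
    AUTHORITATIVE_URL = "https://provider.example.com/api/students"
    INTERNAL_URL = "https://sis.example.edu/api/students"

    def fetch_authoritative_records():
        """Pull the records held by the provider, which is authoritative in this scenario."""
        response = requests.get(AUTHORITATIVE_URL, timeout=30)
        response.raise_for_status()
        return response.json()

    def push_to_internal_system(record):
        """Update the consuming system so it mirrors the authoritative source."""
        response = requests.put(f"{INTERNAL_URL}/{record['id']}", json=record, timeout=30)
        response.raise_for_status()

    def synchronize():
        # Run on a schedule so consuming systems never drift far from the source of truth.
        for record in fetch_authoritative_records():
            push_to_internal_system(record)

    if __name__ == "__main__":
        synchronize()

The specific code matters far less than the pattern: one system is designated authoritative, and every other system is refreshed from it rather than being edited independently.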

As an example: At Westminster College, we made the decision earlier this year to outsource our learning management system. From the outset, we were determined to make sure that this system acted as if it resided on site. We made sure that our provider had strong integration tools so that courses could be populated from our student information system, and we worked with the provider so that they could configure their systems to authenticate Westminster students against our Active Directory servers. This allowed us to deploy the service without having to create separate user accounts for all of our students.
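
For the Active Directory piece, what the provider implements usually boils down to an LDAP bind against the campus directory. The sketch below, written with Python’s ldap3 library, shows the general shape of such a credential check; the server address and domain are placeholders, not our actual configuration.

    from ldap3 import Server, Connection

    def authenticate_student(username: str, password: str) -> bool:
        """Attempt a simple bind against Active Directory; a successful bind
        means the supplied credentials are valid. Host and domain are placeholders."""
        server = Server("ldaps://dc.example.edu", use_ssl=True)
        conn = Connection(server, user=f"EXAMPLE\\{username}", password=password)
        try:
            return conn.bind()
        finally:
            conn.unbind()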

In this instance, we moved from being the organization that would have implemented this service to being the one that coordinated the vendor’s efforts. This is going to become the modus operandi for many IT shops as hosted services grow in use.

Information security

Let’s face it: no organization is 100% secure, not even the ones to which we outsource critical information. And then there are organizations that are under strict privacy requirements, such as healthcare entities and colleges. When a service is outsourced, the privacy requirements don’t go away, but the implementation of safeguards moves out of our hands. We come to rely heavily on the provider to make sure that their systems protect our data as much as we want it protected. Information security has always been important, but the move to the cloud has splintered these efforts. We now have to manage contracts and compliance rather than bits and bytes and firewalls.

Information Technology department staffing and skills

As more and more services are outsourced, the IT department will change. Gone will be the hardcore techies who exist to keep the lights on. Sure, we’ll still have networks and phone systems and other items that need to operate, but IT will have fewer services to manage. IT staff will be redirected, hopefully, to more value-added roles such as helping business units improve processes, implementing data analytics tools that drive decision making, and more. CIOs will become service traffic cops, as many are now. The IT skill set will change as tools like Exchange and SharePoint are pushed to the cloud. That’s both good and bad. With services on site, organizations control their destiny far more than they will when service providers start calling more of the shots.

Summary

I’ll be blunt; I’m very wary of outsourcing, but it’s an inevitable trend that doesn’t show any signs of going away. Organizations need to adapt to this changing reality by implementing policies designed to control the proliferation of unsupportable services and to make sure that acquired services reach their fullest possible potential.
