Security: A Shared Responsibility (Part 5)

In this multi-part series, we’re taking a look at the big picture of security areas of responsibility and how each component (internal job position or external entity) fits into the puzzle, with a discussion of the importance of defined areas of responsibility. In the first article of the series, we looked closely at the role of the CSO or CISO, at the top of the IT security structure. In Part 2, we continued the discussion with an examination of the security role(s) of IT administrators, and then we talked about the responsibilities of outside contractors, consultants and partners to whom you grant access to some or all of your IT resources. In Part 3, we started looking at the security responsibilities of end users and how you can turn them into responsible members of the security team, and in Part 4, we continued that discussion.

In this, Part 5, we’ll begin to examine the role and responsibilities of software vendors (including Software-as-a-Service providers), and in the final installment, Part 6, we’ll wrap up the series by continuing that discussion and also looking at the roles of ISPs and cloud services companies in preventing security breaches and keeping our networks and data safe.

Software vendors’ security responsibilities

There has been much discussion over the years regarding software vendors’ liability for damages when security breaches are a result of vulnerabilities in the code. Bruce Schneier addressed this way back in 2003 in an article titled Liability Changes Everything.

For those who don’t want to go read the article, I’ll summarize what he says on the subject: software vendors were slow to get serious about security because it didn’t make good business sense to spend the large amounts of time, effort and money required to make their products more secure when the consequences of not doing so were relatively insignificant.

Evolution of software security

This disincentive to create secure software was especially strong when security breaches were relatively rare and/or benign, and also when a particular software vendor had a quasi-monopoly on an area of the market. As both of those circumstances have changed over the years, vendors’ interest in security has grown – to the point where now they often annoy customers by having too many security mechanisms (from the users’ point of view).

Let’s take a look back in time to see why software vendors took so long to get on the security bandwagon. When almost all computers were running Windows, Microsoft was unlikely to lose many customers by neglecting security – the worst that would happen was some dissatisfied customers and bad press. When practically all smartphones were iPhones, Apple didn’t have to worry about customers going elsewhere if it didn’t build high security into the phones. When Flash was the only game in town for animation and video, Adobe didn’t need security as a selling point.

Several things have happened to change that. First, as the market has become more diversified, competition has changed priorities. Customers can (and do) abandon a vendor and go to a different platform if they’re dissatisfied with a product. Even Apple, which at one time seemed to have an almost hypnotic hold on its users, has started experiencing this, with both the iPhone and iPad losing ground to Android products over the past couple of years, although its iPhone 6 release helped to reverse that to an extent (statistics for October 2014 showed iOS with 41 percent of the U.S. market in comparison to Android’s 53.8 percent).

The point is: customers aren’t stuck with one software platform anymore. They have choices. Software vendors are recognizing this and taking it into consideration (as evidenced by Apple’s capitulations to market demand when it released the iPad Mini – despite former leader Steve Jobs’ declaration that the company would never make a small form factor tablet – and again with the introduction of the iPhone 6 Plus, after years of holding out against the “phablet” trend).

So unlike in the past when the market leaders had no competition, now if something is important to customers, it’s important to the vendors, too – because it has to be. And security is becoming increasingly important as software users and the IT professionals who support them see not only an increasing number of security breaches occurring but also see the severity of the impact of those breaches growing, as well.

Customers care more about security because the overall cost of security breaches to organizations is on the rise, as evidenced by IBM’s annual Cost of Data Breach Study, conducted by the Ponemon Institute.

Major software vendors are corporations; thus they are ultimately beholden to their shareholders, and their responsibility to those investors is to produce a profit. As long as being lax about security produced a larger profit by sparing the company the high costs of more careful coding, of training personnel in security or hiring new personnel with security expertise, of additional testing and delayed delivery of products to market, and of sacrificing some functionality, performance and convenience in software features, that laxity toward security was part of making shareholders happy.

When the consequences of neglecting security reach a pain threshold such that it’s costing the company money instead of saving it money, a switch flips and building more secure software becomes part of the overall responsibility to the shareholders. But what about the vendors’ legal and moral/ethical responsibilities to their customers?

Legal liability for vulnerabilities: the pros and cons

Many (including Schneier, in the link referenced above) have advocated making software vendors legally liable for vulnerabilities in their software. They argue that the manufacturers of tangible goods are liable for defects in their products; if an automobile maker sells you a car with an exploding gas tank, and that results in injury or loss to you, you can take the company to court and recover compensation for your damages. You can even ask for punitive damages if the problem is particularly egregious.

A software vulnerability that’s exploited by an attacker can result in large monetary losses, the loss of invaluable data, and a negative impact on a business’s reputation that causes it to lose customers; in some industries (health care, emergency services, critical infrastructure services, aviation), it can even cause personal injury or loss of life.

Software companies (or rather, their attorneys) realized early on that legal liability for coding mistakes could easily bankrupt a company, since practically all complex code has hidden vulnerabilities, many of which are difficult or impossible to ferret out until the software is released. That’s because every system and network configuration is different and there is no way that a company can test (or even know about) every possible configuration.

Thus was born the End User License Agreement (EULA), a.k.a. that document that almost no one reads. Instead, 99% of those who install software (or subscribe to Software as a Service) scroll quickly through the lengthy tangle of inscrutable legalese in order to get to the end and click “I agree.” EULAs vary, but one thing they have in common is an agreement to hold the software vendor harmless for any kind of damages that might result from the use of the product.

The EULA is the reason that, when you hear about major security breaches and cyber attacks that cost companies millions, you don’t hear about big awards to those companies in lawsuits against the software vendors whose vulnerabilities were exploited to carry out the attacks. The EULA is a contractual agreement, entered into voluntarily by the customer, and when you “sign” it (by installing the software), you basically give up any right to make claims against the vendor or hold it legally responsible for anything that happens.

Individual commentators, consumer organizations and legislators have, many times over the years, called for laws that would change this, but none of these efforts has gotten much traction. Although the idea might sound attractive to those affected by a vulnerability, such legislation would almost certainly have negative unintended consequences.

Software vendors would find themselves in a position similar to physicians in the U.S., where malpractice lawsuits have resulted in a) many good medical doctors simply leaving the profession altogether and b) the necessity for those who stay to carry huge malpractice liability insurance policies that can cost tens of thousands of dollars per year. This in turn drives the cost of healthcare up as physicians and other providers must charge more for their services in order to pay for the insurance. Allowing software “malpractice” lawsuits could send the price of software and software services soaring into the stratosphere.


In this fifth part of a six-part series on security as a shared responsibility, we focused on software vendors and their legal obligations, such as liability for vulnerabilities, along with other software-related aspects of keeping our systems, networks and data safe. In the next and final installment, we’ll wrap up the discussion of software vendor responsibilities and look at the role of ISPs and cloud services companies to round out the series.
