If you would like to read the other parts in this article series please go to:
- Complacency: the 8th Deadly Sin of IT Security (Part 1)
- Complacency: the 8th Deadly Sin of IT Security (Part 3)
In Part 1 of this article, I talked about the general dangers of growing complacent in our work, and letting experience, familiarity and overconfidence dull our senses to the threats posed by phishing attacks that we think no one would ever fall for. This time, we’ll discuss some types of security breach attempts we commonly dismiss as “too obvious” that might merit a second look, particularly those of the social engineering variety that target you rather than your users. Then in the last of our three articles, we’ll move on to ways you can avoid falling under the spell of complacency, and how you can prevent complacency from setting in on the part of your users.
Advanced social engineering
We all know about the dangers of social engineering, and you’ve undoubtedly warned your users about the common techniques used by social engineers. We’ve told them to never, ever give out their passwords over the phone to someone claiming to be from the IT department. We’ve cautioned them to watch out for shoulder surfers. We’ve explained how “people hackers” might try to engage them in conversation to ferret out information that could help in guessing their passwords.
IT pros understand very well that, despite these precautions, users are still very vulnerable to the wiles of a good social engineer. But we may become complacent about our own vulnerability. A truly talented – and extremely confident – social engineer knows that you expect him to try to exploit the naivety of the less tech-savvy folks in your organization, so he may bypass them altogether and target you instead. After all, if he can gain access to your account, which probably has administrative privileges, that’s a much bigger prize than just getting into a regular user’s account.
Using an analogy that compares network attacks to terrorism, you might think of your users as “soft targets” while you and other IT professionals represent “hard targets.” Sometimes attackers are successful because they strike where it’s least expected. Let’s look at some of the advanced techniques that social engineers use against IT professionals as well as users.
Beware the transitive trust
IT professionals are familiar with the concept of the transitive trust. That was one of those things you learned all about when you studied Active Directory. In a transitive trust relationship, if A trusts B and B trusts C, then A trusts C, too.
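The same rule can be sketched in a few lines of code. This is purely illustrative (it is not how Active Directory actually evaluates trusts): the set of parties A effectively trusts is the transitive closure of the direct-trust graph, which is exactly why one misplaced trust can fan out much further than intended.

```python
def effective_trusts(direct_trusts, start):
    """Return everyone `start` ends up trusting via transitive trust.

    `direct_trusts` maps a name to the names it directly trusts;
    the result is the transitive closure reachable from `start`.
    """
    trusted, frontier = set(), [start]
    while frontier:
        current = frontier.pop()
        for target in direct_trusts.get(current, ()):
            if target not in trusted:
                trusted.add(target)
                frontier.append(target)
    return trusted

# A trusts B, and B trusts C, so A effectively trusts C as well,
# even though A never made any decision about C.
trusts = {"A": ["B"], "B": ["C"]}
print(effective_trusts(trusts, "A"))
```

Note that C appears in A’s effective trust set even though no one at A ever vetted C – which is precisely the gap the social engineer below exploits.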
This plays out all the time in real life, as well, and social engineers take advantage of this tendency to allow someone we know to vouch for a stranger. If Bob, whom you don’t know, walks up to the server room and says he’s with the company that made your custom software and wants to check some things out and show you some new features, you would (we hope!) recognize that this could be a social engineering attempt and carefully check out his identity before you let him in.
But what if the CEO of your company brings Bob around and introduces him as a rep of the custom software company? Because Bob is now being vouched for by someone you trust, you might be tempted to assume Bob is legit. What you don’t know is that the CEO just met Bob (whose real name is Ted) today, when our social engineer arranged to bump into him at the fitness club across the street before work. Because Ted had done his homework, he dropped a few names of “mutual friends,” and when he “discovered” that the CEO was with your company, which happened to be on his list to call on today, he walked over with the CEO and asked to be shown to the IT department.
When I was a law enforcement officer, we had a saying: “In God we trust; everyone else we run through NCIC (the National Crime Information Center’s computerized index of criminal history information).” It should be a matter of policy that, before you even think about giving persons you don’t personally know well access to servers, systems or IT facilities, you verify that they are who they say they are.
That means more than asking them to show a driver’s license; it means calling the company for which they claim to work (not using a phone number they give you, but one that you look up yourself) and confirming that such a person is supposed to be on your premises at this time and matches the description of the person in front of you.
Does such action run the risk of annoying your CEO? Maybe – if it comes as a surprise. That’s why it should be established in written policy and approved by management long beforehand. But even if it does irritate a higher-up, that’s not nearly as bad as the reaction of that same CEO if you have to tell him that his “friend” stole company trade secrets off your systems or introduced malware onto your network.
Trusting those you know (or think you know because they claim to be “one of us”) is basic human nature. And despite the ingrained suspicion that comes with police training, law enforcement officers have been known to fall prey to the tendency to trust anyone who displays a legitimate-appearing badge.
The “gated community” syndrome
In some parts of the country, gated communities have become a popular response to rising crime rates. Yet some authorities say the gates don’t really do much to prevent criminal activity. In fact, some say that those who live in gated communities may have a false sense of security that can cause them to be more vulnerable. They make the assumption that, because there are access controls, anyone who is inside the gates must automatically be deemed “okay.”
This form of complacency is dangerous because it causes you to let your guard down when you’re inside your “safe haven.” This can translate to the IT world in a number of different ways. It’s practiced by lazy (or simply uninformed) admins, usually in small organizations, who rely on logon credentials alone to control access to network resources. Rather than going to the trouble of setting permissions on individual files, folders and other resources, they assume that anyone who was able to log onto the network can be trusted with any and all of the resources on it.
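The fix is to separate authentication (“who are you?”) from per-resource authorization (“what may you touch?”). Here is a minimal, deny-by-default sketch; the user names, file names and `can_read` helper are all invented for illustration:

```python
# Hypothetical per-resource access lists. In a real network these would be
# NTFS/file-system ACLs, not a Python dictionary.
ACLS = {
    "payroll.xlsx": {"alice"},           # finance staff only
    "handbook.pdf": {"alice", "bob"},    # any authenticated user
}

def can_read(user, resource):
    """Deny by default: a successful logon proves identity,
    but each resource still needs its own authorization check."""
    return user in ACLS.get(resource, set())
```

In the “gated community” model, the equivalent of `can_read` simply returns True for every logged-on user, so one phished password exposes everything at once.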
It can also apply to physical spaces. Some admins put a lock on the door of the server room and then assume that means security can be more lax within those walls. After all, only trusted IT people will be in there, so it’s okay to have your master list of complex passwords written down in a notebook that you keep in an unlocked drawer in your desk, or to stay logged onto the server locally when you take a bathroom break, without locking the machine.
Social engineers are well aware of this syndrome and will look for a chance to exploit it. They make themselves appear to be “insiders” – as Ted did in the scenario above, for example, and then they create distractions or take advantage of naturally occurring ones so they can sneak a peek into that drawer or palm that USB stick when you’re not looking.
All that glitters is not security software
You’re probably well aware of the common consumer-targeted attack that uses malware posing as security software (or as an update to an existing anti-virus or other security program). A more sophisticated version of this that would target the IT department is the “security vendor” who wants to demonstrate a new product that will keep you safe from attacks and breaches, but which is really malicious code that opens up a back door to your systems. Some particularly dedicated social engineers might even set themselves up as security consultants and attempt to sell their services to companies whose networks they want to penetrate. This affords them the opportunity not only to breach your security, but also to get you to pay them for doing it.
Attitudes that breed complacency
I’m sure some readers will scoff at the examples given above. They’ll dismiss the very possibility that they – trained professionals – could be taken in by any social engineering scheme. They’ll protest that no social engineer is going to go to the trouble of establishing a business and using it to attack other businesses. And they’ll be particularly resistant to the idea that their own disbelief is one of the attacker’s best weapons. Incredulity breeds complacency.
Social engineers have been around much longer than computers and networks; we used to call them “con men” (and women – more on that in a minute). Some of these criminals go after the young, the elderly, the naïve, the uneducated; however, there are many others who focus on fooling successful, smart people who consider themselves immune to such schemes. And there are plenty of examples of professionals – law enforcement officers, even fellow con artists – being taken in by their persuasive abilities. Arrogance is another element in the breeding of complacency.
Arrogance can also take the form of placing too much confidence in your security solutions and personnel. You might believe that because your company has poured large amounts of money into the very best security hardware and software, because you’ve hired security experts to staff the IT department, because you spend so much of your time staying abreast of all the latest threats and countermeasures, you’re invincible. In law enforcement, that’s an attitude that has gotten many cocky rookies killed. Remember that even Superman wasn’t invulnerable to everything.
Arrogance can even manifest as dismissal of certain types of people as threats. Some police officers let their guard down when dealing with women or others they see as “weaker” – sometimes to their great detriment. Male IT pros, too, may have a tendency to be more trusting of women. While it’s true that the vast majority of violent criminals and the vast majority of cyber criminals are male, it only takes one female hacker who exploits your assumption to bring down your network.
At the other end of the spectrum is the opposite of arrogance: the “I’m not worth it” outlook. Some people think no con artist would bother with them because they don’t have anything of value to be stolen. And some companies adopt the attitude that attackers will pass them by because they don’t have any important or “secret” data on their networks, because they are “little fish” whose security is not worth the trouble of breaching. This type of self-deprecation is based on the hope that a form of security through obscurity will protect you, and it too can breed complacency.
Right on target
There’s an old saying that “You don’t know what you have until you lose it.” In the IT security world, that could be modified to “You don’t know the value of your data until your security is breached.” Most companies today have at least some digital information that could do damage if it fell into the wrong hands. This ranges from your organization’s financials to internal memos that discuss how to outwit your competitors to personal information about your employees and/or clients.
Even if you aren’t in a regulated industry that requires you to protect certain data, a data breach could hurt the company and/or individuals associated with it.
While random attacks are certainly still around, more and more cyber criminals are becoming discriminating and homing in on specific organizations or those that fit a specific profile. Modern trends such as the advanced persistent threat (APT) and spear phishing are targeted types of attacks. Social engineers who have a particular target in mind will go to great lengths to accomplish their missions. The more complacent you’ve grown, the more likely it is that you’ll fall for their con games.
In Parts 1 and 2 of this 3-part series on the dangers of complacency in the IT security world, we’ve talked about what leads to complacent attitudes and how attackers take advantage of our complacency to further their attacks. In Part 3, we’ll discuss some tips for maintaining vigilance without going overboard.