T-Suite Podcast: Prepare & protect — Talking with InfoSec expert Steve Durbin

My first IT management role was at a manufacturing company, where I worked with a team to modernize the IT infrastructure and put more capabilities and security best practices into place.

A significant component of that upgrade was a pilot to integrate Windows NT (now Windows Server) into the corporate Novell infrastructure. As we surveyed servers around the organization, we realized that systems were not properly patched, and some required upgrades that were more than a year overdue. Worse, many did not have the necessary security software in place.

Bring it to the C-Suite


Before upgrading, sunsetting, patching or replacing those systems, we had to escalate some of the major issues to our CIO and his team. I do not think any of us were greatly concerned about the security risks because none of our systems were running plant operations. In other words, “security” to us meant “if it doesn’t run a plant, then there is not much of a security risk.”

Now, I am not saying we did not take security seriously. After all, we escalated our findings up to the CIO level. We simply ranked criticality by impact on plant operations rather than by outside threats to our “noncritical” systems. In today’s world, those “noncritical” systems would likely be a wrongdoer’s first attack vector, a key to ultimately gaining access to the rest of our systems.

Naturally, we gained approval from the CIO to perform all the upgrades, patches, etc. that we needed.

Plowing ahead

Our team wanted to get the new IT infrastructure in place very quickly, not just because of the security patches and aging systems, but also to support new software we were purchasing and building.

The IT upgrades were significant. We were not only expanding current server rooms but adding vital new infrastructure to a newly built facility nearby. To minimize disruption, we set up temporary backup systems, so if we took a server offline, another server was running to handle the load. It was a well-planned IT infrastructure upgrade, and it was going smoothly.

Attack(?)

One day, a number of us got in early to address some concerns about how we were handling IP addresses with our new Internet service provider. Not very exciting, I know, but stick with me here.

We heard the phones ringing outside the offices and peeked out to see what was going on with the helpdesk team. Some very senior people could not log in to their email clients. Our team jumped into action and verified that the email server was running, our user management software was functioning, and all the other things you would normally check were working just fine.

After hours of debugging and testing, we could not figure out why only certain people could not access their email. We thought there might be a virus, so we sacrificed one poor person’s computer and completely re-imaged it.

A very senior individual came down and asked if he could try his email on one of the helpdesk computers (laptops were not a thing yet). Sure enough, he could log in there, but not at his desk. Ultimately, we traced the problem to a router handling traffic from that particular area of the building, and all was good. That said, we were very nervous, and the rumors of a potential virus infecting our senior leadership team did not paint a good picture.

Later that week, in a manufacturing plant far, far away, some critical plant operations software lost access to the LAN and triggered an automatic shutdown. If you have ever worked in a manufacturing plant, you know there is a significant loss of revenue and potential safety issues when such a thing happens.

Have you ever met the general manager at a manufacturing plant? I have, and let’s say they can be scary, especially when something goes wrong. Unbeknownst to us at the time, there was a team of remote IT folks trying to figure out what the problem was with (I’m picturing) the plant manager hovering over them with a cigar clenched in his teeth and a shotgun tucked under his arm, yelling colorful words.

As the IT team dug into the problem, they learned a segment of the network was not working because it could not phone home to a directory server. If you are familiar with [especially older] directory servers, you know they are typically set up with a sort of command-and-control structure: a remote directory server may not function properly if its commanding directory server is not working.

The commanding directory servers were in a corporate facility and had all sorts of fallback plans in place in case one went down. As it turns out, the remote directory server at the plant was never connected to that corporate command server. Instead, the plant’s server was pointed directly at a backup command server, the very one we had just unplugged to perform upgrades. I’m not a Novell expert, but for whatever reason, with that server down, the plant was never re-routed to another command server.
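To illustrate why that hard-coded address defeated every redundancy plan upstream of it, here is a minimal Python sketch. The hostnames, the port check and the function names are all invented for illustration; the real environment was Novell NDS, not this code.

import socket

# Hypothetical hostnames, invented for this sketch.
PRIMARY = "dir-primary.corp.example"
BACKUPS = ["dir-backup1.corp.example", "dir-backup2.corp.example"]

def reachable(host, port=524, timeout=2.0):
    """Crude reachability check: can we open a TCP connection?
    (TCP 524 is the classic NetWare Core Protocol port; this check is
    just a stand-in for a real directory bind.)"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_directory_server():
    """What failover should do: try the primary, then each backup in turn."""
    for host in [PRIMARY] + BACKUPS:
        if reachable(host):
            return host
    raise RuntimeError("no directory server reachable")

# What the plant apparently had instead: a single hard-coded address
# pointing straight at one backup server. Unplug that one box for
# upgrades and there is nothing left to fall back to.
PLANT_DIRECTORY = "dir-backup1.corp.example"  # no fallback list

The design point is simply that failover has to exist in the client’s configuration (a list of servers, or dynamic discovery), not just in the data center; a single hard-coded address quietly bypasses all of it.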

To this day, I do not think anyone knows for sure why the plant was directly connecting to our backup server, but we assume it was human error. As you can imagine, that conspiratorial talk of a virus was whispered through many a hallway and email thread.

The awakening

The facility I worked in also had a general manager, who called a number of IT team members into his office for a post-mortem. We talked for hours about what we could have done differently and what we would do next. Of course, we had a lot of new IT policies and procedures to put in place, but the GM had a bigger plan.

As you grow in your career and find your way up the corporate ladder, you have to look at things differently. The GM, of course, saw how IT needed to improve its systems, but he also saw that there was no standard process for handling major situations, whether plant operational issues or major disasters.

Working with the entire C-Suite, our GM came up with our company’s first operational emergency response process. Of course, safety was No. 1, and all plants had emergency preparedness training, but what happens when you are cut off from the network? What if there is a catastrophic event? How can senior leaders communicate to address a PR issue? The company had a lot of plans in place, but no single, coordinated approach.

The company set up special rooms in every building, including secure sites outside those facilities. They created a secondary network and installed state-of-the-art communications equipment so people could communicate with each other. Senior leaders of the company would even run mock events to ensure they — and the infrastructure — were prepared. Who knows, maybe the next plant shutdown really could be caused by a virus?

T-Suite Podcast: An interview with Steve Durbin

Today’s executives face the threat of unknown actors lurking in their networks, preparing to do significant digital harm.

In this week’s T-Suite Podcast, I speak with Steve Durbin, managing director of the Information Security Forum, to discuss how executives must prepare themselves and protect their networked infrastructure.

You can find Steve at the Information Security Forum.
