Louise Chalupiak is an experienced project manager whose articles for TechGenix crackle with wit, style — and most of all — commonsense advice. This article from November takes a hard look at technical debt and how it can affect quality assurance — making it a perfect entry in the TechGenix 20 Best of 2020 series.
Technical debt is a relatively new buzzword credited to Ward Cunningham, one of the authors of the original Agile Manifesto. The Agile Manifesto is a set of guidelines intended to ensure that the Agile project management methodology reflects collaboration, communication, and a product that meets the needs of the customer. In this sense, technical debt refers to the impending cost of defects when the time allocated to quality assurance is reduced to expedite software delivery into production. And so, there are times when we assume technical debt intentionally.
What is technical debt?
Naturally, like every other noun and verb in the English language, the definition of the term has become rather subjective, and there is another type of technical debt that many of us are at risk of accumulating. The term is also used to refer to an issue that can easily occur given the rapid pace of technology change in today's environment.

In an effort to keep pace with a growing enterprise, we often add technical functionality and interfaces one at a time, without any ability to understand what the final product will look like. In many cases, we engage different teams, different developers, and even different vendors to update, upgrade, add system functionality, and even integrate with other systems. It then becomes improbable, if not impossible, to untangle the system and gain a complete understanding of the blueprint. When we are eventually forced to upgrade, due to new requirements or end-of-life systems, it becomes an enormous and expensive task. In many cases, it can be easier to start from scratch than to untangle the undocumented mass we have created.

This is the technical debt that most of us have lived and many of us have helped to create. It is a buildup of internal deficiencies that, looked at individually, seem quite simple to resolve. However, when we look at them in totality, we realize that it's time to get a bunch of smart people in a room and look at the big picture. Don't forget to invite the Finance team! Better yet, engage a few best practices and understand the early warning signs to keep ahead of technical debt and avoid that losing bet. The following are lessons learned from those who have played the ponies and lost.
Here are some ways to keep ahead of technical debt:
Assign and enforce a formal change process
When any application is moved into production, it is a best practice to put in place a formalized change process. This means that one person cannot make a change to the system in isolation, no matter what security level they hold. As an example, think of a Human Resources Information System (HRIS) that has functions for payroll, recruiting, onboarding, and core human capital data such as names, addresses, and salaries. If Human Resources decides to hire someone and add a new job classification without telling Payroll, there is a very good chance that the person will not get paid. Working in isolation, the "fix" from Payroll will most likely involve building a workaround of some kind to ensure that no one is missed in the next payroll register. If someone is missed, they are manually added in, and the issue joins the never-ending list of issues that have occurred. In today's environment of integrated systems, changes cannot be made in isolation. There needs to be a formal process to bring new configuration forward, and there needs to be a process of approval, development, quality assurance, testing, and move to production. Does this scream Agile to anyone else? That said, this is a best practice regardless of any guidelines or methodology your organization follows. Without a formalized change and approval process in place, the workarounds will eventually become so onerous to manage that we, you guessed it, increase the headcount to manage the workload. Sound familiar?
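The approval-development-QA-production pipeline described above can be sketched in a few lines of code. This is purely illustrative; the stage names and roles below are hypothetical, not taken from any specific change-management tool. The key rule it enforces is the one from the paragraph: no single person can push a change through alone.

```python
# Illustrative sketch of a formal change process: a change request must
# pass each gate in order, and the requester cannot approve their own change.
# All stage names and role names here are hypothetical.

STAGES = ["requested", "approved", "developed", "qa_tested", "in_production"]


class ChangeRequest:
    def __init__(self, description, requested_by):
        self.description = description
        self.requested_by = requested_by
        self.stage = "requested"
        self.history = [("requested", requested_by)]  # audit trail

    def advance(self, actor):
        """Move to the next stage; approval must come from someone else."""
        next_stage = STAGES[STAGES.index(self.stage) + 1]
        if next_stage == "approved" and actor == self.requested_by:
            raise PermissionError("Requester cannot approve their own change")
        self.stage = next_stage
        self.history.append((next_stage, actor))


cr = ChangeRequest("Add new job classification", requested_by="hr_admin")
cr.advance("payroll_lead")  # approved, by someone other than the requester
cr.advance("developer")     # developed
cr.advance("qa_analyst")    # qa_tested
cr.advance("release_mgr")   # in_production
print(cr.stage)  # in_production
```

The audit trail in `history` is the part that pays down debt later: when a future team asks why a configuration exists, the record of who approved and tested it is still there.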
Invest in a long-term skillset
With all the certifications, training programs, and methodologies that exist in technology today, it's hard to imagine that we all do things differently. But we do. If there is large turnover of human resources within your organization, it means that potentially every defect, every issue, and every change throughout the history of an application may have been solved by a different specialist. While there is no doubt that we all document everything we do meticulously, imagine if there were no documentation? How do we follow the logic of the person before us who implemented a fix, when they are long gone and their documentation is non-existent? There is a good chance that we can't. And so, we use our own logic and implement a fix without any knowledge or understanding of how the system was originally programmed or configured to work. Often, we find out down the road that our fix may have broken something else within the system. Without an understanding of the entire system, it becomes impossible to do regression testing. Regression testing is the process we run prior to moving a fix into production to ensure that what we have done does not break anything else within the system. There are programs that can complete this task for us, but they don't work if we have not documented the system and any changes made since implementation.
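To make the regression-testing idea concrete, here is a minimal sketch. The payroll functions and the checks are made up for illustration; the point is that the suite re-runs every documented behavior, not just the one we changed, so a fix to one function cannot silently break another.

```python
# Hypothetical payroll functions standing in for "the system".


def gross_pay(hours, rate):
    return hours * rate  # existing, documented behavior


def net_pay(hours, rate, tax_rate=0.2):
    # The function we just "fixed" builds on gross_pay.
    return gross_pay(hours, rate) * (1 - tax_rate)


def run_regression_suite():
    """Re-check every documented behavior, not only the changed code."""
    assert gross_pay(40, 25) == 1000  # untouched function still works
    assert net_pay(40, 25) == 800     # the fix itself behaves as intended
    assert net_pay(0, 25) == 0        # edge case from a previously logged defect
    return "all regression checks passed"


print(run_regression_suite())  # all regression checks passed
```

Notice that the suite is only as good as the documented behaviors it encodes, which is exactly the article's point: automated regression tools cannot help if nobody wrote down what the system is supposed to do.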
The solution? Recruit good people and keep good people. There is no lack of academic theory on this subject, so why is it that some companies still do not understand the value of investing in their employees? The long-term value is much greater than just the cost of recruiting and training.
Centralize system maintenance
The days of the decentralized help desk have not yet left us. As users, we like the decentralized help desk. We can walk over to their desk and talk to them about our issue. We can take them to our desk and physically show them the issue we are having. As a user, that one-on-one attention is valuable. However, in an enterprise environment, decentralized help means looking at issues in isolation. When we do that, we often do not have a full understanding of how often a certain issue is causing us pain. In addition, different people at different locations may be spending valuable time troubleshooting the same issue. This can result in changes to the application and integrations that are not well documented.
What we want is collaboration on all issues, so that we consider the need for longer-term fixes before individual workarounds become untraceable. In addition, we need to factor in recurring issues when we have discussions regarding the longevity of our applications.
Prohibit the use of rogue software
During the 1980s, as enterprise environments were just starting to understand the benefits of embracing technology, many of us were already champing at the bit to geek out. We saw the advantages and we started to implement our own contributions. As the years went by, and the technology became available, we built templates, and macros, and small databases, and function-heavy spreadsheets. We used them to capture the corporate data that we needed to do our operational jobs. Networks were not even a thought at the time, so those of us who had the luxury of access to any kind of a computer would save the information wherever we could. It was seldom secure.
Today, things are different. The enterprise is aware of the threats to the security of our data and most go to great lengths not only to ensure data security but to educate employees on awareness of security threats. It is no longer acceptable to build one-off tools that we save to our hard drives and use to capture corporate data. Corporate IT needs to educate and enforce this message. If the current systems do not meet the needs of our business users, we need to have open lines of communication so that IT becomes aware of the deficiency.
Technical debt: We have been warned!
The more technical debt we have, the greater the time and cost to re-engineer the deficient technology. Historically, we could build systems on the side to handle the work that corporate administration and processes did not. We updated and changed these rogue systems as we saw fit, sometimes without any consultation with other corporate resources.
Today, we understand the implications of this behavior and the need to build corporate processes, tools, and standards. In addition, we need to educate and enforce the use through a formal governance process. If we do not embrace best practices, we will continue to run the risk of acquiring technical debt. With that, we can look forward to the inevitable time and cost we will need to invest to dig ourselves out.