In part two of our series, we start our look at the different ways application understanding solutions can (1) speed your development and (2) reduce the errors and risk that come from changing business-critical software in production. Over the next few weeks, we are going to look at eight techniques that empower you to get the most from your application understanding solutions.
This week we’re looking at:
Code is complex, highly interdependent and sometimes hard to understand. Every system has that one module that's fragile and can only be changed by a few battle-hardened developers. There are always legendary programs that must have unique compile and build switches set, passed down from generation to generation of programmers.
But what if there was a “Google” for your code? What if you could ask any question about the code and get back a complete list of all the possible answers and point-and-click your way through them just as easily as we do with Google results?
Let’s look at some examples of the kind of problems we get into when we don’t understand an application. For this example, we’re going to look at mainframe COBOL, but the same problem exists in all languages on all platforms.
Most programs make use of common code that is included in the source at compile time. These copybooks have three main advantages:
If copybooks are used for file layouts (as they most commonly are), then when the file layout changes, perhaps a 5-digit ZIP code becoming a 9-digit one, the change can be made in the copybook, and every program that uses that layout need only be recompiled to pick up the changed file design.
But how do you know which programs use the copybook? You need to be able to search your entire inventory of programs for the name of the included copybook. And that's the easiest part of the problem. If your application is written in COBOL, PL/I and Assembler, you'll have a COBOL COPY member, a PL/I INCLUDE member and an Assembler DSECT. To make sure that the change is applied correctly, you must find those programs too.
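To make the cross-language search problem concrete, here is a minimal Python sketch that scans a source tree for the three include mechanisms just mentioned. The regular expressions are deliberately simplified assumptions; real COBOL, PL/I and Assembler syntax permits many variations these patterns will miss, and a real tool parses the languages properly rather than pattern-matching.

```python
import re
from pathlib import Path

# Simplified patterns for the three include mechanisms. Note that a plain
# "COPY" line is ambiguous between COBOL and Assembler, which is exactly
# the "which COPY matches which DSECT?" problem described above.
PATTERNS = {
    "COBOL COPY":    re.compile(r"\bCOPY\s+([A-Z0-9-]+)", re.IGNORECASE),
    "PL/I %INCLUDE": re.compile(r"%INCLUDE\s+([A-Z0-9#@$]+)", re.IGNORECASE),
    "ASM COPY":      re.compile(r"^\s+COPY\s+([A-Z0-9#@$]+)",
                                re.IGNORECASE | re.MULTILINE),
}

def find_includes(source_dir, member_name):
    """Return (file, mechanism) pairs for files that pull in member_name."""
    hits = []
    for path in Path(source_dir).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for mechanism, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                if match.group(1).upper() == member_name.upper():
                    hits.append((str(path), mechanism))
    return hits
```

Even this toy version shows why "just search for the name" is not enough: one line can match more than one mechanism, and nothing here confirms that the matched members actually describe the same layout.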
But how can you confirm which DSECT, INCLUDE and COPY match up? To make any major change, you need more than just institutional knowledge to back up your decision. What if someone "refactored" a COBOL copybook and made their own version (for whatever reason at the time)? What if someone copied and pasted the copybook into their code instead of using the COPY statement? Suddenly the project's complexity goes far beyond the original simple change.
No matter how powerful your source code management system is, no matter how rigorous your programming policies and procedures are, your code inventory is going to have anomalies buried in it. Standards and methods from only a decade ago are very different from today's approach to software development. Many COBOL inventories have code written in the 1970s that is still part of production business systems. Those programs' authors are long gone, and their wisdom about the code is gone with them. In one recent research project, 17% of the source code inventory was found to be in error, with missing, duplicated, incomplete and corrupted programs.
Complexity increases by an order of magnitude when we try to understand dependencies at the field level. A data item could be called -ZIP, -ZIP-Code, -ZIPCode, ZIP-First-5, -ZIP-Old-Style, etc. Not only can an item have a half-dozen names, but it can be manipulated directly, such as:
MOVE IN-REC-ZIP-CODE TO WS-ZIP
And it can be manipulated indirectly, as in any of these:
READ IN-REC
WRITE OUT-REC
MOVE IN-REC TO WS-REFORMAT-REC
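The statements above never name the ZIP field, yet each one touches it through its enclosing record. Here is a minimal Python sketch of that idea, assuming we already know the record layouts; the LAYOUT mapping and the field names are hypothetical examples, and the substring matching is far cruder than what a real parser would do.

```python
# Hypothetical record layout: which elementary fields live inside
# which group item (record). A real tool derives this from the
# copybook rather than hard-coding it.
LAYOUT = {
    "IN-REC": ["IN-REC-ZIP-CODE", "IN-REC-NAME"],
}

def touches(line, field):
    """True if the statement references the field directly, or references
    a group item that contains it (an indirect touch, as with READ,
    WRITE, or a group-level MOVE). Uses crude substring matching."""
    text = line.upper()
    field = field.upper()
    if field in text:
        return True
    return any(field in (m.upper() for m in members)
               and group.upper() in text
               for group, members in LAYOUT.items())
```

For example, `touches("READ IN-REC", "IN-REC-ZIP-CODE")` is true even though the field is never named, which is precisely why plain text search misses these references.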
Understanding every place a field is touched, directly and indirectly, is a massively complex problem. Unfortunately, the simple search tools provided by most IDEs cannot understand source code at this level of sophistication.
Try finding every place where a data item is loaded with data, has its value changed or referenced, where decisions are made on its value and where it is initialized, using only "Find" or "Search." A project like that could take you days or weeks, and you still might not be certain that you caught everything.
Imagine if the whole lifecycle (CRUD – Create, Read, Update and Delete) of your data items could be presented in a simple-to-follow report that lets you click from item to item.
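At heart, such a CRUD report classifies every statement that touches a data item. As a toy illustration (not how any particular tool works), here is a Python sketch that applies a few simplistic rules to COBOL-style lines; the handful of verbs handled and the field names in the test data are assumptions, and a real analyzer would parse the grammar and resolve group-level references as well.

```python
import re

def crud_report(lines, item):
    """Classify each line that names `item` into a rough CRUD bucket.
    Only direct references are found; indirect (group-level) touches
    would need layout knowledge on top of this."""
    item_re = re.compile(r"\b" + re.escape(item) + r"\b", re.IGNORECASE)
    update_re = re.compile(r"\bTO\s+" + re.escape(item) + r"\b", re.IGNORECASE)
    report = []
    for lineno, line in enumerate(lines, start=1):
        if not item_re.search(line):
            continue
        verb = line.strip().split()[0].upper()
        if verb == "INITIALIZE":
            action = "Create (initialize)"
        elif verb == "MOVE":
            # MOVE x TO item writes the item; MOVE item TO x reads it.
            action = "Update" if update_re.search(line) else "Read"
        elif verb in ("IF", "EVALUATE", "WHEN"):
            action = "Read (decision)"
        else:
            action = "Read"
        report.append((lineno, action, line.strip()))
    return report
```

Feeding it a few lines that touch a hypothetical WS-ZIP field yields a line-by-line lifecycle listing, the flat-file cousin of the clickable report described above.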
Tools like SMART TS XL are crucial in delivering this kind of application understanding to the development team. Whether a program is a week old or was written as part of a now-departed application four decades ago, SMART TS XL empowers programmers with insight and understanding that was previously impossible. It delivers comprehensive results fast, so you can spend more time identifying each needed change and making it.
In the next blog, we’ll look at:
We’ll introduce you to several case studies and take you through how real clients tackled complex problems with SMART TS XL. We’ll also provide additional content that you can use immediately to learn more about why Application Understanding is essential for modern application developers. We’ll feature the special challenges facing mainframe developers and show how they too can improve their software development lifecycle.
Here’s the next challenge: You have 10 programs to change, but they’re of different sizes, ages and complexity. The changes range from one-line fixes to near rewrites. How do you determine which member of your team should work on which change to which program?
For more information check out these instructional videos, or you can request a demo.
Part 1 | Part 2 | Part 3 | Part 4 | Part 5 | Part 6