Welcome to part 3 of this series of blog posts on the essential value of using tools to help you understand your applications. Last time, we looked at specific use cases that depend on understanding how data is mapped across applications, and we saw how the speed, accuracy, and thoroughness of tools like SMART TS XL from IN-COM deliver the kind of analysis modern development teams need. This time, we are going to see how we can turn dull, one-dimensional listings of code into colorful, two-dimensional, interactive representations of our applications that not only speed understanding but also enable deep analysis of application interdependencies.
This week we’re looking at:
- Understanding dependencies – Mapping dependencies is important, but finding them is hard.
- Code complexity can be calculated, and it matters – Learning how to use technology to determine which team member would be the best fit for a project.
Understanding dependencies
All programming languages depend on the ability to organize code into discrete functional pieces. This helps to isolate the code for ease of development and maintenance, to facilitate reuse, and to reduce complex problems to a conceptual level that is easier to understand.
From modular programming in the 1970s, through structured and object-oriented programming in the 1980s, we have steadily broken our programs, and our applications, down into smaller, more manageable units of work.
In today’s Service Oriented Architectures (SOA), we see this approach taken to its extreme: every function of every system becomes its own discrete service.
Whether the granularity of your application is Paragraphs and Sections (in COBOL) or Services and Methods (SOA), being able to visualize the original programmer’s design is essential. However, code is presented to us in a linear form which does not lend itself well to the comprehension of what is, essentially, a two-dimensional “call and return” hierarchy. In some languages, like COBOL, the hierarchy can be even less obvious if developers take shortcuts in how they label their routines or if they use deprecated features like PERFORM THROUGH/THRU.
Once again, we must rely on the powerful, fast and accurate analyses that can only be achieved through specialized tools that are able to reduce this complexity to a graphical form.
What if we could trace this hierarchy of dependencies outside the code to the environment (JCL, free-form documents, generated code, etc.) within which the code runs? What if we could reach across the application to understand statically and dynamically called code (subroutines)? With that, we could build a picture of how all the code fragments call and serve all the other code fragments in our application.
Now add the unique capability to interact with those hierarchies, and suddenly we have a powerful diagnostic tool. As we click from program to PERFORMed program, from SECTION to CALLed routine to all CALLing programs, we have an instant way to navigate the application structure and understand how the functions interact. This directly impacts our ability to identify the correct location for a program change, understand the impact on dependencies, and identify which other programs may need related changes, too.
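Under the hood, this kind of navigation is a graph traversal over call edges. As a minimal sketch (the routine names and edge list below are hypothetical, standing in for what an analysis tool would extract from real source), here is how "who calls this routine?" and "what does this program ultimately depend on?" can both be answered from the same data:

```python
from collections import deque

# Hypothetical call edges, as a tool might extract them from source:
# caller -> the routines it PERFORMs or CALLs.
CALL_EDGES = {
    "MAIN": ["VALIDATE-INPUT", "PROCESS-ORDER"],
    "PROCESS-ORDER": ["CALC-TAX", "WRITE-RECORD"],
    "VALIDATE-INPUT": ["WRITE-RECORD"],
    "CALC-TAX": [],
    "WRITE-RECORD": [],
}

def callers_of(target, edges):
    """All routines that invoke `target` directly (impact analysis)."""
    return sorted(c for c, callees in edges.items() if target in callees)

def reachable_from(start, edges):
    """Every routine reachable from `start` (transitive dependencies)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for callee in edges.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return sorted(seen)

print(callers_of("WRITE-RECORD", CALL_EDGES))
print(reachable_from("MAIN", CALL_EDGES))
```

The same two queries drive the interactive clicking described above: one step down the hierarchy is a lookup in the edge list, one step up is `callers_of`, and the full blast radius of a change is `reachable_from`.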
Code complexity can be calculated, and it matters
Imagine you need to change 10 programs of different sizes, ages and complexity. The changes range from one-line edits to extensive rewrites. How do you determine which member of your team should work on which change to which program?
Calling a routine to execute a shared piece of code adds moderate complexity. Conditional execution adds a little more; compound conditions, and especially nested conditions, add much more. The more pieces and the deeper the nesting, the greater the complexity. By aggregating these measures, we can determine the relative difficulty involved in amending any piece of code.
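The classic way to put a number on this is a McCabe-style cyclomatic complexity count: start at 1 and add 1 for every decision point. A minimal sketch for Python source, using only the standard-library `ast` module (a real tool would of course apply the same idea to COBOL, PL/I, and other languages):

```python
import ast

# Decision points that add a branch; each one increments complexity.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style estimate: 1 + number of decision points."""
    count = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BoolOp):
            # 'a and b and c' adds two decisions, one per extra operand.
            count += len(node.values) - 1
        elif isinstance(node, BRANCH_NODES):
            count += 1
    return count

straight_line = "def f(x):\n    return x + 1"
branchy = (
    "def g(x, y):\n"
    "    if x > 0 and y > 0:\n"
    "        return x\n"
    "    if y:\n"
    "        return y\n"
    "    return 0"
)
print(cyclomatic_complexity(straight_line))  # 1: no branches at all
print(cyclomatic_complexity(branchy))        # 4: two ifs plus one 'and'
```

Straight-line code scores 1 no matter how long it is; every branch and every extra condition pushes the score up, which matches the intuition that nesting and compound conditions are what make a change risky.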
We can combine this with other data we know about the program to estimate more accurately the time and effort required to make a change. Add the suspected defect density of the program to this report, and very quickly we have a dashboard that can help us make sure we assign the right developer to the right task.
By pooling this data, we can look at the inventory in aggregate and see which applications are reaching the point where they are more expensive to maintain. In this way, we can start portfolio management of our applications, discover where technical debt is accumulating, and understand the value of our inventory.
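As an illustration of the pooling step, the sketch below ranks a few programs by a combined maintenance-risk score. The program names, metric values, and weighting are all made up for the example; the point is only that once complexity and defect density are numbers, portfolio-level triage becomes a sort:

```python
# Hypothetical per-program metrics, as a dashboard export might look.
programs = [
    {"name": "PAYROLL", "complexity": 48, "defects_per_kloc": 3.2},
    {"name": "BILLING", "complexity": 15, "defects_per_kloc": 0.8},
    {"name": "REPORTS", "complexity": 30, "defects_per_kloc": 1.5},
]

def maintenance_score(p, w_complexity=1.0, w_defects=10.0):
    """Illustrative risk score; the weights are assumptions, not a standard."""
    return w_complexity * p["complexity"] + w_defects * p["defects_per_kloc"]

ranked = sorted(programs, key=maintenance_score, reverse=True)
for p in ranked:
    print(f'{p["name"]}: {maintenance_score(p):.1f}')
```

The highest-scoring programs are the candidates for your most experienced developers today, and for rewrite or retirement decisions as technical debt accumulates.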
Next time we’ll look at:
- Modernizing your application – if you are porting to another platform, where is the code that will be most difficult to change?
- Application Understanding is the heart of your DevOps toolchain.
We’ll introduce you to several case studies and take you through how real clients tackled complex problems with SMART TS XL. We’ll also provide additional content that you can use immediately to learn more about why Application Understanding is essential for modern application developers. We’ll feature the special challenges facing mainframe developers and show how they too can improve their software development lifecycle.
For more information check out these instructional videos, or you can request a demo.
You can contact us at +1 (214) 774-2284 or email us at info@in-com.com