Nothing New

7 February 2018

It's never easy to admit you were wrong - or rather, I never find it easy to admit I was wrong. You may be different!

Having been in software development for nearly 40 years, I have occasionally fallen into the trap of thinking I've seen it all. After all, a wise man once wrote: "There is nothing new under the sun". Maybe that gives me enough justification for thinking that nothing is going to be different this time around. But that's a dangerous attitude to take, and perhaps I'm learning a lesson.

In particular, I'm thinking of static analysis - a software technique that is aimed at improving the quality of software code. To explain this, let us hugely simplify some steps in typical software development.

Let's imagine I have developed a program in a high-level language (e.g. the C language - because that is what our group normally use - and it is still very widely used). To turn this into code which can be run on a processor, we pass the C source code through a tool called a "compiler". The compiler's role is to take code in human-readable form and turn it into low-level instructions that the processor can use.
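
For a concrete, if trivial, illustration, here is a complete C source file and one common way of compiling it (the gcc invocation is just one example - any C compiler does the same job):

    /* hello.c - human-readable C source.  A compiler translates it
       into processor instructions, for example with:
           gcc hello.c -o hello */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world\n");
        return 0;
    }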

Quite limited

To do this, the compiler has to understand what I meant when I wrote the C code. The C programming language has grammar rules that I need to obey to make it possible for the compiler to do its job - and the compiler could be viewed as doing the translation from C language into machine language. In doing that translation, the compiler can spot some errors I might have made - but it is quite limited in its error-checking.
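
Here is a contrived sketch (my own, not OpenWare code) of what I mean: a function that a C compiler will accept without complaint, even with its warnings turned on, despite containing a genuine defect:

    /* leak.c - compiles cleanly, but leaks a file handle on the
       early-return path.  The compiler's grammar checks cannot
       see this: the code is perfectly legal C. */
    #include <stdio.h>

    int first_byte(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;

        int c = fgetc(f);
        if (c == EOF)
            return -1;    /* bug: f is never closed on this path */

        fclose(f);
        return c;
    }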

So, that is the simple view of the role of a compiler. Static analysis takes the error-checking part of this up to a different level. Static analysis tools are all about spotting errors that I have made in my code - they don't care about producing machine-readable code, but they do care about spotting things that have the potential to go wrong. In other words, they focus on improving the quality of the code, with the aim of ensuring that errors (which we belittle with the friendly term "bugs") do not get rolled out to the end customer.
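
To make the contrast concrete: a freely available static analysis tool such as cppcheck (named here purely as an illustration) can be pointed straight at the sketch above:

    $ cppcheck leak.c

A tool of that kind will typically report the early return as a resource leak of the file handle - exactly the kind of defect the compiler waved through without comment.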

So, that sounds good - right? Unfortunately, static analysis has not had a great early history. The theory is very good, but the practice (when I first made use of it) was very poor. The early static analysis tools which I dabbled with a few decades back tended to add very little to the quality of any code. The biggest problem I remember is tools which threw up lots of "false positives" - i.e. they would produce lists of what they thought were errors, but which in truth were not errors - just uses of the language which the tools weren't able to handle.
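
A hypothetical illustration (mine, not taken from any particular tool) of the kind of code that used to provoke this:

    /* By project convention, this function is never called with a
       NULL name - but an early analysis tool could not see that
       convention, and might warn of a possible NULL dereference. */
    #include <stdio.h>

    void print_name(const char *name)   /* callers guarantee non-NULL */
    {
        printf("%s\n", name);   /* old tools: "name may be NULL" */
    }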

Impatient

Now, I have to admit that maybe I wasn't using the best tools at that time, or maybe I just wasn't patient enough to learn how to work around the false positives, and make best use of the tools. Anyway, it left me with the impression that static analysis was not worth the effort, and that other code quality techniques (strong design methods, peer review, multi-stage testing, etc.) were far more effective than static analysis.

Thus, it was with some scepticism that I viewed suggestions from some of our OpenWare team that we should try new static analysis tools.

OpenWare is Abaco's Ethernet switch management software. It is in use by hundreds of customers in many thousands of switch products around the world, mainly in military and aerospace applications. Indeed, OpenWare is even used in a couple of different products in the International Space Station. Therefore, we take the quality of our code very seriously. OpenWare is a huge code base - nearly half a million lines of our own code, sitting in a hardware and GNU/Linux environment, all adding up to a full build of tens of millions of lines of code. If static analysis could help trap bugs before the testing stage, we'd be very happy. But I was very hesitant to put much effort into assessing it - because I thought "been there, done that".

Our main software architect on OpenWare, Charlie Wood, took on the task of assessing a modern static analysis tool, and he found that it can certainly add to the quality of our code.

In part 2 of this post, I’ll share our findings with you.

John Thomson

John Thomson has been working on software for over thirty years, having done a degree in Computing Science in the early 80s at Glasgow University. His focus has always been networking, in particular Local Area Networks, including years working on international standards for protocols - back in the early days of the Open Systems Interconnection (OSI). Having worked for a number of multinational companies (both large and small), he has been with Abaco (or its predecessors) since 2000, and leads the team of software people working on OpenWare - our Ethernet switch management suite. The OpenWare team is based in Edinburgh and California.

He and his wife took a career break for five years in the early 1990s, and were heavily involved in relief and development work (with Unicef and others) in Ulaanbaatar, Mongolia. He still claims to remember enough of the Mongolian language to ask for some very interesting foodstuffs!