Yea, I hated all my labour which I had taken under the sun: because I should leave it unto the man that shall be after me. And who knoweth whether he shall be a wise man or a fool? yet shall he have rule over all my labour wherein I have laboured, and wherein I have shewed myself wise under the sun. This is also vanity. Ecclesiastes 2:18-19
Under the wise rule of Solomon, Israel had become the most powerful nation in its locality. Great projects had been undertaken to improve both infrastructure and commerce. A good nose for a contract, coupled with exploitation of Israel's position on the major north-south trade routes, made Solomon rich and famous. A willingness to tackle and fund logistic problems had made his armies extremely effective (a principle later exemplified by the Romans). But one thing clearly bothered him: what would happen to the empire when he died? Would his successor be able to use and maintain the intricate set of relationships that Solomon had constructed, or would it all collapse around his ears?
In the event Solomon's fears proved well founded. Rehoboam (his son) came to the throne, and within a very short period of time Israel had split into two sections. His error was to ignore a problem that had been festering in Solomon's time without erupting: to fund his expansion programs Solomon had had to set high taxes and occasionally use forced labour. Upon his accession the people asked Rehoboam to lighten the load; he replied that he intended to make it heavier. The rest is history (1 Kings 12).
It is unlikely that any of us will rule a kingdom, and we will almost certainly never have Solomon's experience directly, yet each of us (should) wrestle with this problem daily. In previous articles I have shown that a specification will be subject to change (almost) by definition. I have shown too that zero defects is an unrealistic goal, and that we should therefore expect to be maintaining an application throughout its lifetime. In my last article I discussed the basis of offensive programming: attempting to produce an environment in which it is hard for bugs to survive undetected.
In this article I wish to look at strategies (and problems) involved with programming for the long term.
One of the first things to grasp is that, contrary to expectation, maintenance is much harder than programming (design is harder still, but we'll come to that later). Why? Because the person performing the maintenance is in a much weaker position than the developer.
I can give you an example here. Today I have been performing optimisations in the browse class. The code is intricate and delicate because we are aiming to produce SQL access times that are optimal. (If ever you see the word 'optimal' in a spec it means you are going to be on the wrong side of the 90-10 rule.) Yet, as far as I can tell, the code was right first time. I certainly knew what I was trying to do and how to do it; in fact I even knew the (rough) line numbers the edits would go on before I started. In contrast, I also had to go and read some code I wrote in the Report Writer print engine a couple of years ago. It took me twenty minutes just to find the file! (I couldn't remember the object names I had used.) When I got to the code I had to read through line by line just to remember how it all worked. Although I had written the code and recognised the style, I had to learn what it did just the same as I would if I had picked up someone else's code.
The maintainer's actual position is usually weaker yet. It is easy to justify time spent designing code: it is usually early in the product cycle (before the old version stops selling), and you can hopefully show the benefit of writing code a particular way. Put another way, we are used to there being an R in R&D, but you don't often hear of R&M (Research and Maintenance). Whilst the developer is paid to think about and understand a whole application (or sub-application), the maintainer is expected to find the bug. (The implication is that there is only one line to worry about, so it is a much simpler process.) The effect of this is that the developer is working with knowledge of context; the maintainer is working without that contextual information, even if they are the same person (see above).
So not only is our maintainer without inherent knowledge, but he is also without the resources allocated to get that knowledge. And it gets worse. The developer is working with a system designed from the ground up; everything should be clean and elegant. Over time that code degrades (see previous articles), so the maintainer is not just less qualified to make the changes: he is working with a rather more dangerous code base than the developer was, which means he is more likely to hit a booby-trap.
Then consider the psychology. We all enjoy writing good code and we take pride in our work. And we all hate fixing bugs, especially old bugs in old projects that we would really rather forget. Our attitude will therefore be different: we will be looking for simple, expedient solutions to what could be complex and subtle problems. Look at it this way: if we got it wrong when we did know what we were doing, what chance do we have when we don't?
All of this combines to make a simple fact. We are in a much better position than Solomon: we do know whether the person coming after us will be a wise man or a fool. He will be a fool.
Having established that the person maintaining our code will be a fool we have to decide on a strategy of coding that allows for this. Here are some of the ones I have come across:
This is quite often corporate strategy at a company that has encountered one or two 'A' type programmers. The company defines a development style that basically precludes any of the programmers getting too clever. Often you will find a number of language constructs outlawed. Almost certainly there will be 'no assembler' rules, and maybe 'no API' rules. Sometimes it will even be laid down that certain algorithms and coding techniques (e.g. recursion) are no-go areas. These rules are usually encapsulated within a 'standards' document which defines what is and isn't good coding.
It is assumed that in an environment where all the code is extremely simple it will be easy to maintain.
This scheme has some plus points and is quite popular. It does, however, have some major drawbacks.
This is the notion most popular with the maintainers themselves. The idea is that if the developer is careful enough, chooses good variable names, makes copious comments, uses clear programming constructs and generally thinks things out properly, then the maintainer coming along will be able to see at a glance what is going on and fix it. The nice thing about this strategy, from the maintainer's point of view, is that when the maintainer screws the code up it is the developer's fault for not holding their hand tightly enough!
And I do mean when. This is the strategy most likely to cause complete havoc. If there is one thing worse than a fool it is a confident fool, and this system is superb at tricking someone into thinking they understand what is going on when they don't. So why doesn't the hand-holding actually work? Because the information being passed on is not of adequate quality, for a number of reasons:
I will be going into the pros and cons of different code documentation styles in a future article.
This is my preferred approach. My ideal version control system would allow the developer to set questions for each source module. A maintainer would be able to get read access to the file just by asking; to modify the source he would need to answer the ten questions set by the developer. Wrong answers, no modifications. It may seem radical, but think about it: do you want the maintainer's ignorance found by the system, or by the beta testers (or paying customers; see earlier articles)?
In the absence of such a VCS we need to come up with an alternative.
My approach is to break the source into manageable chunks and define small interfaces, so that the maintainer is in a closed world, then do everything in as tight and correct a way as possible. I try to avoid putting anything in the code that gets between the reader and the algorithm. The code is as good as I can get it, and the maintainer will have to work out what is going on. Further, as all the chunks are offensively programmed, if the maintainer mis-maintains then the rest of the system will complain volubly. You could argue that I am slowing the maintainer down, and you are correct: the time to first edit goes up, but the time to correct edit comes down.
If you've been following my ramblings you will have felt a single theme coming across: maintenance costs, big time. I believe it is better to have a reliable strategy that makes maintenance manageable than an optimistic strategy that makes maintenance a lottery.
In future articles I want to expand on some of these general principles and actually get down to some examples from the ABC libraries, but there is no point looking at individual lines if the basis isn't in place. So I will beg your indulgence for a second set of DABs rules, this time looking at strategies for actually writing code.
The only sure-fire way of avoiding the maintenance cost of code is to avoid writing it. This may seem obvious, but it is often ignored. There is a maintenance cost (in $) for every line of code in your system, which leads to a simple fact: writing code is a bad thing. I am astonished when I hear of software shops paying programmers by the line! This is completely wrong. Programmers should be given a fixed bonus related to the functionality of the module they are writing, and should then be fined (from the bonus) for every line of code they use implementing the solution. This more accurately reflects the true effect on the software house of the programmer working (you get paid for the functionality, and then have to pay for the support).
A key methodology for avoiding code-writing is code reuse. This is one of the promises of OOP, which I shall investigate next month.
This ties in to the previous rule set: 'the spec is always wrong'. If you are having to write some code (and you have tried to reuse), then it suggests you are stepping into the unknown. Given that you are now heading this way, you will probably head this way again, so over-engineer the solution. Try to work out what you will need for this release and the next, and design accordingly. It may be that for time reasons you cannot actually implement all of your design up front, but you can at least avoid burning bridges. The counter-argument is always the calendar. Go with your instinct. If you know that what you are writing is really a waste of time then just hack it; if this code is going to be strategic then code it properly and take the heat. If you have to hack it, then put the code in a separate module and pay extra attention to 5.
In any application there will be a diversity of problems. Some are simple (easy), some detailed (lots of typing but easy), some complex (ice-pack job). Make sure these problems are separated out in your source.
This is so easy to do but can help tremendously. Let me show you two little snippets of compiler source:
switch ( ka )
Now, I would imagine that without any clues most of you could guess what this does. I would be confident that if I said "disable Landscape support on reports" you could work out what to do. The interesting thing is that this snippet comes from the largest procedure in the compiler (by a big distance).
Now for a little baby procedure:
t = *tt;
for ( fldptr *ft = &t->link.fld; *ft; ft = &(*ft)->next )
{
    fldptr fn = new fldtag;
    *fn = **ft;                          // Copy across old field record
    fn->number = ++fieldno;
    *ft = fn;                            // Point old next field at new record
    if ( (*ft)->id )                     // New prefixed id
        (*ft)->id = (*ft)->id->newprefix( x, newprefix );
    fn->type = fn->type->copygroup( x ); // Copy group
}
return tt == &this ? t : this;
Get the picture? As soon as I know which procedure a problem is in (or which procedure an extension needs to be made to), I have accurate information about the danger and timeliness of any changes required. On a larger project it would also enable me to distribute code to others in a suitable fashion.
Most importantly, it minimises the amount of 'nasty' code. Imagine I had scattered 100 lines of complexity amongst 4,000 lines of source (these numbers are taken from code I have seen). I now have 4,100 lines of code, any one of which could be lethal. Separate them into different sections and I have 100 nasty lines and 4,000 lines of code I don't have to worry about: a 40x productivity increase for little cost.
Always do the nastiest, most complex bit first (unless it is completely peripheral to the execution sequence, such as an import routine). There are many reasons for this.
This is really an insurance policy as much as anything else. As discussed previously, it helps reduce the effect of bugs, but it also reduces the flux of the system. I always cringe if I ask for a change in one thing and am told "that will mean we have to change x, y and z". If you ignore 2-4 then get this one right: provided 5 is in place, you can ruthlessly chop the system into shape when you need to. (Defining good interfaces will be a future article.)
Once we have written a piece of code we tend to feel paternal towards it; we like to feel it will remain in the system unscathed for years, even if it doesn't quite work. It is not uncommon for a particular lump of code to gain notoriety even during the development phase. Often it will be a piece of experimental code that worked so well it was adopted lock, stock and barrel. Then slowly the warts and wrinkles start coming out, but we try to patch it together to 'make release'. Rip the code out and put in some code that works.
You are coding to a specification that has been designed to last. You know the level of complexity or detail you are dealing with. You are inside a watertight compartment, so you only have to deal with the problem in hand. Other bits of the system have to live up to spec or they are removed. Now it's all down to you, so GO FOR IT! Give it your best shot. Do everything as well as you can. You will feel better, the system will run better, and once your maintainer has come up to speed they will be better for having followed you (and the system shouldn't degrade over time).