How Far Will Programmers Go?
Sunday, October 8, 2006 2:13:46 PM
Certain software technologies may indeed turn out to be less than expected. But at least they tried. I say right here and now that anyone, and I mean ANYONE, who says something is impossible with software should change profession immediately. We don't need you. We don't want you. In short, you're useless to the software development community. You can say something is difficult. You can say the technology doesn't currently exist. You can say it would be too costly. You can say you don't know how to implement it in time. You can say the management of resources and maintenance would overshadow any productivity from the product. But NEVER, EVER say there's No Silver Bullet.
I just reread this paper and I have to say I found it mindblowingly stupid. I mentioned this paper in passing quite a while back. I had read it long before that and kind of agreed with it at the time. But I didn't know then what I know now. Since my quest for better and simpler software development started, I've learned quite a lot from research. This research didn't turn up anything really useful; rather, I found out why things suck so hard. So I can now see the vast holes in the arguments made in No Silver Bullet that I could not before.
Before we get to that, let me present a few other examples similar to the No Silver Bullet paper that ended up being completely and utterly wrong. Remember that any given negative statement will probably end up being proved wrong eventually.
There had been speculation in the late 1940s that it might be impossible to break the sound barrier (that was why it was called a barrier), and the early tests of the Convair F-102 gave some credence to this fear. 
However, our experiment does show that the generally held misconception that "nothing can travel faster than the speed of light" is wrong. - Lijun Wang, 2000
Heavier-than-air flying machines are impossible. - Lord Kelvin, 1895 
Radio has no future - Lord Kelvin, 1897 
Fooling around with alternating current is just a waste of time. Nobody will use it, ever. - Thomas Edison, 1889 
Linux is obsolete - Andrew S. Tanenbaum, 1992 
That last one is perhaps unfair, but Tanenbaum's Modern Operating Systems, Second Edition is complete crap, so it's expected that he'd continue the trend in his remarks.
But the above quotes show that negative comments about the future usually end up being wrong. So saying there is no silver bullet is likewise most certainly wrong.
Let's look into a few details of Brooks' arguments. Before even getting into the subject matter, he talks about medicine and how the germ theory "dashed all hopes of magical solutions". Sorry to burst the bubble, but the analogy here is completely incorrect. I'm a pro at bad analogies and can recognise them right away. The germ theory is more like "debugging" (no pun intended), where you have to be like a detective. There's no quick fix when you have to use someone else's system. You have to take time and learn it, sure. No one is there to provide you the specs or documentation. What you're doing is reverse engineering, not construction. If this were somehow related to the construction of a system where you had first-hand control of how to build it, then I might agree. Even in the sense of maintenance, I cannot accept this analogy, as documentation would be a requirement.
Some time later, he says that the closer you get to software, the more complicated the system becomes. And once you get to the software, it gets progressively more complicated from there. This is an interesting view, but I think he mistakes complexity for advances in speed and efficiency. This complexity is not necessary. Those advances just make the computer run faster. Is system software complicated? Sure. It doesn't have to be though. It depends on the way the hardware system is designed, not on the hardware itself. For example, the PC has a horrible hardware layout. Things like Plug and Play only came out later and were "perfected" much after that. Sharing multiple devices per IRQ was not in place until decades after the introduction of the PC, while most other architectures had fewer IRQs, yet never had this problem. These are problems of their own doing. They can be, and obviously have been, made simpler. I could go on with other examples.
We're just into the subsection on complexity, still in the first section, and he goes bananas. He wants us to believe that software has no repeatable parts the way computers, buildings, cars and such do. Wow! I guess reuse is not a goal he thinks we can achieve either. The rest of this section is nonsense. I'll skip it for now. In the conformity subsection, he compares software to physics. This is reverse engineering. Again with the very same false analogy. Moving on.
Actually, I couldn't find anything else worth mentioning. The rest is drivel. It's an old article, so I won't go more into it. The above comments are enough for what I want to discuss. I left a few things dangling. I want to go into more detail about them.
To sum up his paper, he doesn't think software is manageable like hardware. Well, let's look at hardware. The very first computer (or concept of one) didn't have anything else to base itself on. It had to be built from what programmers call "abstract" concepts, which were then put down in hardware. All too often, it is assumed that hardware is a concept pointing to itself, that it didn't have any earlier concepts behind it. If there are properties of hardware construction that are useful, should we not use them? These would not be hardware concepts, but concepts that predate hardware. Once you start seeing things in perspective, you can remove false stigma.
What is the basic concept in hardware? It's the same for almost all components. You have three stages: an input, some processing, and an output.
Some of you may find this familiar. It's often used for n-tier software. It's used in a lot of other places as well. So hardware concepts are well rooted in software design already.
Back to hardware, each output is connected to the input of the next component. You can have multiple inputs and multiple outputs. You can also build larger components such as binary adders where there are many internal components. The larger components still have these three stages. BTW, Brooks' paper argues that composite software components are built of different components while hardware composites are built from the same basic components. Here, would it not be more intelligent to ask where this extra complexity comes from rather than accept this difference at face value? It shows a discrepancy that deserves further attention.
There is nothing in software that can't be done in hardware since our software runs on hardware anyhow. Could we not use this three stage composition (input/processing/output) in software? Of course we can, but unfortunately, almost no one went down that path. Even Flow Based Programming only goes so far with the base components using conventional languages.
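To make that concrete, here is a minimal sketch of the three stage idea written in a conventional language. It is only an illustration, not how any existing tool works, and it has to fall back on ordinary functions to stand in for components: a few basic "gates" are wired output-to-input into a half adder, and two half adders into a full adder, each composite keeping the same input/processing/output shape. All of the names are mine.

    # A minimal sketch of components with the three stages:
    # input, processing, output. The names are invented for illustration.
    def xor_gate(a, b):
        # inputs: two bits; processing: exclusive or; output: one bit
        return a ^ b

    def and_gate(a, b):
        return a & b

    def or_gate(a, b):
        return a | b

    def half_adder(a, b):
        # a composite built from the same basic components
        return xor_gate(a, b), and_gate(a, b)      # (sum, carry)

    def full_adder(a, b, carry_in):
        # a larger composite built from two half adders and an or gate;
        # it still has the same three stages as its parts
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, or_gate(c1, c2)                 # (sum, carry_out)

    print(full_adder(1, 1, 1))                     # prints (1, 1)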
Rather than dismiss hardware for the sake of hardware, why not look at the concepts it uses? You can build composite components and link them up building ever larger components. This concept is not unique to hardware. Our road network functions much the same way. We travel, do stuff and go somewhere else. There are different scales of travel. Intercontinental, international, national, state-wide or provincial, regional and city wide. There's no actual physical outline where these boundaries occur. It's a political boundary, not a physical one. In software, this doesn't happen though. We force the complexity there. A function that calls other functions isn't *only* a sum of its parts. The main function also has its own properties and boundaries that cannot be ignored. This boundary should be representational, not factual. This is the extra complexity. It's self imposed by what we use to build compound structures.
Is there a real reason why software components can't be built in a similar way to hardware? No, but it can't be done with current tools. Why is this? The answers lead us down a twisted series of events.
Chaining components together is all fine, but as we all know, requirements change and hardware doesn't. The basis for the computer was to give us the power of these hardware components, but to be completely configurable. Since hardware cost a lot in the early days, programmable computers would use as few components as possible. This meant that each hardware component to be used had to be tracked, as they could only be used one at a time. The way this is done is by having a list of components to use. An instruction pointer is used to keep track of what operation (or component) to execute next. Now there was another problem. Where does the input come from and where do we store the output? Actually, storing the instruction list was the same problem. Memory and registers were used. The computer, even the one you're using today, can only execute instructions that operate on registers. It can't operate on memory. Any instruction that looks like it operates on memory actually loads it into a register first (or stores it back from one).
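As a rough sketch of what that looks like, assuming a made-up four-instruction machine (no real instruction set is this small), a program is just a list walked by an instruction pointer, and anything that seems to operate on memory really goes through a register:

    # A toy register machine: instruction list, instruction pointer,
    # two registers and a little memory. The instruction names are
    # invented for illustration only.
    memory = {0: 5, 1: 7, 2: 0}          # address -> value
    registers = {"r0": 0, "r1": 0}

    program = [
        ("LOAD",  "r0", 0),              # "add two memory cells" becomes:
        ("LOAD",  "r1", 1),              #   load both operands into registers,
        ("ADD",   "r0", "r1"),           #   add the registers,
        ("STORE", "r0", 2),              #   store the result back to memory
    ]

    ip = 0                               # the instruction pointer tracks what runs next
    while ip < len(program):
        op, a, b = program[ip]
        if op == "LOAD":
            registers[a] = memory[b]
        elif op == "ADD":
            registers[a] += registers[b]
        elif op == "STORE":
            memory[b] = registers[a]
        ip += 1                          # no branches yet: the list just runs top to bottom

    print(memory[2])                     # prints 12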
All right, so we have the basics of a computer. So what? Well, we don't actually have enough to simulate hardware circuits. A list of instructions that runs from beginning to end is not very useful. It'd be nice to take different actions depending on different inputs. Here is where things took a really weird turn in history. Something was added that was NEVER needed before. Branch instructions were added. There is NOTHING like this in hardware. Regular branch instructions weren't the real culprit, although you can argue they lead to spaghetti code. No, the real criminal was the conditional branch. This forever put us on a path to nowhere.
Many still can't distinguish the difference between control statements and the traces on a circuit board. These are two separate and completely unrelated concepts. One is used to control which component or instruction gets executed next. The other determines which component (or instruction) the data goes to next. Read the last two sentences again. Do you see the difference?
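Here is an attempt to show the difference side by side. Both snippets are invented illustrations; in a conventional language the routing version is still simulated with instructions underneath, but at the surface the condition only picks the wire the data travels down, not the instruction that runs next.

    # Two invented handlers, just so both snippets have somewhere to send things.
    def handle_negative(x):
        return ("negative", x)

    def handle_positive(x):
        return ("positive", x)

    # Control statement: the condition decides which instruction executes next.
    def classify_by_control(x):
        if x < 0:                        # conditional branch: skip one block or the other
            return handle_negative(x)
        else:
            return handle_positive(x)

    # Data routing: both components are always "wired up", like traces on a
    # board; the condition only decides where the datum goes.
    def classify_by_routing(x):
        wires = {True: handle_negative, False: handle_positive}
        return wires[x < 0](x)           # send the data down one wire or the other

    print(classify_by_control(-3), classify_by_routing(4))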
Then things got even worse. The concept of functions and subroutines was invented. This has no equivalent in hardware. The added and completely unnecessary complexity compounded the problems for later generations. Sure, the immediate effect was that you could group lists of instructions together to be used and reused. But the long term effects of this were never considered.
Before getting into that, let's back up a little. Even with conditional statements, why did the objective of having programmable hardware dissipate? Instead, we took a left turn and used the execution point like it was a fundamental construct of computation when it was previously non-existent. Why did programmers of the time not replicate hardware, but at a programmable level? They could have had dynamic software circuits. There are many reasons, most of which were due to memory and speed limitations. This guided most decisions well into the 1990s.
Unfortunately, what we have today is a host of languages based on this virtual concept of an execution point. Functional languages took the most extreme position. Object Oriented Programming did something I still can't believe. It added mutable state to each component, but mixed in the execution point for its activation. Good luck tracing (debugging) your data.
Now let's say someone remembered or figured out that the execution point is not needed. Suppose someone tried building software components without the execution point or memory? What would it mean to remove the whole framework of complexity that is the call chain? Whether we like it or not, data dependency is always there. Currently, we have to be careful to order our statements so that this dependency is not broken, and that's where problems occur, especially with threading. Now suppose you just stated this data dependency straight up and forgot all about instruction ordering? What if someone was able to leverage this to the scale of the Internet, where the components are computers themselves? Note that we would not be abstracting anything away. We would be removing something that wasn't needed in the first place.
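A minimal sketch of what stating the data dependency straight up could look like. The graph layout and the solve helper are my own invention, and real dataflow or flow-based systems differ, but the point survives: declaration order doesn't matter, only who depends on whose output.

    # Each name maps to (component, names of the inputs it depends on).
    # The order of these entries is irrelevant; only the dependencies count.
    graph = {
        "total": (lambda price, tax: price + tax, ("price", "tax")),
        "tax":   (lambda price, rate: price * rate, ("price", "rate")),
        "price": (lambda: 100.0, ()),
        "rate":  (lambda: 0.15, ()),
    }

    results = {}
    def solve(name):
        # evaluate a node once, after its inputs are available
        if name not in results:
            component, inputs = graph[name]
            results[name] = component(*(solve(i) for i in inputs))
        return results[name]

    print(solve("total"))                # prints 115.0, with no hand-ordered statements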
Before questioning whether or not there is a silver bullet, maybe we should remove self-imposed complexity first, no? It's not that easy though. Look at how much software out there uses the concept of the function call as the basis for its API. Probably all of it. Who is ready to ditch everything? For a lot of programmers, this is the only world they know. Without the background, it sounds rather ridiculous to remove functions, loops and conditionals. The irony is that these things have never been necessary. They're there to accommodate the configurability of hardware components. There was absolutely no reason to use this ultra low level virtual concept of the execution point as the basis for any so called "high level language".
I often hear programmers say they are interested in faster development times. While this is a worthy endeavour, no one believes it will actually get easier, as can be seen from the pervasiveness of the no silver bullet argument. What if it was wrong? How far would programmers go with this idea? We are in a field that is supposed to automate processing. This means that eventually our jobs will also be automated.
I doubt many programmers have considered what would happen if programming actually did get a lot easier. Do programmers really believe that their tasks will never get automated? Are we that gullible? I know with 100% certainty that more things will get automated. Support will always be the main task. Building is easy. It's maintenance that's the hard part.
Right now, almost no consideration is given to maintenance. I always hear about getting software completed faster. That's super easy. If you're having problems there, you have really bad programmers. What we should be concentrating on is not faster development, but easier updating of software. Most organic systems devote over 90% of themselves exclusively to making sure the other 10% is working correctly. Many organic systems have two of each organ in case one of them fails. In programming, there is no equivalent. I know of no system that has 90% of its code just for error checking. Yet we *should* have much more error checking code than anything else.
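For what it's worth, here is a toy sketch of the "two of each organ" idea, where the one line of real work is dwarfed by the code that checks it. The function names are invented; nothing here comes from Brooks or from any library.

    # Run the same calculation through two independent implementations and
    # refuse to answer unless they agree. Almost everything is checking.
    def average_a(values):
        return sum(values) / len(values)

    def average_b(values):
        total = 0.0
        for v in values:
            total += v
        return total / len(values)

    def checked_average(values):
        if not values:
            raise ValueError("no values to average")
        if any(not isinstance(v, (int, float)) for v in values):
            raise TypeError("non-numeric value in input")
        a, b = average_a(values), average_b(values)
        if abs(a - b) > 1e-9:
            raise RuntimeError("redundant computations disagree")
        return a

    print(checked_average([1, 2, 3, 4]))  # prints 2.5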
Who likes to do error checking? No one. The fun part of programming is building. But that's the easy part. If programmers were really focused on getting faster development times, there's been nothing stopping them for over 50 years. Even with hardware limitations, it could have been easier than what has been available. The real question should rather be whether programmers are willing to write over 90% error checking code. I don't think so.
Until we get rid of the run once, maintain zero attitude, we have no one to blame but ourselves. We are the reason why software takes so long to write. We are the reason why software is difficult to maintain. Maybe the silver bullet isn't in view, but we can at least stop shooting ourselves in the foot.
1. Frederick P. Brooks, Jr., 1987, No Silver Bullet
2. Andy Wardley, MVC: No Silver Bullet
3. Brian D. Foy, 2004 (O'Reilly), eXtreme Programming is not a silver bullet
4. Century of Flight, In Search of Speed
5. CNN, 2000, Light can break its own speed limit, researchers say
6. Fraser Speirs, 2006, Why "scientific consensus" can bite me
7. Wikiquote, 1992, Andrew S. Tanenbaum