Software development is notoriously counterintuitive, especially if you’ve never written any code. In this post, we’ll explore some common misconceptions along with pieces of wisdom from experts in the field.
Software entities are more complex for their size than perhaps any other human construct because no two parts are alike (at least above the statement level). If they are, we make the two similar parts into a subroutine … a scaling-up of a software entity is not merely a repetition of the same elements in larger sizes, it is necessarily an increase in the number of different elements. In most cases, the elements interact with each other in some nonlinear fashion, and the complexity of the whole increases much more than linearly.
-Fred Brooks, ‘No Silver Bullet’
There are two key ideas in this quote:
- Software has minimal redundancy
- The complexity of software grows quickly
To understand the redundancy idea, think about your experience with physical human-made and naturally occurring structures. There’s redundancy everywhere and it feels very normal to us. This is not the case in software, where we express any given concept in only one place. If we ever need a concept twice, we refer to the original instead of rewriting it.
Now let’s try to better understand how complexity grows: Imagine your codebase as a giant network of ideas, where related ideas have a connection between them. Now let’s also assume that each idea is related to, not one, but multiple other ideas. Every idea you add doesn’t add one connection, it adds several. In fact, the number of potential connections you can make increases with each additional idea.
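One way to make this growth concrete is to count the potential connections. With n ideas, any pair of them could be related, so the number of possible links is n·(n−1)/2, which grows quadratically. A minimal sketch:

```python
# Sketch: how the number of potential connections grows as ideas
# are added to a codebase. With n ideas, each pair could be
# related, so potential connections = n * (n - 1) / 2.

def potential_connections(n: int) -> int:
    """Number of possible pairwise links between n ideas."""
    return n * (n - 1) // 2

for n in [10, 100, 1000]:
    print(n, potential_connections(n))
# 10 ideas   ->     45 potential connections
# 100 ideas  ->  4,950 potential connections
# 1000 ideas -> 499,500 potential connections
```

Going from 10 ideas to 100 multiplies the idea count by 10, but the potential connections by more than 100, which is why each new feature lands in a denser web than the last.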
In technical matters, you have to get the right answers. If your software miscalculates the path of a space probe, you can’t finesse your way out of trouble by saying that your code is patriotic, or avant-garde, or any of the other dodges people use in nontechnical fields.
-Steve McConnell, Code Complete
Computers force us to be brutally honest with ourselves. There are no cheats or shortcuts that can get the computer to understand what you meant to do. There’s no way to get your code ‘almost working’ and have the computer infer the correct behavior.
Because of our innate tendency to infer, humans see projects as almost done when they’re not. In reality, all the tiny details that remain unfinished need lots of time and attention.
Making the hardware is one thing, but really it’s the software. That’s what just takes people a long time to get it to really, actually work. You can get it to almost work right away. But ‘almost working’ is a lot like not working.
The ‘complexity problem’ described above can also contribute to this misperception. On a new project, progress is rapid at first because there’s no complexity to contend with. This initial speed sets very aggressive expectations. As complexity grows, progress becomes slower.
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
-Tom Cargill, Bell Labs
In addition to describing the slowing pace of a typical project, the ’90-90 rule’ also refers to the difficulty of estimating the amount of effort required to complete a project.
Changes Over Time
Code files are like the Ship of Theseus. Over time, the entire file changes from the original version, but no individual revision causes the change. Instead, it is the net result of the entire team’s efforts. When reading a code file, you’re reading an amalgamation of small changes made at different times by different developers with different goals.
Looking at the change ratio, we see that more work was put into changing each file than writing it initially.
-Max Kanat-Alexander, Code Simplicity
Max came to the above conclusion after analyzing code files from various projects. In each file, the initial effort of writing the file was trivial compared to the long-term effort required to maintain the code.
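A change ratio like Max’s can be approximated from version-control history: compare the lines in a file’s first commit with the lines touched by every later commit. Here is a sketch over hypothetical per-commit line counts (the commit data is made up for illustration; real numbers could come from something like a version-control log):

```python
# Sketch: estimating a file's "change ratio" from its commit history.
# Each entry is (lines_added, lines_deleted) for one commit that
# touched the file; the first entry is the commit that created it.
# The commit data below is hypothetical.

commits = [
    (120, 0),   # initial version: 120 lines written
    (30, 10),   # later maintenance commits...
    (55, 40),
    (15, 25),
    (80, 60),
]

initial_lines = commits[0][0]
maintenance_lines = sum(added + deleted for added, deleted in commits[1:])

change_ratio = maintenance_lines / initial_lines
print(f"change ratio: {change_ratio:.2f}")  # > 1 means maintenance outweighed the initial write
```

Any ratio above 1 means more lines were touched maintaining the file than writing it in the first place, which is the pattern Max observed.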
From this information, we can conclude that getting features ‘done’ should not be the goal of a software developer. Software always needs updating. Instead, we should be thinking about creating code that is flexible and maintainable.
[For] most things in life, the dynamic range between ‘average’ and the ‘best’ is, at most, two-to-one. If you get into a cab in New York City with the best cab driver, as opposed to the average cab driver, you’re probably going to get to your destination with the best cab driver maybe thirty percent faster … In software—and it used to be the case in hardware too—the difference between the average and the best is 50 to one. Maybe 100 to one.
-Steve Jobs
Jobs’ estimate is a little overzealous, but he did identify a phenomenon that has become a common belief in tech: the best developers are ten times more effective than the average developer. These hyper-skilled employees are called 10x-ers.
Richard’s a 10x-er. I’m, like, barely an x-er. I kinda suck.
-Nelson ‘Big Head’ Bighetti, Silicon Valley
Our intuition would lead us to conclude that the opposite of a 10x-er would be someone who contributes nothing. However, another common belief is that some developers belong in a worse category: the net negative producing programmer. As you might guess from the name, this type of developer causes a loss of productivity that negates their contributions, and then some.
That was a lot of counterintuitive material. With practice, though, it becomes possible to anticipate these phenomena. I hope this post has been a useful primer and helps you grapple with the challenges of software development.