There’s a lot of hand-wringing going on over the impact of AI on software development. Depending on who you ask, AI (a) will eliminate the need for junior developers; (b) will make highly skilled developers an order of magnitude more productive; or (c) will never level up to a human developer, capable at best of providing suggestions and at worst putting a developer into a premature debug cycle (of AI’s own making) that extends rather than compresses the time to build software.
Most of the arguments about AI in software development focus on the process of development, specifically the combination of knowledge (what to do and how to do it), experience (what not to do and how not to do it), and empathy (what is relevant and what is a distraction in a specific problem space). For the skeptics, no AI model can replace these things because the knowledge is incomplete, the experience is really just processed knowledge, and the empathy is not real. Ergo, AI can augment but cannot replace humans in software development.
This misses the point. The issue with AI writing software is not whether AI writes code as technically sound and functionally fit as a human’s. It’s that AI writing software makes software disposable.
Prior to WWII, consumer purchases were overwhelmingly financed with cash, and those acquisitions were intended to last a long time. A consumer saved and spent frugally; an acquired item - from clothing to car - was repaired (usually by the owner) rather than replaced. Similarly, manufacturers raised large sums of capital to build plants and fill them with machines, pledging a portion of future cash flows to pay for it all. The plants and the machines and the things they made were expected to have very long life spans.
The “disposable society” began in the 1950s, when post-WWII excess led to discarding manufactured items - even durable goods - for non-functional reasons (e.g., styles or fashions). This phenomenon was celebrated rather than castigated: no longer did consumers have to make long-term commitments within tight constraints and limited choices. They could live in the moment through spontaneous decisions with few consequences.
Come the 1970s, manufacturers reduced the cost of goods sold by turning to lower-cost labor markets and introducing robots. Materials got cheaper, too, as plastics replaced metals in component parts. Plastics also weighed less, which reduced transportation costs. In real terms, the cost of manufactured things went down.
Cheaper materials meant weaker material strength, which meant lower product durability. But durability didn’t matter like it used to: if a machine broke, it was cheaper to purchase a replacement than to service the broken one. Over time, businesses institutionalized this reality: manufacturers and their dealers sold more machines, had less cash and square footage tied up in service parts inventory, and had less need for service technicians on their payroll. It can be argued that today, a machine no longer covered by buyer-paid warranty coverage is a machine that, in the eyes of the manufacturer, is at end-of-life (EOL).
Custom software has always been a labor-intensive activity, most often financed as a capital cost. From a purely economic perspective - that is, ignoring the non-economic costs of damaged careers, ruptured trust, and all-nighters - software development bears a striking resemblance to low-tech-density economic activity of yore.
For those economic reasons (and likely the non-economic ones), custom software has been treated as a durable good similar to the machines of the past: it has a high cost of acquisition; it is capitalized over many years; it is maintained and occasionally upgraded; and every ounce of productive use is squeezed out of it, kept in production as bits of functionality are implemented in new solutions that never entirely replace the workhorse of old.
The cost of custom software includes the cost of perpetuating, for the life of the asset, a team that holds the contextual knowledge of what, why, and how the software does things. This is an insurance policy for the company that pays it and an annuity for the people who receive it. It is a long tail of labor costs.
If there is AI that can produce code - perhaps today only a small method or class - that same AI can reproduce that code. If there is AI that can not only produce code but also holds in its library all the previous iterations of the prompts, code, scripts, configuration, data structures, and data captured by the software (transactional) and thrown off by it (observability) from which to learn, the cost of replacing that software in toto is going to be lower than the cost of maintaining it in situ.
As long as the data is preserved and interdependencies with other software are not broken, the software is disposable.
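To make the economics concrete, here is a back-of-envelope sketch of the two cost models. Every figure in it is a hypothetical assumption chosen for illustration, not data: a retained maintenance team carrying the contextual knowledge, versus periodic AI-driven regeneration of the system from its accumulated prompts, code, and telemetry.

```python
# Back-of-envelope comparison of the two cost models described above.
# Every figure here is a hypothetical assumption, for illustration only.

YEARS = 10

# Maintain in situ: the long tail of labor costs for a team that holds
# the contextual knowledge of what, why, and how the software works.
team_cost_per_year = 1_200_000
maintain_total = team_cost_per_year * YEARS

# Replace in toto: periodically regenerate the system from its library
# of prompts, code, configuration, and observability data.
rebuild_cost = 250_000          # hypothetical cost per AI-driven rebuild
rebuild_every_n_years = 2
replace_total = rebuild_cost * (YEARS // rebuild_every_n_years)

print(f"Maintain in situ over {YEARS} years: ${maintain_total:,}")
print(f"Replace in toto over {YEARS} years:  ${replace_total:,}")
# If replace_total sits well below maintain_total, the software is,
# economically speaking, disposable.
```

The exact numbers don’t matter; the thesis only requires that the replacement side of the ledger keeps falling as the AI’s library of prior iterations grows.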
Before anybody gets arm-wavy that software development is right-brain creative problem solving more than left-brain deterministic task execution, consider that a lot of custom software consists of code that is not net-new problem solving (just because it is new to the people on a particular team doesn’t mean it is new to the world); that a lot of the creative aspects are responses to self-made software fashions (e.g., experiences or programming languages) rather than to function; and that a lot of the defects - technical and functional - are products of limited human knowledge and limited recall of that knowledge.
The question is not how AI will change how we code. The ones over-indexing on “the process” are those dependent on “the process” having longevity. The shift is in how AI changes the economic nature of software itself: from a durable asset with a high replacement cost to a disposable tool with a low replacement cost.
Dear reader: as you may know, I am a lapidary artist. Recently, interest in my artwork has increased to the point that I am going to take a break from the blog for a few months. Whether you agree or disagree with the content, I hope these posts have given you things to think about.