For the sake of argument, let’s assume you can create end-to-end software machines. Just assume this is true for now.
Right now AI generates code, which is reviewed by a human. The human prompts, provides context and whatever additional material needs injecting, the AI chews on it for anywhere from seconds to tens of minutes, and out pops a module. It could be an API, a library, a command line application, a single class, part of a web app, or whatever. The human reviews the product, reasoning symbolically over the code, looking for defects of logic rather than text. The LLM is a probabilistic text generator, which is fine, as most new code isn’t necessarily novel. There exists a wealth of similar code (text) on which to draw. The human applies a different kind of reasoning, like a pair programming session where one developer knows the syntax and the other the algorithm. Is there a race condition here? Could this resource be created once instead of with each invocation of a function? Do the tests check anything meaningful?
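A minimal, hypothetical sketch of that second review question. The function names are invented for illustration; Python’s `re` module does cache compiled patterns internally, but the shape of the defect generalizes to resources that are genuinely expensive to build, like database connections or loaded models:

```python
import re

# Plausible AI-generated code: the pattern is compiled on every call.
# The reviewer's question: could this resource be created once instead
# of with each invocation of the function?
def is_valid_email_naive(address):
    pattern = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")
    return bool(pattern.match(address))

# After review: the resource is hoisted and created once, at import time.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def is_valid_email(address):
    return bool(EMAIL_RE.match(address))
```

The two functions return identical results; only a reader reasoning about what happens *per call*, rather than about whether the text looks like email validation, notices the difference.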
The ability to write code, and to build symbolic – not textual – mental models around that code, is what allows us to evaluate the code. The reason we spend time solving problems like the “dining philosophers problem,” or struggle through algorithms, is not to regurgitate that same text ten years later. It’s because we encounter novel problems which are similar to, but not exactly like, well-understood problems learned in school. We mash together the novel and well-understood elements not by cutting and pasting code, but by applying mathematical logic. Some of us express it more directly in actual mathematical symbols, or tools such as TLA+, but even drawing boxes on a whiteboard utilizes symbolic logic.
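As one concrete instance of that kind of symbolic argument, here is a minimal sketch of the dining philosophers setup: five forks as locks, five philosophers as threads. The deadlock-avoidance move (always acquire the lower-numbered fork first) is justified by a symbolic claim about the whole system, that a fixed global ordering makes a circular wait impossible, not by anything visible in any single line of text:

```python
import threading

NUM = 5
forks = [threading.Lock() for _ in range(NUM)]
meals = [0] * NUM

def philosopher(i, rounds=100):
    left, right = i, (i + 1) % NUM
    # Acquire forks in global index order. This breaks the circular-wait
    # condition, so the classic deadlock (everyone holding their left
    # fork, waiting on their right) cannot occur.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # "eat"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(NUM)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Remove the `min`/`max` ordering and have each thread grab its left fork first, and the code still reads fine as text; only reasoning about lock-acquisition order across all five threads reveals the possible deadlock.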
Eventually, that mental muscle will atrophy as we read code but produce little. Many AI-heavy developers anecdotally report the loss of even basic skills. At some point the loss of skill will become enough of a problem that code cannot be reliably reviewed. The likely response will be to adopt AI to review the code for the human. At this point it is easy to wonder why a human, with their need for a desk, noise-canceling headphones, and their expectation that they will be served lunch, needs to be paid like a software engineer. Are they an AI engineer? No, they are a machine operator. They are as pivotal to the production of a new piece of software as a factory worker is to stamping out the bit or bob their machine produces. This is what Cory Doctorow calls the reverse centaur: a human who is there because it’s not feasible to automate a part of the process. But the machine is clearly in charge.
What about AI engineers building fantastic AI models? There has always been a need for other engineers to produce the machines used in factories. Mechanical engineers, electrical engineers, chemical engineers, materials scientists, etc. work to design and build better widget-stamping machines. But these are nowhere near as common as widget-stamping machine operators. And while they make the salary of maybe five widget stampers, they are not particularly well compensated with respect to software developers today. It’s likely we’ll need AI specialists to help build better AI machines, but the spectacle of exorbitant salaries is tempered by the realization that their jobs will also be more automated. Why would one need hand-made bricks to build a brick factory? You would just buy bricks from another factory.
At this point you might well be wondering why an engineer in the future would make six figures on their first day of work.1 Operating the software machine is not about math or programming skills (which average people are excited about for five minutes, before discovering that learning to code is a non-addictive sleep aid). Operating the machine requires being trained on the machine (likely by the vendor) and feeding in the inputs for the required outputs. This is no longer a prestige or high-salary job. Following instructions and not tinkering with the machine are more important to success than symbolic reasoning over concurrent threads of execution. And being the boss of a bunch of machine operators? That’s what we call a shop foreman today.
I’m not saying all programmers drop off the face of the earth. I’m just saying the ratio of programmers to AI operators drops to something like the ratio of people who can hand-machine a part to those who, at most, stick a metal blank in a machine and push a button. There are people who build and repair the machines. They aren’t really artisanal craftspeople. They are machine makers. Often such a person is paid somewhat more than the person feeding the machine blanks and pushing the button. Blank, after blank, after blank. But hand machining is not the same skill as feeding blanks into the machine, and hand machining is too expensive. Nor does it guarantee a better part, if you have a well-calibrated and maintained machine.
There are plenty of examples of crafts that have been reduced to people feeding machines at various levels. No one spins yarn by hand except artisanal craftspeople. A loaf of bread from a fancy bakery is more expensive than a mass-produced loaf in a grocery store. IKEA produces a Billy bookcase every five seconds. If any of the workers who operate the machines at IKEA are woodworkers, it is because woodworking is their after-work hobby. But the person operating the bread machine at a bread factory, the person feeding sheared wool into a spinner, or the person feeding sheet goods into a cutting machine at IKEA is not paid as a skilled artisan. They are paid as factory workers. The output of craftspeople, like hand-spun wool, hand-made sweaters, or hand-machined Swiss watches, is usually a fetish of the wealthy. I just don’t see anyone bragging about their artisanal, hand-coded shell script the way they would about a watch.
And this world is a different world. In the past, we fixed or extended software because of the cost of re-writing it. In this world, if you need a new iteration of a storefront for a retailer, you run it through the machine again and make new software. You keep running it through until the machine produces something you want. There is no need to make understandable, modifiable, high-quality software. It will be thrown away with every iteration. There is no point in hiring a skilled weirdo, who hand-codes in their parents’ basement, to fix the site or make changes. Maybe in India a few people who hand-code will exist to help make the machines, the way IKEA makes its products in lower-wage countries with the requisite skills. But you wouldn’t pay developed-world wages for those skills.
Don’t get me wrong, you are not going to get the average level of quality. You’re going to get shit. A Billy bookcase falls apart in the presence of too much humidity because it’s particle board. I’ve had plenty of Billy bookcases with flaking laminate or drooping shelves. Do I get rid of them? No, they just continue to be an eyesore until something else forces me to get rid of the bookcase. Just as bad and broken software will sit around until something motivates a broader change. Possibly with no one digging into why it was broken, other than to stitch together a vague notion from (possibly wrong) tracing and log messages. But no one is going to read all that code, much less edit and debug it, because (practically) no one will be able to read it. Certainly not the five-figure code machine operator.
But I don’t believe we can build this software machine. I believe there is an upper limit to the complexity LLMs can incorporate in a purely text-based model. (It’s not even text-based, it’s token-based, which is why counting letters in words has been such a challenge.) LLMs cannot reason (although they can produce text that looks like reasoning). They cannot process categories of concepts, just text about categories of concepts. If it doesn’t already exist, or is not just within reach by mashing more text together, an LLM cannot imagine or create it. And most of the software it does create is not imaginative or novel. It is the same API, using the same authentication, the same languages, and the same back-end as thousands of other similar examples, code repositories, and text-book descriptions. As long as you want a storefront that is largely like every other storefront on the web today, one that prior to LLMs you would have mashed together from example code and Stack Overflow, you’re going to be replaced by a code machine operator. But then again, were you ever really contributing that much?
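For what it’s worth, the letter-counting task that famously trips up token-based models is a one-liner once something operates on characters rather than tokens, which is exactly the point: the difficulty is an artifact of the representation, not of the problem. A trivial sketch (the function name is my own):

```python
# Count occurrences of a letter by iterating over characters, the
# symbolic representation a token-based model never directly sees.
def count_letter(word, letter):
    return sum(1 for ch in word if ch == letter)

print(count_letter("strawberry", "r"))  # 3
```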
- Rather than a specific number, just assume a six-figure salary means compensated well above the average wage. ↩︎


