The Lack of Progress in Computing and Productivity

As “A Much Faster Mac On A Microcontroller” points out, even the cheapest processors today are capable of meeting most people’s needs, with a couple of major asterisks (the underlying compute hardware in the article costs about $10 to $30). I’m not going to claim that Word in 1995 is as good as MS Word in 2026. I get a lot of features in 2026 MS Word on my Surface laptop that I didn’t get with my Mac 6100 from 1996. But a 1990s PC did that work while consuming less than 100 watts. A modern gaming PC can push 750 or even 1,000 watts while gaming, and 200 watts or more doing nothing. I would argue the biggest difference in desktop computers is in gaming and media consumption. Productivity has largely been static. And that has some implications we might want to consider before burning up the planet with AI and filling it with toxic e-waste.
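To put rough numbers on that wattage gap, here is a back-of-the-envelope sketch. The usage pattern (2,000 work hours a year) and the electricity price ($0.15 per kWh) are my own illustrative assumptions, not figures from the article.

```python
# Rough annual electricity cost per seat for the machines described above.
# The hours and price below are assumptions for illustration only.

WORK_HOURS_PER_YEAR = 2_000   # ~8 hours/day, ~250 working days (assumed)
PRICE_PER_KWH = 0.15          # assumed retail electricity rate, USD

def annual_cost(watts: float) -> float:
    """Annual electricity cost of a machine drawing `watts` during work hours."""
    kwh = watts * WORK_HOURS_PER_YEAR / 1_000
    return kwh * PRICE_PER_KWH

machines = {
    "1990s office PC (~100 W)": 100,
    "Modern desktop doing nothing (~200 W)": 200,
    "Gaming PC under load (~750 W)": 750,
}

for name, watts in machines.items():
    print(f"{name}: ~${annual_cost(watts):.0f}/year")

# Under these assumptions: roughly $30 vs $60 vs $225 per seat per year.
```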

What I will say is that for many use cases, the computer from 1996 is as useful as the computer sold in 2026. I have watched many people use nothing more than Outlook, Word, Excel, and their browser, day in and day out. These are tasks I was happily doing in 1995 on my DOS/Windows 3.1 laptop at work, or my Mac 6100 at home. With fewer features, but largely the same work. The biggest change is the move from desktop applications to web-based applications, real-time collaboration, and apps like Teams and Slack.

From the productivity numbers, most people’s work computer shouldn’t be a $2,000 computer, or even a $1,000 computer burning 200 watts on average. It should be a $100 computer that consumes under 50 watts running full tilt and 5 watts at idle. With reasonably well written software, that cheap machine can do everything the bulk of office workers need, including Slack and web-based applications in Chrome. Even more damning, the numbers suggest that investing in computers has had little impact on productivity at all. I’m going to show you the best possible argument in computing’s favor: a chart where productivity growth is slightly higher while computers were being deployed at scale, between 1994 and 2004.

How to read this chart: productivity measures output per unit of labor input, and the chart shows the percentage change, quarter to quarter. Why do we get negative productivity growth at points? Because output per labor hour drops when demand changes. If you produce a little less but don’t lay anyone off, productivity falls. Annual productivity growth has been 1-1.5% over the last 20 years. Combined with population growth of about 1%, that implies 2-2.5% real annual growth for the highly developed US economy. (Setting aside, for this discussion, that the population growth was coming through immigration and the implications current policies have for that.)
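To make that arithmetic concrete, here is a minimal sketch of how a series like this is built and how the growth rates compose. The output and hours figures are invented purely to show the mechanics; they are not actual data.

```python
# Labor productivity is output divided by hours worked; the chart plots its
# quarter-to-quarter percentage change. The index values below are made up
# to illustrate the arithmetic, including a negative quarter when demand dips
# but staffing stays flat.

quarters = [
    # (output index, hours-worked index) -- hypothetical values
    (100.0, 100.0),
    (101.0, 100.5),   # output and hours both rise; productivity up slightly
    (100.0, 100.4),   # demand dips, nobody laid off; productivity falls
    (101.5, 100.6),
]

productivity = [out / hrs for out, hrs in quarters]

for i in range(1, len(productivity)):
    pct_change = (productivity[i] / productivity[i - 1] - 1) * 100
    print(f"Q{i}: {pct_change:+.2f}% quarter-over-quarter")

# Composing growth rates: ~1-1.5% productivity growth plus ~1% population
# growth compounds to roughly 2-2.5% real annual growth.
productivity_growth = 0.0125   # midpoint of the 1-1.5% range
population_growth = 0.01
real_growth = (1 + productivity_growth) * (1 + population_growth) - 1
print(f"Implied real annual growth: {real_growth * 100:.2f}%")   # ~2.3%
```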

But that growth was likely not coming from the computer revolution. In fact, we see productivity grew more before 2000, when offices still had shared computers, than it did between 2008 and 2020, when almost everyone had at least one computer (if not two or three) at their disposal. The software and operating systems were much better in 2010 than in 1995. More computers and more computer automation did not mean more productivity. This is known as the Productivity Paradox.

The 1990s computer revolution produced productivity gains about the same as the previous 40 years. So the desktop “computer revolution” didn’t meaningfully move this measure of productivity. In fact, the period from 2008 to 2020, with its ubiquitous computing and zero interest rates, should have spurred investment and, with it, productivity growth. Instead, that period had unusually low productivity growth. More computers did not translate into more productivity. Take that in for a minute. The era when we introduced iPhones, iPads, and Android devices, and had capable, cheap laptops and desktops coming out of our ears, along with zero interest rates, had sub-par productivity growth. Maybe that’s a hangover from the Great Recession? The period before it, 2000 to 2008, had at best historically normal productivity growth.

I focus on the desktop rather than the enterprise because I think there’s a real difference between individual and institutional computing. I still think there is a lot of needless spending on bells and whistles that drives up costs, but it’s hard to argue that bookkeeping and accounting aren’t more productive today. Still, if one group is getting a lot more productive, we have to ask: does that mean another group is less productive, balancing out the average? Or is the impact so narrowly focused that it doesn’t move the broader needle? It’s also possible to argue that technological change means computers are a necessary input that enables or unlocks the next productivity gains through robotics and machine automation. Maybe productivity would be even lower without computers. But I still maintain that most office workers would be only marginally less productive if you put a thirty-year-old computer in front of them. (Although they might complain a lot. And maybe quit.)

Which brings us to artificial intelligence. I use it and find it’s maybe a 5% productivity boost. I can’t just yell at the computer to do things. I have to build up context and prompts so the AI can produce something usable. And then I still need to refactor the output to bring the quality above a naive or basic implementation. Sometimes it’s just faster for me to write the code at a production-ready level myself rather than do all that other work. I can see how some developers find it a net-negative productivity tool. It doesn’t always generate correct code, and I’ve only had reliable success with very popular languages and tools. On legacy code it sometimes generates pure garbage. Adding AI may be no more beneficial to productivity than putting a computer on everyone’s desk, providing internet access, rolling out Web 2.0 apps, and so on.

Like a lot of AI hype, that tweet in no way matches up with my experience. However, it later came out that what they did was feed the AI tool a year of context and it generated a “toy” implementation. That implies it is not code you would run in production without a lot of work. Sometimes the difference between a “toy” version and a production version is months of effort. Sometimes the real effort is figuring out what to build. A proof of concept or toy version is what I get when I use AI code assistance.

Which brings us to the question: how much money should we spend on AI and related investments? Based on the last thirty years, it’s unlikely that computing-related investments will drive significant changes in productivity. And in my personal, anecdotal experience, the gains from AI aren’t huge. Right now AI investments appear to be sucking up virtually all the available investment capital, along with enormous amounts of energy (causing some communities to face higher electricity and gas prices). What if, after all is said and done, we look at that same chart through 2035 and see no change in productivity? Think about all the money we will have wasted, all the e-waste we will have generated, and all the other opportunities we will not have pursued.

Am I an AI doomer? No. We can do something an LLM can’t. We can pull a physical plug out of a wall. What I am is skeptical that AI is the vehicle that deserves this level of resource commitment. In fact, I’m for creating even less dependence on computing infrastructure in favor of other infrastructure, from water treatment to rail to energy production and distribution. I’m all for diversifying semiconductor production out of Asia so we have a secure source of semiconductors if Taiwan is overrun. Some of that work will require computers to automate and better manage those resources. And some will require an office worker to plug numbers into spreadsheets and send e-mails. But it’s hard to argue that spending much on new computer-based tools for that office worker, or automating that work with AI, would do much for growth.