AI Coding Will Become Line-Work

For the sake of argument, let’s assume you can create end-to-end software machines. Just assume this is true for now.

Right now AI generates code, which is reviewed by a human. The human prompts, provides context and whatever additional material gets injected, the AI chews on it for somewhere between seconds and tens of minutes, and out pops a module. It could be an API, a library, a command line application, a single class, part of a web app, or whatever. The human reviews the product, reasoning symbolically over the code, looking for defects of logic rather than text. The LLM is a probabilistic text generator, which is fine, as most new code isn’t necessarily novel. There exists a wealth of similar code (text) on which to draw. The human applies a different kind of reasoning, like a pair programming session where one developer knows the syntax and another the algorithm. Is there a race condition here? Could this resource be created once instead of with each invocation of a function? Do the tests check anything meaningful?

The ability to write code, and to build symbolic – not textual – mental models around that code, is what allows us to evaluate the code. The reason we spend time solving problems like the “dining philosophers problem,” or struggle through algorithms, is not to regurgitate that same text ten years later. It’s because we encounter novel problems which are similar to, but not exactly like, well understood problems learned in school. We mash together the novel and well understood elements not by cutting and pasting code, but by applying mathematical logic. Some of us express it more directly in actual mathematical symbols, or tools such as TLA+, but even drawing boxes on a whiteboard uses symbolic logic.

Eventually, that mental muscle will atrophy as we read code but produce little. Many AI-heavy developers anecdotally report the loss of even basic skills. At some point the loss of skill will become enough of a problem that code cannot be reliably reviewed. The likely response will be to adopt AI to review the code for the human. At this point it is easy to wonder why a human, with their need for a desk, noise-canceling headphones, and their expectation they will be served lunch, needs to be paid like a software engineer. Are they an AI engineer? No, they are a machine operator. They are as pivotal to the production of a new piece of software as a factory worker is to stamping out the bit or bob their machine produces. This is what Cory Doctorow calls the reverse centaur, a human who is there because it’s not feasible to automate a part of the process. But the machine is clearly in charge.

What about AI engineers building fantastic AI models? There has always been a need for other engineers to produce the machines used in factories. Mechanical engineers, electrical engineers, chemical engineers, material scientists, etc. work to design and build better widget stamping machines. But these are nowhere near as common as widget stamping machine operators. And while they make the salary of maybe five widget stampers, they are not particularly well compensated compared to software developers today. It’s likely we’ll need AI specialists to help build better AI machines, but the spectacle of exorbitant salaries is tempered by the realization that their jobs will also be increasingly automated. Why would one need hand-made bricks to build a brick factory? You would just buy bricks from another factory.

At this point you might well be wondering why an engineer in the future would make six figures on their first day of work1. Operating the software machine is not about math or programming skills (which average people are excited about for five minutes before discovering that learning to code is a non-addictive sleep aid). Operating the machine requires being trained on the machine (likely by the vendor) and feeding in the inputs for the required outputs. This is no longer a high-prestige or high-salary job. Following instructions and not tinkering with the machine are more important to success than symbolic reasoning over concurrent threads of execution. And being the boss of a bunch of machine operators? That’s what we call a shop foreman today.

I’m not saying all programmers drop off the face of the earth. I’m just saying that the ratio of programmers to AI operators drops to something like the ratio of people who can hand-machine a part to those who at most stick a metal blank in a machine and push a button. There are people who build and repair the machines. They aren’t really artisanal craftspeople. They are machine makers. Often those people are paid somewhat more than the person feeding the machine blanks and pushing the button. Blank, after blank, after blank. But hand machining is not the same skill as feeding blanks into the machine, and hand machining is too expensive. Nor does it guarantee a better part, if you have a well calibrated and maintained machine.

There are plenty of examples of crafts that have been reduced to people feeding machines at various levels. No one spins yarn by hand except artisanal craftspeople. A loaf of bread from a fancy bakery is more expensive than a mass-produced loaf in a grocery store. IKEA produces a Billy bookcase every five seconds. If any of the workers that operate the machines at IKEA are woodworkers, it is because woodworking is their after-work hobby. But the person operating the bread machine at a bread factory, the person feeding sheared wool into a spinner, or the person feeding sheet goods into a cutting machine at IKEA are not paid as skilled artisans. They are paid as factory workers. The outputs of craftspeople, like hand-spun wool, hand-made sweaters, or hand-machined Swiss watches, are usually the fetishes of the wealthy. I just don’t see anyone bragging about their artisanal, hand-coded shell script the way they would about a watch.

And this world is a different world. In the past, we fixed or extended software because of the cost of rewriting it. In this world, if you need a new iteration of a storefront for a retailer, you run it through the machine again and make new software. You keep running it through until the machine produces something you want. There is no need to make understandable, modifiable, high-quality software. It will be thrown away every iteration. There is no point in hiring a skilled weirdo, who hand-codes in their parents’ basement, to fix the site or make changes. Maybe in India a few people who hand-code exist to help make the machines, the way IKEA makes its products in lower-wage countries with the requisite skills. But you wouldn’t pay developed-world wages for those skills.

Don’t get me wrong, you are not going to get the average level of quality. You’re going to get shit. A Billy bookcase falls apart in the presence of too much humidity because it’s particle board. I’ve had plenty of Billy bookcases with flaking laminate or drooping shelves. Do I get rid of them? No, they just continue to be an eyesore until something else forces me to get rid of the bookcase. Just as bad and broken software will sit around until something motivates a broader change. Possibly with no one digging into why it was broken other than to stitch together a vague notion from (possibly wrong) tracing and log messages. But no one is going to read all that code, much less edit and debug it, because (practically) no one will be able to read it. Certainly not the five-figure code machine operator.

But I don’t believe we can build this software machine. I believe there is an upper limit to the complexity LLMs can incorporate in a purely text-based model. (It’s not even text-based, it’s token-based, which is why counting letters in words has been such a challenge.) LLMs cannot reason (although they can produce text that looks like reasoning). They cannot process categories of concepts, just text about categories of concepts. If it doesn’t already exist, or is not just within reach by mashing more text together, an LLM cannot imagine or create it. And most of the software it does create is not imaginative or novel. It is the same API, using the same authentication, using the same languages, and the same back-end as thousands of other similar examples, code repositories, and textbook descriptions. As long as you want a storefront that is largely like every other storefront on the web today – one that, prior to LLMs, you would have mashed together from example code and Stack Overflow – you’re going to be replaced by a code machine operator. But then again, were you ever really contributing that much?

  1. Rather than a specific number – just assume six-figure salary means well-compensated above the average wage. ↩︎

I Am Exhausted

Maybe it’s the string of cold weather we’ve been “enjoying,” although I don’t mind it. But everything feels exhausting. On the markets side, it feels like we’re one or two events away from something breaking. It could be that the expected revenue from AI for hyper-scalers comes in well below the $1.1 trillion, or $650 billion, or whatever number is currently bandied about. It’s exhausting that no one else sees that as unsustainable and ridiculous. It’s exhausting to see grown people who should be grounded in reality tout the benefits of burning mountains of wealth to build something that isn’t working, but that, if it did work, could lead to existential threats and mass unemployment. But if we accept the fantasy that it does work, they become very rich. It’s exhausting to hear we “don’t have the money” for problems such as affordable health care. It’s exhausting to watch someone take seriously the idea that humanoid robots will be a multi-trillion-dollar industry. It’s exhausting to watch the great big ball of money slosh around, slowly grinding away the wealth of retail investors. It’s exhausting to see those investors fail to see the difference between investment and gambling.

On the technology side it’s exhausting to talk about AI and the marginal improvements in each new model. It’s exhausting to see the same lazy, uneducated arguments that it will replace workers, made by people who have little to no idea what those workers do. It’s exhausting to see tech leaders who may be seeing the limits of AI suddenly start to re-hash other technologies to spark a new bubble. It’s exhausting to see formerly reliable products and services suddenly break in strange ways and wonder if someone had vibe-coded that feature. It’s exhausting to spend good money on products that can only be discarded if anything breaks. It’s exhausting to go on the web and realize it’s become more of a data harvesting, surveillance, addiction, and manipulation tool rather than an information sharing tool. It’s exhausting to look at the web, where anyone could create anything they wanted, reduced to four or five destinations for decent people and truly awful places for the rest. It’s exhausting to hear people twist their speech to avoid perfectly normal words “demonetizing” their content, while truly vile people are allowed to spew their hate with impunity.

It’s exhausting to deal with mandates to return to the office, when the office is a room with six-foot wide desks and equally glum co-workers trying to focus by sandwiching their heads in noise canceling headphones, while their dual monitors serve as blinders to the surrounding motion. It’s exhausting to constantly get prodded with notifications and alerts to pay attention to something that wasn’t important. It’s exhausting to wonder if the next re-org will require me to report to an office thousands of miles away, while the company offers no relocation assistance. It’s exhausting to think about what I’ll do when the bubble bursts and I lose a job I like. It’s exhausting to look at the options for lunch and realize it’s all various types of slop food where you take a bowl of whatever back to your office break area. It’s exhausting to pass by the well equipped home office you have to leave to commute to work, realizing that you are still expected to return to that home office when not at the actual office. It’s exhausting to have your managers start tracking metrics for AI usage, even when you don’t feel like it makes you any more productive.

It’s exhausting to see half the country happy to be on the way to a racist, authoritarian hell-hole, where the corrupt leader, his corrupt family, and corrupt patrons grossly enrich themselves. It’s exhausting to hear people who have never read the constitution talk about it. It’s exhausting to listen to a court eviscerate the independence of independent agencies, while trying to find a carve-out for the Federal Reserve (because money must be protected). It’s exhausting to watch law enforcement turned into paramilitaries that intentionally start confrontations, eagerly letting loose tear gas and flash-bang grenades, while arresting people with the intent to subjugate rather than protect. It’s exhausting to see those paramilitaries execute their fellow citizens in the street. It’s exhausting to see Federal law enforcement cover it up and our leaders lie when there’s plenty of contradictory video. It’s exhausting to watch LAPD’s finest, who have judgment after judgment against them from civil rights and abuse suits, unleash a rubber bullet into a woman’s abdomen and laugh. It’s exhausting to see the press and the media white-wash the issue or see it through the lens of traditional politics. It’s exhausting to realize that so many people want a racist ethnic cleansing of the country.

It’s exhausting to watch people abandon reality and honesty, passing around memes and clips that are known to be lies. It’s exhausting to watch people we’ve elevated with massive audiences repeat lies. It’s exhausting to watch the “new media” just regurgitate the facts from “legacy media”, eliding anything that doesn’t fit their narrative, and injecting their own lies. It’s exhausting to have a president who parrots racist, AI generated slop we know to be lies. It’s exhausting to see the flood of these brainless bits of digital garbage wash up on our shore with the intent to poison our minds, so we don’t know what’s true from what’s a lie. It’s exhausting to know medical professionals who voted for this, because they can’t stand “all the laws and rules” from the “federal government”, but who know that the administration’s vaccination advice is a dangerous lie. It’s exhausting because so many of these people have stopped caring about truth.

Five paragraphs on why I sometimes think this can’t be reality. It can’t be the world we live in. The real world has to be better than this. The real world can’t be this self-destructive, greedy, self-serving, and stupid. Obviously none of this is real. But it is real. It is the daily gristle of our lives we are forced to chew and can’t spit out. It tastes bitter and revolting. We are just forced to quietly chew and chew because everyone else sits quietly and chews. Our political leaders, who we trust to voice our concerns, tell us that sitting quietly and chewing makes us better people because we want the system to work. And there are plenty of people on the other side who, off the record, behind closed doors, and very discreetly, tell us that they also think this gruel tastes awful. They wring their hands in consternation about it all the time. Our leaders say they can work with these people and get real things done. So if we sit and quietly chew, and don’t disrupt too much, everything will be okay.

But sometimes we go stand on the side of the road with signs, and some cars honk at our clever, home-made signs. Other cars give us the finger. I’ve tried understanding the person behind that upturned middle finger, but I’m beginning to think the good I try to find isn’t there. That we are dealing with an irreconcilable vision of the country in which we want to live. That it’s not about taxes, the price of eggs, traditional roles, or their religious conscience. It’s about subjugation, humiliation, and a self-centered disregard for others. It’s dirty, it’s filthy, and it’s twisted. The idea that there’s no set of shared values with those people is exhausting.

On Tap for Next Week

Taking a look at the key news items coming up next week:

  • Monday – Nothing
  • Tuesday – Retail Sales
  • Wednesday – Payrolls/Unemployment Rate
  • Thursday – Existing home sales
  • Friday – CPI

One thing to keep in mind about these numbers is their importance depends on what the market news focuses on. During the great inflationing, used car prices were a closely watched number because car prices were a driver of inflation. Now, nobody cares. Of those numbers, Wednesday and Friday are key because of the debate about interest rates. Do rates need to come down more because the job market is weaker, or do they need to hold because we still need to make progress on inflation? I think we’re not paying enough attention to the components of retail sales, because if the market tanks and the top 10% of households no longer want to spend, we could see consumer spending dry up. But that probably won’t show up in this month’s number.

Another Look at Nvidia

This is not investment or investing advice. It is for entertainment purposes only. Contact your investment or tax professional before making any investment decisions.

A couple of days ago I took a look at Nvidia, to kind of show how I look at where the stock is headed. I actually prefer to buy ETFs or funds rather than stocks, but I do buy some individual stocks. And some of what I say is also applicable to those funds and ETFs. Although you have to look at them individually and figure out if the ETF or fund matches your objectives. I learned this early on when I bought into some funds, watched the sector outperform the fund, and realized the weighting the fund used wasn’t what I expected.

I use the term support line, but I think of it as more of an area. It’s not that buyers come into the market at 170.95 and lift the price. It’s more that when the stock gets to that price, various people step in to buy. And I use the term people here very loosely. It is a combination of ETFs and funds, along with large institutional investors. It used to be that at lunch time on, say, a Wednesday, you could actually put in a trade that would move a stock price. But I don’t think that’s true any more, mostly because trading volumes are much larger.

Anyway, once prices hit a certain zone, buyers seem to step in and keep the price from falling. It may only be a pause on its way to lower levels. Most support lines are found by simply eyeballing where a series of lows caused the price action to reverse. It may have dropped below that level for a bit, but quickly comes back up. When the price action drops to the support and comes back up, that’s called “testing” the support, and the more that happens, the more significant the support area becomes. I picked 171 because I see the price approaching that area multiple times and bouncing back. I don’t think exact numbers are particularly useful.
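The eyeballing process can be sketched in code. This is only a toy illustration, not how any charting package actually does it: the prices and the 1% grouping tolerance are made up, and the point is just that a "support zone" is a cluster of local lows, and the more lows in the cluster, the more "tested" the zone.

```python
def local_lows(prices):
    """Indices where the price is lower than both neighbors."""
    return [i for i in range(1, len(prices) - 1)
            if prices[i] < prices[i - 1] and prices[i] < prices[i + 1]]

def support_zones(prices, tolerance=0.01):
    """Group local lows that sit within `tolerance` of each other.

    A level "tested" by several lows is a stronger candidate zone.
    Returns (zone low, zone high, number of tests) per zone.
    """
    lows = sorted(prices[i] for i in local_lows(prices))
    zones = []
    for low in lows:
        if zones and low <= zones[-1][-1] * (1 + tolerance):
            zones[-1].append(low)   # close enough: same zone
        else:
            zones.append([low])     # start a new zone
    return [(min(z), max(z), len(z)) for z in zones]

# Hypothetical daily closes bouncing off roughly 171 twice
closes = [180, 175, 171.2, 176, 182, 178, 170.9, 174, 179]
print(support_zones(closes))  # one zone around 171, tested twice
```

The fuzziness lives in the tolerance: widen it and nearby lows merge into one broad zone, which matches the "area, not a line" idea better than any single price.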

But the whole market is taking a breather in the morning pre-market session. Nvidia (NVDA) is part of the NASDAQ composite index. (Don’t ask me what NASDAQ stands for.) It is a significant part of the index, so its behavior will impact the broader index behavior. To me there’s a support area extending from about 600 to 585. (This is the QQQ, which tracks the NASDAQ 100.) I view the support areas as less precise on the cash index because it is the byproduct of a lot of buying and selling, both through the index and its individual components. The line on top is not a support line, it’s a resistance line that suggests most market participants decide anything above 630 drives selling.

The chart of the cash index is different from what most people look at when they look at the NASDAQ: they look at the futures market, as shown below. That is not the product of buying and selling stocks. It is the price of the futures contract on the index. A different thing is being traded. In the futures market, we see a gap that doesn’t exist on the cash index, and the prices look almost, but not quite, the same.

On the chart of the NASDAQ futures, I see a similar support area, but I think it’s actually two support lines. There’s one at 24,937 (remember – not exact) and one at 24,289 (not exact). Remember, I eyeballed the lines and read the price from there. I didn’t pick the price and draw the line. We ploughed right through the first line (which probably means it wasn’t really an area of support). And today, the pre-market is bouncing off the second line, meaning it’s holding. That second line is formed on the basis of being the bottom of the gap (which was eventually filled). But again, it’s more that zone around 24,289 where a mix of funds, algorithms, and large institutions will see a buying opportunity.

Looking at the S&P 500 Futures I’ve identified what I think is a trend channel. We see a failure to make a higher high at the end of the channel, and a break down from the channel (which we won’t know is significant until we’ve had more than a couple of days trading lower). And we briefly pushed into what I think is a support area. The support area holding means going lower will be a challenge, but the other factors are bearish. (Meaning there’s more interest in selling). But this is just to show the broader market is pausing at some level of support.

To be honest, I have no earthly idea what makes a large fund decide the prices to buy or sell NVDA or the other components of the NASDAQ, a NASDAQ future, or any stock. Maybe they just think we’ve come too far these last couple of days and they’re buying the dip. Maybe it’s just selling or buying to offset options contracts. I really don’t know. And no one knows, except for one thing that investors made clear with Google yesterday and AWS today. They are no longer looking at hundreds of billions of AI investment as paving roads for future growth. They’re looking at it as burning money that may not come back. And because many investors (both small and institutional) use baskets of stocks (like ETFs), stocks are now more likely to move similarly than before.

To wrap this up, I wrote this to clarify my thinking on what’s going on with the market. To remind myself that, even though I drew a line, it should have been a fuzzy, broad line, not a precise, skinny one at a specific number. I also like to see what the giant ball of money is doing today. It’s sloshing away from defensive investments, like consumer staples, and back to risk and tech, now that we’ve had a big move away. That money sloshes back and forth, back and forth, each time allowing the smart money to bleed more and more off retail investors.

And this is not investing or investment advice to you, or anyone. It is provided for your entertainment purposes only. And if you are investing, contact a professional before making any decisions. Buying and selling stocks, futures, or any investment is a risky activity and can cause you to lose money, including the principal you invest.

[Update] The screen shots above are from the pre-market session. The regular session can look a little different, for example, here’s NVDA, note the slightly different representation of today’s candlestick. Which is also a reminder that what you see may be determined by the conditions under which the prices were collected.

My Mental Model of LLMs

Yesterday’s AI sell-off (and today’s weakness in Google, because they didn’t get the message that the street no longer wants to see massive AI investments) is not surprising. I thought some version of it had been coming, for a couple of reasons. First, we’re not seeing the data that shows productivity improvements. Second, we’re not seeing cost reductions. Third, my mental model of LLMs precludes there being intelligence there at all.

The first two points are the most straightforward. Let’s start with the actual impact on productivity. When I hear someone say they used an LLM to write code in a couple of minutes that would have taken them a couple of hours, I believe them. But the question is not whether the individual task is more productive, but what the actual impact on overall productivity is. This is a variation on Amdahl’s Law. The total impact on productivity is the time saved on the one task divided by the total time for all tasks. That has to be weighed against the cost to provide that AI. We’ve only seen anecdotal stories of AI improving specific tasks, which are central to a job, but may only occupy a small fraction of the time in a given month. It’s quite possible a sustainable AI subscription price (not subsidized by investors) is well above the actual productivity impact over a month.
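The Amdahl-style argument is easy to make concrete. The numbers below are invented for illustration; the point is only that the overall gain is bounded by the fraction of total work the tool actually touches.

```python
# Amdahl-style back-of-the-envelope: the overall speedup is limited by
# the fraction of total work the tool accelerates. All numbers here are
# hypothetical.

def overall_speedup(task_fraction, task_speedup):
    """Amdahl's Law: speedup of the whole job when only `task_fraction`
    of the time is accelerated by a factor of `task_speedup`."""
    return 1 / ((1 - task_fraction) + task_fraction / task_speedup)

# Suppose writing fresh code is 20% of the month and the LLM makes
# that one task 4x faster.
print(overall_speedup(0.20, 4))  # ~1.18, an 18% gain, not 4x
```

Even an infinite speedup on that 20% slice caps the month at 1.25x, which is the number a sustainable subscription price has to beat.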

The second item is also straightforward: improvements in performance require using multiple stages of “reasoning.” For example, I prompt the AI to write code and it writes the code. It then runs the compiler and tests. It then fixes all the bugs. It then re-runs the tests. It then audits the code. It rewrites some of it, producing new bugs, fixes those bugs, and so on. After a few minutes, it’s finished. I go back in and make some comments about the structure and the AI starts churning again. The original code may have been hundreds of tokens. But the entire loop might be tens of thousands of tokens. Even if the cost of one token has dropped to 1/10 of what it was in 2022, we use hundreds of times more tokens. That pushes up the profitable subscription price and runs up against my first point, the actual value of any savings.
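The token arithmetic is worth writing down. Every price and token count below is made up; the shape of the result is the point, not the specific figures.

```python
# Illustrative token economics: per-token prices fall 10x, but the
# agent loop burns 100x more tokens per task, so each task costs more.
# All prices and token counts are hypothetical.

price_2022_per_token = 1e-4                  # assumed 2022 price
price_now_per_token = price_2022_per_token / 10

tokens_single_shot = 500                     # one prompt, one completion
tokens_agent_loop = 50_000                   # compile/test/fix/audit loop

cost_then = tokens_single_shot * price_2022_per_token
cost_now = tokens_agent_loop * price_now_per_token

print(f"then: ${cost_then:.2f}, now: ${cost_now:.2f}")
# Cheaper tokens, yet the task is 10x more expensive overall.
```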

But let’s move on to the central reason I don’t believe LLMs are intelligent (which is not to say they aren’t useful – I use them at work). Let’s start by imagining a function (a mathematical machine that produces outputs for a given input) that produces the most likely text given some input text. If we give it the phrase “to be or not to be,” it spits out a meaningful response, such as the rest of Hamlet’s soliloquy. We will never be able to produce this function, because we don’t have a complete model of all human speech and the meaning around that speech. While we can’t know this function, we can estimate it given a lot of example inputs and outputs.

Enter the neural network. One property of multi-layer neural networks is that they are excellent function estimators. If all you have are example inputs and outputs of some function (a math machine that makes outputs for inputs), a neural network can be trained to act like that function. The bigger the neural network, the better it can estimate a given function1. But the more data it requires to train all the “neurons” in the network. (The neurons are actually weights by which the inputs of the “neuron” are multiplied and then “squashed” to form the neuron’s output for the next neuron in the chain.) If you like, each neuron is a small function we tune when training the network, estimating some part of the internals of the actual function (which we do not know).
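Here is a minimal sketch of that idea: one hidden layer of tanh units, trained by gradient descent to mimic a function we pretend to know only through examples (sin stands in for the unknown target). The network size, learning rate, and step count are arbitrary choices for illustration.

```python
# A tiny neural network as a function estimator: weights multiply the
# inputs, tanh "squashes" them, and training tunes the weights so the
# network's outputs match the example outputs.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)   # example inputs
y = np.sin(x)                                 # example outputs

W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)      # hidden layer: multiply, then squash
    return h, h @ W2 + b2         # linear output layer

_, first_pred = forward(x)
first_loss = np.mean((first_pred - y) ** 2)

lr = 0.01
for _ in range(2000):
    h, pred = forward(x)
    err = pred - y                         # loss gradient (2/N folded into lr)
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)       # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, final_pred = forward(x)
final_loss = np.mean((final_pred - y) ** 2)
print(first_loss, "->", final_loss)  # the estimate improves with training
```

The network never "knows" sin; it only drives down the error on the examples, which is exactly the sense in which an LLM estimates the text function.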

What the LLM estimates is a probability density function over language. This is a fancy way of saying it’s estimating the probability of choosing a specific piece of text if you feed it some input text. I send my text into the function and it spits out some output. What context does is make that probability function conditional. Conditional probabilities are statements like the probability it has rained, given the ground is wet. The “ground is wet” conditions the probability of it having rained. Irrespective of the ground being wet, on any given day there might be a 3% chance of rain. But if the ground is wet when we go out, there might have been a 65% chance it rained and maybe a 35% chance my neighbor watered their lawn.
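The rain example is just Bayes' rule. The priors below are chosen so the answer lands near the 65% in the text; they are illustrative, not measured.

```python
# How "the ground is wet" conditions the probability of rain.
p_rain = 0.03             # chance of rain on any given day (assumed)
p_wet_given_rain = 1.0    # rain always wets the ground (assumed)
p_wet_given_dry = 0.0167  # neighbor waters the lawn now and then (assumed)

# Bayes' rule: P(rain | wet) = P(wet | rain) * P(rain) / P(wet)
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

print(round(p_rain_given_wet, 2))  # ~0.65: the evidence shifts 3% to 65%
```

Context plays the role of "the ground is wet": it collapses the space of likely continuations before the model ever picks one.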

When we feed an LLM a prompt plus some context, what we’re asking is: what is the most probable text that follows this text, given the context? It’s no longer examining all possible answers, but only the most likely answers given the context. Hence, the conversation and my prompt files help tune the answer to a better answer. By giving a coding LLM specific examples of what I want, I condition its output to provide the most likely additional text you would expect to see if other code similar to the examples were already present. That is why I need to spend not just a few minutes, but sometimes significant effort, writing and curating the prompt injection files.

Fortunately for our world, things are rarely unique and novel. If I have to add Passport authentication to an application, it is rarely a completely novel exercise. Chances are, it looks like the vast majority of similar integrations. A machine that generates the most likely text for some input text, and given a context, may produce a working implementation. Even in my day-job, where I work on a novel processor, assembly language and C language code for things like interrupt handlers have not changed much in the last 40 years. For some tasks, an LLM will probably perform to the standards set for an average human practitioner, simply because we have an estimate of what we would expect an average human practitioner to produce.

If we move out of the range of the training data, neural networks can generalize, but they lose accuracy. A neural network can be trained to within some level of error, but only for the training data and test data. Outside that data, or at the extremes of that data, performance falls off. Which is why novel languages or truly novel problems are more difficult for a neural network. It can mimic thought as it produces text using its statistical model, because that’s what thought looks like to us. That’s how we expect thinking around that topic to appear. Which is why LLMs can even produce mathematical proofs. Just because it completes the idea prompted by “men are mortal and Socrates is a man,” does not mean it reasoned at all.

There is a base-line randomness to LLMs, without which they would be useless. As they produce their output, it might not be exactly what was in the training data. But that’s because there’s other training data that expands on or contradicts any given example. In addition, without a little injection of randomness (either intentional or unintentional) it would lose the illusion of intelligence. If I give it “to be or not to be,” I might get 10 variations on the standard Shakespeare and one odd-ball response, or the wrong play, or a critical analysis of Hamlet. I might get what I view as the “wrong” response. However I use the LLM, I have to accommodate the possibility of getting the wrong answer as part of my cost of using the LLM, which is why I have to review, and often refactor, the LLM’s code prior to adding it to my code-base.
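One common place that injected randomness lives is temperature sampling: instead of always taking the single most likely next token, the model samples from the distribution, sharpened or flattened by a temperature. The toy logits below are invented; real models have vocabularies of tens of thousands of tokens, but the mechanism is the same.

```python
# Temperature-scaled softmax over next-token scores ("logits").
import numpy as np

def sample_probs(logits, temperature):
    """Softmax over logits at a given temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [4.0, 2.0, 1.0, 0.5]         # invented scores for 4 candidate tokens

cold = sample_probs(logits, 0.1)      # near-greedy: top token dominates
warm = sample_probs(logits, 1.5)      # flatter: odd-ball answers stay live

print(cold.round(3), warm.round(3))
# At low temperature the top token gets essentially all the mass; at
# higher temperature the alternatives keep meaningful probability,
# which is where the "wrong play" responses come from.
```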

This is where we come back to my first two points and perhaps the central issue. Is it profitable to use an LLM-based system to improve the productivity of a worker to the point it pays for the system? While individual circumstances vary, we need to look broadly. Even with the detailed mental model I painted, we come back to a basic problem of economics. The benefits through productivity must exceed the cost of running the model (or paying for API tokens). That answer is not clear cut. One benefit of writing the prompt files is I work through various ideas I have before writing software, usually in much greater detail than a design document. I have to work in small steps. I need to refactor the code and do more critical reviews. If someone tracking my time said I didn’t actually save three hours, because I spent two hours on writing prompt files, code reviews, and refactoring, I might believe them. And if they then said the cost of the tokens I burned is equivalent to that hour in savings, that might also be true. And if that’s the case, there isn’t much of a case for using LLMs.

  1. It is possible to over-train networks for small problems, but all of human language is not a small problem. ↩︎

Thinking Through NVDA

With the normal disclaimer that this does not constitute investment advice, is provided purely for entertainment purposes, and that you should contact a tax or investment professional before making any investment decisions, I’ll take a look at a stock.

Taking a look at Nvidia, we see the 50 day moving average peaked in December and has started a flat to modestly downward trend (yellow line is an eyeball of the trend). The price is approaching the 200 day moving average (orange circle). There is a floor around 170. So what events would I look at, from a technical perspective?

If the price drops below the support at 170, that is significant. In my thinking it’s more impactful than the price falling below the 200 day moving average. The 200 day average is going to continue to move up, even if the price declines, as it catches up to the historical price action. The violation of support that has been in place since September is more critical, in my opinion. As would be a steeper downward slope on the 50 day moving average.

Technical analysis isn’t just tea leaves for investment bros if you look at it as an expression of sentiment about a stock. What do we know about the broader context? For one thing, the current administration has shown a willingness to step in and make decisions that impact Nvidia’s sales. Second, the broader investment community is getting more concerned about circular deals. Third, investors are starting to ask “where’s the beef” on AI revenue. In the past, when the price approached 170, buyers came in because they saw value in the stock. Above 190, more people did not see the value in the stock and sold. That range indicates people may be waiting for more information before making a move.

As we can see, all the revenue bump for Nvidia has been in AI accelerators. I’m assuming there’s a role for LLM-style AI in the future. If anything, it makes sense for Microsoft to include it in Office to help with writing e-mails, Word docs, spreadsheets, and presentations. Likewise, Google’s office offerings would benefit. As would ad generation on Meta. The question is whether the amount that needs to be spent on accelerators, data centers, and energy makes sense, given the revenue it produces. If it costs Meta $10 of cap-ex and $10 of lifetime energy costs to generate a lifetime revenue impact of less than $20, it clearly doesn’t make sense.

What would happen to Nvidia if GPU sales were cut in half? First, the multiple needs to come down, because expected future growth starts from a much lower sales volume. Let’s say it drops to 20 times earnings. Earnings are cut in half. That would mean a share price of around $45 for Nvidia. What does that do to the Mag 7? Well, different parts of the Mag 7 are there for different reasons. Apple is not there because of AI sales. Microsoft, Amazon, Google, and Meta have AI exposure but won’t get destroyed. Tesla is a meme stock, so this may not change anything for people who believe humanoid robots are a 10 trillion dollar TAM. But other stocks, like Oracle, Micron, and Broadcom, take a big gut-punch. (As is happening as I write this.)
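The arithmetic above can be sketched in a few lines. Note the EPS figure here is a hypothetical chosen only so the example lands on the post’s roughly $45 endpoint; it is not a reported number.

```python
def implied_price(eps: float, pe_multiple: float) -> float:
    """Share price implied by earnings per share times a P/E multiple."""
    return eps * pe_multiple

# Hypothetical current EPS (assumption, for illustration only).
current_eps = 4.50
halved_eps = current_eps / 2        # GPU sales, and thus earnings, cut in half
compressed_multiple = 20            # growth expectations reset lower

print(implied_price(halved_eps, compressed_multiple))  # 45.0
```

The point of the toy model is that the share price takes a double hit: the earnings fall and the multiple paid for each dollar of earnings compresses at the same time.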

What may have an outsized impact is the wealth effect that’s allowed the top 10% to float half of all consumer spending. A drop in the Mag 7 would pull down the entire S&P 500. But it also flips psychology. It would also have an unquantifiable but negative impact on the private equity firms and banks that have lent money to AI startups and data center build-outs. One estimate puts 20% of PE’s loan book in AI-related loans. No bueno. Not to mention the hit to all sorts of venture funds and the investors in those funds.

Again, do your own research, consult a professional, this is only provided for entertainment purposes, and is not investment advice.

I Was Not Alone and ADP

I think I was not alone yesterday in looking at the big blob of money wandering around. At one point it looked like the market was going to be broadly down, because the options were depressing. As the blob of money lumbers out of tech and into not-tech, valuations wind up rich, especially given rates. Here’s a way to look at it. Take a company that has about a 3% real long-term return through a combination of price appreciation and reinvested dividends. I say “real” to mean inflation adjusted. That means every 24 years, or so, you double your money in real terms. The nominal (not inflation adjusted) rate of return might be 5% (assuming a 2% rate of inflation).

You could also buy bonds, which might be paying 2% in real terms. (The nominal rate is 4%, but there’s 2% inflation.) If you reinvest the interest, you double your money in real terms in 36 years. But the bond is considered near zero risk (given the time horizon), while the stock may or may not pan out. Even a very stable business with a simple and straightforward revenue model may not survive all 24 years. Or, like GM, it may stumble repeatedly. Ten years of bad management and shrinking margins may seriously undermine your 24 year plan. The stock has more risk than the bond (which does have interest rate and reinvestment risk – but we’re eliminating interest rate risk by holding to maturity and assuming the reinvestment risk averages to zero over 36 years).

One way to look at the double-your-money equation is to say you bought the company for cash, today. Every share. How long would it take to make that money back? Well, your money doubles, in real terms, in 24 years. That means it will take 24 years to make your money back. The company’s earnings are what provide the price appreciation and dividends (although stock buy-backs are seen by stupid investors as more tax efficient). So how much would you pay for one year of that company’s earnings? It’s simple: 24 times. At that rate you should have accumulated enough to buy the company outright in real terms.

That leads to a fairly simple model of how to anticipate the change in value of the company, given the interest rate. If the nominal interest rate goes down to 3% (but holding inflation at 2%), then we would be willing to pay more for that company. Why? Because it would now take 72 years to double our money with the bond versus 24 years with the company. While the company has more risk, it is more attractive, and we would be willing to pay maybe 36 times earnings. We’re taking higher risk, but getting more reward than the zero risk option. Likewise, if nominal interest rates go up to 5%, and it now takes us only 24 years to double our money with bonds, the company looks less attractive. It’s worth maybe 12 times earnings, for the given level of risk.

That’s the “perfect world,” thought-experiment view of valuing a company. The giant ball of money screws with that by suddenly injecting a ton of buying into that company. CNBC and influencers talk about the massive run-up in the company. Other idiots then try to follow the trend. That company should be trading at 24 times earnings long term, but the ball of money pushes it to 30 and the idiots help drive it to 35. Retail investors get sucked in because “this time it’s different.” Retail investors are left holding the bag when the ball of money chases the new shiny thing. The smart money that owns much of the ball gets out at 35. Retail investors ride it down from 35 times earnings through some over-correction down to 18 times earnings, essentially turning over their wealth to the great ball of money.

Which brings us to the ADP report. Is the ADP report an accurate gauge of employment? Not especially. It is a little erratic. But if we look at it over the last few months, we see it’s trending down. And most of the deltas between the expected value and the reported value surprised to the downside. The former is consistent with a slowing-job-market hypothesis, and the latter is consistent with most professionals being over-optimistic about conditions. But the picture is cloudy, not clear. We have the Schrödinger’s job market, both good and bad at the same time depending on which number you look at. And that also feeds into conflicting data, such as manufacturing in the US expanding, but not manufacturing employment.

There is no one number that tells you how the economy is doing. There is no set of numbers that tells you how the economy is doing. In truth, we’ve had a lot of change, and I think some numbers, like first-time unemployment claims, are no longer indicative of much. While I used to write off ADP as only useful for getting journalists on TV hot and bothered when it dropped a wild number, the government jobs number has had a series of issues with major restatements. Companies (for whatever reason – and in this day and age it could be ideological) are failing to report on time, requiring the economists to produce a less accurate estimate. It has never felt so hard to get a bead on the economy.

[Update] Services PMI came in as inflationary to me, although employment continues to grow in services (which is much larger than manufacturing). New orders are still growing, but slower.

The Great Ball of Money Lurches Away from Tech

This morning, as I shake my head at the lack of JOLTS data from the BLS, I watch the great ball of money lurch from Tech to not-Tech, like staples. Unfortunately, the multiples for many consumer staples companies are already on the high side of normal. For example, PG is at 23, which represents confidence in what is essentially a flat business. Unlike the mythical tech company (and I say mythical with great intention), PG and the other staples are businesses that are stable, with low but predictable margins, and that scale linearly. If they take a little market share this year in one category, it doesn’t mean winner take all. They don’t have 40%-or-more margin products. There is no network effect around dish soap.

I have a number I think of when I look at a fair valuation for Nvidia (hint – it’s much lower than the current one). And while I see AI as having merit, my arguments relate to how much you want to pay for the productivity. In my own field that productivity is mixed, and sometimes requires you to ignore quality. But right now investors are heavily subsidizing AI for consumers and businesses. The question of how much a company would have to charge for a sustainable AI service is open for discussion. We may have a gentle deflation out of Tech, but it may mean PG winds up at a PE of 30. Or maybe 35. That’s a little rich. Because the big ball of money doesn’t know where to go, so it just keeps buying and selling.

The drawdown in Tech is producing an advance-decline ratio of roughly 6:4 (meaning out of 10 random stocks, six are up and four are down), while the NASDAQ and S&P 500 are down. The DOW is treading water, but the Russell 2000 is up. People are looking to higher-risk, smaller stocks. As I look at that, the big ball of money is pushing into not-Tech, with industrials, consumer cyclicals, and basic materials punching up. That’s in line with the data suggested by yesterday’s ISM – that low inventories are going to drive up production.

It seems like there’s more money sloshing around than value to absorb that money. It’s not floating around in the economy as money to spend (other than through the wealth effect). So it’s not growing anything. It’s sloshing around inside markets, driving growth through multiple expansion, drafting in more money as people watch the number go up and want to join in. All of which, I’m afraid, makes this gambling more than investing. Jump on the bandwagon, ride it up, and bail before the hammer comes down. But you better get in now, otherwise you’ll only get in at a higher valuation in the catch-up trade and have less runway.

The big ball of money will concentrate into a smaller and smaller chunk of the economy as it sloshes around, with dumber money getting swept up by smarter money. Assume we just plod along for another couple of years, and look for articles that say the top 5% are 50% of spending. Only at those levels they don’t go out to eat more, they just buy $25,000 pizza ovens for their $750,000 kitchen remodels. So the economy will look weird as McDonalds, Walmart, and Target struggle, as household consumer names lurch in and out of bankruptcy and private equity, while Porsche and Rolls Royce are making book. GDP is up. The market is up. But the bottom 80% are just fucked. And you can’t have a democracy when the bottom 80% feel like they’re getting the shaft and locked out of number go up.

Note On PMI

The ISM Manufacturing PMI report shows increased manufacturing activity. In step with the hot PPI we see that prices are increasing. Customers have drained their inventories (which is in line with delaying purchases until there is more clarity or a reprieve on tariffs). That may be pulling up production to fill the backlog. It will be interesting to see what happens if the tariffs are gutted by the Supreme Court and the administration is required to pay back any collected tariffs.

I view this as mixed, but positive. There is definitely price pressure, but customer inventories are so low that this should pull up actual manufacturing at those higher prices. Employment is still contracting, which implies higher productivity (either through automation or overtime). I’m not sure whether the unwillingness to hire means companies want to see more of a trend first. But if costs are going up, and low inventories are driving forward production, we should see inflationary effects. At some point customers should start passing on costs through higher CPI prices.

Well, maybe higher CPI prices. It depends on whether or not the consumer is able to tolerate those higher prices in the coming months. Two scenarios could play out, given half of all spending is now done by just 10% of the population. In the first, a declining equities market reduces spending as the top 10% see their wealth declining. In that case the price increases may not show up in the CPI; instead they will drive down margins for retailers. In the second scenario, the wealth effect is not strong or the equity market holds, and people just eat the inflation. That will push off rate cuts even further.

The Downside of Flooding the Zone

“Trump has Overwhelmed Himself,” by Ezra Klein, points out one of the problems with flooding the zone: it removes focus from the ones doing the flooding. It makes it feel like the entire White House is in chaos. I don’t know if Trump supporters see it that way, but for those who pay attention, it floods their attention as well. I assumed it would have a built-in advantage because of the short attention spans of most people. Something horrible happens in the US, something that feels like we should never forget it happened. A year or two later we see a news story about an anniversary of the event and think “wow – that happened and I should not have forgotten.”

As much as the murder of Alex Pretti at the hands of paramilitary forces is tragic, will we remember it a few months from now? What horror shows do you remember from 2025? Do you remember he gutted USAID? Do you remember DOGE rifling through government databases, in locked rooms, with no supervision and the windows blanked out? Do you remember the insanely ridiculous tariff calculations? Signalgate? And we’ve become accustomed to the absolutely corrupt use of the pardon power. The president has been issuing pay-for-play pardons to convicted scammers. That’s a thing. It barely makes the news. Remember when Clinton got roasted for a couple of questionable pardons at the end of his second term? Trump does that and more on a weekly basis.

I think part of the flood-the-zone strategy is to keep crisis after crisis going so the real problems don’t surface. Look at the companies footing the bill for the ballroom, or the sycophantic Melania movie. Either one alone, in a previous administration, would have come across as so corrupt as to be career ending. But when we’re threatening NATO countries, who’s paying attention to the graft from World Liberty Financial, the Trump crypto scam and bribe channel?

I don’t know if Trump wants to run again. But if he doesn’t, he’s going to walk away with as much lucre as he and his family can steal. The party that went purple in the face with rage over Hunter Biden clumsily parlaying his supposed connections into wealth – which became the story of the “Biden Crime Family” – turns a completely blind eye to overt corruption. They can, because we’re screaming about whether or not the government has to respect the Fourth or First Amendments. The courts plod along on these cases, with the shadow docket in the tank for Trump, only going to bat for the constitution and norms when money is involved. Any sense that the Supreme Court was non-political has been dashed; as far as I’m concerned, sympathy for favored causes counts for more than jurisprudence.

I think it’s too early to declare “flood the zone” a dead strategy, because Trump knows he’ll walk away with billions, hidden behind a screen of outrage. The next president will likely face new limits on their pardon power or power to gut agencies. This is especially true if the next president is a Democrat, when the Supreme Court will suddenly rediscover that the constitution limits the power of the President.