Why Silicon Valley Isn’t Freaking Out About China

“The Looming Taiwan Chip Disaster” asks a question many are wondering. If China blockades and invades Taiwan, what is Apple going to do? It gets many of its chips from Taiwan, and TSMC manufactures its very custom CPUs. A blockade or invasion would cut Apple off from TSMC. That would be the end of Apple products, right? So why haven’t Apple and other Silicon Valley companies diversified out of Taiwan, to at least have capacity in the United States? Let me tell ya… business idiots are beyond your tiny brain. They understand a broader, global picture that they have to carefully consider. They held a great press event in the Oval Office to announce a lot of stuff about sexy advanced chips. They even gave Trump a gold thingy for his desk. They’re saying things and doing things you can’t understand. So let me help break it down for your tiny (non-business-idiot) minds.

First, let’s start with a larger problem: you can’t just fabricate (fab) a CPU and be done. Part one of that problem is packaging. Once the silicon is etched and cut into separate chips, each chip needs to be packaged. This is not just slapping a bunch of plastic or ceramic around the chip. A poorly packaged chip will show problems that prevent it from operating correctly. And modern packaging is a far cry from the DIP packages of the 1970s. A DIP-packaged chip might have 40 or so connections. A modern chip can have hundreds. And it may be housed in the same package with other chips, either support chips, or because the designers adopted a chiplet design to improve yield (the number of working chips from one 30 cm wafer). Packaging is done in Taiwan, and because it isn’t sexy, no one focuses on it. Without packaging, you have nothing. Chips etched in the US have to be flown to Taiwan for packaging. If China invaded tomorrow, and all the etching were done in the US, we would still have to fly the chips over to be packaged.

Next part of the “you can’t just fabricate a CPU” problem: a computer isn’t a CPU. There are other chips on the motherboard that control various features. For example, there’s a chip that controls the attached drives. It’s actually a little CPU in its own right, likely running a variation of Linux. Then there are chips to manage power throughout the system. These are not simple voltage regulators; they are programmed to ramp the current up and down to keep the CPU running efficiently and cool. You likely have chips to handle all the slow-speed I/O, like USB ports. That’s not done directly by the CPU; there’s a separate processor for that. You can’t just make the CPU in the US and build a computer without all these other critical chips. Most are made in Taiwan, some in Japan, some in Korea, and some in China. That’s right: you already can’t assemble most electronics for the US market without Chinese-made parts.

Next, you have to understand that Apple would buy chips from China. And so would Google and HP. If China took Taiwan tomorrow and the options were to go under or buy chips from a now Chinese-controlled TSMC, Apple, Microsoft, Google, Meta, or whoever would buy the chips. Even if it meant entering into agreements that require more of the design to be done in a Chinese-controlled company that would rip off the IP. Because going without sales (and maybe going under) is worse than maybe losing some US government sales. Plus, the US government will come around when there’s no option. In fact, it might make some things easier, and they make even more money in the short term. If you go to these companies and say it might cost a little more to insure your supply against being cut off, they would choose not to spend that little more. They will just assume they can continue with business as usual, buying chips from a Chinese-controlled Taiwan. And they’ll be happy to do it.

Related: the executives won’t believe it. Just as the Ukrainians didn’t believe the Russians would actually invade, that it was just a war game, even as the Russians were setting up field hospitals on the Ukrainian border to treat the expected wounded, these business idiots don’t believe it will happen. I don’t think it even entered Tim Cook’s little pointy head, as he sat through a screening of “Melania,” that China views the situation through anything other than a money lens. That’s because, like all business idiots, he views the world through a money lens. Why would China do something that would cost them money? The idea that China has felt humiliated and carved up by the West, and that this is about national pride, is alien to them. Pride? If it costs you money? Tim Cook sat through a screening of what was essentially a bribe from Bezos to Trump to protect his money. Executives periodically line up, lips puckered, to pull down Trump’s piss-stained shorts and kiss his un-wiped ass. Money good. Must get more money. Corruption make more money faster.

For all these reasons, until China invades Taiwan, US tech companies are not going to do a goddamn thing. And when China invades Taiwan, they will happily license their IP (their chip designs and process documents) to the Chinese-controlled TSMC. The fact that China will cut them out of the fucking loop once all the IP is stolen is lost on them. Just as it has been at every step of the way so far. Just as China is learning to cut Western designers out of other products. Why would you buy something at a premium, just because it has a Western label on it, when you can buy from the Chinese factory at a discount? Why would you buy a US-branded computer when all the chips come from China and it’s manufactured in a Chinese factory? It’s not like they’re going to get payback for the US putting spyware into the US networking gear bought by Chinese companies.

Once upon a time, there was a thing called the Marshmallow Test. You take a preschool-aged child into a room and tell them that if they don’t eat the marshmallow on the table, they’ll get that one and another one later. The idea is to see which kids will become doctors, lawyers, and CEOs, by delaying gratification, and which kids will scroll TikTok and scratch their junk for a living. It turns out the whole thing was bogus, but it made a lot of parents try this at home and weep to see little Johnny gladly stuffing the first marshmallow into his fat little mouth. No delayed gratification. No future. Delayed gratification is not what business idiots learned. They learned to demand more marshmallows or else they’ll stop going potty in the right place. Just as they’ve learned to demand tax breaks, guaranteed loans, or other inducements to do the right thing.

If you think you’re going to get Tim Cook to buy a US-made chip for his Macs or iPhones, well… he might. He might buy some from a fab in Arizona to kiss Trump’s fat ass, ship them to Taiwan to be packaged, and then off to China to be assembled into an iPhone. Because Tim can’t package the chip in the United States. Nor can he make all the other parts of an iPhone in the United States, as a practical matter. And as far as he’s concerned, it’s just keeping Trump happy. Just like he goes through the press conference (along with many tech leaders), announces a bunch of stuff, and does nothing. Just like Nvidia was supposed to invest 100 billion… I mean 30 billion… I mean up to 30 billion in OpenAI¹. Business idiots just say words that have no meaning. He will do the bare minimum necessary to keep Trump happy so Apple doesn’t have to worry about the administration lobbing trade bombs at it. He will make the bare minimum number of chips in the US, though parting with that extra nickel on every iPhone makes him weep.


Note that this story from Forbes does not invalidate my point. They will likely have US workers shove motherboards flown in from overseas into a case and call that American manufacturing (because, remember, other parts come from other parts of Asia, including China). On their lowest-volume product. And a vague promise of other stuff. All so they don’t have to pay a significant tariff on iPhones (their highest-volume product).

  1. Note that $20 in McDonald’s gift certificates counts as “up to 30 billion.” ↩︎

A Little Perspective

NVDA is going to announce earnings on Feb. 25. Like everyone else, I’m more interested in the forward guidance on sales than in sales over the last quarter. I think we’re all looking for any indication of a pull-back in AI capex spending. This would not just be an issue for Nvidia, but also for companies ranging from turbine generator suppliers to utility and real estate companies. Looking at current levels for the NASDAQ, the recent high was about 26,000. A historically normal bear-market pullback takes us to just under 21,000. That’s at the bottom end of a congestion area from last December. The S&P 500’s recent high was just above 7,000, meaning its 20% retracement is in the 5,600 ballpark.
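The retracement levels above are just arithmetic on the recent highs. A minimal sketch, using the approximate index levels from the text:

```python
def pullback_level(recent_high: float, drop_pct: float) -> float:
    """Price level after a given percentage decline from a recent high."""
    return recent_high * (100 - drop_pct) / 100

# Approximate recent highs cited above
nasdaq_high = 26_000
spx_high = 7_000

# A 20% decline is the conventional bear-market threshold
print(pullback_level(nasdaq_high, 20))  # 20800.0 -- "just under 21,000"
print(pullback_level(spx_high, 20))     # 5600.0  -- "the 5,600 ballpark"
```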

The difference between a regular bear-market pullback, which cleans out some of the deadwood, and something bigger is only visible in the rear-view mirror. When the stock market started dipping before the GFC, a lot of people thought it was just about clearing some deadwood from the system. Once that was done, the infinite money glitch would restart. Jim Cramer gets a lot of shit for the “Buy Bear Stearns” call just before it went tits up. But he wasn’t the only one who genuinely believed Bear Stearns was in a bind but would find its way out. In part because analysts didn’t have all the information necessary to make that call. They did not know what Bear and its counterparties knew. They have opinions on dozens of individual stocks and are not specialists who follow just one company. As Bear kept dipping, they thought it was time to buy. The bias sell-side analysts have toward buying just made them look that much dopier when it happened.

Contrast that with the sharp pullback and recovery at the start of the COVID lockdowns. The S&P 500 went from almost 3,400 to 2,200, well over 20%, and came back fairly quickly. In six months it was pushing new highs as we sat around swimming pools, masks on, glaring at the neighbor jogging by without their mask. Okay, Bear Stearns was one stock versus a whole market, but after the 1929 crash, the stock market also came back in what’s called a bull trap. Prices came back, people bought in, and then the slide resumed. In fact, prices came back to nearly the 1929 top.

I don’t know, Jim Cramer doesn’t know, and no one knows whether Nvidia’s earnings announcement will cause investors to double down, pull back, or continue to waffle in the trading range. Or pull back for a couple of months, come down 20%, and then come back. As Yogi Berra said, predictions are hard, especially about the future.

But it’s good to clear out the deadwood. For example, zero-days-to-expiration options trading may be contributing nothing more than volume, income for brokers, and some volatility. I would say it would be nice to wash those folks out of the market, but I suspect the majority are not professional traders. They think they are, but they’re just gambling on whatever free brokerage they’re using. It would also be nice to nip the prediction markets’ betting on the market in the bud. But again, that’s not done by people whose wealth moves by six or seven figures on a daily basis. It’s done by people who can’t replace their fridge if it breaks.

But then again, other than time horizon and belief, what makes someone betting Tesla will move up at least half a percent today different from me? I have a longer timeline. And for the last 100 years, we’ve seen the US economy grow and wealth accumulate in assets like stocks. Over a long enough timeline (with an important asterisk about when you buy in and when you cash out), people have generally done well. But we wouldn’t be the first example of a country killing its golden goose for the dumbest of reasons. Once, even after the US economy eclipsed the British Empire, London was the financial and insurance center of the world. It has played second fiddle to New York for some time, but Brexit has accelerated its trajectory into irrelevance. Now, its best financial innovation is possibly loosening laws to become more like Dubai, where it’s anything goes (including fraud).

At some point, the hyper-scalers will need to stop buying Nvidia hardware unless they figure out profits from AI. The market is already giving Google, Amazon, and Microsoft the side-eye for heavy capital spending to support AI. Right now, the street’s punishment for not investing in AI might be worse. But at some point, if AI isn’t making real money (and not just redeeming credits issued in exchange for ownership in OpenAI or Anthropic), money spent on AI chips or data centers would be better used to buy back stock. At that point Pichai, Nadella, and Jassy might decide to stop advertising their AI capex spend, as it would be driving down the stock, and focus on “core competency.” They will pivot by laying off a bunch of people and fucking over a bunch of contractors who anticipated the completion of additional data centers. Oh well. Somewhere between now and that possibly distant future, I expect Nvidia to break down from its trading range. Unless it doesn’t.


This is not investing or investment advice to you, or anyone. It is provided for your entertainment purposes only. And if you are investing, contact a professional before making any decisions. Buying and selling stocks, futures, or any investment is a risky activity and can cause you to lose money, including the principal you invest.

A Not So Quick Note on the Jobs Report

The fun thing about the jobs report is re-interpreting it to better suit your political leanings. “Yes, but they’re not good jobs.” Estimates were in the 70k ballpark and the number came in at 130k. There was also slight wage inflation. This makes a rate cut less likely in the next few months, Warsh or no Warsh. The bulk of the jobs came in health care and education, with the government losing jobs. We’ve been assuming that interest rates have been a drag on the jobs market, but the economy may have adjusted to having any number other than zero as an interest rate. In fact, there’s a case for an interest rate increase in the coming year.

  1. The last mile on inflation is sticky, which may mean rates are not high enough.
  2. If the economy is expanding and pushing up wages at 3.5% funds rate, the neutral rate may be higher.
  3. Inflation may accelerate if GDP continues at its current clip and wage pressure continues to rise.
  4. Aggressive spending (like a 45% increase in the DoD budget) may fuel more inflation.

But here’s the thing: don’t read too much into one report. Next month could surprise down by 50k. Who knows why – there are some long-standing data collection issues. The numbers will bounce around, especially as they get revised. The table of year-end revisions to the 2025 reports is illuminating.

We created (after revisions) 181k jobs net last year. Maybe the yardstick of over 100k per month for a growing economy doesn’t make sense in a country that may start to experience population declines (as we shut off the immigration flows). That’s been the number I’ve used to evaluate jobs reports, but net migration to the US (by policy) is intended to be zero. We’re not fucking enough to make enough new kids. In fact, if the population isn’t growing and is just aging, wouldn’t slight job losses, with more jobs concentrated in healthcare, be a good jobs report? Maybe the yardstick should be net zero jobs near full employment? And only shrinking and growing during recession or post-recession recovery?

As I think about this, I start to think about the elasticity of wages with respect to employment. Once we push up against full employment, wages need to rise to pull more workers into the job market. How much do wages need to rise to get those workers? If we go from 4.3% unemployment to 4% unemployment (very low, historically), do we see wages shoot up enough to drive inflation? But if we get a little more slack, say 4.5%, do we see wages falling? That would suggest wages are inelastic with respect to employment near full employment. I suspect the same isn’t true in a recession: if unemployment spikes to 7% or 8%, much smaller (if any) increases in wages are needed to bring down unemployment. We go from having a Phillips curve to a Phillips hockey stick.
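One way to picture that hockey stick is a piecewise-linear wage-response curve: nearly flat (inelastic) around full employment, steep once there’s real slack. This is a purely illustrative sketch – the kink point, slopes, and base rate are invented numbers, not a fitted model:

```python
def wage_growth(unemployment: float) -> float:
    """Illustrative Phillips 'hockey stick': wage growth barely responds
    to unemployment near full employment, and falls off sharply once
    unemployment passes a kink point. All parameters are made up."""
    kink = 0.05         # hypothetical full-employment threshold
    base = 0.035        # hypothetical wage growth at the kink
    flat_slope = -0.2   # shallow response near full employment
    steep_slope = -2.0  # sharp response deep in a recession
    slope = flat_slope if unemployment <= kink else steep_slope
    return base + slope * (unemployment - kink)

# Near full employment, 4.0% vs 4.5% unemployment barely moves wage growth
print(round(wage_growth(0.040), 4), round(wage_growth(0.045), 4))
# At 8% unemployment, wage growth collapses
print(round(wage_growth(0.080), 4))
```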

It also has implications for growth. A shrinking population suggests that growth that’s even slightly too fast results in inflation. With a growing population, part of growth simply goes to absorbing new workers. You need to grow; otherwise you quickly accumulate large masses of unemployed people. And I suspect a growing population favors younger, less experienced workers, as you can hire more of them to replace older workers. They are cheaper, and there are plenty of them. But what if a shrinking population makes you risk-averse, preferring to stay with proven workers rather than bringing in new, lower-return, unproven workers from a smaller (and therefore not much cheaper) pool? Maybe that’s why Europe has had a persistent youth unemployment problem.

Hear me out on this brain fart. In a fixed population (or shrinking population), you basically trade one worker for another. When Bob retires, Alice steps in. You knew what to expect with Bob. Bob was very well trained. In Bob’s cohort, unemployment is like 2%. To balance that, in Alice’s cohort, unemployment needs to be higher, say 10%, so on average, we have that magic non-inflationary level of unemployment. But Bob is old and Alice is young. Alice would be much cheaper, if we had a larger pool of Alices than Bobs.

But there aren’t a ton of Alices sitting at home, vaping and playing Call of Duty. Alice is a risk. You could bring in Alice, train Alice, and then lose Alice as she’s offered a better job by your competitor. In addition, you need (say) 1.5 Alices to equal one Bob until Alice gets a few years of experience. Whereas competitors are not likely to poach Bob, and Bob would probably not get a better deal if he moved. You have an incentive to hold on to Bob and only hire Alice if Bob leaves. And if one Bob leaves, a slightly younger Bob would be preferable.

Once Alice’s cohort comes into jobs, their unemployment also drops to some low number. But in order for that to work, the unemployment among new workers has to stay fairly high. Maybe I just made a realization labor economists have known for years. In any case, I think we’re going to have to get used to flat jobs reports and more of our workforce moving into healthcare. Otherwise you wind up with inflation pressure and Boomers dying on the side of the road.
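The cohort arithmetic behind this is just a weighted average. A toy sketch using the rates above (the 50/50 cohort split is my own assumption, for illustration):

```python
def blended_unemployment(cohorts: list[tuple[float, float]]) -> float:
    """Weighted-average unemployment across (workforce_share, rate) cohorts."""
    assert abs(sum(share for share, _ in cohorts) - 1.0) < 1e-9
    return sum(share * rate for share, rate in cohorts)

# Experienced Bobs at 2% unemployment, entering Alices at 10%,
# assuming equal-sized cohorts for illustration
cohorts = [(0.5, 0.02), (0.5, 0.10)]
print(round(blended_unemployment(cohorts), 4))  # a 0.06 blended rate
```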

AI Coding Will Become Line-Work

For the sake of argument, let’s assume you can create end-to-end software machines. Just assume this is true for now.

Right now AI generates code, which is reviewed by a human. The human prompts, provides context, and whatever additional injection; the AI chews on it for somewhere between seconds and tens of minutes, and out pops a module. It could be an API, a library, a command-line application, a single class, part of a web app, or whatever. The human reviews the product, reasoning symbolically over the code, looking for defects of logic rather than text. The LLM is a probabilistic text generator, which is fine, as most new code isn’t necessarily novel. There exists a wealth of similar code (text) on which to draw. The human applies a different kind of reasoning, like a pair-programming session where one developer knows the syntax and another the algorithm. Is there a race condition here? Could this resource be created once instead of with each invocation of a function? Do the tests check anything meaningful?

The ability to write code, and to build symbolic – not textual – mental models around that code, is what allows us to evaluate the code. The reason we spend time solving problems like the “dining philosophers problem,” or struggle through algorithms, is not to regurgitate that same text ten years later. It’s because we encounter novel problems which are similar to, but not exactly like, well-understood problems learned in school. We mash together the novel and well-understood elements not by cutting and pasting code, but by applying mathematical logic. Some of us express it more directly in actual mathematical symbols, or tools such as TLA+, but even drawing boxes on a whiteboard utilizes symbolic logic.

Eventually, that mental muscle will atrophy as we read code but produce little. Many AI-heavy developers anecdotally report the loss of even basic skills. At some point the loss of skill will become enough of a problem that code cannot be reliably reviewed. The likely response will be to adopt AI to review the code for the human. At this point it is easy to wonder why a human, with their need for a desk, noise-canceling headphones, and their expectation that they will be served lunch, needs to be paid like a software engineer. Are they an AI engineer? No, they are a machine operator. They are as pivotal to the production of a new piece of software as a factory worker is to stamping out the bit or bob their machine produces. This is what Cory Doctorow calls the reverse centaur: a human who is there because it’s not feasible to automate a part of the process. But the machine is clearly in charge.

What about AI engineers building fantastic AI models? There has always been a need for engineers to produce the machines used in factories. Mechanical engineers, electrical engineers, chemical engineers, material scientists, and so on work to design and build better widget-stamping machines. But they are nowhere near as common as widget-stamping machine operators. And while they make the salary of maybe five widget stampers, they are not particularly well compensated compared to software developers today. It’s likely we’ll need AI specialists to help build better AI machines, but the spectacle of exorbitant salaries is tempered by the realization that their jobs will also be automated. Why would one need hand-made bricks to build a brick factory? You would just buy bricks from another factory.

At this point you might well be wondering why an engineer in the future would make six figures on their first day of work¹. Operating the software machine is not about math or programming skills (which average people are excited about for five minutes, until they discover learning to code is a non-addictive sleep aid). Operating the machine requires being trained on the machine (likely by the vendor) and feeding in the inputs for the required outputs. This is no longer a prestige or high-salary job. Following instructions and not tinkering with the machine are more important to success than symbolic reasoning over concurrent threads of execution. And being the boss of a bunch of machine operators? That’s what we call a shop foreman today.

I’m not saying all programmers drop off the face of the earth. I’m just saying the ratio of programmers to AI operators drops to something like the ratio of people who can hand-machine a part to those who at most stick a metal blank in a machine and push a button. There are people who build and repair the machines. They aren’t really artisanal craftspeople. They are machine makers. Often those people are paid somewhat more than the person feeding the machine blanks and pushing the button. Blank, after blank, after blank. But hand-machining is not the same skill as feeding blanks into the machine, and hand-machining is too expensive. Nor does it guarantee a better part, if you have a well-calibrated and maintained machine.

There are plenty of examples of crafts that have been reduced to people feeding machines at various levels. No one spins yarn by hand except artisanal craftspeople. A loaf of bread from a fancy bakery is more expensive than a mass-produced loaf in a grocery store. IKEA produces a Billy bookcase every five seconds. If any of the workers who operate the machines at IKEA are woodworkers, it is because woodworking is their after-work hobby. But the person operating the bread machine at a bread factory, the person feeding sheared wool into a spinner, or the person feeding sheet goods into a cutting machine at IKEA is not paid as a skilled artisan. They are paid as factory workers. The output of craftspeople, like hand-spun wool, hand-made sweaters, or hand-machined Swiss watches, is usually the fetish of the wealthy. I just don’t see anyone bragging about their artisanal, hand-coded shell script the way they do about a watch.

And this world is a different world. In the past, we fixed or extended software because of the cost of re-writing it. In this world, if you need a new iteration of a storefront for a retailer, you run it through the machine again and make new software. You keep running it through until the machine produces something you want. There is no need to make understandable, modifiable, high-quality software. It will be thrown away every iteration. There is no point in hiring a skilled weirdo, who hand-codes in their parents’ basement, to fix the site or make changes. Maybe in India a few people who hand-code will exist to help make the machines, the way IKEA makes its products in lower-wage countries with the requisite skills. But you wouldn’t pay developed-world wages for those skills.

Don’t get me wrong: you are not going to get the average level of quality. You’re going to get shit. A Billy bookcase falls apart in the presence of too much humidity because it’s particle board. I’ve had plenty of Billy bookcases with flaking laminate or drooping shelves. Do I get rid of them? No, they just continue to be an eyesore until something else forces me to get rid of the bookcase. Just as bad and broken software will sit around until something motivates a broader change. Possibly with no one digging into why it was broken, other than to stitch together a vague notion from (possibly wrong) tracing and log messages. But no one is going to read all that code, much less edit and debug it, because (practically) no one will be able to read it. Certainly not the five-figure code machine operator.

But I don’t believe we can build this software machine. I believe there is an upper limit to the complexity LLMs can incorporate in a purely text-based model. (It’s not even text-based, it’s token-based, which is why counting letters in words has been such a challenge.) LLMs cannot reason (although they can produce text that looks like reasoning). They cannot process categories of concepts, just text about categories of concepts. If it doesn’t already exist, or is not just within reach by mashing more text together, an LLM cannot imagine or create it. And most of the software it does create is not imaginative or novel. It is the same API, using the same authentication, using the same languages, and the same back-end as thousands of other similar examples, code repositories, and textbook descriptions. As long as you want a storefront that is largely like every other storefront on the web today, one that prior to LLMs you would have mashed together from example code and Stack Overflow, you’re going to be replaced by a code machine operator. But then again, were you ever really contributing that much?

  1. Rather than a specific number – just assume six-figure salary means well-compensated above the average wage. ↩︎

Thinking Through NVDA

With the normal disclaimer that this does not constitute investment advice, it is provided purely for entertainment purposes, and contact a tax or investment professional before making any investment decisions, I’ll take a look at a stock.

Taking a look at Nvidia, we see the 50-day moving average peaked in December and has started a flat to modestly downward trend (the yellow line is an eyeball of the trend). The price is approaching the 200-day moving average (orange circle). There is a floor around 170. So what events would I look for, from a technical perspective?

If the price drops below the support at 170, that is significant. In my thinking it’s more impactful than the price falling below the 200-day moving average. The 200-day average is going to continue to move up, even if the price declines, as it catches up to the historical price action. The violation of support that has been in place since September is more critical, in my opinion. As would be a steeper downward slope on the 50-day moving average.
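For anyone wanting to replicate the moving averages, a simple moving average is just the mean of the trailing N closes. A minimal, dependency-free sketch with invented prices (not real Nvidia quotes):

```python
def sma(closes: list[float], window: int) -> list[float]:
    """Simple moving average: mean of the trailing `window` closes,
    one value per day once enough history exists."""
    return [sum(closes[i - window:i]) / window
            for i in range(window, len(closes) + 1)]

# Invented closing prices, purely for illustration
closes = [170, 172, 171, 175, 178, 176, 180, 182, 181, 185]
print(sma(closes, 5))  # trailing five-day averages over the series
```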

Technical analysis isn’t just tea leaves for investment bros, if you look at it as a pattern of expressed sentiment about a stock. What do we know about the broader context? For one thing, the current administration has shown a willingness to step in and make decisions that impact Nvidia’s sales. Second, the broader investment community is getting more concerned about circular deals. Third, investors are starting to ask where’s the beef on AI revenue. In the past, when the price has approached 170, buyers came in because they saw value in the stock. Above 190, more people did not see value in the stock and sold. That range indicates people may be waiting for more information before making a move.

As we can see, all the revenue bump for Nvidia has been in AI accelerators. I’m assuming there’s a role for LLM-style AI in the future. If anything, it makes sense for Microsoft to include it in Office to help with writing e-mails, Word docs, spreadsheets, and presentations. Likewise, Google’s office offerings would benefit. As would ad generation at Meta. The question is whether the amount that needs to be spent on accelerators, data centers, and energy makes sense, given the revenue it produces. If it costs Meta $10 of capex and $10 of lifetime energy costs to generate a lifetime revenue impact of less than $20, it clearly doesn’t make sense.

What would happen to Nvidia if GPU sales were cut in half? First, the multiple needs to come down, because the expected future growth in sales starts from a much lower sales volume. Let’s say it drops to 20 times earnings. Earnings are cut in half. That would mean a share price of around $45 for Nvidia. What does that do to the Mag 7? Well, different parts of the Mag 7 are there for different reasons. Apple is not there because of AI sales. Microsoft, Amazon, Google, and Meta have AI exposure but won’t get destroyed. Tesla is a meme stock, so this may not change anything for people who believe humanoid robots are a 10 trillion dollar TAM. But there are other stocks, like Oracle, Micron, and Broadcom, that take a big gut-punch. (As is happening as I write this.)
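The back-of-envelope above is just price = earnings per share × multiple. A sketch with the hypothetical numbers from the paragraph (the starting EPS is implied by the scenario, not a real figure):

```python
def implied_price(eps: float, pe_multiple: float) -> float:
    """Share price implied by earnings per share and a P/E multiple."""
    return eps * pe_multiple

# Hypothetical scenario from the text: earnings cut in half, multiple at 20x
starting_eps = 4.50            # illustrative only, implied by the ~$45 result
shocked_eps = starting_eps / 2
print(implied_price(shocked_eps, 20))  # 45.0 -- "around $45"
```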

What may have an outsized impact is the wealth effect that’s allowed the top 10% to float half of all consumer spending. A drop in the Mag 7 would pull down the entire S&P 500. And it flips psychology. It would also have an unquantifiable but negative impact on the private equity firms and banks that have lent money to AI startups and data center build-outs. One estimate puts 20% of PE’s loan book in AI-related loans. No bueno. Not to mention the hit to all sorts of venture funds and the investors in those funds.

Again, do your own research, consult a professional, this is only provided for entertainment purposes, and is not investment advice.

The Great Ball of Money Lurches Away from Tech

This morning, as I shake my head at the lack of JOLTS data from the BLS, I watch the great ball of money lurch from Tech to not-Tech, like staples. Unfortunately, the multiples for many consumer staples companies are already on the high side of normal. For example, PG is at 23, which represents confidence in what is essentially a flat business. Unlike the mythical tech company (and I say mythical with great intention), PG and the other staples are businesses that are stable, with low but predictable margins, and that scale linearly. If they take a little market share this year in one category, it doesn’t mean winner-take-all. They don’t have 40%-or-more margin products. There is no network effect around dish soap.

I have a number I think of when I look at a fair valuation for Nvidia (hint – it’s much lower than the current price). And while I see AI as having merit, my arguments relate to how much you want to pay for the productivity. In my own field that productivity is mixed and sometimes requires you to ignore quality. But right now investors are heavily subsidizing AI for consumers and businesses. The question of how much a company would have to charge for a sustainable AI service is open for discussion. We may have a gentle deflation out of Tech, but it may mean that PG winds up at a PE of 30? Or maybe 35? That’s a little rich. Because the big ball of money doesn’t know where to go, so it just keeps buying and selling.

The impact of the drawdown in Tech is an advance-decline ratio of roughly 6:4 (meaning out of 10 random stocks, six are up and four are down), yet the NASDAQ and S&P 500 are down. The Dow is treading water, but the Russell 2000 is up. People are looking to higher-risk, smaller stocks. As I look at that, the big ball of money is pushing into not-Tech, with industrials, consumer cyclicals, and basic materials punching up. That’s in line with the data suggested by yesterday’s ISM – that low inventories are going to drive up production.

It seems like there’s more money sloshing around than value to absorb that money. It’s not floating around in the economy as money to spend (other than through the wealth effect). So it’s not growing anything. It’s sloshing around inside markets, driving growth through multiple expansion, drafting in more money as people watch the number go up and want to join in. All of which, I’m afraid, makes this gambling more than investing. Jump on the bandwagon, ride it up, and then bail before the hammer comes down. But you better get in now, otherwise you’ll only get in at a higher valuation in the catch-up trade and have less runway.

The big ball of money will concentrate into a smaller and smaller chunk of the economy as it sloshes around, with dumber money getting swept up by smarter money. Assume we just plod along for another couple of years; look for articles saying the top 5% account for 50% of spending. Only at those levels they don’t go out to eat more, they just buy $25,000 pizza ovens for their $750,000 kitchen remodels. So the economy will look weird as McDonald’s, Walmart, and Target struggle, as household consumer names lurch in and out of bankruptcy and private equity, while Porsche and Rolls-Royce are making book. GDP is up. The market is up. But the bottom 80% are just fucked. And you can’t have a democracy when the bottom 80% feel like they’re getting the shaft and locked out of number go up.

Tech Bros Are Going to Lose Europe

The Guardian asks. First off, it’s not that European versions of these services don’t exist or need to be built from scratch. But that’s not the reason to switch. Until now, US elections, even 2016, produced reasonable outcomes for Europe. Now they have an agent of chaos more than an ally. A country that foments division among its alliances and has stated that a weaker Europe is in American interests. And so much of their communication is funneled through American companies. Companies that have shown a willingness to cooperate with an administration that shows a willingness to use the levers of government in a retaliatory manner. This retaliation could be illegal, but there has been little interest among Republicans in stopping it, and the Democrats seem to have little ability (either talent or power) to put an end to it.

Even if the cost is a little higher (with the US being the low-cost producer), it is the insurance you pay for having a functioning society. For the same reason you might choose the Gripen over the F-35. Maybe have some F-35s, alongside a mix of other fighters. Should the relationship with the US result in a suspension of the contract for maintaining the F-35, or should the US out-and-out prohibit their use in the defense of Europe, there is a plane Europe can fly. The resulting insurance against being effectively disarmed by America may be worth the additional cost. As would be securing broader sources of many products and services.

But let’s say the 2028 election produces a left-of-center Democrat who is able to assume power. What is the guarantee they don’t pursue some of these policies out of political expediency? There are many instances where a policy unpopular with the winning voter coalition is continued. There is no guarantee that the new administration drops those policies on its first day in power. And what if it loses power to the ideological continuation of the Trump administration? What happens if, in four years, Marco Rubio decides it’s better to rule in hell as a populist autocrat than to serve in heaven? At this point, from an outside observer’s perspective, I would imagine even Donald Trump Jr. would not be an impossible outcome in 2032.

But there is something coming with AI-generated search results that I think Europe underestimates. They can be primed to deliver answers that promote division and fringe movements. This could include AI summaries of documents or the generation of material from AI. These could range from over-the-top, cringe-inducing, obvious trolls to very subtle wording or choices about what information to present. And should Europe complain, threats of tariffs or an outright cut-off of these services follow. Or EU leaders are sanctioned and cannot access their digital (or financial) assets.

So yes, Europe needs to figure out what would help. Making it cheaper to start businesses in Europe? Sure. Ensure these companies have guaranteed revenue through government contracts? Maybe. Changing the laws to withdraw from the anti-circumvention features foisted on the EU by previous trade negotiators? That’s a good idea. If Apple threatens to cut off iPhone access to the UK, for example, return the favor by making it legal to jail-break your iPhone and install alternate services. Tax the profits of tech companies to help pay for it? Even if you wind up taxing your own tech companies, the money might come back to them in other grants or contracts.

Right now the US companies have market dominance. Let’s say they have 95% of the office software market. The EU shouldn’t try to retake that 95% all at once. It should just focus on reducing dependence on the US to 85%. From 85%, 75% feels possible. Because a lot of tech is based on network effects, as companies plug into each other’s services, each tranche becomes an easier target. Just start with critical pieces, like e-mail and your ability to share documents securely. And start with the easiest part to control: your own bureaucracies. At some point you’ll need to worry about other cloud services, or even the content on social media, but you’ll be doing it from a stronger place.

Depreciation Should Be 3 Years

The stated goal of Nvidia is to produce graphics cards that provide the best performance, such that it is not even economical to run a competitor’s cards. Let’s delve into this. First we need to understand marginal cost. If you build a machine to make widgets, and let’s say you buy the machine up front, the cost of the machine is a sunk cost. Whether you make one widget or a million, it doesn’t matter. Your sunk costs are your sunk costs.

If it takes $10 worth of labor and inputs to make one widget, my marginal cost per widget is $10. Anything I earn over $10 per widget covers at least my marginal costs. I may not be profitable, but I’m making money on every widget and cash is coming in. If I make less than $10 on a widget, I’m losing money on every widget and bleeding cash. Based on the market, I expect to sell 100,000 in a typical year. A price of $15 would cover both my marginal costs and the amortization of my sunk costs. That’s the best of all possible worlds.

What did I mean by amortization? I expect my widget machine to last, say, 10 years. I don’t just assign the cost of the machine to the first year, as it will produce revenue over the next 10 years. That would distort my performance by making my first year look like a legendary disaster, and the next 9 years like I’m a genius. Instead, I take 1/10 of the machine cost every year in amortization. If the machine was $5,000,000, that’s $500,000 a year. If I sell 100,000 widgets, I can assign $5 to each widget as its portion of the sunk cost. If I sell 100,000 widgets for $16, I cover both my sunk and marginal costs, with $1 of profit per widget. If I sell 100,000 widgets for $14, I need to sell more than 100,000 to cover amortization.
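The amortization math above can be sketched directly, using the same numbers as the text (a $5,000,000 machine over 10 years, 100,000 widgets a year, $10 marginal cost):

```python
machine_cost = 5_000_000        # sunk cost of the widget machine
machine_life_years = 10
marginal_cost = 10              # labor and inputs per widget
widgets_per_year = 100_000

amortization_per_year = machine_cost / machine_life_years        # 500,000
amortization_per_widget = amortization_per_year / widgets_per_year  # $5

def profit_per_widget(price):
    """Per-widget profit after marginal cost and the widget's share of amortization."""
    return price - marginal_cost - amortization_per_widget

print(profit_per_widget(16))  # 1.0  -> covers everything, $1 profit per widget
print(profit_per_widget(15))  # 0.0  -> exact break-even
print(profit_per_widget(14))  # -1.0 -> need more than 100,000 units to cover amortization
```

Note that the $5 per-widget amortization only holds at the expected 100,000-unit volume; sell fewer widgets and each one has to carry a bigger slice of the sunk cost.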

But let’s say the widget business is not so good. Someone offers me $12 for 50,000 widgets. If I have no other orders for widgets, I’m going to lose money, given the amortization of my machine. Specifically, I won’t be able to cover $400,000 of the amortized cost. However, because my marginal cost is only $10, I get $100,000 of contribution to my sunk costs that I would not have received had I not taken the deal. I could even make money if I got enough contracts at $12. If the offer is for 200,000 widgets at $9, I should not take the deal. Even if it is more money, I’m losing $1 per widget in marginal costs. I am losing more money on that deal than if I just shut down the machine. There is no sales volume that would result in anything but losing more money.
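The accept-or-reject logic for those two offers comes down to contribution margin: anything above marginal cost helps cover sunk costs, anything below it makes every unit a loss. A sketch with the text’s numbers:

```python
marginal_cost = 10
annual_amortization = 500_000   # sunk cost allocated to this year

def contribution(price, quantity):
    """Contribution toward sunk costs (negative means cash is bleeding)."""
    return (price - marginal_cost) * quantity

offer_a = contribution(12, 50_000)    # take it: beats an idle machine
offer_b = contribution(9, 200_000)    # walk away: no volume fixes a negative margin

print(offer_a, offer_b)               # 100000 -200000
print(offer_a - annual_amortization)  # -400000 of amortization still uncovered
```

Offer A loses money on paper but brings in cash; offer B loses more cash the bigger it gets, which is exactly why no sales volume can save it.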

If my competitor buys a newer, more efficient widget machine that can make widgets at $8 a widget, they can undercut me at a $9 price point. They’re still making a contribution to their fixed costs (since their marginal cost is $1 less than the selling price). But I can’t match their price without bleeding money. I have no choice, at that point, but to re-tool and try to get my cost below $8 to be competitive. Otherwise, they take market share and drive me out of business.

The cost of building the data center and filling it with GPUs is the sunk cost. Next, you have marginal cost, which is largely electricity. It’s used to both power the devices and cool the devices. Then you have people to manage the data center. Then you have software costs related to tuning or managing the model. The goal of Nvidia is to make the marginal cost of operating a competitor’s chips higher than that of the Nvidia chip. They want to produce new chips at a one-year cadence, and within two years make the marginal cost of operating the newer chips decisively more attractive than the old chips. They want the economics to drive customers to replace on a 2-3 year cycle, to minimize their marginal costs.

One myth is that you only need the latest and greatest chips for training, and when they’re two years old, they can move to inference. Training is when the model is fed data to get it to make a good response. Inference is when you and I ask the model for stuff. It’s estimated that 90% of the costs are for inference. That means the least efficient chips would be used for 90% of the cost. Moreover, as new models come online, they are larger and push the limits of the older chips, meaning those chips have to work even harder for one unit of output. There may be some models that require more memory or bandwidth than the older chips provide. Using a 5-year-old chip that’s struggling on a newer model is the highest possible marginal cost you can have. Granted, no one can have all new chips all the time, so you have a blend of generations, with some chips new and some older. You can track a blended marginal cost, and let’s say that’s $10 per 1,000 queries. [Don’t worry about the exact numbers – I’m making the math easy.]

What Nvidia is trying to do is make chips so that the AMD customer might buy a cheaper chip, but their marginal costs are higher than with Nvidia’s offering. For example, the AMD chip runs at $12 per 1,000 queries, while Nvidia wants to be at $10. But next year AMD will come out with a better chip, maybe meeting Nvidia’s $10 per 1,000 queries. So Nvidia has to come out with a chip that produces 1,000 queries for $8. And the same again the year after. The two-year-old Nvidia chip still costs you $10 per 1,000 queries, but the new one costs you $6 per 1,000 queries. To stay ahead of your competition, you need to retire the two-year-old chips as fast as possible and buy as many of the new chips as you can to keep your blended cost lower than your competitor’s blended cost.
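The blended cost is just a weighted average over the fleet. A small sketch using the text’s per-generation costs ($10 for the two-year-old chip, $6 for the new one); the fleet-mix percentages are my own illustrative assumption:

```python
def blended_cost(fleet):
    """Weighted-average marginal cost per 1,000 queries.
    fleet: list of (cost per 1,000 queries, percent of fleet)."""
    return sum(cost * pct for cost, pct in fleet) / 100

# Assumed fleet mixes (percentages are illustrative, costs are from the text)
old_heavy = [(10, 50), (6, 50)]   # keep the 2-year-old chips around
new_heavy = [(10, 20), (6, 80)]   # retire old chips aggressively

print(blended_cost(old_heavy))  # 8.0
print(blended_cost(new_heavy))  # 6.8
```

Retiring old generations faster is the only lever that moves the blend toward the new chip’s $6, which is the whole argument for a short replacement cycle.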

But even the models themselves have a short life. Every few months a newer or improved model comes out. Most users want to use the latest model, even if it’s larger. The old model, which might run well on the old hardware, begins to see its market shrink. It may still have applications that are tied to it, but not new applications and not new users. The realistic lifecycle of a model might only be a couple of years. The chips are constantly being pushed to larger and larger models. To get better performance, more and more “think time” is built into how the model is executed. This makes running new models on old chips not just a little worse, but even harder, because the model has to be run with more iterations to provide better output. It’s like having a truck that burns more gas than the new model, except you also have to make more trips with it.

Right now we don’t have a price war. We have a war of financing. Investors want to see growth. More than undercutting each other on cost, the goal is to be able to grow your user base with free services. If that’s the case, using your highest-marginal-cost chips to offer free services seems like putting a noose around your corporate neck. All your competitor has to do is pay more in sunk costs that have to be figured out at some point in the future (buy more new GPUs) and kill you on marginal costs today. This won’t work forever, but it works in the short run.

The goal is to be the last one standing in a winner take all market. If your competitor is only bleeding $8 per 1,000 queries on free customers to your $10 per 1,000 queries, they can either offer better models or higher limits. They don’t have to be profitable. No one is profitable right now. No one is meeting their marginal costs. But if you can operate more efficiently, you have more time until the corporate grim reaper comes for you. Your runway lasts longer. What about the GPUs your investors financed? Don’t they want a return on their investment? Yes, that comes from you making it to the finish line while your competitors do not, rather than getting to positive cash flow.

So let’s think about this, again, for a 4-6 year lifetime for a GPU. I would buy 4 years, as that’s probably where the chip is clearly on its last legs; at 3 years it may still be effective to run in a blended-cost environment. That 4th year is likely when it really starts becoming an economic millstone around your neck. I think 6 years is absolute fiction, unless you have government and large corporate customers who have a fixed (and profitable) contract and don’t want you to change the model used by their applications. For the parts of the market that live on the latest models, like developer subscriptions, chat agents, or new development, a 6-year-old chip is likely killing you. It raises your blended costs and you burn through your investment even faster. So no, I don’t think a 6-year lifespan makes any sense whatsoever, except for a narrow type of government or large corporate customer. I think a 2-year lifespan makes a lot of sense for a very aggressive AI provider who wants to keep their marginal costs as low as possible.

The Lack of Progress in Computing and Productivity

As “A Much Faster Mac On A Microcontroller” points out, even the cheapest processors today are capable of meeting most people’s needs, with a couple of major asterisks (the underlying compute hardware in the article is about $10 to $30). I’m not going to claim that Word in 1995 is as good as MS Word in 2026. I get a lot of features in 2026 MS Word on my Surface laptop that I didn’t get with my Mac 6100 from 1996. But a 1990s PC did that work while consuming less than 100 watts. A modern gaming PC can push 750 or even 1,000 watts while gaming, and 200 watts or more doing nothing. I would argue the biggest difference in desktop computers is in gaming and media consumption. Productivity has largely been static. And this has some implications for productivity we might want to consider before burning up the planet with AI and filling it with toxic e-waste.

What I will say is that for many use cases, the computer from 1996 is as useful as the computer sold in 2026. I have watched many people use nothing more than Outlook, Word, Excel, and their browser, day in and day out. These are tasks I was happily doing in 1995 on my DOS/Windows 3.1 laptop at work, or my Mac 6100 at home. With fewer features, but largely the same work. The biggest change is moving from desktop applications to web-based applications, real-time collaboration, and apps like Teams and Slack.

From the productivity numbers, most people’s work computer shouldn’t be a $2,000 computer, or even a $1,000 computer burning 200 watts on average. It should be a $100 computer that consumes under 50 watts running full tilt, but 5 watts at idle. With reasonably well-written software, that cheap machine can do everything the bulk of office workers need, including Slack and web-based applications in Chrome. But even more damning, the numbers suggest that investing in computers has had little impact on productivity. I’m going to show you the best possible argument, with slightly higher productivity when computers were being deployed at scale between 1994 and 2004.

How to read this chart: productivity measures output per labor input, and the chart shows the percentage change, quarter to quarter. Why do we get negative productivity growth at points? Because output per labor hour drops due to changes in demand. If you produce a little less but don’t lay off, productivity falls. Annual productivity has grown 1-1.5% over the last 20 years. This, combined with population growth of about 1%, implies 2-2.5% real annual growth for the highly developed US economy. (Setting aside, for this discussion, that the population growth was coming through immigration and the implications current policies have for that.)
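The “produce a little less but don’t lay off” case is easy to see numerically. A sketch of the quarter-over-quarter productivity change, with made-up output and hours figures:

```python
def productivity_change_pct(output_now, hours_now, output_prev, hours_prev):
    """Percent change in output per labor hour, quarter over quarter."""
    prev = output_prev / hours_prev
    now = output_now / hours_now
    return (now - prev) / prev * 100

# Demand dips 2% but headcount (hours) is unchanged:
change = productivity_change_pct(980, 10_000, 1_000, 10_000)
print(round(change, 1))  # -2.0 -> productivity falls even though no one was laid off
```

The same mechanism runs in reverse during layoffs: output per hour can rise even while total output shrinks, which is why a single quarter’s reading says little on its own.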

But growth was not likely coming through the computer revolution. In fact, we see productivity grew more before 2000, when offices still had shared computers, than it did between 2008 and 2020, when almost everyone had at least one computer (if not two or three) at their disposal. The software and operating systems were much better in 2010 than in 1995. More computers and more computer automation did not mean more productivity. This is known as the Productivity Paradox.

The 1990s computer revolution had productivity gains about the same as the previous 40 years. So the desktop “computer revolution” didn’t meaningfully impact this measurement of productivity. In fact, the period from 2008-2020, with its ubiquitous computing and zero interest rates, should have spurred investment leading to productivity growth. Instead, that period had unusually low productivity growth. More computers did not translate into more productivity. Take that in for a minute. The era when we introduced iPhones, iPads, and Android devices, and had capable, cheap laptops and desktops coming out of our ears, along with zero interest rates, had sub-par productivity growth. Maybe that’s a hangover from the Great Recession? Even before then, the period between 2000 and 2008 had at best historically normal productivity growth.

I focus on the desktop rather than the enterprise because I think there’s a real difference between individual and institutional computing. I still think there’s a lot of needless spending on bells and whistles that drives up the cost, but it’s hard to argue that bookkeeping and accounting aren’t more productive today. But if one group is getting a lot more productive, we still have to ask whether another group is less productive, balancing out the average. Or whether the impact is so narrowly focused it doesn’t move the broader needle. It’s also possible to argue that technological change means computers are a necessary input to enable or unlock the next productivity gains through robotics and machine automation. Maybe productivity would be even lower absent computers. But I still maintain that most office workers would be only marginally less productive if you put a thirty-year-old computer in front of them. (Although they might complain a lot. And maybe quit.)

Which brings us to artificial intelligence. I use it and find it’s maybe a 5% productivity boost. I can’t just yell at the computer to do things. I have to create context and prompts to enable the AI to produce something usable. And then I still need to refactor the result to bring the quality above a naive or basic implementation. And sometimes it’s just faster for me to code it at a production-ready level than to do all that other work. I could see how some developers actually find it a negative-productivity tool. It doesn’t always generate correct code, and I’ve only had reliable success with very popular languages and tools. On legacy code it sometimes generates pure garbage. Adding AI may be no more beneficial to productivity than putting a computer on everyone’s desk, providing internet access, rolling out Web 2.0 apps, and so on.

Like a lot of AI hype, that tweet in no way matches up with my experience. However, it later came out that what they did was feed the AI tool a year of context, and it generated a “toy” implementation. That implies it is not code you would run in production without a lot of work. Sometimes the difference between a “toy” version and a production version is months of effort. Sometimes the real effort is figuring out what to build. A proof of concept or toy version is what I get when I use AI code assistance.

Which brings us to the question of how much money we should spend on AI and related investments. Based on the last thirty years, it’s unlikely that computing-related investments will drive significant changes in productivity. And from my personal, anecdotal experience, the gains from AI aren’t huge. Right now AI investments appear to be sucking up virtually all the available investment capital, along with energy (causing some communities to face higher electricity and gas prices). What if, after all is said and done, we look at that same chart through 2035 and we don’t see any change in productivity? Think about all the money we would have wasted, all the e-waste we would have generated, and all the other opportunities we would never have pursued.

Am I an AI doomer? No. We can do something an LLM can’t: we can pull a physical plug out of a wall. What I am is skeptical that this is the vehicle for this level of resource commitment. In fact, I’m for creating even less dependence on computer infrastructure in favor of other infrastructure, in everything from water treatment to rail to energy production and distribution. I’m all for diversifying semiconductor production out of Asia so we have a secure source of semiconductors if Taiwan is overrun. Some of that will require computers to automate and better manage those resources. And some will require an office worker to plug numbers into spreadsheets and send e-mails. But it’s hard to argue that spending much on new computer-based tools for that office worker, or automating that work with AI, would do much for growth.

My Porcine Aviation Era

I have not had great experiences with AI development tools. I’m not a Luddite, but if a tool takes me more time than just doing it by hand, and I get the same results, it’s not a tool I want to use. In some cases, the code had subtle bugs or logic little better than the naive implementation. In other cases, the code was not modular or well laid out. Or the autocomplete took more work to clean up than it saved in typing. And in some cases it would bring in dependencies, but with version numbers dating back to the early 2020s. They were out of date and didn’t match current documentation. For the most part the code worked, but I knew if I accepted it as-is, I would open myself (or whoever followed me) to maintenance issues at some later date. One might argue that the AI would be much better by then, and could do the yeoman’s work of making changes, but I wasn’t sold on that idea. (And I’m still skeptical.)

I would turn on the AI features, use them for a while, but eventually turn them off. I found they helped me with libraries I wasn’t familiar with. Give me a few working lines of code and a config to get me going, and I’ll fix it up. It would save me a search on the internet for an out-of-date Stack Overflow article, I guess. I used prompt files and tried to keep the requests narrow and focused. But writing a hundred lines of markdown and a four-sentence prompt to get a function didn’t seem scalable.

Well, pigs are flying and I found something that appears to be working. First, it involves a specific product. At the time of writing, Claude Opus/Sonnet 4.5 seem to be quite good. Second, I have a better way of incorporating prompt files into my workflow (more on that below). Third, language matters. Claude gave me the same problems listed above when working on a Jakarta EE code base. But Rust is great. (Rust also has the advantage of being strongly typed, which eliminates some of the issues I’ve had when working with Python and LLMs.) Fourth, I apply the advice about keeping changes small and focused. Fifth, I refactor the code to make it crisp and tight. (More on that below.) Sixth, I ask the LLM for a quick code review. (More on that below.)

The first topic I’ll expand on is changing my relationship with prompt files. Instead of attempting to create prompt files for an existing code base, I started writing a prompt file for a brand new project. I had Claude generate a starter and then added my edits. I believe in design (at least enough to show you’ve thought about the problem). This actually dovetails with my need to think through the problem before coding. I still find writing prompt files for an existing code base tedious. But, if I have to sit down and think about my architecture and what pieces should do, the prompt file is as good a place as any.

The other thing I want to cover is refactoring what the LLM hath wrought. Claude generated serviceable code. It was on par with the examples provided for the Rust libraries I was using. (Which also happen to be very popular, with plenty of online examples.) Claude would have had access to rich training data, and it pulled in recent versions (although I had to help a little). But the code was not quite structured correctly. In this case I needed to move it out of the main function and into separate modules. Mostly it was cut and paste and let the compiler tell me what’s broken. The next step in the refactor is to minimize the publicly exposed elements. Now I have code that’s cohesive and loosely coupled. The LLM by itself does not do a great job at this. Taste is more a matter for meat minds than silicon minds at this stage.

The final thing I want to touch on is using the LLM to review the code after refactoring. This gives me another data point on the quality of my code and where I might have had blind spots during the refactoring. I work with lots of meat minds, and we review each other’s code on every commit. There are some good reviews and there are some poor reviews. And reviewing code is harder if you’re not familiar with the specific problem domain. But the machine can give a certain level of review prior to submitting the PR for team review.

So that’s what I’ve found works well so far: green-field projects, in highly popular frameworks and languages, performing design tasks in the prompt file, using LLM best practices, refactoring the code after it’s generated, and a code review before submitting the PR. Auto-complete is still off in the IDE. And I’ll see if this will scale well as code accumulates in the project. But for now, this seems to produce a product with which I am satisfied.

[A small addendum on the nature of invention and why I think this works].

People’s ideas are not unique. As one of my relations by marriage pointed out years ago, when he moved above 190th Street in Manhattan, there seemed to be a sudden run to the tip of the island to find “affordable” housing. In a city of millions of people, even if very few people have the same idea at the same time, the demand quickly gobbles up the supply. Humans build ideas up mostly from the bits and pieces floating around in the ether. Even “revolutionary” ideas can often be traced back to an interesting recombination of existing ideas. Moreover, people have sometimes been doing that “revolutionary” thing before, but didn’t connect it to some other need or didn’t bother repackaging the idea. What’s more important about “ideas” is not the property of the idea but the execution of the idea.

There is still something about invention, even if it is largely derivative, that the LLMs don’t appear to possess. Nor do they have the ability to reason about problems from logical principles, as they are focused on the construction of language from statistical relationships. Some argue that with enough statistics you can approximate logical reasoning as well as a human, but I haven’t seen solid examples to date. The LLM doesn’t understand what to create, but it does summarize all the relevant examples necessary to support the work of realizing the idea. But even then, there are gaps that we have to fill in with human intention. Does this revolutionize coding for me? No. I estimate it makes me some percentage more productive, but in the 5-15% range. And of the time and effort necessary to realize an idea, coding is maybe a third or less of the effort. And I worry that we’ll never get to the point where this technology is available at a price that makes the provider a profit while being affordable to most people. After all, there’s a limit to how much you would spend for a few percentage points of additional productivity.