Why Silicon Valley Isn’t Freaking Out About China

“The Looming Taiwan Chip Disaster” asks a question many are wondering: if China blockades or invades Taiwan, what is Apple going to do? It gets many of its chips from Taiwan, and TSMC manufactures its very custom CPUs. A blockade or invasion would cut Apple off from TSMC. That would be the end of Apple products, right? So why haven’t Apple and other Silicon Valley companies diversified out of Taiwan, to at least have capacity in the United States? Let me tell ya… business idiots are beyond your tiny brain. They understand a broader, global picture that they have to carefully consider. They held a great press event in the Oval Office to announce a lot of stuff about sexy advanced chips. They even gave Trump a gold thingy for his desk. They’re saying things and doing things you can’t understand. So let me help break it down for your tiny (non-business-idiot) minds.

First, let’s start with a larger problem: you can’t just fabricate (fab) a CPU and be done. This is part one of that problem, and it’s packaging. Once the silicon is etched and cut into separate chips, each chip needs to be packaged. This is not just slapping a bunch of plastic or ceramic around the chip. A poorly packaged chip will show problems that prevent it from operating correctly. And modern packaging is a far cry from the 1970s DIP package. A DIP-packaged chip might have 40 or so connections. A modern chip can have hundreds, even thousands, of connections. And it may be housed in the same package with other chips, either support chips, or because the designers adopted a chiplet design to improve yield (the number of working chips from one 30 cm wafer). Packaging is done in Taiwan, and because it isn’t sexy, no one focuses on it. Without packaging, you have nothing. Chips etched in the US have to be flown to Taiwan for packaging. If China invaded tomorrow, and all the etching was done in the US, we would still have to fly the chips over for packaging.
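To see why chiplets help yield, here’s a back-of-the-envelope sketch using the classic Poisson yield model. The defect density and die sizes below are made-up illustrative numbers, not anyone’s real process data:

```python
import math

def die_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """Poisson yield model: probability a die has zero fatal defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D = 0.2  # hypothetical defect density, defects per square centimeter

big_die = die_yield(D, 800)   # one monolithic 800 mm^2 die
chiplet = die_yield(D, 200)   # one 200 mm^2 chiplet from a split design

print(f"monolithic: {big_die:.1%}")  # ~20.2%
print(f"chiplet:    {chiplet:.1%}")  # ~67.0%
```

Bad chiplets get binned out individually before packaging, so far more of the wafer ends up in sellable parts. Which is also why packaging, where the known-good chiplets are reassembled into one part, matters so much.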

Next part of the you-can’t-just-fabricate-a-CPU problem: a computer isn’t a CPU. There are other chips on the motherboard that control various features. For example, there’s a chip that controls the attached drives. It’s actually a little CPU in its own right, likely running a variation of Linux. Then there are chips to manage power through the system. These are not simple voltage regulators; they are programmed to ramp the current up and down to keep the CPU running efficiently and cool. You likely have chips to handle all the low-speed I/O, like USB ports. That’s not done directly by the CPU; there’s a separate processor for that. You can’t just make the CPU in the US and build a computer without all these other critical chips. Most are made in Taiwan, some in Japan, some in Korea, and some in China. That’s right: you already can’t assemble most electronics for the US without Chinese-made parts.

Next, you have to understand that Apple would buy chips from China. And so would Google and HP. If China took Taiwan tomorrow and the options were go under or buy chips from a now Chinese-controlled TSMC, Apple, Microsoft, Google, Meta, or whoever would buy the chips. Even if it meant entering into agreements that require more of the design to be done in a Chinese-controlled company that would rip off the IP. Because going without sales (and maybe going under) is worse than maybe losing some US government sales. Plus, the US government will come around when there’s no other option. In fact, it might make some things easier, and they’d make even more money in the short term. If you go to these companies and say it might cost a little bit more, but you ensure your supply can’t be cut off, they would choose not to spend the little more. They will just assume they can continue with business as usual, buying chips from a Chinese-controlled Taiwan. And they’ll be happy to do it.

Relatedly, the executives won’t believe it. Just as the Ukrainians didn’t believe the Russians would actually invade (it was just a war game, surely), even as the Russians were setting up field hospitals on the Ukrainian border to treat the expected wounded, these business idiots don’t believe it will happen. I don’t think it even entered Tim Cook’s little pointy head, as he sat through a screening of “Melania,” that China views the situation through anything other than a money lens. That’s because, like all business idiots, he views the world through a money lens. Why would China do something that would cost them money? The idea that China has felt humiliated and carved up by the West, and that this is about national pride, is alien to them. Pride? If it costs you money? Tim Cook sat through a screening of what was essentially a bribe from Bezos to Trump to protect his money. Executives periodically line up, lips puckered, to pull down Trump’s piss-stained shorts and kiss his un-wiped ass. Money good. Must get more money. Corruption make more money faster.

For all these reasons, until China invades Taiwan, US tech companies are not going to do a goddamn thing. And when China invades Taiwan, they will happily license their IP (their chip designs and process documents) to the Chinese-controlled TSMC. The fact China will cut them out of the fucking loop once all the IP is stolen is lost on them. Just as it has been at every step of the way so far. Just as China is learning to cut Western designers out of other products. Why would you buy something at a premium, just because it has a Western label on it, when you can buy from the Chinese factory at a discount? Why would you buy a US-branded computer when all the chips come from China, and it’s manufactured in a Chinese factory? It’s not like they’re going to get payback for the US putting spyware into the US networking gear bought by Chinese companies.

Once upon a time, there was a thing called the Marshmallow Test. You take a preschool-aged child into a room and tell them if they don’t eat the marshmallow on the table, they’ll get that one and another one later. The idea is to see which kids will become doctors, lawyers, and CEOs, by delaying gratification, and which kids will scroll TikTok and scratch their junk for a living. It turns out the whole thing was bogus, but it made a lot of parents try this at home and weep to see little Johnny gladly stuffing the first marshmallow into his fat little mouth. No delayed gratification. No future. But delayed gratification is not what business idiots learned. They learned to demand more marshmallows or else they’ll stop going potty in the right place. Just as they’ve learned to demand tax breaks, guaranteed loans, or other inducements to do the right thing.

If you think you’re going to get Tim Cook to buy a US-made chip for his Macs or iPhones, well… he might. He might buy some from a fab in Arizona to kiss Trump’s fat ass, ship them to Taiwan to be packaged, and then off to China to be assembled into an iPhone. Because Tim can’t package the chip in the United States. Nor, as a practical matter, can he make all the other parts of an iPhone in the United States. And as far as he’s concerned, it’s just keeping Trump happy. Just like he goes to the press conference (along with many tech leaders), announces a bunch of stuff, and does nothing. Just like Nvidia was supposed to invest 100 billion… I mean 30 billion… I mean up to 30 billion in OpenAI1. Business idiots just say words that have no meaning. He will do the bare minimum necessary to keep Trump happy so Apple doesn’t have to worry about the administration lobbing trade bombs at Apple. He will make the bare minimum number of chips in the US, though parting with that extra nickel on every iPhone makes him weep.


Note that this story from Forbes does not invalidate my point. They will likely have US workers shove motherboards, flown in from Asia, into a case and call that American manufacturing (because, remember, other parts come from other parts of Asia, including China). On their lowest-volume product. And a vague promise for other stuff. All so they don’t have to pay a significant tariff on iPhones (their highest-volume product).

  1. Note that $20 in McDonald’s gift certificates counts as “up to 30 billion.” ↩︎

It’s a Special Hell

I feel like it’s a special hell to be stuck in a democracy with people who are so fickle and easily manipulated. Frankly, the best thing that could happen is that we walk away from social media and instead start walking toward places where we meet other people, like a bowling league, bar, maker space, library, or whatever. Sure, some places will code right-wing, but people don’t like blowhards and usually just want to bowl, or have a couple of beers, or work on a project, or find a good book. At the end of the day, in spaces with people, blowhards have their moment and flame out. You find your people. The people you look forward to seeing the next time you go there.

But we aren’t doing that as much as we should. We just scroll through phones and social media posts. Like, share, re-share, or maybe even remix if you’re bored. And before you know it, another hour or two has gone by. That happens most of the week, and you lose days’ worth of time before you know it. You look and see things aren’t going so well in other areas, so you jump back on social media. It’s a combination of an escapism factory and a dopamine dispenser. How should you feel about something? How did you react to that meme? Did you re-share it? Or did it change your gut reaction to a story? Did you switch teams on that one? Did it make you feel smarter? You’re not a sheep. You think for yourself! Except, did you?

It’s a special hell to be stuck making decisions with people whose frontal cortices have shriveled and are driven by their amygdala and limbic system. Feed enough of the right trash into the chum-bucket that is their media diet and a president trying to become a dictator is romanticized as a Tony Soprano, having the guts to go for what he wants. Not pausing to think that it’s against the basic idea of American democracy. Reaching that conclusion requires logical thought. Having an instant reaction to a meme, an image, a video that has to be shortened to seconds to deal with a decaying attention span, requires no thought. Feed enough of those images and you get powerful reactions.

All the while, the people making money by monetizing your eyeballs are entering into a new and dangerous relationship with a dictatorship-coded regime. One of patronage. Special privileges, tax breaks, or rewards for being a swell pal. The people who decide what to feed your atrophied brain have an interest in feeding you the message the dictatorship wants. What Orwell missed about dictatorship was that it wouldn’t be government ministries swaying citizens. It would be private companies, beholden to the state, doing the state’s bidding in pursuit of personal wealth. It wouldn’t be faceless bureaucrats. It would be opaque content policies deciding which image to feed you, based on careful targeting.

And with all the noise about the Blue Wave, I imagine we have yet to see the content that will be pushed to fire up fear, resentment, and anger to counter that Blue Wave. Whether it’s voter suppression, making people feel their vote is meaningless to stem the authoritarian tide and they should just protest, or drowning them in emotional counter-narrative, the pushback is coming. They don’t have to hire an army of content creators. They just have to push whatever is working and amplify that. That is the same tactic Russia uses to push its narratives in the West. Don’t push all at once. Seed it. See what works. Amplify that.

In addition, we have the novelty of AI to prod the narrative. Imagine OpenAI signaling they need government investment to be profitable, and the only condition is that they inject some additional prompting. If someone searches on specific topics, inject a small push toward the government’s position. Not overt. Not blaring. Just a nudge. Offer content from a set of friendly sites to back up the claim. Not Fox News, just friendly sites. It can’t be obvious they’re being nudged. Maybe let Google and Meta know that a government ready to invest asks only that they treat sensitive subjects the way the government desires.

Oh, but won’t they fear the Democrats’ backlash? What backlash? Backlash from people who are making their platform “we follow the law”? The Trump platform is “if you help me, I’ll keep you out of jail and make you rich.” That’s why he’s pardoning scammers who now no longer have to provide restitution to their victims. And if he’s willing to bail out a scammer, he’ll definitely cover for a big-tech CEO. And they have no problem lying, even under oath. If anyone ever asks, it never happened. As we get past Labor Day, expect your feeds to get a little ‘odd.’ Nothing big, but… wow… you did not know that Islamic terrorists were among the Minnesota protestors. I wonder what else those people with whistles lied about?

This is truly a special hell. I stay off social media not because I’m immune to manipulation (no one is, regardless of their self-image), but because I am just as susceptible as the next person. I had the misfortune to hop on today and was reminded how addictive scrolling the feed can be. Especially when people react to a comment or re-post. But millions of voters hook themselves to it, willingly reshaping their minds to big tech’s content.

[Note] I originally used the phrase “Red Wave”, when I meant “Blue Wave.”

AI Coding Will Become Line-Work

For the sake of argument, let’s assume you can create end-to-end software machines. Just assume this is true for now.

Right now AI generates code, which is reviewed by a human. The human prompts, provides context and whatever additional injection, the AI chews on it for somewhere between seconds and tens of minutes, and out pops a module. It could be an API, a library, a command-line application, a single class, part of a web app, or whatever. The human reviews the product, reasoning symbolically over the code, looking for defects of logic rather than text. The LLM is a probabilistic text generator, which is fine, as most new code isn’t necessarily novel. There exists a wealth of similar code (text) on which to draw. The human applies a different kind of reasoning, like a pair-programming session where one developer knows the syntax and the other the algorithm. Is there a race condition here? Could this resource be created once instead of with each invocation of a function? Do the tests check anything meaningful?
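A toy example of that second question, the kind of thing a reviewer catches by reasoning about execution rather than text. Both functions below are hypothetical, but the pattern is common in generated code:

```python
import re

# Plausible generated code: the regex is rebuilt on every call.
def count_words_generated(text: str) -> int:
    return len(re.compile(r"\w+").findall(text))

# What the reviewer asks for: create the invariant resource once,
# outside the function, so it isn't recreated per invocation.
WORD_RE = re.compile(r"\w+")

def count_words_reviewed(text: str) -> int:
    return len(WORD_RE.findall(text))

# Both produce identical results; only one hoists the shared resource.
assert count_words_generated("to be or not") == count_words_reviewed("to be or not") == 4
```

Nothing here is textually wrong, and both versions pass the same tests. Spotting that the compile belongs outside the function is a judgment about how the code runs, not about its syntax.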

The ability to write code, and to build symbolic (not textual) mental models around that code, is what allows us to evaluate the code. The reason we spend time solving problems like the dining philosophers problem, or struggle through algorithms, is not to regurgitate that same text ten years later. It’s because we encounter novel problems which are similar to, but not exactly like, well-understood problems learned in school. We mash together the novel and well-understood elements not by cutting and pasting code, but by applying mathematical logic. Some of us express it more directly in actual mathematical symbols, or tools such as TLA+, but even drawing boxes on a whiteboard uses symbolic logic.
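For reference, here’s a minimal sketch of the dining philosophers with the standard ordered-acquisition fix; everything below is illustrative, not from any particular textbook solution:

```python
import threading

forks = [threading.Lock() for _ in range(5)]
meals = [0] * 5

def philosopher(i: int, rounds: int = 100) -> None:
    left, right = i, (i + 1) % 5
    # Always grab the lower-numbered fork first. A global acquisition
    # order breaks the circular wait, so deadlock is impossible.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher finishes all 100 rounds
```

The point isn’t the dozen lines of code; it’s that knowing why the min/max trick prevents deadlock is symbolic reasoning, and that’s exactly the muscle that atrophies when you only ever read machine output.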

Eventually, that mental muscle will atrophy as we read code but produce little. Many AI-heavy developers anecdotally report the loss of even basic skills. At some point the loss of skill will become enough of a problem that code cannot be reliably reviewed. The likely response will be to adopt AI to review the code for the human. At this point it is easy to wonder why a human, with their need for a desk, noise-canceling headphones, and their expectation they will be served lunch, needs to be paid like a software engineer. Are they an AI engineer? No, they are a machine operator. They are as pivotal to the production of a new piece of software as a factory worker is to stamping out the bit or bob their machine produces. This is what Cory Doctorow calls the reverse centaur: a human who is there because it’s not feasible to automate one part of the process. But the machine is clearly in charge.

What about AI engineers building fantastic AI models? There has always been a need for other engineers to produce the machines used in factories. Mechanical engineers, electrical engineers, chemical engineers, materials scientists, etc. work to design and build better widget-stamping machines. But these are nowhere near as common as widget-stamping-machine operators. And while they make the salary of maybe five widget stampers, they are not particularly well compensated compared to software developers today. It’s likely we’ll need AI specialists to help build better AI machines, but the spectacle of exorbitant salaries is tempered by the realization that their jobs will also be increasingly automated. Why would one need hand-made bricks to build a brick factory? You would just buy bricks from another factory.

At this point you might well be wondering why an engineer in the future would make six figures on their first day of work1. Operating the software machine is not about math or programming skills (which average people are excited about for five minutes but then discover learning to code is a non-addictive sleep aid). Operating the machine requires being trained on the machine (likely by the vendor) and feeding in the inputs for the required outputs. This is no longer a prestige or high salary job. Following instructions and not tinkering with the machine are more important to success than symbolic reasoning over concurrent threads of execution. And being the boss of a bunch of machine operators? That’s what we call a shop foreman today.

I’m not saying all programmers drop off the face of the earth. I’m just saying the ratio of programmers to AI operators drops to something like the ratio of people who can hand-machine a part to those who, at most, stick a metal blank in a machine and push a button. There are people who build and repair the machines. They aren’t really artisanal craftspeople. They are machine makers. Often that person is paid somewhat more than the person feeding the machine blanks and pushing the button. Blank, after blank, after blank. But hand machining is not the same skill as feeding blanks into the machine, and hand machining is too expensive. Nor does it guarantee a better part, if you have a well-calibrated and maintained machine.

There are plenty of examples of crafts that have been reduced to people feeding machines at various levels. No one spins yarn by hand except artisanal craftspeople. A loaf of bread from a fancy bakery is more expensive than a mass-produced loaf in a grocery store. IKEA produces a Billy bookcase every five seconds. If any of the workers that operate the machines at IKEA are woodworkers, it is because woodworking is their after-work hobby. But the person operating the bread machine at a bread factory, the person feeding sheared wool into a spinner, or the person feeding sheet goods into a cutting machine at IKEA are not paid as skilled artisans. They are paid as factory workers. The output of craftspeople, like hand-spun wool, hand-made sweaters, or hand-machined Swiss watches, is usually the fetish of the wealthy. I just don’t see anyone bragging about their artisanal, hand-coded shell script the way they would about a watch.

And this world is a different world. In the past, we fixed or extended software because of the cost of re-writing the software. In this world, if you need a new iteration of a storefront for a retailer, you run it through the machine again and make new software. You keep running it through until the machine produces something you want. There is no need to make understandable, modifiable, high-quality software. It will be thrown away every iteration. There is no point in hiring a skilled weirdo, who hand-codes in their parents’ basement, to fix the site or make changes. Maybe in India a few people who hand-code exist to help make the machines, the way IKEA makes their products in other lower-wage countries with the requisite skills. But you wouldn’t pay developed world wages for those skills.

Don’t get me wrong, you are not going to get the average level of quality. You’re going to get shit. A Billy bookcase falls apart in the presence of too much humidity because it’s particle board. I’ve had plenty of Billy bookcases with flaking laminate or drooping shelves. Do I get rid of them? No, they just continue to be an eyesore until something else forces me to get rid of the bookcase. Just as bad and broken software will sit around until something motivates a broader change. Possibly with no one digging into why it was broken, other than to stitch together a vague notion from (possibly wrong) tracing and log messages. But no one is going to read all that code, much less edit and debug it, because (practically) no one will be able to read it. Certainly not the five-figure code-machine operator.

But I don’t believe we can build this software machine. I believe there is an upper limit to the complexity LLMs can incorporate in a purely text-based model. (It’s not even text-based, it’s token-based, which is why counting letters in words has been such a challenge.) LLMs cannot reason (although they can produce text that looks like reasoning). They cannot process categories of concepts, just text about categories of concepts. If it doesn’t already exist, or is not within reach by mashing more text together, an LLM cannot imagine or create it. And most of the software it does create is not imaginative or novel. It is the same API, using the same authentication, using the same languages, and the same back end as thousands of other similar examples, code repositories, and textbook descriptions. As long as you want a storefront that is largely like every other storefront on the web today, the kind that prior to LLMs you would have mashed together from example code and Stack Overflow, you’re going to be replaced by a code-machine operator. But then again, were you ever really contributing that much?

  1. Rather than a specific number – just assume six-figure salary means well-compensated above the average wage. ↩︎

Thinking Through NVDA

With the normal disclaimer that this does not constitute investment advice, it is provided purely for entertainment purposes, and contact a tax or investment professional before making any investment decisions, I’ll take a look at a stock.

Taking a look at Nvidia, we see the 50-day moving average peaked in December and has started a flat to modestly downward trend (the yellow line is an eyeball of the trend). The price is approaching the 200-day moving average (orange circle). There is a floor around 170. So what events would I look at, from a technical perspective?

If the price drops below the support at 170, that is significant. In my thinking it’s more impactful than the price falling below the 200-day moving average. The 200-day average is going to continue to move up, even if the price declines, as it catches up to the historical price action. The violation of support that has been in place since September is more critical, in my opinion. As would be a steeper downward slope on the 50-day moving average.
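For the curious, both indicators are trivially computable. A sketch with made-up closing prices (not actual NVDA data):

```python
def sma(closes, window):
    """Simple moving average of the trailing `window` closes."""
    return sum(closes[-window:]) / window

def broke_support(closes, support):
    """True if the latest close finished below the support level."""
    return closes[-1] < support

closes = [182, 178, 175, 172, 169]  # hypothetical daily closes
print(sma(closes, 5))               # 175.2
print(broke_support(closes, 170))   # True
```

In practice you’d run the 50- and 200-day versions over a real price series; the logic is the same, just a longer window.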

Technical analysis isn’t just tea leaves for investment bros, if you look at it as a pattern of the expression of sentiment about a stock. What do we know about the broader context? For one thing, the current administration has shown a willingness to step in and make decisions that impact Nvidia’s sales. Second, the broader investment community is getting more concerned about circular deals. Third, investors are starting to ask where’s the beef on AI revenue. In the past, when the price has approached 170, buyers came in because they saw value in the stock. Above 190, more people did not see the value in the stock and sold. That range indicates people may be waiting for more information before making a move.

As we can see, all the revenue bump for Nvidia has been in AI accelerators. I’m assuming there’s a role for LLM-style AI in the future. If anything, it makes sense for Microsoft to include it in Office to help with writing e-mails, Word docs, spreadsheets, and presentations. Likewise, Google’s office offerings would benefit. As would ad generation on Meta. The question is whether the amount that needs to be spent on accelerators, data centers, and energy makes sense, given the revenue it produces. If it costs Meta $10 of cap-ex and $10 of lifetime energy costs to generate a lifetime revenue impact of less than $20, it clearly doesn’t make sense.

What would happen to Nvidia if GPU sales were cut in half? First, the multiple needs to come down, because expected future growth now starts from a much lower sales volume. Let’s say it drops to 20 times earnings. Earnings are cut in half. That would mean a share price of around $45 for Nvidia. What does that do to the Mag 7? Well, different parts of the Mag 7 are there for different reasons. Apple is not there because of AI sales. Microsoft, Amazon, Google, and Meta have AI exposure but won’t get destroyed. Tesla is a meme stock, so this may not change anything for people who believe humanoid robots are a 10 trillion dollar TAM. But there are other stocks, like Oracle, Micron, and Broadcom, that take a big gut punch. (As is happening as I write this.)
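The arithmetic behind that $45, using the post’s round numbers (the EPS figure is just what those numbers imply, not an estimate):

```python
def implied_price(eps: float, multiple: float) -> float:
    """Share price implied by earnings per share times a P/E multiple."""
    return eps * multiple

eps_now = 4.50                          # back-solved from the $45-at-20x scenario
bear = implied_price(eps_now / 2, 20)   # earnings halved, multiple down to 20x
print(bear)  # 45.0
```

The same two-step squeeze, a lower multiple applied to lower earnings, is why the downside math compounds so quickly.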

What may have an outsized impact is the wealth effect that has the top 10% floating half of all consumer spending. A drop in the Mag 7 would pull down the entire S&P 500. But it also flips psychology. It would also have an unquantifiable but negative impact on the private equity firms and banks that have lent money to AI startups and data center build-outs. One estimate puts 20% of PE’s loan book in AI-related loans. No bueno. Not to mention the hit to all sorts of venture funds and the investors in those funds.

Again, do your own research, consult a professional, this is only provided for entertainment purposes, and is not investment advice.

The Great Ball of Money Lurches Away from Tech

This morning, as I shake my head at the lack of JOLTS data from the BLS, I watch the great ball of money lurch from Tech to not-Tech, like staples. Unfortunately, the multiples for many consumer staples companies are already on the high side of normal. For example, PG is at 23, which represents confidence in what is essentially a flat business. Unlike the mythical tech company (and I say mythical with great intention), PG and the other staples are businesses that are stable, with low but predictable margins, and that scale linearly. If they take a little market share this year in one category, it doesn’t mean winner-take-all. They don’t have 40%-or-more-margin products. There is no network effect around dish soap.

I have a number in mind when I look at a fair valuation for Nvidia (hint: it’s much lower than the current price). And while I see AI as having merit, my arguments relate to how much you want to pay for the productivity. In my own field that productivity is mixed and sometimes requires you to ignore quality. But right now investors are heavily subsidizing AI for consumers and businesses. The question of how much a company would have to charge for a sustainable AI service is open for discussion. We may have a gentle deflation out of Tech, but it may mean that PG winds up at a PE of 30? Or maybe 35? That’s a little rich. Because the big ball of money doesn’t know where to go, so it just keeps buying and selling.

The impact of the drawdown in Tech is an advance/decline ratio of roughly 6:4 (meaning out of 10 random stocks, six are up and four are down), while the NASDAQ and S&P 500 are down. The DOW is treading water, but the Russell 2000 is up. People are looking to higher-risk, smaller stocks. As I look at that, the big ball of money is pushing into not-Tech, with industrials, consumer cyclicals, and basic materials punching up. That’s in line with the data suggested by yesterday’s ISM: low inventories are going to drive up production.
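The breadth-versus-index divergence is easy to see with a toy cap-weighted index (all weights and returns below are invented): most stocks rise, but the biggest weights fall, and the index falls with them.

```python
# Four megacaps carry 70% of the index; six smaller names carry 30%.
weights = [0.25, 0.20, 0.15, 0.10] + [0.05] * 6
returns = [-0.02] * 4 + [0.01] * 6  # megacaps down 2%, the rest up 1%

advancers = sum(r > 0 for r in returns)
index_return = sum(w * r for w, r in zip(weights, returns))

print(advancers)               # 6 of 10 stocks advanced...
print(f"{index_return:+.1%}")  # ...yet the cap-weighted index fell 1.1%
```

That is exactly the 6:4 breadth with a red NASDAQ and S&P 500: the averages are hostage to their largest components.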

It seems like there’s more money sloshing around than value to absorb that money. It’s not floating around in the economy as money to spend (other than through the wealth effect). So it’s not growing anything. It’s sloshing around inside markets, driving growth through multiple expansion, drafting in more money as people watch the number go up and want to join in. All of which, I’m afraid, makes this gambling more than investing. Jump on the bandwagon, ride it up, and bail before the hammer comes down. But you better get in now, otherwise you’ll only get in at a higher valuation in the catch-up trade and have less runway.

The big ball of money will concentrate into a smaller and smaller chunk of the economy as it sloshes around, with dumber money getting swept up by smarter money. Assume we just plod along for another couple of years, and look for the articles that say the top 5% are 50% of spending. Only at those levels they don’t go out to eat more; they just buy $25,000 pizza ovens for their $750,000 kitchen remodels. So the economy will look weird as McDonald's, Walmart, and Target struggle, as household consumer names lurch in and out of bankruptcy and private equity, while Porsche and Rolls-Royce are making bank. GDP is up. The market is up. But the bottom 80% are just fucked. And you can’t have a democracy when the bottom 80% feel like they’re getting the shaft and locked out of number-go-up.

Did Microsoft Strongly Encourage EU and Canada Linux Adoption?

Microsoft has indicated it will hand over BitLocker encryption keys if asked by the US government. Could this mean an EU or Canadian company essentially has no device-level encryption on a Windows PC with respect to the US government, or a future DOGE-like contractor? Maybe, but first let’s scope the risk. For this to be useful, the US agency or contractor would need to possess the device. That would mean covert access, theft, or grabbing the device at the border. Mostly, it would apply to devices brought into the US. It doesn’t help with remotely accessing or hacking a computer. As long as the device does not come into the US, it is largely safe from having its contents read by decrypting the disk.

But this is one more event highlighting that US infrastructure is a weakness for the EU. If you encrypt your device using BitLocker, to prevent leaking data in the event of theft or loss, it could still be accessed in the hands of a US company or contractor. As Microsoft is disabling the ability to use PCs with only local accounts, this means every newly activated Windows computer’s disk could be decrypted. Combine this with the ability Microsoft or Google has to access e-mail and office documents, and suddenly EU companies are naked. Much like Germany is looking at repatriating its gold from the New York Fed, EU and Canadian companies may need to look for non-US-controlled solutions. For desktops and laptops this might mean moving from Microsoft to Linux. And from Office 365 and Google Drive to EU-based alternatives.

For US citizens, this means journalists cannot rely on Microsoft’s BitLocker encryption and recovery. Under the Fifth Amendment, individuals do not have to provide potentially incriminating information. That includes passwords. But it means Windows PCs are not safe should the DoJ want to decrypt a journalist’s device. As the recent search of a Washington Post reporter highlighted, this administration is not overly encumbered by the constitution.

I’ve had to use BitLocker recovery before, because I upgraded hardware or plugged in an external GPU. It is much easier to go to the Microsoft site and look up the recovery key. I don’t have anything Microsoft doesn’t already have access to through services like OneDrive. The fact Microsoft had my key did not materially change my security, although how easily it hands over the keys may raise a concern. But I’m not ASML negotiating to sell lithography machines to a potential Intel competitor. Would the government’s stake in Intel be enough to encourage Intel executives to ask the administration to grab the ASML executives’ business data? Let’s face the fact that Pam Bondi and Kash Patel have destroyed the independence of the DoJ and FBI. While the ASML executive is out to dinner, a quick clone of their drive, to be unlocked and handed to Intel, isn’t out of the realm of possibility.

Europe Needs to Implement a Digital Services Tax

If the EU wanted to do something more than the absolute minimum proportional response (if that), they should do the following three things.

  1. Resurrect the digital services tax.
  2. Get off US infrastructure.
  3. Promote interoperability.

I don’t say this lightly, as I work on that US infrastructure. But the road has not gotten better. It has gotten worse and there is no indication it will get better. If Greenland, then why not Iceland? Why not the Faroe Islands? Heck, why not Airstrip One? Let me be clear about what could happen in the future. Not what could happen next week, but maybe in a few months, if things continue to escalate.

First, the US puts EU leaders on the sanctions list, as it did to a judge at the ICC. This would make it impossible to access any of the services provided by Office or Google Cloud. Their 365 e-mail account gets locked, along with all their files, and they are unable to perform almost any financial transaction that involves a bank or on-line retailer. In other words, they are locked out of their work accounts and personal accounts, and are limited to an almost cash-only existence. Imagine Keir Starmer, Merz, or Meloni having to find an individual in their office who can use MS Office on their behalf. This might lead to countries backing out of the various agreements and treaties that allow sanctions to be comprehensive. Which could inadvertently result in Russians getting sanctions relief. Some EU banks, with US ties, may find themselves in the impossible position of satisfying the US and the EU at the same time. They would likely pressure the EU to back down.

Next, the US government taps AWS, Microsoft, Oracle, and Google and suggests that the hyper-scalers slow-roll or stop updates to sovereign cloud offerings in these countries. Within a period of weeks, as certificates expire, these clouds will begin to fail. Add to that the US DoJ, legally or illegally, demanding access to the data stored for European governments and European companies. Essentially, the US would have carte blanche to access Google or Microsoft hosted mail and messaging for those governments. These offerings were meant to give European governments, militaries, police, and intelligence agencies high-quality, secure cloud services. They might have local operators, but they are completely dependent on US companies providing updates. Even though the data centers reside in the EU, the US parent company has a degree of control over the systems that could result in the data being exfiltrated and their services being disabled.1

This is horrifying and is what the end of NATO could look like. To get ahead of it, Europe needs to prime the discussion now, because it takes way too long to agree on anything. The first part is straight-forward: it is mostly US companies selling digital services to Europe, so resurrect the tax. Or start fining them to enforce desired EU policies such as more open app stores and anti-monopoly rules. Just restarting these discussions may add pressure on the US companies that control much of social media and digital infrastructure. They will pressure their own advocates in Congress. If anything, fining X should be a priority across the board. Social media and internet services companies could threaten to pull out of one country, like Denmark, but they would not want to pull out of France, the UK, Germany, Italy, and the Nordics. But the “big” countries have to all agree or the “small” countries don’t stand a chance.

Next, they should start discussing (again, because of the length of the talk runway) plans to move to EU providers for key infrastructure. The ability to run a railroad or a water treatment system should not depend on the whims of an aspiring autocrat 5,000 kilometers away. This, by the way, applies to some power systems supplied by China, which could be cut off to help their client state, Russia. Europe needs to put “sole source” laws into place, under which a non-European product, device, or service may be bought only if no European supplier exists. Or local partner laws, like China has had, to force technology transfers. But the process of moving off that infrastructure is not quick. It will also be expensive, as cloud providers try to create one-way valves: cheap to get data in but expensive to get data out. That cost could be offset with a digital services tax.

Next, the EU should promote adversarial interoperability. The EU should actually (not just make face noises about it) withdraw from provisions of the trade agreements that prevent EU companies and citizens from jail-breaking their devices. This is necessary to prevent a foreign power from bricking your infrastructure. If it is completely legal to replace the tractor firmware with your own firmware, you don’t have to worry about John Deere turning it off remotely. (Like they did to the ones Russia stole from Ukraine). If EU citizens are allowed to jailbreak their devices, or reverse engineer apps on those devices, they could decide to have X – but without the Nazis. Or they could elect to install an app store that only charges the listed apps 5% instead of 30%, where even US companies would want to be listed. Interoperability should include taking all the data you like from Facebook or Instagram as a scrape, getting rid of the ads and unwanted content, and having that as your version of social media. (Although social media is largely a tool used by adversarial nation state actors to conduct influence campaigns, and it would not be a great loss if it did go away).

This is getting ugly, stupid, and weird. America has enough legacy income and wealth to get by for a long time, as the institutions that have underpinned its success get wiped out. There is anger on both the left and the right against the institutions, as they have been used too often as spring-boards for their leaders, who profit from them. For example, the possible insider trading both at the Fed and in Congress. And when those people leave, they get plum jobs in industry or finance, shaping the rules and laws in their employers’ favor. Court rulings that make government bribery charges against officials almost impossible to prove. Parties that have allowed themselves to be captured by monied interests to the point they align only in places where the welfare of dollars is at stake. Europe does not have to get dragged down by the US as it implodes, tearing apart countries and the conscience of the EU member states in the process.

It’s not that Europe doesn’t have its own challenges. In some ways life without America will force some countries to come to grips with the level of commitment necessary to provide their citizens with security against Russia, Iran, and China. And their aging populations may require them to re-think their approach to their own welfare states or economies, given new security needs. They may take a more active role in the Middle East in order to prevent waves of refugees from each successive crisis that god-forsaken part of the world continually spawns. They might have to project power into Africa, both to deal with refugees and to secure energy for their countries. And finally, with no expectation of the US nuclear umbrella, Germany, Poland, and the Nordics need to plan for developing nuclear weapons and a strong second-strike capability. They have relied too much on the US, which was both willing and able to pay for these missions. I can’t imagine the 79 million Americans who voted for Trump being willing to stick their necks out for Polish, Romanian, or Estonian sovereignty.

But they should do this as a United Europe. All these problems are bigger than any one country and require cooperation between multiple countries. That spirit of cooperation and strong institutions is the model that will bring every region a better quality of life. Once upon a time, your impact on the world extended no further than your neighbors, then your city, and then a nation; now problems are so large they are trans-national. The US is attacking the basic idea that there is value in these alliances. It is an agent for a rolling back of progress, along with Russia, and a China that views trade as wealth extraction from the rest of the world. If the world of cooperation, alliances, and economic and social progress is going to defeat the world of angry isolation, it needs to rise to the moment. One way to do this is to be explicit that US intimidation could come at the cost of the part of the economic order that enables US digital hegemony.

  1. Note that the UK has to ask the US for permission to fire its nuclear missiles. France does not. And if the US exits NATO, the UK should prioritize fixing this problem. ↩︎

Depreciation Should Be 3 Years

The stated goal of Nvidia is to produce graphics cards that provide the best performance, such that it is not even economical to run a competitor’s cards. Let’s delve into this. First we need to understand marginal cost. If you build a machine to make widgets, and let’s say you buy the machine up front, the cost of the machine is a sunk cost. Whether you make one widget or a million, it doesn’t matter. Your sunk costs are your sunk costs.

If it takes $10 worth of labor and inputs to make one widget, my marginal cost per widget is $10. Anything I earn over $10 per widget covers at least my marginal costs. I may not be profitable, but I’m making money on every widget and cash is coming in. If I make less than $10 on a widget, I’m losing money on every widget and bleeding cash. Based on the market, I expect to sell 100,000 in a typical year. A price of $15 would cover both my marginal costs and the amortization of my sunk costs. That’s the best of all possible worlds.

What did I mean by amortization? I expect my widget machine to last, say, 10 years. I don’t just assign the cost of the machine to the first year, as it will produce revenue for the next 10 years. That would distort my performance by making my first year look like a legendary disaster, but the next 9 years look like I’m a genius. Instead, I take 1/10 of the machine cost every year in amortization. If the machine was $5,000,000, that’s $500,000 a year. If I sell 100,000 widgets, I can assign $5 to each widget as its portion of the sunk cost. If I sell 100,000 widgets for $16, I cover both my sunk and marginal costs and make $1 of profit per widget. If I sell at $14, I need to sell more than 100,000 widgets to cover the amortization.
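The amortization arithmetic can be sketched in a few lines. This is a toy model using the illustrative numbers above, not real accounting:

```python
# Toy model of the widget example: $5,000,000 machine amortized
# straight-line over 10 years, 100,000 widgets sold per year,
# $10 marginal cost per widget. All numbers are illustrative.
def break_even_price(machine_cost, years, annual_volume, marginal_cost):
    annual_amortization = machine_cost / years              # $500,000 per year
    per_widget_share = annual_amortization / annual_volume  # $5 per widget
    return marginal_cost + per_widget_share                 # $15 per widget

price = break_even_price(5_000_000, 10, 100_000, 10)
print(price)  # 15.0 -- sell at $16 and each widget earns $1 of profit
```

Any price above that break-even point covers both sunk and marginal costs; below it, you need more volume to make up the amortization shortfall.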

But let’s say the widget business is not so good. Someone offers me $12 for 50,000 widgets. If I have no other orders for widgets, I’m going to lose money, given the amortization of my machine. Specifically, I won’t be able to cover $400,000 of the amortized cost. However, because my marginal cost is only $10, I get $100,000 of contribution to my sunk costs that I would not have received had I not taken the deal. I could even make money if I got enough contracts at $12. If the offer is for 200,000 widgets at $9, I should not take the deal. Even if it is more money in total, I’m losing $1 per widget against my marginal costs. I am losing more money on that deal than if I just shut down the machine. There is no sales volume that would result in anything but losing more money.
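The accept-or-reject logic in those two offers reduces to a one-line contribution calculation, sketched here with the same illustrative numbers:

```python
# Contribution margin: what a deal adds toward sunk costs.
# A positive result means the deal helps cover amortization even if
# it doesn't cover all of it; a negative result means idling the
# machine loses less money than taking the deal.
def contribution(price, marginal_cost, units):
    return (price - marginal_cost) * units

print(contribution(12, 10, 50_000))   # 100000: $100,000 toward sunk costs
print(contribution(9, 10, 200_000))   # -200000: worse than shutting down
```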

If my competitor buys a newer widget machine that’s more efficient and can make widgets at $8 a widget, they can undercut me at a $9 price point. They’re still making a contribution to their fixed costs (since their marginal cost is $1 less than the selling price). But I can’t match their price without bleeding money. I have no choice, at that point, but to re-tool and try to get my cost down below $8 to be competitive. Otherwise, they take market share and drive me out of business.

The cost of building the data center and filling it with GPUs is the sunk cost. Next, you have the marginal cost, which is largely electricity, used both to power the devices and to cool them. Then you have people to manage the data center. Then you have software costs related to tuning or managing the model. The goal of Nvidia is to make it so that the marginal cost of operating a competitor’s chips is higher than that of the Nvidia chip. They want to produce new chips at a 1 year cadence, and within 2 years make the marginal cost of operating the newer chips decisively more attractive than the old chips. They want the economics to drive customers to replace on a 2-3 year cycle, to minimize their marginal costs.

One myth is that you only need the latest and greatest chips for training, but when they’re two years old, they can move to inference. Training is when the model is fed data to get it to make a good response. Inference is when you and I ask the model for stuff. It’s estimated that 90% of the costs are for inference. That means the least efficient chips would be carrying the work behind 90% of the cost. Moreover, as new models come online, they are larger and push the limits of the older chips, meaning those chips have to work even harder for one unit of output. There may be some models that require too much memory or more bandwidth than the older chips provide. Using a 5 year old chip that’s struggling on a newer model is the highest possible marginal cost you can have. Granted, no one can have all new chips all the time, so you have a blend of generations, with some chips new and some older. You can track a blended marginal cost, and let’s say that’s $10 per 1,000 queries. [Don’t worry about the exact numbers – I’m making the math easy.]

What Nvidia is trying to do is make chips such that the AMD customer might buy a cheaper chip, but their marginal costs are higher than with Nvidia’s offering. For example, the AMD chip runs at $12 per 1,000 queries, while Nvidia wants to be at $10. But next year AMD will come out with a better chip, maybe meeting Nvidia’s $10 per 1,000 queries. So Nvidia has to come out with a chip that produces 1,000 queries for $8. And the same again the next year. The two year old Nvidia chip still costs you $10 per 1,000 queries, but the new one costs you $6 per 1,000 queries. To stay ahead of your competition, you need to retire the 2 year old chips as fast as possible and buy as many of the new chips as you can to keep your blended cost lower than your competitor’s blended cost.
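The blended-cost arithmetic works out like this. The fleet mixes are hypothetical; the $10 and $6 per-1,000-query costs are the illustrative figures from above:

```python
# Blended marginal cost across chip generations, weighted by the
# share of queries each generation serves. Costs are per 1,000 queries.
def blended_cost(fleet):
    # fleet: list of (share_of_queries, cost_per_1000_queries)
    return sum(share * cost for share, cost in fleet)

slow_refresh = blended_cost([(0.8, 10.0), (0.2, 6.0)])  # mostly old chips
fast_refresh = blended_cost([(0.2, 10.0), (0.8, 6.0)])  # mostly new chips
print(slow_refresh, fast_refresh)  # ~9.2 vs ~6.8 per 1,000 queries
```

The provider that refreshes faster serves the same queries at a lower blended marginal cost, which is the whole game.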

But even the models themselves have a short life. Every few months a newer or improved model comes out. Most users want to use the latest model, even if it’s larger. The old model, which might run well on the old hardware, begins to see its market shrink. It may still have applications tied to it, but not new applications and not new users. The realistic lifecycle of a model might only be a couple of years. The chips are constantly being pushed to larger and larger models. To get better performance, more and more “think time” is built into how the model is executed. This makes running new models on old chips not just a little worse, but harder still, because the model has to be run with more iterations to provide better output. It’s like having a truck that burns more gas than the new model, except you also have to make more trips with it.

Right now we don’t have a price war. We have a war of financing. Investors want to see growth. More than undercutting each other on cost, the goal is to be able to grow your user base with free services. If that’s the case, using your highest marginal cost chips to offer free services seems like putting a noose around your corporate neck. All your competitor has to do is take on more sunk costs that have to be figured out at some point in the future (buy more new GPUs) and kill you on marginal costs today. This won’t work forever, but it works in the short run.

The goal is to be the last one standing in a winner take all market. If your competitor is only bleeding $8 per 1,000 queries on free customers to your $10 per 1,000 queries, they can either offer better models or higher limits. They don’t have to be profitable. No one is profitable right now. No one is meeting their marginal costs. But if you can operate more efficiently, you have more time until the corporate grim reaper comes for you. Your runway lasts longer. What about the GPUs your investors financed? Don’t they want a return on their investment? Yes, that comes from you making it to the finish line while your competitors do not, rather than getting to positive cash flow.

So let’s think about this again, for a 4-6 year lifetime for a GPU. I would buy 4 years, as that’s probably where the chip is clearly on its last legs; at 3 years it may still be effective to run in a blended cost environment, but that 4th year is likely when it really starts becoming an economic millstone around your neck. I think 6 years is absolute fiction, unless you have government and large corporate customers who have a fixed (and profitable) contract and don’t want you to change the model used by their applications. For the part of the market that lives on the latest models – developer subscriptions, chat agents, new development – a 6 year old chip is likely killing you. It raises your blended costs and you burn through your investment even faster. So no, I don’t think a 6 year life span makes any sense whatsoever, except for a narrow type of government or large corporate customer. I think a 2 year lifespan makes a lot of sense for a very aggressive AI provider who wants to keep their marginal costs as low as possible.

The Lack of Progress in Computing and Productivity

As “A Much Faster Mac On A Microcontroller” points out, even the cheapest processors today are capable of meeting most people’s needs, with a couple of major asterisks (the underlying compute hardware in the article is about $10 to $30). I’m not going to claim that Word in 1995 is as good as MS Word in 2026. I get a lot of features in 2026 MS Word on my Surface laptop that I didn’t get with my Mac 6100 from 1996. But a 1990s PC did that work while consuming less than 100 watts. A modern gaming PC can push 750 or even 1,000 watts while gaming, and 200 watts or more doing nothing. I would argue the biggest difference in desktop computers is in gaming and media consumption. Productivity has largely been static. And this has some implications for productivity we might want to consider before burning up the planet with AI and filling it with toxic e-waste.

What I will say is that for many use cases, the computer from 1996 is as useful as the computer sold in 2026. I have watched many people use nothing more than Outlook, Word, Excel, and their browser, day in and day out. These are tasks I was happily doing in 1995 on my DOS/Windows 3.1 laptop at work, or my Mac 6100 at home. With fewer features, but largely the same work. The biggest change is the move from desktop applications to web-based applications, real-time collaboration, and apps like Teams and Slack.

From the productivity numbers, most people’s work computer shouldn’t be a $2,000 computer, or even a $1,000 computer burning 200 watts on average. It should be a $100 computer that consumes under 50 watts when running full tilt, but 5 watts at idle. With reasonably well written software, that cheap machine can do everything the bulk of office workers need, including Slack and web-based applications in Chrome. But even more damning, it suggests that investing in computers has had little impact on productivity. I’m going to show you the best possible argument, with slightly higher productivity when computers were being deployed at scale between 1994 and 2004.

How to read this chart: productivity measures output per labor input, and the chart shows the percentage change, quarter to quarter. Why do we get negative productivity growth at points? Because output per labor hour drops due to changes in demand. If you produce a little less but don’t lay anyone off, productivity falls. Annual productivity has grown 1-1.5% in the last 20 years. This, combined with population growth of about 1%, implies 2-2.5% real annual growth for the highly developed US economy. (Setting aside, for this discussion, that the population growth was coming through immigration and the implications current policies have on that.)
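The back-of-the-envelope growth math compounds like this. A sketch of the 1-1.5% productivity and roughly 1% population figures above:

```python
# Real growth is productivity growth compounded with labor force growth.
# For small rates, simply adding the two is a close approximation.
def real_growth(productivity_growth, labor_force_growth):
    return (1 + productivity_growth) * (1 + labor_force_growth) - 1

low = real_growth(0.010, 0.01)   # ~2.0% real annual growth
high = real_growth(0.015, 0.01)  # ~2.5% real annual growth
print(round(low, 4), round(high, 4))
```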

But growth was not likely coming through the computer revolution. In fact, we see productivity grew more before 2000, when offices still had shared computers, than it did between 2008 and 2020, when almost everyone had at least one computer (if not two or three) at their disposal. The software and operating systems were much better in 2010 than in 1995. More computers and more computer automation did not imply more productivity. This is known as the Productivity Paradox.

The 1990s computer revolution had productivity gains about the same as the previous 40 years. So the desktop “computer revolution” didn’t meaningfully impact this measurement of productivity. In fact, the period from 2008-2020, with its ubiquitous computing and zero interest rates, should have spurred investment leading to productivity growth. Instead, that period had unusually low productivity growth. More computers did not translate into more productivity. Take that in for a minute. The era when we introduced iPhones, iPads, and Android devices, and had capable, cheap laptops and desktops coming out of our ears, along with zero interest rates, had sub-par productivity growth. Maybe that’s a hang-over from the great recession? Even before then, 2000 to 2008 had at best historically normal productivity growth.

I focus on the desktop rather than the enterprise because I think there’s a real difference between individual and institutional computing. I still think there is a lot of needless spending on bells and whistles that drives up the cost, but it’s hard to argue that bookkeeping and accounting aren’t more productive today. But if one group is getting a lot more productive, we still have to ask if this means another group is less productive, balancing out the average. Or is the impact so narrowly focused it doesn’t move the broader needle? It’s also possible to argue that technological change means computers are a necessary input to enable or unlock the next productivity gains through robotics and machine automation. Maybe productivity would be even lower absent computers. But I still maintain that most office workers would be only marginally less productive if you put a thirty-year-old computer in front of them. (Although they might complain a lot. And maybe quit.)

Which brings us to artificial intelligence. I use it and find it’s maybe a 5% productivity boost. I can’t just yell at the computer to do things. I have to create context and prompts to enable the AI to produce something usable. And then I still need to refactor the output to bring the quality up above a naive or basic implementation. Sometimes it’s just faster for me to code it at a production ready level than to do all that other work. I could see how some developers actually find it a negative productivity tool. It doesn’t always generate correct code, and I’ve only had reliable success with very popular languages and tools. On legacy code it sometimes generates pure garbage. Adding AI may be no more beneficial to productivity than putting a computer on everyone’s desk, or providing internet access, or Web 2.0 apps, and so on.

Like a lot of AI hype, that tweet in no way matches up with my experience. However, it later came out that what they did was feed the AI tool a year of context and it generated a “toy” implementation. That implies it is not code you would run in production without a lot of work. Sometimes the difference between a “toy” version and a production version is months of effort. Sometimes the real effort is figuring out what to build. A proof of concept or toy version is what I get when I use AI code assistance.

Which brings us to the question of how much money we should spend on AI and related investments. Based on the last thirty years, it’s unlikely that computing related investments will drive significant changes in productivity. And from my personal, anecdotal experience, the gains from AI aren’t huge. Right now AI investments appear to be sucking up virtually all the available investment capital, along with energy consumption (causing some communities to face higher electricity and gas prices). What if, after all is said and done, we look at that same chart through 2035 and we don’t see any change in productivity? Think about all the money we would have wasted, all the e-waste we would have generated, and all the other opportunities we would not have pursued.

Am I an AI doomer? No. We can do something an LLM can’t: we can pull a physical plug out of a wall. What I am is skeptical that this is the vehicle for this level of resource commitment. In fact, I’m for creating even less dependence on computer infrastructure in favor of other infrastructure, in everything ranging from water treatment to rail to energy production and distribution. I’m all for diversifying semi-conductor production out of Asia so we have a secure source of semi-conductors if Taiwan is overrun. Some of this will require computers to automate and better manage those resources. And some will require an office worker to plug numbers into spreadsheets and send e-mails. But it’s hard to argue that spending much on new computer based tools for that office worker, or automating that work with AI, would do much for growth.

My Porcine Aviation Era

I have not had great experiences with AI development tools. I’m not a Luddite, but if the tool takes me more time than just doing it by hand, and I get the same results, it’s not a tool I want to use. In some cases, the code had subtle bugs or logic little better than the naive implementation. In other cases, the code was not modular and well laid out. Or the autocomplete took more work to clean up than it saved in typing. And in some cases it would bring in dependencies, but the version numbers dated back to the early 2020s. They were out of date and didn’t match current documentation. For the most part the code worked, but I knew that if I accepted the code as-is, I would open myself (or whoever followed me) to maintenance issues at some later date. One might argue that the AI would be much better by then, and could do the yeoman’s work of making changes, but I wasn’t sold on that idea. (And I’m still skeptical.)

I would turn on the AI features, use them for a while, but eventually turn them off. I found they helped me with libraries with which I wasn’t familiar. Give me a few working lines of code and a config to get me going, and I’ll fix it up. It would save me an internet search for an out-of-date Stack Overflow article, I guess. I used prompt files and tried to keep the requests narrow and focused. But sometimes, writing a hundred lines of markdown and a four-sentence prompt to get a single function didn’t seem scalable.

Well, pigs are flying and I’ve found something that appears to be working. First, it involves a specific product: at the time of writing, Claude Opus/Sonnet 4.5 seem to be quite good. Second, I have a better way of incorporating prompt files in my workflow (more on that below). Third, language matters. I found Claude gave me the same problems listed above when working on a Jakarta EE code base. But Rust is great. (Rust also has the advantage of being strongly typed, which eliminates some of the issues I’ve had when working with Python and LLMs.) Fourth, I apply the advice about keeping changes small and focused. Fifth, I refactor the code to make it crisp and tight. (More on that below.) Sixth, I ask the LLM for a quick code review. (More on that below.)

The first topic I’ll expand on is changing my relationship with prompt files. Instead of attempting to create prompt files for an existing code base, I started writing a prompt file for a brand new project. I had Claude generate a starter and then added my edits. I believe in design (at least enough to show you’ve thought about the problem). This actually dovetails with my need to think through the problem before coding. I still find writing prompt files for an existing code base tedious. But, if I have to sit down and think about my architecture and what pieces should do, the prompt file is as good a place as any.

The other thing I want to cover is refactoring what the LLM hath wrought. Claude generated serviceable code. It was on par with the examples provided for the Rust libraries I was using. (Which also happen to be very popular, with plenty of on-line examples, so Claude would have had access to rich training data; it even pulled in recent versions, although I had to help a little.) But the code was not quite structured correctly. In this case I needed to move it out of the main function and into separate modules. Mostly that was cut and paste, letting the compiler tell me what’s broken. The next step in the refactor is to minimize the publicly exposed elements. Now I have code that’s cohesive and loosely coupled. The LLM by itself does not do a great job at this. Taste is more a matter for meat minds than silicon minds at this stage.

The final thing I want to touch on is using the LLM to review the code after refactoring. This gives me another data point on the quality of my code and where I might have had blind spots during the refactoring. I work with lots of meat minds and we review each other’s code on every commit. There are some good reviews and there are some poor reviews. And reviewing code is harder if you’re not familiar with the specific problem domain. But the machine can give a certain level of review prior to submitting the PR for team review.

So that’s what I’ve found works well so far: green-field projects, in highly popular frameworks and languages, performing design tasks in the prompt file, using LLM best practices, refactoring the code after it’s generated, and a code review before submitting the PR. Auto-complete is still off in the IDE. And I’ll see if this will scale well as code accumulates in the project. But for now, this seems to produce a product with which I am satisfied.

[A small addendum on the nature of invention and why I think this works].

People’s ideas are not unique. As one of my relations by marriage pointed out years ago, when he moved above 190th Street in Manhattan, there seemed to be a sudden run to the tip of the island to find “affordable” housing. In a city of millions of people, even if very few people have the same idea at the same time, the demand quickly gobbles up the supply. Humans build ideas up mostly from the bits and pieces floating around in the ether. Even “revolutionary” ideas can often be traced back to an interesting recombination of existing ideas. Moreover, people have sometimes been doing that “revolutionary” thing before, but didn’t connect it to some other need or didn’t bother repackaging the idea. What’s important about “ideas” is not the ownership of the idea but the execution of the idea.

There is still something about invention, even if it is largely derivative, that the LLMs don’t appear to possess. Nor do they have the ability to reason about problems from logical principles, as they are focused on the construction of language from statistical relationships. Some argue that with enough statistics you can approximate logical reasoning as well as a human, but I haven’t seen solid examples to date. The LLM doesn’t understand what to create, but it does summarize all the relevant examples necessary to support the work of realizing the idea. Even then, there are gaps we have to fill in with human intention. Does this revolutionize coding for me? No. I estimate it makes me somewhat more productive, in the 5-15% range. And of the time and effort necessary to realize an idea, coding is maybe a third or less. I also worry that we’ll never get to the point where this technology is available at a price point that makes the provider a profit while being affordable to most people. After all, there’s a limit to how much you would spend for a few percentage points of additional productivity.