Government Handouts are the Exit

It’s almost undeniable that the only reason the US economy has started slipping into a recession, or would have slipped months ago, is that investment in AI has driven about 1 to 1.5% of GDP. That’s an insanely huge figure. Not AI profits – which are years away even in optimistic projections. Unlike investments in roads, for example, about 60% to 70% of the AI investment is in chips that become obsolete in three years, but maybe as little as two years. The growth is happening so fast that power companies can’t keep up, which has lead to basically using old jet engines to turn hydrocarbons into CO2 to power those chips. All to give you an answer that might or might not be right, or just to generate offensive AI videos like Mahatma Gandhi eating a burger. Just to recap: the only thing keeping us from a recession is money being plowed into quickly obsolete “assets” (it’s hard to call them that – they’re almost a consumable), powered by setting even more climate change. The cement buildings, the data centers, left behind have a multi-decade life, but no one needs that much data center capacity. And if they’re unoccupied, they will go to shit.

So far the financing for this has gone beyond traditional investment to weird circular financing where company A invests in company B, who buys products and services from company A. Company A can then point to future orders and sales. Company B points to more investment. Everyone’s happy. Number go up. Company B then makes absurd projections of incomprehensible investments that need to be made, causing investors to snap up associated companies, and everyone happier because more number go up. But that’s okay, Company C promises (not necessarily delivers) future investments in A, making more number go up, after A promises to buy 3 times that much in company C’s products and services. At no point is numbering going up because Company B is anywhere near making back a significant amount of what it spends on short lived assets and jet fuel to power its data centers.

But surely this is good because it will make us all richer, right? Not really. If you think that, you haven’t been following along. I’ll give you a minute to catch up on how wealth inequality is both bad and accelerating. The benefits will be concentrated in the hands of the wealthy. Most of the benefit will be concentrated in the hands of people like Sam Altman or Mark Zuckerberg (who’s been searching for some new idea – any idea – since Facebook). The bonuses to execs and large share holders would be fantastic, if there any real chance of any of this earning back any money.

David Sachs and Sarah Friar made a statements which might indicate how these companies intend to square this circle of constant investment, no profit, and concentrated wealth. They will make the argument that if the government does not step in to support their narrow version of AI, the economy falters, and we go into recession. To keep that from happening, all we need to do is to make people like David Sachs wealthier, by bailing out their AI bets when the start to go bad by backstopping their loans or printing more money by driving down interest rates. (And therefore boosting inflation back up). I don’t think these are isolated musings. I think their air is probably thick with ideas in this vein, and these are just a couple of leaks. Maybe testing the waters? Or just they keep talking about it, so it’s a natural topic of discussion.

They have done everything in their power to make stochastic parrots seem like the next nuclear bomb. The country with the AI lead (whatever that means) will win the next wars. Tell that to Ukraine, who is using very much human piloted drones to attack 60 year old tanks and drones piloted by human Russian pilots. If businesses don’t adopt AI, or find that AI adoption is more limited than what they thought, and the profit potential seems to be a small fraction of what were overly optimistic projections, AWS or Microsoft’s investments in AI won’t seem like a good use of cash. Rather than lighting giant piles of money on fire, they should have bought back their stock. NVDA doesn’t look like a hot stock if the demand for their chips start to sputter. And Broadcom (AVGO) and ORCL start to falter at that point. (ORCL is already about 1/3 down. META – which has been floundering for its next idea – will also be seen to have wasted cash. The only dangers LLM based AI presents to the modern world is its ability to quickly mint disinformation and memes, and the financial crater it will leave when people no longer expect massive (or any) profits from the likes of Open AI. When that happens, and they stop lighting their money on fire, GDP shrinks and the US will probably slip into recession.

I was about to end there, but that isn’t quite the whole story. Because it isn’t just Wall Steet burning cash on stochastic parrots powered by jet engines. I feel like I would be remiss if I forget to mention all the private equity and funds that are investing in data center construction. To build the data center, largely unregulated private equity firms (which can borrow from regulated banks) have been making loans. If this all goes sideways, the 300,000,000 loan held by a PE firm for a data center could go to near zero, the small fraction recoverable only after years of bankruptcy litigation. Maybe there’s enough of these loans to make the systemically important, regulated banks sweat blood as their PE customers start to sputter. As long as number go up, the loan is getting serviced, but once number stop going up, we could have a massive, sudden influx of cockroaches. This includes some funds who buy notes or make loans as part of their income portfolios. You could wake up to read a horrible story that PIMCO is suddenly knee deep in bad loans in what should have been a safe, income generating, portfolio. And just to give you an idea of how poorly people view risk right now, you only need to look at the historically low spread between junk and investment grade bonds.

Your Mind, Their Thoughts

How does a company that’s hemorrhaging money get to profitability, when they offer a free service? You can create tiers or pay walls to funnel users to paying. This model is popular in the SaaS world, where the free version is a loss leader for future sales. But it isn’t a suitable model for every service. The other avenue to monetization is to show advertisements. It isn’t black and white, with some paid services, like Hulu, still show advertisements. The degree to which advertising is permitted is the degree to which the consumers (businesses or individuals) push back on the advertising.

Strictly speaking, Google and Meta are communication service providers on the SP 500 index. Practically all their money comes from advertising and sponsored content. Amazon and Microsoft are also making significant money from advertising and sponsored content. Your feeds on services like Linked In, X, Facebook, Tik-Tok, YouTube and so on are becoming a gruel of actual content and advertisements, either direct ads through the platform or “creators'” own advertising. New and improved with AI slop to foster more interaction and create more engagement. More of our economy is based on putting an ad in front of someone’s eyeballs than you would imagine. It’s easy to spot some advertising, such as a commercial about a laxative in the middle of a Football game. It’s harder to spot other ads, such as an influencer that doesn’t disclose a payment for a “product review.” The adage that if you aren’t paying for it, you’re the product, is ever more true. Have you thought, for five minutes, how the startups offering free access to LLMs are going to make money?

After thinking about it, I realized companies like OpenAI are better positioned to make money than we realize. First, the injection of cash has turbo-charged their data gathering. There is more investor money to harvest more and more data. I suspect this is also where the first moats for legacy chat-bots will happen, inking deals with content companies. New entrants won’t have the pockets or the bandwidth to negotiate a bunch of little deals to avoid getting sued. But that’s another issue. They are hoovering up everything. There is plenty of evidence they, or their agents, are ignoring any ‘ROBOTS.TXT’ entries that disallow scraping. When actual regulation arrives, it serves more as regulatory capture than creating equitable payments to the sources of content.

Second, we have come to accept that they can wrap your prompt in their secret prompt. These additions to your prompt are hidden, to arguably prevent circumvention. The stated reason to inject those prompts is to prevent leaking dangerous information, such as how to make explosives. They are also part of your terms of service. Attempting to circumvent or discover the prompts is a basis for canceling your account. The account that has your obsequious, pleasant friend on which you’ve come to rely. The point is we are now comfortable, or happily oblivious to, our prompt being wrapped in additional hidden prompts. The easiest way to hide advertising is to keep promotional material secret, like the safety prompts. And to make it a violation of the terms of service to avoid promotional prompting, like the safety prompting. You may even be aware that there is promotional prompting in general, but a specific prompt.

Another way is to selectively return supporting links. For example, if you ask about camping, cold weather clothing, or places to visit in Maine, you might get a link to LL Bean. This is relatively harmless, except that it is different from search, where you can move past the initial results. There is a push for search engines to move from search results to AI results. That may mean, in the future, that you only get the handful of links from the paid advertisers along with the chat response. There may be no button to show more results, or you may have to explicitly ask for more results. Combine that with the advertiser’s ability to modify the hidden prompts injected along with your prompt, and you might lose any awareness of other possibilities. And should the LLM lie about one retailer having the best price, or a particularly well-suited product, that’s chalked up to the hallucinations.

There is also the information you are divulging about yourself. Maybe you are spewing information you would never share on Facebook or even Google Search. For free users, the AI companies are likely to mine all prior conversations, building up a detailed profile. For paid users, mining may depend on the plan and the account, such as a corporate account versus an individual premium account. This is already happening through other social media, but the LLMs may have more detailed information about mental state or health. While it may be more a difference of degree than kind, the chats may have richer data. I suspect the need for vast amounts of storage is to handle the influx and processing of the data you are freely giving them about your internal emotional and psychological state.

What I fear, and may be more deeply concerning, invoving the ability of the LLM to prime you over time. In some sense, search is “one shot.” You type in a search, you get back results. Facebook and other social feeds have been shows to influence peoples’ opinion not on just products, but able to alter their mental health. Their advertising can be better concealed. You might have retweeted or re-posted what were ads in the past. To a degree people have unmasked some of the behavior. We might be more inured to it now, and therefore have a bit of a resistance, but the social media algorithmic rabbit hole is alive and well. We know to watch for “radicalizing” content. What we don’t know how to spot are radicalizing outputs from a chat bot.

LLMs and chat bots may catch us in a particularly vulnerable way. We have a bias to believe the computer’s response is a neutral, disinterested party. And the responses from the LLM are private and highly individual. Not like public feeds on various Apps. If a company that sees sufficient lifetime value in a customer, they may be willing to pay over multiple chats. Maybe a $100 for a couple of months of ‘pushing.’ Imagine if the opioid vendors had access to this technology. Paying a few dollars to push someone toward a prescription for their brand of opiate may be worth thousands of dollars per patient. And each future addict’s chats are essentially customized to that person. Remember, we have plenty of evidence that existing social media can shape opinion and even mental health. Show enough people “PSA” style ads about enough vague symptoms and people will, in fact, ask their doctor if that drug is right for them.

But the big hook is the outsourcing of your cognition. Human beings are inherently lazy. If an escalator is present, almost no-one takes the stairs. Go to the airport and watch people, without luggage, queue for the escalator. The stairs are almost empty and there is just one flight. But they will wait in a press of people. Having a tool that allows you to ‘just get the answer,’ is like your brain being given the option to take the escalator. Instead of thinking through even simple problems, you just pose the prompt to the chat bot. And just like muscle gets soft and atrophies with disuse, your ability to solve problems dwindles. It’s like the person who begins to take the escalator not because it’s a little easier, but because they are now winded when taking the stairs. Need a plan for a workout? This shouldn’t be that hard, but you can just ask the LLM. (Ignoring it may actually give you bad advice, or in a world of sponsored chats, push you toward products and services you don’t need). Need a date idea? Just ask the LLM. Is your back pain something to worry about? The LLM has a short answer.

At least reading search results might inadvertently expose you to a knowledgeable and objective opinion between ads. If I search on Google for US passport applications, the first link is actually a sponsored link to a company that will collect all my data and submit my passport application for me. Who is this company? I’ve never heard of them. It ends in a “.us” domain, making it seem US related, but who knows what they do with the data or how they store it. The second link is the state department, but the third link is not. The only reason the state department is there, is because they paid to sponsor a search result. But at least it’s there. And it’s also in the list of general results. Google, Facebook, Tik-Tok, and so on have a track record of taking advertiser money from almost anyone. Amazon’s sponsored content is sometimes for knock-off or counterfeit products. And some sites have absolutely no scruples on the ads they serve, ads which might originate from Google or Meta ad services.

The lack of scruples or selectivity demonstrated by other on-line services that take advertising, combined with the outsourcing of cognition, means you are exposing yourself to some of the shittiest people on the face of the earth. For every time you are pushed toward buying a Honda, you might also be pushed toward taking a supplement that is dangerous to your health. You will likely be unaware you are being marketed to, and in ways that are completely personal and uniquely effective on your psyche. In a state of mind where you’re being trained to expect an objective result, with additional prompts that are invisible to you for “safety,” and a technology whose operation is inscrutable, you have no idea why you are provided with a given answer. Is it your idea not to buy a car at all and just use ride share services every day? If the ride share services want the behavior to stick, they know it needs to feel like it was your idea. Is it your idea to really push your doctor for a Viagra prescription, even though you are an otherwise healthy, 24 year old male? You shouldn’t but those symptoms come to mind…

The possibilities for political advertising and opinion shaping are staggering. The LLM expected to give neutral answers is sponsored to return “right leaning” or “left leaning” answers for months before an election. Or it embeds language also used by framers of important electoral issues, to prime you for other messaging. Unlike the one-shot advertising in a search result, or the obvious ad on the page you ignore, the LLM is now doing your thinking for you. There will be people who will take the mental stairs because they know the LLM dulls their wits. But these will be fewer and fewer as LLMs get better and more common. With no evidence that on line advertisers find any customer objectionable, could Nick Fuentes be paying to inject your responses with pro-fascist content?

It will be impossible for you to determine what ideas are a product of your reason and research. You will still feel like you’re in control. You will still have your mind. But what goes through your mind will be even more carefully and accurately shaped. In a state were a few thousand votes can sway an election, how much would a campaign pay to advertise to specific voters, if they start seeing those voters adopt talking points and slogans from their LLM chats and social media posts? Would it be $500 per voter? Maybe you need to target 50,000 voters at a total cost of $25,000,000? That actually seems affordable, given the vast sums that are spent on some close elections. The free chat bot loses money. The “premium” plan at $20 per month loses money. Even the $200 a month plan loses money. But the advertising may be their pay-day. How much would you pay to get people to think the way you want them to think, each person believing this was the natural evolution of their own thinking. Casually using LLMs is essentially opening your mind to think other peoples’ thoughts.

The Fraud Is Coming

Michael Burry is shorting some tech companies. With the market as frothy as it is, that’s not exactly prescience. Unless you’re as good a market gambler as Burry, I wouldn’t recommend it. (And if you were as good as Michael Burry – you would already have a lot more zeros in your net worth). It is still true the market can stay irrational longer than you can be solvent. But what Burry isn’t just pointing out the emperor has no clothes. He is pointing to financial engineering. Why is that important? Why does presenting the information in a slightly better fashion matter?

The pressure is on to show something. All the public companies in the AI orbit, with elevated stock prices because they’re part of the “AI-play,” need to show earnings. The non-public AI startups do not need to show earnings. Oracle, Broadcom, Micron, etc. need to show revenue. Immediately they do not need to show revenue, as they sign contracts. That’s future revenue, and the stock price goes up as a multiple of earnings. With expected future earnings rising, the value of the company increases, even though current earnings may not have moved. A company that trades at 15 times their earnings begins trading at 30 times their earnings, based on the expectation of making more money in the future. But at some point, the imaginary future money needs to become real money in the present to justify that multiple.

Could companies like Palantir and Oracle be over-stating their income by altering the way they treat depreciation? Maybe as much as 20%? That’s what Burry sees. When companies structure their earnings to provide a better light than what would otherwise be the case, we refer to that as lower quality earnings. It may be legal and within the GAAP (generally accepted accounting principles), but it suggests the actual earnings are inflated. This is completely legal, as long as it is disclosed. Eventually, the lower quality earnings should result in a lower multiple. But in the short term, investors may ignore it or simply accept the statements of the companies that the new accounting practices make more sense. Longer term, investors tend to give companies that do a lot of financial engineering side-eye. Eventually reality will set it and the fundamental reasons they aren’t doing well will overtake the financial engineering.

But where there’s that much pressure to push earnings, it means there is building pressure to fake earnings. This can be done by either aggressively booking sales when the sale isn’t really complete and moving liabilities off the balance sheet. I would suspect the former is already unfolding. When everyone is desperate for more data center space, more power, more networking, and more processors, booking a sale early may not seem like a big deal. You feel the actual sale will almost certainly close in the very near term. Or you can call the next firm in line, waiting to snap up the same scarce resource. So why not report it in this quarter to juice your numbers a little? But it doesn’t take long before some firms start booking speculative sales, to keep the line on the sales chart going up and to the right. One of two things will happen, either the auditors will stumble over this and realize there’s fraud going on, or (more likely) a short seller will sniff it out. The former is bad enough when firms are forced to restate their prior earnings, as some executives go ‘spend more time with family,’ and shareholders bring suits. But the latter is devastating, usually resulting in obliteration, with the fraud investigations coming later.

The other approach is to engage in balance sheet engineering. A loan or an obligation to make payments in the future are recorded as liabilities. There are ways to move the liabilities off the balance sheet of the parent company, for example, by using special purpose vehicles to actually carry the obligation. Company X doesn’t owe the money. The money is actually is actually owed by Able Baker, a joint venture between X and Y. Company X doesn’t record the liability, even though the counter-party (the lender) can collect from Company X, should Able Baker default. The auditors may miss this if Company X misrepresents the true nature of the obligation (commits fraud). No one will notice a thing as long as the market that props up Able Baker is healthy. Once that changes, and Able Baker defaults, Company X may find itself illiquid.

On overstated earnings or engineered balance sheets you can quickly build other frauds. For example, understating the risk of loans to those companies. Lenders may be aware that something fishy is going on, but continue to lend to the companies, collecting fees on deals that should never have been closed. Even in the best possible light, it means suspicious insiders put aside suspicions to chase the deal. After all, the entire market can’t be wrong. And everything looks good for now. If the demand for the underlying market dries up, the loans held either by the direct lenders or the positions investors have in that lender, are worth pennies on the dollar. (Or nickles, now that we’ve stopped minting pennies). Suddenly, the lender (now likely to be a private equity firm rather than a bank) is exposed to losses large enough to wipe out its equity. Investors that invest or lend to the private equity firms suddenly find their positions wiped out as well, creating significant counter party risk. Which can ripple through other sectors of finance through reinsurance products. And liquidity dries up as everyone becomes unsure of any of their counter-parties actual financial health. What threatened to bring down the entire house of cards in 2007/2008 was the overnight lending market between banks was shutting down.

What makes this especially troubling in the current environment is a confluence of factors. First is the inability to actually jail corporate executives of very large companies. Even if there is fraud, we fine the corporation rather than hold its officers criminally liable. Let me do the math for you. Let’s say you put together a 300,000,000 dollar deal where your bonus is 5%. If you commit the fraud necessary to close the deal, you will be paid 15,000,000 dollars. Should you even get caught, you will have to give most of it back but won’t go to jail. And you keep all the other bonuses you also received. It’s likely the government will settle with your former employer. And if you’re not caught, and the company goes under, they still owe you the 15,000,000. You can sue for it in bankruptcy court, or from the company that acquires your old employer. If the company gets bailed out with public money, contractually, they will still need to pay you the 15,000,000. But what if it isn’t fraud and it’s just making a bet you wouldn’t otherwise make? There is are incentives to take outsized risks. After all, they’re losing other peoples’ money. So you will likely make 15,000,000, or maybe as little as 3,000,000 on the off-chance you’re caught. You won’t go to jail. And you won’t lose any of your houses just because you lost your job.

But coupled to that is a president who is willing to pardon anyone who is a supporter. He has commuted or pardoned people for political advantage. Like CZ to make good with the crypto crowd. Or the violent protester from January 6. It could be that even though there is criminal fraud, having donated to the campaign, the ballroom, the inauguration, and made statements pleasing to the ear of the administration, is sufficient to insulate your from Federal prosecution. And if you’re in a state such as Texas, it might also insulate your from state prosecution. We may find that explicit fraud was committed, people knew and traded on the fraud, but the fraudsters supportive of the administration are pardoned. In other words, they get to walk away with a lot of zeros in their bank account and no accountability.

If it were “free money,” I wouldn’t care. But what happens when a PE firm, lending money for data center construction, suddenly finds itself the proud owner of a bunch of half built data centers? Or even finished data centers filled with useless, expensive, and rapidly depreciating assets because AI demand isn’t what many expected? And what happens to the pension fund that put 250 million into that PE firm? And multiple other PE firms who also couldn’t resist deals around AI? Or the bank that provides liquidity for the PE firm? It’s not “free money,” it’s coming from somewhere. And that somewhere could be a wide-spread, systemic problem. How does the Federal government backstop PE firms, who are not insured or regulated like banks? Does the government step in and buy stock in the fraudulent company, injecting good money after bad? How do we put possibly trillions of dollars of bailout into firms but let the fraudsters walk away with all their money? Does the government step in and buy the data centers? Do the fraudsters stay in charge if they made enough dulcet noises of support for the administration?

Let’s put together a package that “doesn’t cost the taxpayers a dime.” It involves backstopping the loans, purchasing shares in troubled companies, and buying some data centers. All with money is effectively printed by the federal reserve or raised from “investors” with guaranteed loans. Essentially, this injects a pile of money into the economy, which will fuel inflation. It would also expand the debt, causing even more worry about the US debt burden. A burden the US has every incentive to ease by devaluing the dollar and inflating its way out of the crisis. And like the COVID relief bills, a lot of that money will go into creating even more income disparity. Not only will the wealthy (including fraudsters) walk away with the money they made on the way to the crash, eventually they will reap the reward of the stimulus injected to moderate the economic damage. And while the previous administrations tried to put some limits on how either the post 2008 or COVID stimulus could be used, I doubt this administration will suffer that burden.

I don’t know if or when the AI trade unwinds. I suspect it’s ‘when,’ and the longer it goes on, the more I suspect it will unwind badly. There is little I’m hearing that makes me sanguine about an orderly end to this. On the spectrum there will be true believers to outright fraudsters. Like flies to shit, fraudsters are drawn to environments where people making money would rather not look too closely as long as money is being made. If anyone did look closely, the party’s over no one is making money. After all, a little ‘wiggle room’ makes the market possible. To repurpose Mao, this is the water in which the fraudster swims. Whether its repacking bad home loans, creating accounting practices at suspiciously rock-star energy companies, or the sales figures at ‘world leading’ telecom companies, no one wants the gravy train to come to an end. But rest assured, the longer the massive (and frankly stupidly large) sums of money are changing hands (or not actually changing hands) over various AI deals, the more openings fraud will find.

AI Wins the Shutdown

It’s Monday, November 10, and I’m going through the news. The shutdown may be coming to an end. Welcome news to some, although I believe the Democrats caved. The Republicans indicated they were willing to scorch the earth over ACA subsidies. These are the payments that help people buy health insurance, when they can’t possibly afford 15,000 or 20,000 a year in premiums. It seemed as though Republicans were willing to let air travel fall apart in front of the holiday season rather promote access to healthcare. All while the government feels they have enough money to possibly send $2,000 rebate checks from the taxes collected through tariffs. And the Democrats blinked. The government, assuming the house approves and the administration doesn’t have a spaz and veto the bill, will likely re-open.

What stocks do you think would be doing well on that news in the pre-market? They airlines? Yes, they’re up. Defense contractors? They’re mixed. What about health care? Mixed to net negative. What’s ripping? AI hardware and semi-conductor companies. NVIDIA is up over 3%, while the strongest airline, UAL, is up just barely 2%. The DOW and the SP500 are up largely because of just the AI and related semiconductor stocks.

On the Russel 2000, there’s more broadly positive price movement. The second tier defense contractors are doing well. When the Russel 2000 does well, it is a proxy for investors being more willing to take risk. In the sense that ending the shutdown is positive for the economy, taking on more risk through AI and smaller companies follows. With the gross dysfunction abated, it is more likely companies will make money. But the undercurrents of self destruction over providing health care to people is still there. One party is willing to burn it all down, including intentionally withholding food assistance to their base. They are willing to ignore their roll in checking even illegal acts. I don’t know if my outlook on the future is as sanguine as the other investors stepping up to shoulder more risk.

I feel like we’ve lost our ability to discern what is good and bad. All we know to do is calculate which option gives us more money. What you build is not important. Who you defraud is not important. What you destroy is not important. The dystopia you are creating is not important. And with enough money you can buy a legacy. All that matters is making as much money as possible. The invisible hand free of moral and ethical constraints. Even democracy and the constitution fall by the wayside if there is money to be made. Greed is not just elevated to ‘good,’ along with other good values. Greed is the only thing that matters.

Totalitarianism Is an Unneeded Expense

I covered nationalism and authoritarianism and why I think the latter is the serious issue. While you can’t have blood and soil politics without nationalism, in it’s weak form it isn’t a problem. Then I covered authoritarianism and that there is no acceptable level of authoritarianism. And why you can have authoritarianism even when the people choose it. We come to the third, but not to last, leg of the dictatorship table. Unlike a three-legged stool, dictatorships have probably four legs, maybe five, or even six legs. It takes a lot of work to dislodge the autocrat and the peoples’ thirst for an autocrat. People have a craving for order and strange fetishes a democracy will never really satisfy. Totalitarianism is just another leg. One that may not be critical in the modern age.

Total control has largely fallen out of favor. The Nazis, but more so the Soviets, really brought it home. (I will use the term Soviets broadly, encompassing the USSR and allied regimes, like East Germany, Hungary, Romania, or Yugoslavia). A totalitarian state is one where the totality of civic, artistic, and professional life falls under sway of the autocrat. There is no other party. There is no protest. There are no ‘liberal’ cities in opposition. There are no books sold at the store that are not approved. There are no films shown, records played, or news broadcast that isn’t approved by the state. In the Soviet period, especially in the 1930’s to the 1950s, possessing contraband items, expressing contraband ideas, or just running afoul of an apparatchik who desired your apartment could earn you a stint in a slave labor colony.

Modern dictatorships have not picked up the extreme totalitarian mantle. China is a hold-over from the old Soviet model, but even they realize they can’t have a modern society and truly perfect control. They’ve left that to the starving North Koreans. Which is why they allow a limited form of protest and discussion. Within narrow bounds you can make specific points, but no other power centers are permitted. Other dictators, like Orban, Erodgan, or Putin, have differing degrees of social and political control. Russia maintains a tighter control over its people than Turkey or Hungary, but they all have limits on expression and do not tolerate any challenge to their authority.

Autocrats are pulled toward totalitarian control. Dictators can’t help themselves. In little and big ways, their need to control manifests itself. Why has Trump injected himself into the Kennedy Center? Why does he threaten entertainers with investigations or arrest? Why does he use the FCC to intimidate news outlets? And also a late night comic by going after his parent company? Part of it is ego. Part of it is legitimacy with his own base of popular support. No more garish displays of love and acceptance, just proper entertainment like eulogies to fallen internet trolls and odes to the their gold-plated tin god. Part of it is calculated to discourage dissent, intimidate his opposition, and stifle debate. But the need to totally control is just is their insatiable desire for power.

The modern dictator slowly tightens the totalitarian noose, but never pulls it hard enough to completely choke the opposition. This ‘kindness’ accomplishes does two ends. First, it allows the dictator the fig-leaf of not being a dictator. How can we say someone is a dictator if there are still media outlets to oppose them? They’re not a dictator, they’re just the popular choice. When they go after a news outlet or nascent party, it’s over some fraud or a dense legal issue around permits. Because one or two independent sources still exist, other closures were obviously not for political reasons. It provides a handsome veneer over the rotten state of affairs. It allows their apologists to claim it is not a dictatorship or autocracy because control is not total.

Second it reduces the cost of staying in power. Those networks of informants, jails, collecting data, and surveilling cost resources. There is a degree to which the population naturally policies itself, if the regime has a degree of legitimacy. A true believer will rat out the person they see as a traitor or a threat. We like being cozy, safe, right, and righteous. For them, throwing someone in the gulag is for a better society is its own sick reward. Then there are those that can be cheaply coaxed into cooperation. Rat on your neighbor and you’ll be promoted to a better job. At the end of the day it still requires a network of informants, dossiers, and piles of “evidence.” Even in the AI age, that is not cheap. An algorithm might select, but cannot arrest, jail, or torture someone. That requires a paid human being in a jail that must be maintained. By some estimates, the cost of the internal security in Russia exceeded the cost of the military before the war.

It is impossible to put a minder in every home. In the Soviet era “samizdat” circulated even in the darkest days. These are well-worn, dog eared, hand-made, hand-copied, and hand-circulated books, essays, stories, works of art, and news that the Soviet boot heel could not smother. Despite the blaring of propaganda from radio, film, television, and even the PA system in the subway, it was impossible to snuff out the minds of millions of people. People developed the skill of being outwardly compliant but inwardly rebellious. An unseen mass that just needed a spark to set them off. And to the regime, these dangerous people were everywhere. The regime knew this and spent untold efforts to eradicate traces of “foreign” influence. They did so in a brutal and frightening campaign of terrorizing its own population. In the Stalinist peak, people were simply plucked off the street or out of their homes. Accused of some crime or another, it didn’t matter, they were headed to the gulag or a drunken firing squad.

The parallels in the US are obvious. We’ve seen the true believers reach out to quickly remove books form libraries and schools. The fantasies some have of their political opponents arrested, en-masse, are beyond troubling. An administration targeting public opposition with threats of investigation or being charged with the thinnest of crimes. Violently abducting immigrants, and not being too concerned if any citizens who are opposed are also arrested and roughed up. Threatening news outlets with law-suits or revoking press access because they made the dear leader unhappy. Or the sycophancy on display during public events. Or working with police to use excessive force at every opportunity on protesters. Or even taking over the reigns of culture at the Kennedy Center. If you don’t see it, you are pathetically and hopeless ignorant or are a willing participant who won’t admit to it.

A degree of social control is part of the picture, but it is no longer total. We will be allowed some degree of opposition. California exists as a foil to the goodness of the autocrat and his worshipers. A hell-hole of crime and liberal values that the core supporters can contrast to their own cozy sense of safety. The dictator doesn’t need to disappear California politicians from the street. It is enough to force his presence into their civic life. Soldiers standing around a Humvee in the middle of a park. The dictator keeps the protests in check by making it known accusation of excessive force are of no concern. South Park, until it becomes too much of a threat to the profits of its owners, can continue to make essentially obscene mockery of the dictator. The blogs and the “liberal” social networks can continue to exist. As long as it doesn’t actually threaten the hold on power, costly totality is not necessary.