1. Who benefits from this?
2. What can we do?
This week's Wednesday Wisdom has a crack at answering both.
Do you have a sneaking feeling things are getting worse? Do you sound like Man Shakes Fist at Cloud, complaining nothing's built to last, service has tanked, and companies get away with more than ever?
Cory Doctorow's 2025 book Enshittification: Why Everything Suddenly Got Worse and What to Do About It supports your fist-shaking.
How things get worse
Doctorow argues the business cycle of technology platforms (online middlemen that serve users and businesses, like Uber, Facebook and Amazon) follows a four-stage pattern.
- Good for users. Platforms lure users with generosity, using investor funds. Think social networks, cheap taxi rides, free shipping.
- Good for businesses. User value is clawed back to serve business customers. Think surveillance, data mining, ad placement, "recommended" content.
- Good for nobody. Finally, they screw businesses over, hiking fees and degrading quality. By then the switching cost is too high for users or businesses to do anything about it.
- Shit.
Where AI sits in the cycle
When it comes to AI, we appear to be late in stage 1, maybe early in stage 2. Right now, we access AI freely or cheaply. People increasingly rely on LLMs for daily thinking, writing, decisions, and admin (as well as therapy and personal companionship).
The costs of energy, compute and the rest are decoupled from the sticker price, funded by unprecedented venture capital. Like cheap Uber rides and enjoyable Facebook feeds, this is unlikely to be financially sustainable.
When the incentives shift
In Doctorow's stage 2, AI companies would put business users first. The shift could be two-pronged: content gets commercially captured, and organisations reap big benefits.
LLM results might become pay-to-play, featuring ads or paid placements, like Google did with pay-per-click search ads. For maximum targeting, user privacy would be compromised, just as on Facebook*. Companies would be served by incredible workflow and processing capability, at a reasonable price, increasing the incentive to lock critical IP and data in walled gardens, à la Apple and Microsoft. Institutional dependence grows.
*AI companies stole the entire internet; what are the odds they'll keep your data secure? With the intimate information going into these tools, the commercial payoff will be WILD.
As the opportunity for tampering becomes more obvious, the pressure from governments to censor results or insert propaganda may escalate - and the incentive for hackers and bad actors to intervene would rise alongside.
If Doctorow’s model holds, AI will then turn on business users. On Facebook, advertisers reported reliability and ROI bottoming out from 2016. For LLMs, this could repeat, with companies forced to "bid" for top spots. Pressure to deliver investor returns might see data centre and processing costs appear in subscription fees, with previously core functions shifted into premium tiers or recast as expensive add-ons.
Organisations lured by previous cost savings could find their workflows, IP, agents and processes locked in a proprietary system. By this point, the garden walls are too high to scale and migration is difficult.
If the pattern holds
If Doctorow's model runs its full course, everything turns to shit. The possibilities are vast (this is where dystopian robot theories and doomsday predictions take the stage), so let's stick with the platform focus. Perhaps logic and reasoning quality will fall off a cliff. Maybe sourcing will become increasingly woeful. In a no-trust wasteland, we could see news outlets, academic sources and classic search engines crippled by regurgitated AI slop. People may begin to pine for even the shitty versions of Facebook or Google, but those platforms may be unrecognisable, or gone entirely, by then.
None of this is inevitable (no matter what viral tech-bros with vested interests say)
At this point, we've entered hysteria territory, so take a deep breath. This is all conjecture. But it isn't alarmist to suggest that without proper regulatory and commercial protections, AI use has risks.
The current incentives appear stacked for extractive fuckery. At some point, investors will want a return on their money. History, outlined in meticulous detail in Enshittification, suggests that weak regulation, low competition and private ownership of critical infrastructure push companies chasing returns to the edge of legality, sacrificing consumer experience and protections for the bottom line.
Doctorow is hopeful, helpful and clear: there is scope for pushback. Regulation, competition, and interoperability can roll back or prevent enshittification. Unchecked AI dominance is not inevitable, as I wrote about here and here. We can still regulate and shift policy settings toward safety. We can preserve core information sources, prioritise media and scholarly capacity, invest in publicly owned models, and develop civic literacy programs. Unfortunately, our track record at resisting tech domination is poor - and further concentration of power looks likely.
That is to say, if nothing changes, it would be surprising if AI didn't enshittify.
Your practical resistance strategy
So, what's a grumpy fist-shaker to do?
The game is not yet lost, and power literacy, cognitive sovereignty and the ability to act inside ambiguity are skills you can build. Here is a four-pillar resistance strategy for taking the power back and building personal and professional resilience to AI enshittification.
Resistance Pillar 1: Bring thinking in-house.
While everyone else giggles and makes shitty cartoons, be the loser who puts down your phone and picks up a book. Build a rich repository of information sources that offer you diverse ideas: books, articles, podcasts, and long-form media. Engage with these deeply. Use a highlighter, and make margin notes. Read things twice. Discuss them with people whose beliefs and life experiences differ from yours, and listen carefully to their take. Ask follow-up questions. Research the evidence base for your opinions, use primary sources, and stay curious. Revise your thinking as new information presents itself.
Resistance Pillar 2: Own your shit.
As your data accumulates in a system, leaving gets harder and more expensive. You need a backup strategy. Remember how you used to carefully organise your files and catalogue your personal photo collections, before cloud became the norm, when hard drives still cost a fortune? It's time to brush off those skills. Former customers of Google Reader, Evernote, Twitter, or pre-cloud Adobe will remember how quickly data can be lost or held hostage. Mitigate this risk by exporting data from AI and cloud tools regularly, saving and organising your own files, and owning your shit.
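If you want to make the habit automatic, here is a minimal sketch of what a regular export sweep could look like. It assumes you've already used each tool's own export feature and the resulting files are sitting in a downloads folder; both folder paths are placeholders, not real conventions, so point them wherever suits you.

```python
"""Sweep exported files into a dated archive folder you own."""

import shutil
from datetime import date
from pathlib import Path

EXPORTS_DIR = Path.home() / "Downloads" / "exports"   # placeholder: where raw exports land
ARCHIVE_DIR = Path.home() / "Backups" / "ai-exports"  # placeholder: where your own copies live


def archive_exports() -> None:
    """Copy everything in EXPORTS_DIR into a folder named for today's date."""
    if not EXPORTS_DIR.exists():
        raise SystemExit(f"No export folder found at {EXPORTS_DIR}")

    destination = ARCHIVE_DIR / date.today().isoformat()
    destination.mkdir(parents=True, exist_ok=True)

    for item in EXPORTS_DIR.iterdir():
        if item.is_file():
            shutil.copy2(item, destination / item.name)  # copy2 keeps file timestamps
            print(f"Archived {item.name} -> {destination}")


if __name__ == "__main__":
    archive_exports()
```

Run it on a schedule (or just whenever you remember) and you have dated snapshots that don't depend on any platform's goodwill.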
Resistance Pillar 3: Resist lock-in.
Any system that becomes mission-critical - especially one that replaces people - is a dependency risk. When development is black-box and opaque, algorithms are impossible to decode, code is auto-generated and user control is low, the risk profile increases dramatically. Other core digital infrastructure is protected by open standards that guarantee interoperability - things like email (SMTP), the web (HTTP) and documents (PDF). AI currently lacks equivalent standards or open-source protection. Keep that in mind when you make decisions about critical workflows, service providers and organisational restructures.
Doctorow's book has one example after another of lock-in dangers - especially when the terms change. Sellers on Amazon Marketplace are powerless to resist fee-gouging and ranking manipulation. Facebook publishers watch readers disappear as traffic collapses. Businesses built on SEO crumble when the algorithm changes and AI summaries become the default. So if you use LLMs and other AI tools at home or at work, don't build your whole life in them. Stay multi-platform and record your workflows in an open-source or owned environment.
Resistance Pillar 4: Budget for price hikes.
This one's for the advisors and analysts. Model your investments and business cases with a significant uncertainty margin. AI cost behaviour is less predictable than previous tech - which is already wildly unpredictable. Many of you will be battle-weary on this front, knowing cloud and SaaS costs rise after adoption.
My years developing technology business cases (when "digital transformation" was all the rage) made me highly sceptical of any technology promising cash-releasing benefits. Work is emerging to support this scepticism. A recent Deloitte report noted AI spending is "no longer linear or predictable" and that cost behaviour is "volatile and complex." Gartner even predicts that GenAI costs in customer service will exceed the cost of offshore human reps by 2030. Assume AI will get much more expensive.
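Since this pillar is really an arithmetic exercise, here is a back-of-the-envelope sketch of what "a significant uncertainty margin" can look like. Every figure in it is an illustrative placeholder (seat count, per-seat price, escalation rates), not a forecast; the point is to compare scenarios rather than anchor on one confident number.

```python
"""Compare five-year AI subscription costs under three escalation scenarios."""

SEATS = 200            # placeholder headcount on the tool
MONTHLY_PRICE = 30.0   # placeholder current per-seat monthly price
YEARS = 5

# Assumed annual price-escalation rates - placeholders, not predictions.
SCENARIOS = {"optimistic": 0.05, "expected": 0.20, "pessimistic": 0.50}

for name, escalation in SCENARIOS.items():
    total = 0.0
    price = MONTHLY_PRICE
    for year in range(1, YEARS + 1):
        total += price * 12 * SEATS   # this year's spend across all seats
        price *= 1 + escalation       # compound the price hike for next year
    print(f"{name:>12}: ~${total:,.0f} over {YEARS} years")
```

Even crude compounding like this makes the gap between best case and worst case visible in the business case, which is exactly the margin your benefits need to survive.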
A final note
With these four pillars in place, you will be better equipped to manage AI enshittification. Regardless of what the viral tech CEOs are saying, the horse has not yet bolted.
None of this is a replacement for public pressure on governments, structural change, and policy, but it beats walking blithely into a shit-storm.
In closing: this is the easiest it will ever be to stay independent of AI and build resilience to platform capture. Don't sleep on it.
Coming soon
As a subscriber, you're already a member of a thinking community that values cognitive sovereignty. Good one. My next essay is called Work Harder For Your Opinions. We'll explore where opinions come from, the risks we face, and how to bring a power filter to your thinking. I'm building a useful framework to help you audit your opinions, too.
Extra reading
I keep saying I don't want to write about technology, but I keep doing it. If you want to hit the back catalogue, start here:
The original banger

The doom-scrolling follow-up.

