Efficiency Theatre: AI Automating Ourselves Into Ignorance

“Soon, workers won’t just read a memo that was written by GPT and summarized by GPT. They’ll respond to it with feedback written by GPT. Then, GPT will turn their feedback into a bullet-point summary. Nobody will read a word of it.”

This quote, absurd at first glance, is starting to look less like satire and more like a user flow diagram.

Every week, it seems another software product announces an AI-powered feature. More often than not, it’s some form of summarization, repositioned as a breakthrough in productivity. But scratch the surface, and it becomes clear: we aren’t making knowledge work more effective; we’re just making it faster to fake.

We’re entering the era of efficiency theatre, where speed, automation, and surface-level polish masquerade as progress. But in our rush to do more, faster, we’re eroding the very foundations of good thinking: depth, context, and reflection.

The Recursive Loop of Shallow Thinking

Consider how LLMs are increasingly used to write, summarize, and repackage information, often in layers. A strategy doc might be drafted by AI, summarized by another AI, and shared in a deck filled with AI-generated speaker notes. At no point does anyone deeply engage with the content. It’s summaries of summaries, a recursive loop with no original source of real thought.

In high-context domains like product development, law, or science, where the devil is in the details, this is dangerous. A summarizer can’t catch a misaligned assumption, sense a contradiction in tone, or challenge a lazy conclusion. But a human, fully engaged, can.

This isn’t hypothetical. A 2024 study by Microsoft and Carnegie Mellon found that generative AI shifts people away from deep work and toward light-touch editing and verification. The researchers warned that reliance on AI could erode critical thinking skills, not unlike a calculator that makes you forget how to do math by hand. When tools abstract away the complexity of a problem, they also strip away the opportunity to think critically about how to solve it.

When AI Replaces Friction Instead of Supporting It

One of the most promising uses of LLMs is as a thinking partner — to generate alternatives, refine your argument, or pressure-test assumptions. But many teams are using AI to avoid friction, not engage with it. Marketing decks are templated in seconds. Product specs are bloated with filler. Meeting notes are transcribed and summarized instantly, but rarely revisited or internalized.

The result is a growing distance between people and their own ideas.

As Ezra Klein notes:

“Books don’t just give you information. They give you a container to think about a narrowly defined scope of ideas… ChatGPT can’t summarize the thinking you do while reading.”

Similarly, Paul Graham wrote:

“A good writer almost always discovers new things in the process of writing… there is a kind of thinking that can only be done by writing.”

We’re replacing the process that produces original thought with tools that optimize for outcomes that look like thinking — polished, coherent, and often completely hollow.

What We’re Losing

The real loss is the work of work: the friction that sharpens ideas, uncovers nuance, and fosters innovation. When AI is used to bypass this discomfort, we get faster outputs but weaker insights.

In a startup trying to find product-market fit, this shows up as shallow alignment. Everyone’s decks sound good, the strategies seem clear, but nothing clicks. Nobody has wrestled with the ambiguity long enough to find the real path. It’s all artifact, no understanding.

In larger companies, it’s even worse — layer upon layer of AI-written content compounding into organizational amnesia. Teams forget how to write clearly, think independently, or disagree productively. Everyone’s “doing the work,” but no one is actually thinking.

How Do We Do Better?

The good news is that AI can make us better — if we stop treating it like a shortcut and start treating it like a tool for engaged work.

Here’s how we can shift course:

  • Use AI to provoke, not produce. Ask it to challenge your assumptions, explore counterarguments, or suggest alternate approaches. Don’t let it do your job — let it make your job harder in the right ways.

  • Reintroduce friction where it matters. Instead of summarizing a long doc, assign parts of it to team members to interpret and debate. Use AI to amplify perspectives, not compress them.

  • Make outputs traceable to real thinking. If something sounds too polished, ask who thought it through. What tradeoffs were considered? What evidence supports it?

  • Reward depth, not just speed. As leaders, celebrate when someone takes the time to deeply understand a problem — not just when they generate a deliverable quickly.

  • Design software that fosters engagement. Imagine tools that don’t just auto-summarize a doc, but surface open questions, highlight inconsistencies, or prompt clarifying dialogue among readers.

Finding a More Enlightened Way of Working

The worst-case scenario isn’t AI replacing our jobs; it’s AI making us forget how to do them well. But it doesn’t have to be this way.

We can build a future of work where AI is a scaffold for deeper understanding, not a crutch for avoidance. Where writing still clarifies thinking. Where reading still sharpens insight. Where work is still a craft, not just a set of tasks delegated to machines.

The goal isn’t to do less work. It’s to do more and better work. AI should help us think more clearly, not just act more quickly. And in doing so, it might just restore our humanity, not strip it away.
