The Case for Writing Your Own Stuff (Even When AI Can Do It)

March 31, 2026 · 5 min read · writing · ai · thinking

Alex Woods dropped a short essay on Hacker News today that hit 315 points and is still climbing. The thesis is simple: stop letting AI write for you. Not because AI writing is bad, but because the act of writing is where the thinking happens.

His line that everyone's quoting:

"Letting an LLM write for you is like paying somebody to work out for you."

It's the kind of sentence that makes you pause and think about your own workflow. And in 2026, when half the internet is running on LLM-generated content, it hits different.

Writing Is Thinking, Not Typing

The core argument: writing isn't about producing words. It's about going into the murkiness and coming out with structure and understanding. When you write a PRD, you're answering "What should we build?" When you write a spec, you're answering "How should we build it?"

The question itself often shifts as you write. You start thinking you need X, and halfway through explaining it, you realize you actually need Y. That confusion-to-clarity arc is where the real value lives.

When you prompt an LLM to generate that document, you skip the arc. You get a polished output that looks right. But you never had to wrestle with the ambiguity yourself.

The Trust Problem

Woods makes a second point that's less obvious but arguably more important:

"When I send somebody a document that whiffs of LLM, I'm only demonstrating that the LLM produced something approximating what others want to hear. I'm not showing that I contended with the ideas."

This is the credibility angle. When you write something yourself — even imperfectly — it signals that you've thought it through. You can defend it. You can go deeper because you went through the process.

When people suspect AI wrote it, they start questioning not just the prose, but the ideas behind it. If the words are auto-generated, are the thoughts auto-generated too?

The Oxide Computer RFD

Woods links to Oxide Computer's take on LLMs as writers — a company that takes engineering documentation seriously. Their position mirrors this: LLMs have roles in the process, but the final document should be yours.

So Where Do LLMs Fit?

This isn't a "never use AI" argument. Woods is specific about where LLMs fit in the writing process — as research partners and ideation tools, not as the author of the final document. The distinction is between using AI as a tool in the process versus using AI as a replacement for the process. One makes you better. The other makes you dependent.

The Uncomfortable Truth for Builders

This matters for anyone building products, writing documentation, or trying to communicate ideas clearly. The temptation to generate everything is real — and it's only getting stronger as models improve.

But there's a cost that doesn't show up on a balance sheet. Every time you skip the hard thinking, you get a little weaker at it. Like Woods says, it's a workout. If you always pay someone else to do your reps, you never get stronger.

The builders who resist the pull to automate away their own thinking — those are the ones who'll ship products that actually solve real problems, because they did the work to understand what the problem really is.

TL;DR

AI is a great research partner and ideation tool. But the moment you let it write your documents, essays, or PRDs for you, you're skipping the thinking that makes those things valuable. Read the full essay →

