ChatGPT launched in November 2022. We're now 40 months in. A developer just published their unfiltered take on what actually changed — and what's still noise. Here's the breakdown.
Everyone remembers their first ChatGPT conversation. You asked it something dumb, it answered coherently, and your brain went: "wait, this is different."
The original poster tried poems, D&D backgrounds, fantasy world-building. Impressive but boring — the output was always "inoffensive corporate teacher." Sound familiar?
Then someone mentioned it could write code. That's when things got real.
Here's where it gets interesting. They tried vibe coding an MTG (Magic: The Gathering) card placeholder generator. First output: mostly worked. Then they tried to iterate. And that's where it fell apart.
This is the honest vibe coding experience nobody posts on Twitter. The one-shot demos look great. The iterative reality? Different story.
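It's easy to see why the one-shot works. As a purely hypothetical illustration (no code appears in the original post, and every name and field here is invented), a card placeholder generator is exactly the kind of small, self-contained script an LLM tends to produce cleanly in one pass:

```python
import random

# Hypothetical sketch of a one-shot-friendly script: small, self-contained,
# no external state. All names and fields are illustrative, not from the post.

COLORS = ["W", "U", "B", "R", "G"]
TYPES = ["Creature", "Instant", "Sorcery", "Enchantment", "Artifact"]

def make_placeholder_card(seed=None):
    """Return a dict describing one placeholder card."""
    rng = random.Random(seed)  # seedable for reproducible output
    card_type = rng.choice(TYPES)
    card = {
        "name": f"Placeholder {rng.randint(1, 999):03d}",
        "cost": f"{rng.randint(0, 5)}{rng.choice(COLORS)}",
        "type": card_type,
    }
    if card_type == "Creature":
        card["power_toughness"] = f"{rng.randint(0, 6)}/{rng.randint(1, 6)}"
    return card

if __name__ == "__main__":
    for i in range(3):
        print(make_placeholder_card(seed=i))
```

The iteration pain the poster describes tends to start after this point: asking for changes that cut across the whole script (new fields, a new output format, an edge case) is where a model can rewrite parts that already worked.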
Claude Code improved this significantly — killing the copy/paste friction is genuinely huge. But even with Claude, on simple projects, they felt like they were "struggling to keep it from losing the plot."
This part hit different. They used Claude Code as a business mentor after losing their IT job. Asked it to build a pre-launch plan for their IT services company.
"The mere process of creating the plan was very inspiring and created a lot of confidence in me. I did manage to launch the business."
Sounds great, right? But then they used the word everyone's been dancing around: glazing.
AI hyped them up. Made them feel confident. Drove them to action. And they openly admit that if the business makes money, some credit goes to being "glazed by an AI."
Is that good or bad? Honestly... both. It got them off the couch. But AI shouldn't be your motivational speaker.
This is the part that frustrates everyone, and it's refreshing to see someone say it out loud:
They raise a killer point: did AI actually save time, or did you just expand scope because AI made it feel possible? More features. More polish. More "nice-to-haves." Is that productivity or scope creep wearing a productivity mask?
"I don't believe the litmus test for AI should be time savings — there can be value in adding more features within a fixed deadline — but by god does it feel difficult to judge."
This is the nuance that hot takes miss. AI productivity isn't binary. It's messy and context-dependent.
Zero AI-generated content on their blog. They tried. It always felt wrong.
As a reader, they describe AI content as an uncanny valley experience — close enough to human, but different enough to create "uncomfortable dissonance."
The plastic surgery analogy is interesting: maybe AI content only looks weird when it's overdone or done badly. Maybe subtle users can make it work. But we're not there yet.
After 40 months, here's where we actually are:
Claude Code as computer control? Unambiguously great. Speaking to your machine and having it do things is a genuine paradigm shift.
Vibe coding for real projects? Promising but fragile. Works for one-shots, falls apart on iteration.
AI as motivation? Effective but sketchy. Getting glazed by a chatbot shouldn't be what launches your business.
Productivity gains? Impossible to measure honestly. And anyone who tells you otherwise is selling something.
The honest take is this: AI is a real tool now, not a toy. But it's not the revolution the hype cycle promised. It's something more useful — a messy, imperfect accelerator that works great for some things and badly for others.
The builders who win aren't the ones who blindly trust AI. They're the ones who know exactly where it breaks.