Amazon wants 80% of its developers using AI for coding. They're already having outages because of it. I'm not even slightly surprised.
I work with AI-generated code regularly, from wild experimentation to working systems. So when I read that Amazon is struggling with reliability after pushing AI coding tools across their engineering org, my first thought was: yeah, obviously.
Here's what nobody seems to talk about in the boardroom AI presentations:
AI will rewrite working code while "fixing" something completely unrelated. It invents APIs that don't exist. It produces changes that look clean, pass review, and then break something three layers down that nobody thought to check.
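The invented-API failure is easy to demonstrate. Here's a hypothetical sketch (not a real suggestion I received) using Python's standard library: the made-up method name reads exactly like something that should exist, which is what makes it slip past review. A `hasattr` check is one cheap way to catch it before it ships.

```python
from datetime import datetime

# A pattern AI tools often produce: a method name that *looks* right
# but does not exist. (Hypothetical illustration, not a real quote.)
#   ts = datetime.from_string("2024-01-15")   # AttributeError: no such method

# The real API is strptime, which the invented name plausibly mimics:
ts = datetime.strptime("2024-01-15", "%Y-%m-%d")
print(ts.year)  # 2024

# A cheap guard: verify an attribute actually exists before trusting it.
print(hasattr(datetime, "from_string"))  # False
print(hasattr(datetime, "strptime"))     # True
```

Of course, running the code catches this class of error too; the ones that hurt are the calls that exist but do something subtly different from what the name implies.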
It will also make superficial edits that look like they address the problem without actually solving anything. You get a diff, it feels like progress, and then you realise nothing changed. Learning to spot that, and steer the model back, is a skill in itself.
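A contrived Python example of what such a no-op "fix" looks like (hypothetical, for illustration only): asked to fix a summation helper that drops the last item, the tool renames everything and restyles the loop, but the off-by-one bug survives untouched.

```python
# Before: a summation helper with an off-by-one bug - it skips the
# last item. (Hypothetical example to illustrate a no-op "fix".)
def total_before(prices):
    result = 0
    for i in range(len(prices) - 1):   # bug: last element never added
        result += prices[i]
    return result

# After the AI's "fix": variables renamed, loop restyled. The diff
# looks substantial; the behaviour is identical.
def total_after(items):
    subtotal = 0
    for index in range(len(items) - 1):  # same bug, new spelling
        subtotal += items[index]
    return subtotal

prices = [10, 20, 30]
print(total_before(prices), total_after(prices))  # both print 30, not 60
```

The diff between those two functions would touch every line, which is exactly why it reads as progress in review.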
The dangerous part isn't that it writes bad code. It's that it writes convincing code. A junior dev's mistake is usually obvious. AI's mistakes read like they were written by someone who knows what they're doing. That makes them harder to catch, not easier.
Now multiply that across thousands of engineers, all trusting the suggestions, all under pressure to hit adoption targets. Amazon didn't just have an AI problem. They had a scale problem wearing an AI costume.
Elon Musk responded with "proceed with caution." He's right. His timing predictions on AI tend to run ahead of reality, but his instinct on this one is sound. Pushing adoption without rethinking process is asking for exactly what happened.
There's still very much a skill in knowing how to prompt, how to start a task, and how to contain what the AI does. We aren't out of the woods yet. The people getting real value from these tools aren't just accepting output. They're directing, reviewing, and constantly course-correcting.
The companies that will get this right aren't the ones mandating adoption percentages. They're the ones rethinking how humans and AI actually collaborate, not just bolting autocomplete onto existing workflows and hoping for the best.
I'm bullish on AI-assisted development. I use it every day. But I also know what it looks like when it goes sideways, because it goes sideways regularly. The difference is whether you've built your process around that reality or you're pretending it doesn't exist.