For a while, I honestly thought AI might take Agile down with it.

Not because I’d suddenly fallen out of love with iteration. And not because I’d developed some late affection for waterfall. It was more that a lot of things we used to hold together with human effort had suddenly become much cheaper. Story drafts. Meeting notes. First-pass documentation. Test scaffolds. Even parts of requirement breakdown. Once you see that shift clearly, it becomes quite hard not to look back at sprints, refinement sessions and reviews and ask a slightly uncomfortable question: how much of this is still genuinely necessary?

What I got wrong, slowly, was not whether AI would change Agile. Of course it would. The part I misread was the layer it would hit first.

What it hit first was a set of intermediary activities that had once been just about worth the cost.

That distinction sounds small. I’ve come to think it matters quite a lot.

If you blur Agile and Scrum into one lump, you can reach a very satisfying conclusion very quickly: AI is here, so the process can go. It’s a pleasing line. It just doesn’t survive contact with an actual team for very long.

I’ve ended up believing something narrower, and more useful. AI hasn’t washed Agile away. It has loosened an older cost structure. DORA’s 2025 work describes AI as an amplifier rather than a magic fix. Digital.ai’s 18th State of Agile report doesn’t read like an obituary either. It reads more like a reminder that agility is being adapted, not abandoned. Put together, they point to the same thing: what’s changing is not whether we still have to work through uncertainty. What’s changing is whether we need to carry that uncertainty with the same heavy machinery we used before.

If I strip Scrum back to something more practical, I now see three layers.

| Layer | What it used to do | What AI changes first | How I now read it |
| --- | --- | --- | --- |
| Information synchronisation | Gather stories, status, notes and context | Drafting and summarising become cheaper | Most likely to be redesigned or reduced |
| Rhythm and commitment | Align goals, scope, trade-offs and risk | Harder to justify through heavy information-moving rituals | The form can shrink; the function cannot vanish |
| Risk control | Hold validation, review, retro and DoD together | Faster generation also scales bad work faster | More important than before |

One layer is basically information synchronisation. What the requirement looks like now. Who has done what. Where something is blocked. Where the boundary of a story roughly sits. A great deal of that used to require manual effort. People had to gather it, write it down and bring it into the room. AI has taken a visible bite out of that cost.

Another layer is rhythm and commitment. What this round is actually trying to solve. What it is not trying to solve. What gets pulled in. What is left out. Which risks we are knowingly accepting. That layer has not gone away, but it is much harder now to justify carrying it through large, blunt, information-hauling rituals.

The third layer is risk control. What counts as done. Whether a demo actually exposes the underlying assumptions. Whether a retro leads to a better working agreement rather than another round of well-meaning frustration. This layer, if anything, matters more now. When generation gets cheaper and output gets faster, low-quality work scales more easily too. Ambiguity does not become clarity just because AI can package it more neatly. Scrum.org’s recent writing on AI lands quite close to this: the core isn’t treated as obsolete. The emphasis moves back to empiricism, transparency, inspection and adaptation.

So I’m less interested these days in asking whether AI will replace Agile.

I’m more interested in asking this: now that AI has lowered certain costs, do we still want to use the same operating habits to carry the same kind of work?

That question is much less exciting. It is also much closer to real work.

Take refinement.

I never hated refinement as such. In its better form, it is where a team pulls a fuzzy idea into something buildable, testable and discussable. The problem is that many teams gradually turned refinement into a large manual tidying exercise. Clean up the stories. Fill in the acceptance criteria. Make the tickets look respectable. That was easier to justify when those things had to be produced by hand. Once AI can generate a first pass, the valuable part of refinement shifts. It should spend less time creating content and more time sharpening boundaries. Which slice goes first. Which assumption gets tested first. What actually counts as done. Where the dependencies sit. Where the real risks are. AI can assist with the draft. It cannot own the boundary.
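One way to see where that boundary sits is to sketch it as a data shape. The following is a hypothetical Python sketch, not any real tool's model: the fields an AI assistant can draft cheaply are separated from the fields only the team can decide, and the names (`Story`, `refinement_complete`) are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Story:
    # Cheap to generate: an AI assistant can produce a plausible first pass.
    title: str
    description: str
    draft_acceptance_criteria: list[str] = field(default_factory=list)

    # Boundary decisions: these only mean something if the team makes them.
    slice_order: Optional[int] = None          # which slice goes first
    riskiest_assumption: Optional[str] = None  # which assumption gets tested first
    definition_of_done: Optional[str] = None   # what actually counts as done

    def refinement_complete(self) -> bool:
        # A tidy AI-drafted ticket is not enough; the human-owned
        # boundary fields are the gate out of refinement.
        return (self.slice_order is not None
                and bool(self.riskiest_assumption)
                and bool(self.definition_of_done))
```

A story straight out of a generator fails the gate; it passes only once someone has committed to an ordering, a riskiest assumption and a done-definition. That split is the whole point: the draft is automatable, the commitment is not.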

Daily standups are much the same.

The version I’ve always found hardest to sit through is the one where everyone earnestly reports what they did yesterday and what they’ll do today, while nobody in the room honestly feels that the ten minutes improved anything. That version becomes even harder to defend now. If it is simply a status recap, AI can produce that recap faster and more consistently than a room full of people. If humans are still gathering, the point ought to be unblocking work, surfacing risk earlier, or resolving ambiguity before it quietly expands.
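To make that distinction concrete, here is a hypothetical sketch: the recap half is mechanical and could be generated from a tracker export, while the agenda half keeps only the items that justify putting people in a room. The ticket shape and field names (`blocked_on`, `new_risk`) are assumptions for illustration, not a real tracker's schema.

```python
def standup_split(tickets):
    """Separate the machine-writable recap from the human agenda.

    `tickets` is a list of dicts with made-up fields:
    id, status, blocked_on (or None), new_risk (or None).
    """
    # The status recap: cheap, consistent, no meeting required.
    recap = [f"{t['id']}: {t['status']}" for t in tickets]

    # The actual reason to gather: blockers and freshly surfaced risks.
    agenda = [t for t in tickets
              if t.get("blocked_on") or t.get("new_risk")]
    return recap, agenda
```

If the agenda list comes back empty day after day, that is useful information too: the synchronous slot itself, not its length, is the thing to question.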

I also don’t buy the line that the answer is simply fewer meetings. That is a very cheap slogan. The hard part was never the raw number of meetings. It was whether the collaboration was designed to do something real. Some meetings deserve to die because they no longer carry any decision value. Others still deserve to exist because they are effectively insurance for judgement, quality and risk visibility.

I got this wrong myself for a while. I thought I disliked process. What I actually disliked was the kind of process that mainly made a backlog look tidy and a sprint look disciplined. Those are not the same thing. Good Agile was never about watching people more closely. At its best, it shifts management away from watching people and towards watching outcomes, learning and validation.

That is also why Atlassian’s 2025 DevEx research feels relevant to me. What it describes isn’t “AI arrived and everything got easier”. It describes a stranger reality. Many teams do feel that AI is saving them time, while organisational inefficiencies remain stubbornly in place, and in some cases become more visible. That is quite a brutal finding. It suggests that AI speeds up the individual first, then pushes the bottleneck into alignment, information flow and decision quality.

So when someone asks whether Agile is outdated, I rarely answer with a plain yes or no.

I usually want to ask which Agile they mean.

If they mean the part that helps teams work through uncertainty in short loops, inspect reality and adjust direction, I don’t think that has aged out at all. If anything, the current environment needs it more. If they mean the version that gradually hardened into a full administrative layer of ceremonies, then no, that version does look older now. Not because AI made it old overnight, but because AI has made the oldness harder to hide.

That judgement still has boundaries.

If you are running in a heavily regulated, high-dependency, cross-functional environment, many of the activities you may now be tempted to call waste were never there to save time in the first place. They were there to prevent expensive mistakes. And if a team never had clear review habits, clear validation, or meaningful retrospectives, AI will not make it more agile. It will simply help the team move ambiguity downstream faster.

So if I had to compress the whole argument into one sentence, it would be this:

It’s not Agile that’s aging.
It’s some of the ways we deliver.

And those habits were old before AI arrived. AI just made speed cheaper, which means the practices that only ever existed to prop up an older cost structure are now much easier to spot.
