Every DORA metric, every PR cycle time report, every deployment frequency chart your organisation produces is a performance review of engineering leadership. We just refuse to read it that way.
The excuse era is over
For years, engineering managers could hide behind the inherent complexity of software development. "This work is hard to estimate." "Coding takes time." "You can't rush quality." These weren't lies exactly, but they were convenient shields against accountability.
AI has stripped those shields away. When an engineer can generate a working feature in an hour using Claude or Copilot, but it takes two weeks to ship, the delay is no longer in the coding. It's in everything that surrounds the coding. And every single one of those surrounding activities is controlled by management.
Review turnaround? That's your culture. Deployment pipeline speed? That's your infrastructure investment. Unclear requirements causing rework? That's your relationship with product. Engineers blocked waiting for decisions? That's you.
In a world where AI has made every engineer capable of 10x output, the gap between their potential and their actual delivery is entirely on you.
Two things hold your engineers back
When engineers aren't shipping at the pace they're capable of, the cause is one of two things: poor delivery management or a lack of accountability. Both are leadership failures.
Delivery management failure looks like this: PRs sitting in review queues for days because you haven't built a culture where review is a priority. Engineers blocked on decisions because you haven't established clear ownership or escalation paths. Deployments batched into risky fortnightly releases because you haven't invested in CI/CD. Context switching killing focus because you haven't protected your team from meeting chaos. Every one of these is a system you failed to build, a blocker you failed to clear, or a process you failed to fix.
Accountability failure looks different but is equally damaging: Commitments made in standup that drift without follow-through. Quality standards that exist on paper but aren't enforced in review. Engineers who consistently miss estimates without anyone addressing the pattern. Work that's "almost done" for weeks because nobody is asking the hard questions.
This isn't micromanagement - it's basic leadership. If you're not holding your team accountable to their commitments, you're not leading them.
The uncomfortable truth is that both failures are yours. You either haven't built the systems that enable high performance, or you're not holding people to the standards that demand it.
What your metrics actually reveal
Take your last DORA report and read it as a management scorecard.
Long PR cycle times don't mean your developers are slow. They mean you haven't built a review culture where getting eyes on code quickly is a shared responsibility. You haven't made it clear that reviewing a colleague's PR is as important as writing your own code. You haven't structured your team's capacity to include review time.
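This number is also trivially measurable, and worth pulling yourself rather than waiting for the quarterly report. Here's a minimal sketch that computes median PR cycle time from the GitHub REST API - the owner, repo, and token are placeholders, and a real version would paginate past 100 PRs and filter by date range:

```python
# Minimal sketch: median PR cycle time (opened -> merged) via the GitHub REST API.
# OWNER/REPO and the GITHUB_TOKEN env var are placeholders; a real report would
# paginate and handle rate limits.
import os
import statistics
from datetime import datetime

import requests

OWNER, REPO = "your-org", "your-repo"  # hypothetical repository

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
    params={"state": "closed", "per_page": 100},
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()

def hours_open(pr: dict) -> float:
    # GitHub timestamps are ISO 8601 with a trailing "Z".
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
    return (merged - opened).total_seconds() / 3600

# Skip PRs that were closed without being merged.
cycle_times = [hours_open(pr) for pr in resp.json() if pr.get("merged_at")]

if cycle_times:
    print(f"Median PR cycle time: {statistics.median(cycle_times):.1f} hours")
```

If that median is measured in days rather than hours, that's not a verdict on your engineers. It's a verdict on the review culture you've built.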
Low deployment frequency doesn't mean your team lacks velocity. It means you haven't invested in the infrastructure and confidence to deploy safely and often. You've allowed manual gates, lengthy approval chains, or fragile pipelines to become the norm. Your team has learned that deployments are risky events to be minimised rather than routine operations to be embraced.
High change failure rate doesn't mean your engineers write buggy code. It means you haven't built sufficient quality gates, your test coverage is inadequate, or your team doesn't have the time and space to do proper verification before shipping. You've created an environment where speed is rewarded over reliability.
Long recovery times don't mean your team lacks skill. They mean you haven't built the observability, runbooks, or incident response muscles that enable fast diagnosis and resolution. You haven't invested in the boring operational work that makes recovery routine.
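The remaining three metrics fall out of a deployment log in a few lines. Here's a minimal sketch with a hypothetical record format - the field names are illustrative, not any particular tool's schema, so adapt them to whatever your pipeline and incident tooling actually emit:

```python
# Minimal sketch: deployment frequency, change failure rate, and median time
# to recovery from a deployment log. The Deploy record is hypothetical.
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Deploy:
    shipped_at: datetime
    caused_failure: bool                  # did this change degrade production?
    restored_at: datetime | None = None   # when service recovered, if it failed

def scorecard(deploys: list[Deploy], days: int = 30) -> dict:
    failures = [d for d in deploys if d.caused_failure]
    recovery_hours = [
        (d.restored_at - d.shipped_at).total_seconds() / 3600
        for d in failures
        if d.restored_at
    ]
    return {
        "deploys_per_day": len(deploys) / days,
        "change_failure_rate": len(failures) / len(deploys) if deploys else None,
        "median_recovery_hours": median(recovery_hours) if recovery_hours else None,
    }
```

Notice what isn't in that record: nothing about individual engineers. Every field describes the system around them.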
Stop looking at these metrics and asking "how do I make my developers more productive?" Start asking "what have I failed to build, unblock, or enforce?"
Leading high-performance teams in the AI era
When AI handles the mechanical work of writing code, the bottleneck shifts entirely to the human systems around that code. Your job as an engineering leader has never been more clearly defined: enable, unblock, and hold accountable.
Enabling means building the systems that let work flow. Fast review cycles. Clean deployment pipelines. Clear ownership of decisions. Protected focus time. These aren't nice-to-haves. In an AI-augmented world, they're the entire game. Your developers can generate code at unprecedented speed - your job is to ensure nothing else slows them down.
Unblocking means actively removing obstacles rather than waiting for them to be escalated. The best engineering managers I know spend significant time each day asking "what's stuck?" and then doing something about it. They chase down decisions from other teams. They cut through ambiguity with product. They make calls when nobody else will. They treat blockers as personal failures, not external forces.
Holding accountable means following through. When someone commits to delivering something by Thursday, you check in on Wednesday. When code review turnaround starts slipping, you address it immediately rather than hoping it self-corrects. When an engineer repeatedly misses estimates, you have the conversation about what's going wrong. This isn't about being a taskmaster. It's about creating an environment where commitments mean something.
The managers who will thrive in the AI era are the ones who understand that their job isn't to tell engineers what to do. It's to create the conditions where engineers can do what they're already capable of.
Your DORA dashboard is a mirror
The next time you review your team's metrics, try reading them as a reflection of your own performance. Every slow cycle time, every failed deployment, every lingering blocker is telling you something about what you've built - or failed to build - as a leader.
AI has given every engineer on your team the potential to be a 10x engineer. When they're not hitting that potential, there are only two explanations: you haven't built the delivery systems that enable it, or you're not holding them accountable to achieve it.
Either way, the metrics are measuring you.