Nicole Forsgren: Leading high-performing engineering teams in the age of AI - The Pragmatic Summit
AI coding assistants promise to accelerate software creation at unprecedented speed, yet organizations find themselves shipping slower than ever. Nicole Forsgren, author of *Frictionless* and researcher behind the DevX framework, returns with a paradox: the same tools that supercharge individual developers are overwhelming the very systems designed to support them. As code reviews pile up, deployment pipelines choke, and mental models collapse under rapid iteration, engineering leaders face an urgent question: how do you measure productivity when the nature of work itself is being rewritten? And in a world where agents might one day self-drive entire systems, what does it mean to support both your teams and yourself through this transformation?
Key takeaways
AI is accelerating the "inner loop" (coding, iteration) dramatically, but human-managed processes like code review, security sign-offs, and deployment orchestration have become severe bottlenecks — what was "fine" before is now a crisis under pressure.
The DevX framework's three pillars — flow state, cognitive load, and feedback loops — are being disrupted: faster feedback can paradoxically increase cognitive load when developers must rebuild mental models dozens of times in 30 minutes, and AI completions interrupt deep work.
Adoption and engagement metrics are more useful starting points than traditional productivity measures; if developers won't use a tool (they're "gloriously cranky"), that signals a real problem, and understanding *how* they use it reveals what tasks AI handles well.
For agents to eventually self-drive systems, humans must first be able to see, understand, and fix those systems — which requires cheap, accessible instrumentation at key touch points across the software delivery lifecycle, not heavyweight processes.
Explicit executive sponsorship and psychological safety are critical: developers need permission to experiment, fail safely within guardrails, and know they won't be punished for mistakes made while learning new AI tools.
In summary
The AI coding revolution has exposed — not solved — systemic friction in software delivery; organizations that invest now in instrumentation, psychological safety, and understanding their end-to-end systems will be the ones that can actually ship faster, while those chasing velocity metrics alone will drown in their own output.
The Speed Paradox: Fast Code, Slow Delivery
AI accelerates coding but exposes bottlenecks in review, deployment, and release processes.
Organizations are experiencing a bewildering contradiction: developers write code faster than ever with AI assistants, yet software ships more slowly. The root cause is systemic. Processes that were "fine" when one or two people managed them — security reviews, deployment candidate selection, cherry-pick decisions — are now overwhelmed by the sheer volume of AI-generated contributions. Human reviewers have become bottlenecks, and some companies have even removed automation from the review process out of concern for AI code verifiability, shifting more burden onto humans.
The deployment and release pipeline, often a "black box" for many engineers, relies heavily on group decision-making and manual sense-making. These processes don't scale when the volume of code multiplies. New hires using AI tools can commit production-ready code on their first day, but they wait two weeks for database access because onboarding systems weren't designed for this pace. One intern committed substantial code before receiving their laptop, only to be blocked by security policies that couldn't accommodate the new reality.
Forsgren frames this as "chasing constraints" or "chasing bottlenecks." AI has thrown gasoline on the fire of software creation, and now every downstream dependency — whether technological, procedural, or human — is burning bright. The companies that recognize and instrument these friction points will be the ones that can actually accelerate end-to-end delivery, not just local velocity.
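The "chasing bottlenecks" idea can be made concrete with data: given how long each change waits in each stage of the pipeline, the constraint is simply the stage with the worst typical wait. A minimal sketch, with invented stage names and numbers purely for illustration:

```python
# Hypothetical sketch: identify the delivery constraint from per-change
# stage durations. Stage names and hours are illustrative, not taken
# from any specific tool or team.
from statistics import median

# Each record: elapsed hours a change spent waiting in each stage.
changes = [
    {"coding": 2.0, "review": 30.0, "security": 12.0, "deploy": 4.0},
    {"coding": 1.5, "review": 44.0, "security": 10.0, "deploy": 3.0},
    {"coding": 3.0, "review": 38.0, "security": 16.0, "deploy": 5.0},
]

def find_constraint(changes):
    """Return (stage, median_hours) for the stage with the longest median wait."""
    stages = changes[0].keys()
    medians = {s: median(c[s] for c in changes) for s in stages}
    worst = max(medians, key=medians.get)
    return worst, medians[worst]

stage, hours = find_constraint(changes)
print(f"Constraint: {stage} (median {hours:.1f}h)")
```

With these numbers the inner loop ("coding") is fast, and review dominates end-to-end time — exactly the pattern Forsgren describes once AI accelerates code creation.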
The DevX Framework Under Pressure
"I Was Writing for the Wrong Audience"
Forsgren's own writing process illustrates the value of wasted effort and external feedback.
"I get through this whole section of the book and I realize I've created several chapters of basically like how to do research when you're not a researcher... incredibly detailed and easy to understand and 100 pages that no one needs to read ever. No one is going to read this. And so I just like tossed it and reached out to Abby and I was like, 'Do you want to write this book? I think I have an idea of the direction I'm going. Also, tell me if I get in a rabbit hole.'"
Measuring What Matters in the AI Era
Start with adoption and engagement; avoid productivity theater and define your real goals.
**Start with adoption.** Despite not loving it as a metric, Forsgren recommends tracking whether developers actually use AI tools. Developers are "gloriously cranky" and won't use bad tools unless forced; low adoption signals a real problem.
**Track engagement patterns.** Understand *how* and *for what tasks* people use AI. Early studies show it is heavily used for straightforward work; watching these patterns reveals the strengths and weaknesses of the tooling.
**Define "faster" precisely.** When leaders say they want speed, ask: do you mean the inner coding loop, or end-to-end feature delivery? These require very different measurement approaches and systemic changes.
**Apply the SPACE framework.** Satisfaction, Performance (outcomes like quality), Activity (counts), Collaboration/Communication, Efficiency/Flow. Use multiple dimensions to avoid optimizing velocity at the expense of quality or morale.
**Make risk-based decisions.** Some teams run rapid experiments with lower quality thresholds on tiny user percentages, then roll back quickly. This is acceptable when done intentionally, with clear guardrails and instrumentation, not as a blanket sacrifice of quality.
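The multi-dimensional point above can be sketched as a simple SPACE-style scorecard that refuses to celebrate raw throughput when other dimensions are in the red. The specific metrics and thresholds here are illustrative assumptions, not values Forsgren prescribes:

```python
# Hypothetical sketch of a SPACE-style scorecard: one representative
# metric per dimension, so no single number (e.g. raw activity) is
# optimized in isolation. Metric choices and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class SpaceScorecard:
    satisfaction: float   # e.g. developer survey score, 0-10
    performance: float    # e.g. change-failure rate (lower is better)
    activity: int         # e.g. PRs merged per week (context, not a target)
    collaboration: float  # e.g. median review turnaround, hours
    efficiency: float     # e.g. share of uninterrupted focus time, 0-1

    def red_flags(self):
        """List dimensions that warn against celebrating throughput alone."""
        flags = []
        if self.satisfaction < 6.0:
            flags.append("satisfaction")
        if self.performance > 0.15:
            flags.append("quality (change-failure rate)")
        if self.collaboration > 24.0:
            flags.append("review turnaround")
        if self.efficiency < 0.4:
            flags.append("focus time")
        return flags

# High activity, but every other dimension is suffering.
card = SpaceScorecard(satisfaction=5.5, performance=0.2,
                      activity=40, collaboration=30.0, efficiency=0.35)
print(card.red_flags())
```

A team shipping 40 PRs a week still fails this scorecard, which is the whole argument for measuring more than velocity.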
The Security and Compliance Crunch
Non-developers using AI create new risks; regulations may not accommodate agent-driven workflows.
Business users now have access to tools like Claude Code and are building sophisticated applications — one accidentally made a sales-proxy tool publicly available. Security teams, already overwhelmed, must now educate and govern a much larger population. Meanwhile, regulatory frameworks that required "two humans" to review code before deployment don't yet define what counts when agents are involved, creating legal and process ambiguity.
The Data Imperative for Agentic Futures
For agents to self-drive systems, humans must first instrument and understand them.
Forsgren offers a cascading logic for the future: if agents are to self-drive and self-improve software systems, they must first be able to see, understand, and act on those systems. For *that* to be true, humans must be able to do the same — and currently, many cannot. Right now, humans serve as stop-gaps, relying on tribal knowledge and gut feel ("when there's a problem over here, it's usually about the build"). Agents won't have that context.
The path forward is instrumentation: cheap, accessible signals at key touch points across the software delivery lifecycle. This doesn't mean heavyweight observability stacks, but rather lightweight, targeted data collection that surfaces the signals teams care about — quality gates, adoption patterns, bottleneck indicators. Forsgren expects the "outer loop" (design, ideation, prototyping) to collapse just as the inner loop has, which means today's touch points will shift or disappear. Organizations that understand their current system and its weak points will be able to adapt; those flying blind will be left behind.
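Lightweight, targeted instrumentation of this kind might amount to little more than appending one structured event at each touch point and querying them later. The stage names and in-memory sink below are assumptions for illustration; a real setup would write to whatever event store the team already has:

```python
# Hypothetical sketch: cheap signals at delivery-lifecycle touch points.
# Events go to an in-memory list here; stage names are invented examples.
import time

EVENTS = []

def touchpoint(stage, change_id, **signal):
    """Record one lightweight signal at a lifecycle touch point."""
    EVENTS.append({"ts": time.time(), "stage": stage,
                   "change": change_id, **signal})

# Emit signals as a change moves through the system.
touchpoint("review_opened", "PR-101", files=12)
touchpoint("review_merged", "PR-101", approvals=2)
touchpoint("deploy_started", "PR-101", env="canary")

def stages_seen(change_id):
    """Reconstruct the path one change took through the lifecycle."""
    return [e["stage"] for e in EVENTS if e["change"] == change_id]

print(stages_seen("PR-101"))
```

The point of the sketch is the shape, not the sink: once every touch point emits a timestamped event, both humans and (eventually) agents can see where changes stall.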
Supporting Yourself Through Transformation
Build a personal board of directors and create safe spaces to discuss fear, failure, and misalignment.
People
Glossary
Legal notice: This is an AI-generated summary of a YouTube video for educational and reference purposes. It does not constitute investment, financial, or legal advice. Always verify information against the original sources before making decisions. TubeReads is not affiliated with the content creator.