How Developers Are Building the AI Edge


Artificial intelligence is racing into the enterprise, but the story of where it adds value is shifting. It isn’t leaders on stage who determine AI’s impact; it’s developers in repositories, editors, and enablement sessions. Data from DX, the developer intelligence platform, makes the case that AI advantage stems from execution, not announcements. Its AI Measurement Framework, launched in August 2025 after a year of research, sets out a rigorous way to assess impact across utilization, productivity outcomes, and cost. It’s an approach designed to move AI out of slide decks and into shipping code.

The framework’s three dimensions anchor the discussion in reality. Utilization tracks adoption: are developers actually using these tools daily or weekly? Impact ties AI activity to outcomes: time saved per week, changes in productivity metrics, or shifts in satisfaction. Cost captures the tradeoffs: licenses, training, enablement, and support. By establishing a shared vocabulary, teams can compare, iterate, and decide where to double down. Google has framed time savings as the most important measure of AI, and DX’s approach aligns with that thinking. But where Google’s framing remains high level, DX is making measurement practical and reproducible.
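As a concrete illustration, the three dimensions could be tracked with a simple data structure. This is a hypothetical sketch: the class, field names, and the 4.33 weeks-per-month factor are assumptions for illustration, not DX’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of the framework's three dimensions; field names
# and the 4.33 weeks-per-month factor are illustrative, not DX's schema.
@dataclass
class AIMeasurement:
    weekly_active_users: int         # utilization: devs using AI weekly
    total_developers: int
    hours_saved_per_dev_week: float  # impact: reported time savings
    monthly_cost_per_seat: float     # cost: licenses, training, support

    def utilization_rate(self) -> float:
        return self.weekly_active_users / self.total_developers

    def net_monthly_hours(self) -> float:
        # Org-wide hours saved per month (~4.33 weeks per month)
        return self.weekly_active_users * self.hours_saved_per_dev_week * 4.33

m = AIMeasurement(600, 1000, 3.75, 39.0)
print(f"utilization: {m.utilization_rate():.0%}")
print(f"hours saved per month: {m.net_monthly_hours():.0f}")
```

Even a toy structure like this makes the shared vocabulary concrete: one object per team per period, directly comparable across the organization.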

The Developer Experience Connection

This emphasis on execution fits with DX’s longer-term project: measuring developer experience as rigorously as business performance. Its Core 4 framework sets a new standard by defining productivity across four categories: Speed, Effectiveness, Quality, and Impact. Speed includes metrics such as pull requests merged and cycle time. Quality incorporates change failure rates and deployment recovery. Impact captures how much time developers can dedicate to new value versus maintenance. Effectiveness, the often-missing piece, is captured through the Developer Experience Index (DXI), a standardized survey across 14 drivers of experience.
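The four categories and the example metrics named above can be summarized in a small lookup table. The metric identifiers below are invented for this sketch; they are not DX’s official schema.

```python
# Illustrative mapping of Core 4 categories to the example metrics named
# in the text; metric identifiers are invented, not DX's actual schema.
CORE_4 = {
    "Speed": ["pull_requests_merged", "cycle_time"],
    "Effectiveness": ["dxi_score"],  # from the 14-driver DXI survey
    "Quality": ["change_failure_rate", "deployment_recovery"],
    "Impact": ["time_on_new_value_vs_maintenance"],
}

for category, metrics in CORE_4.items():
    print(f"{category}: {', '.join(metrics)}")
```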

What makes Core 4 and DXI different is their balance. By combining system data with developer input, they avoid the blind spots of purely quantitative or purely survey-based measurement. Leaders can compare benchmarks across companies while developers see friction points surfaced directly. As Dropbox co-founder Drew Houston put it, Core 4 provides a cohesive picture and answers the question: what does performance really look like? The addition of DXI lets leaders not just measure outcomes but see the levers behind them: tooling, process, or cultural friction.

The payoffs are quantifiable. DX’s research across nearly 39,000 developers shows that every one-point increase in DXI correlates with about 13 minutes saved per developer per week, or nearly 10 hours annually. That may sound small, but at enterprise scale it compounds quickly. Block, the financial services company, used Core 4 and DXI to identify half a million hours lost each year to inefficiencies. By reducing this hidden tax, it improved throughput without sacrificing quality, showing that experience is not a “soft” metric but a direct driver of ROI.
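The scaling claim is easy to verify with back-of-envelope arithmetic. In this sketch, the 13-minutes-per-point figure comes from DX’s research; the 46 working weeks per year (chosen to roughly match the “nearly 10 hours annually” figure) and the headcount are assumptions.

```python
# Back-of-envelope check of the DXI scaling claim. The 13 minutes per
# point per week comes from DX's research; 46 working weeks per year and
# the 2,000-developer headcount below are assumptions for illustration.
MINUTES_PER_POINT_PER_WEEK = 13
WORKING_WEEKS_PER_YEAR = 46

def annual_hours_saved(developers: int, dxi_point_gain: float) -> float:
    minutes = (developers * dxi_point_gain
               * MINUTES_PER_POINT_PER_WEEK * WORKING_WEEKS_PER_YEAR)
    return minutes / 60

print(annual_hours_saved(1, 1))     # one developer, one point: ~10 h/yr
print(annual_hours_saved(2000, 3))  # a 2,000-developer org, 3-point gain
```

Under these assumptions, a three-point DXI improvement across a 2,000-developer organization is worth tens of thousands of hours a year, which is how a 13-minute effect compounds at enterprise scale.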

Fragile Gains and Real Costs

The enthusiasm around AI is justified, but the numbers reveal how precarious the gains can be. In top-quartile organizations, around 60 percent of developers now use AI tools daily or weekly. Average time savings come to 3 hours and 45 minutes per week. That’s a powerful boost when multiplied across a large workforce. But the same research found that more than 20 percent of developer time is still lost to friction and poor tooling. Without structural fixes, the benefit of AI adoption can evaporate into hours lost elsewhere.
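A rough comparison shows why those gains are fragile. This sketch uses the figures above; the 40-hour week is an assumption, and treating the 20 percent friction loss as applying uniformly to all developers is a simplification.

```python
# Rough comparison of adoption-weighted AI gains vs. friction losses,
# using the figures from the text; the 40-hour week is an assumption.
WEEKLY_HOURS = 40
ai_savings_per_adopter = 3.75  # 3 h 45 min saved per adopter per week
adoption_rate = 0.60           # share of devs using AI daily or weekly
friction_share = 0.20          # >20% of developer time lost to friction

avg_gain = adoption_rate * ai_savings_per_adopter  # per developer, h/week
avg_loss = friction_share * WEEKLY_HOURS           # per developer, h/week

print(f"average AI gain:       {avg_gain:.2f} h/week")
print(f"average friction loss: {avg_loss:.2f} h/week")
print(f"net:                   {avg_gain - avg_loss:+.2f} h/week")
```

Under these simplified assumptions the friction loss dwarfs the averaged AI gain, which is exactly the warning here: adoption alone does not guarantee a net benefit.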

This tension explains why developer experience has become central to AI ROI. Tools like GitHub Copilot, Cursor, and Sourcegraph are seeing broad uptake, but organizations that lack measurement discipline often fail to convert licenses into capability. DX’s framework was built with contributions from industry leaders including DORA and Sourcegraph, aligning it with the direction of developer productivity research. By combining system metrics, time savings, and survey data, the model gives a holistic picture: one that can highlight both where AI works and where it isn’t delivering.

Lessons from Booking.com

The experience of Booking.com shows what happens when AI rollout is guided by structure rather than procurement. With 3,500 engineers, the company leaned on DX’s AI Measurement Framework to guide adoption. Instead of simply buying licenses and hoping for usage, it measured where tools were being picked up, where friction was highest, and how time savings compared across teams. The results were striking: AI tool adoption increased by 65 percent, and the company logged an additional 150,000 hours saved.

A member of the company’s DevEx team summed up the difference: the framework didn’t just show outcomes, it showed where to focus. That focus translated into deeper and wider use of AI, ensuring that engineers integrated tools into their daily work rather than letting them sit idle. The case illustrates the key point: buying AI tools is a procurement exercise; building capability is an operational strategy. The latter compounds over time, creating cultural and technical advantages that competitors struggle to replicate.

Strategy Through Developers

The strategic implications are clear. Define performance consistently with Core 4, capture developer experience rigorously with DXI, and measure AI against utilization, impact, and cost. Make these metrics visible so teams can iterate quickly, scaling what works and cutting what doesn’t. Leadership remains important, especially in funding tools and setting expectations, but the needle only moves when developers adopt and integrate AI into their actual workflows.

This is the essence of the shift. AI strategy without developer buy-in is noise. Advantage materializes in the cadence of commits, in editor extensions, in reduced context switching, and in the collective hours saved. The data shows the stakes: consistent measurement not only proves AI ROI but directs where to invest next. As organizations grapple with how to turn AI hype into business value, the lesson is becoming unavoidable: developers, not decks, are building the advantage.