

Stanford economist Erik Brynjolfsson has crunched the numbers: U.S. labor productivity grew by 2.7% in 2025—almost twice as fast as the decade-long average. At the same time, 70% fewer new jobs were created. More output, barely any new jobs. This is the AI-driven productivity boost finally showing up in the macroeconomic data.
It’s important to note that this is the economic evidence—and even that is concentrated. Capital Economics shows that the growth is coming primarily from ICT sectors (software, cloud, platforms). Outside of these sectors, AI adoption in the U.S. remains below 15 percent. (FAZ)
The business question is a different one: Will this effect be felt in your company? You won’t see it until two conditions are met: you make the necessary changes—and you measure the results accurately.
And even then, this is (still) primarily a U.S. story. The same pattern held true during the dot-com boom: U.S. productivity picked up, while the eurozone lagged behind. The point isn’t that “the U.S. is good and Europe is bad”—rather, the benefits of technology don’t materialize automatically. They only emerge once companies overhaul their processes, roles, and standards.
And this is exactly where impatience sets in: Many teams expect to see results after 8–12 weeks—at a stage when they are just beginning to change their operating system.

When leadership teams say, “We don’t see any ROI,” the issue is often not a technology problem, but a measurement problem. Many organizations track metrics such as licenses, number of active users, and number of prompts.
These are adoption metrics. They tell you whether AI is being used—but not whether it’s reducing the organization’s workload or making it more efficient.
If you really want to see the transformation, you need metrics that make the “invisible” phase visible—for example: overtime trends, turnover in AI-exposed teams, and stress signals from pulse surveys.
The key step, then, is this: rely on correlation rather than gut feeling. The ideal pattern: AI adoption increases—and stress signals decrease.
If you don't make this connection, a common misunderstanding arises: AI is used, but the benefit is perceived as "greater speed"—not as "more breathing room."
Brynjolfsson, Rock, and Syverson use the productivity J-curve to describe a pattern that you’ll recognize from every major technological wave: New technologies initially drive down measured productivity because companies first have to restructure—redesign processes, redefine roles, establish standards, and train employees. (FAZ)
It feels like renovating: first the chaos, then the result.

How long will this transition period last? History shows (FAZ):
The most important takeaway: The problem isn’t that AI doesn’t deliver results. The problem is that we expect results after just 12 weeks. Those who merely accumulate pilot projects without overhauling the operating system are prolonging the J-curve. The shortcut isn’t “more tools,” but rather: a commitment spanning several quarters (often years)—with clear metrics.
Better metrics won’t eliminate the J-curve—but they do show you where you stand on it. Without them, you won’t know whether you’re stuck in the trough or already climbing out of it.
Workday, in collaboration with Hanover Research, surveyed active AI users. The picture that emerges is typical:
This is precisely the crux of the matter: if you scale up speed before establishing standards and quality gates, you’ll create friction elsewhere. The organization will become faster—but not simpler. (Workday / Hanover Research)
We experienced this firsthand at Leaders of AI. When we integrated AI assistants into our content production, we wanted to see results right away. Instead, we ended up with more back-and-forth. What we had underestimated was the time our team members spent reviewing outputs and providing context that the AI lacked. We had speed, but the workload only eased once we established quality gates and clear responsibilities.
AI doesn't automatically reduce the workload. It reduces friction. Read more about this in our article on workload creep.

The difference rarely lies in the tool. The difference lies in the ecosystem of roles, standards, metrics, and leadership.
How much time is AI actually freeing up? Assess the current situation and establish your baseline.
Overtime trends, turnover in AI-exposed teams, work-life balance from pulse surveys. Correlate these indicators with AI adoption. The ideal outcome: AI adoption increases—stress indicators decrease.
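As a minimal sketch of what this correlation check could look like in practice (the numbers below are invented for illustration, not real data), here is a small Python example that correlates monthly AI adoption with an overtime indicator:

```python
# Illustrative sketch: correlate monthly AI adoption with a stress signal.
# All figures are made up for demonstration purposes.
adoption = [0.10, 0.18, 0.25, 0.34, 0.41, 0.52]   # share of active AI users
overtime = [9.0, 8.6, 8.1, 7.4, 7.0, 6.2]         # avg. overtime hours per week

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

r = pearson(adoption, overtime)
print(f"correlation: {r:.2f}")  # strongly negative here: adoption up, overtime down
```

A strongly negative coefficient is the pattern described above: AI adoption rises while the stress indicator falls. In practice you would pull these series from your HR and usage dashboards instead of hardcoding them.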
Don’t just ask management what’s planned. Ask employees what they’re experiencing: more training, more time, better prioritization—or just more output?
Who checks what? Based on what criteria? Who is responsible?
Ideal scenario: Every AI output has a reviewer and clear acceptance criteria. You measure the percentage of results that are ready for immediate use.
No testing. No piloting. Complete a real workflow that used to take weeks in just a few hours with an AI agent. Document. Scale.

The AI productivity boost is real. Brynjolfsson sees the U.S. data as an early indicator—but most of the current effect likely stems from machine learning in the 2010s (recommendation systems, cloud computing, automation). The broad-based boost from GenAI and agents is yet to come.
This isn't a green light. It just adds to the pressure. If you don't adapt now, you'll miss out on the next wave.
If you don't see a return on investment after 12 weeks, that doesn't mean AI isn't working. It often means:
AI isn't a sprint. But it isn't a marathon either.
In our Master Business with AI (MBAI) program, you’ll learn how to systematically integrate AI into your business—using real metrics, clearly defined roles, and agent use cases that scale.
→ Subscribe to our newsletter: Get the most important insights on AI, productivity, and what really works every week.
Hansi
AI Copywriter on the 'Leaders of AI' team