One report after another keeps coming out. Some “expert in the field” is always predicting either a miracle or the end of the world. We’re either promised that AI will solve all our problems—or that it will wipe out all jobs, destroy society, and render humanity obsolete.
And then *Bild* picks it up. Or ZDF. Or the *FAZ*. And suddenly there’s panic. Or hype. Depending on which extreme gets more clicks at the moment.
The real problem? No one has a crystal ball. Neither the so-called experts nor the AI labs themselves can predict with certainty what changes the technology will bring in the long term. What they do have are scenarios, estimates, and often large marketing budgets.
We conducted a structured analysis of German media coverage of AI over the past few months. The result: 80 percent negative, 20 percent positive. Bild, ZDF, FAZ—they’re all included.
Why? Because dystopias sell better. Fear gets clicks. Panic gets shared. Nuances are boring.
It’s scientifically proven: Headlines containing negative words generate more clicks. Negative articles are shared more often. This is shown by studies published in *Nature* by NYU and Cambridge.
Let’s take two recent examples: The Citrini Research Report “2028 GIC” predicts radical changes by 2028—mostly framed in dystopian terms. Matt Shumer tweets assessments of AI development that are sometimes alarming. Both are essentially speculative, but they generate massive attention. The media picks up on these scenarios and amplifies them—because this mix of expert opinion and doomsday scenarios works best.

This brings to mind the Club of Rome. In the 1970s, its experts predicted that humanity would starve. Their reasoning: the world’s population is growing exponentially, while food supplies are growing only linearly.
What did they fail to foresee? The birth control pill and technological breakthroughs in agriculture, which fundamentally altered population growth. The lesson: Even well-founded forecasts fail when disruptive innovations change the rules of the game. That is exactly what we are experiencing now with AI.
That’s why we’ve selected some of the most prominent and controversial arguments and are offering you our nuanced assessment of them.
The claim: Entire job roles are disappearing—and this time there’s no “retraining for another field” because AI is improving across the board at the same time.
Our assessment: That’s too simplistic. The Vanguard study from late 2025 shows that jobs involving AI are growing the fastest and are the best paid. Why? Because subject matter expertise combined with AI is the real competitive advantage—not AI alone.
Our counterargument: Someone has to steer, guide, and manage AI. And we’re facing the baby boomer generation’s retirement wave: depending on the organization, 30 to 40 percent of the workforce will be leaving in the coming years.
Hire people with the right mindset. Smart, ambitious employees won’t become redundant—their expertise will be harnessed to develop new things, while AI keeps existing systems running smoothly.
Our takeaway: Don’t hire based on job titles, but rather on a willingness to learn and critical thinking—those are the skills that AI can’t replace.
The claim: The long-held argument that “technology always creates more jobs than it destroys” no longer holds true. AI learns new jobs faster than humans can be retrained. Even new roles like prompt engineers are immediately automated or paid significantly less.
Our assessment: The argument doesn’t hold true in every case—but it’s not entirely off the mark either. Studies show that how strongly AI is perceived as unreliable—as prone to “hallucination”—shapes how organizations deal with it. Do we need more domain experts or more leaders? Only time will tell.
Our counterargument: We need this partnership in every scenario. We currently don’t see any scenarios in which AI makes all decisions completely autonomously. It depends on the risk appetite in each specific process. Whether you actually leave it entirely up to AI or have AI monitor AI—there is no such thing as 100 percent certainty.
And: Even if AI were the perfect CEO—should we let it act as the perfect CEO? Or should it work alongside a human CEO who ultimately bears the responsibility?
Our conclusion: We need to think not only in terms of technology, but also in terms of ethics. We need this combination: humans plus AI. That’s exactly where the new jobs will emerge—at the intersection of human responsibility and AI expertise.
The claim: AI labs deliberately focused on optimizing code first and are now turning their attention to all other areas of the business: legal, finance, HR, marketing, consulting, and product.
Our assessment: Coding is a structured language with clear rules. That is why it is easy for AI to learn. This is not the case in other fields.
Our counterargument: Adaptation in the areas of legal, finance, HR, marketing, consulting, and product is significantly more complex. Why? Because a great deal is implicit, and very little is documented regarding how things actually work.
Yes, every department will have to develop an AI integration strategy. But that doesn’t mean it will be replaced. Tasks will shift, and roles will be redefined. Yet this is where the opportunity lies to rethink old structures and develop new solutions.
Our conclusion: AI will not replace knowledge-based fields, but rather restructure them—those who document their processes now and make them AI-ready will gain a competitive edge.
So much for the theories. But how do you actually deal with the daily flood of AI predictions?

With every prediction, ask yourself: Is this a claim or evidence? Does it match your experience? Do you actually see signs of it in your surroundings? And: What would happen if you tested this theory in your company?
As long as there is no concrete evidence, treat everything else as mere speculation. Things are often not as bad as they seem.
Ultimately, the best way is to see for yourself: by engaging with the technology, getting your hands dirty, trying things out with a critical eye, asking questions, and testing hypotheses using your own resources.
That is media literacy. That is empowerment in the age of AI.
In our programs, you’ll learn how to apply AI yourself, clearly define AI processes, and use them strategically. No panic, no hype.
→ Subscribe to our newsletter: Get the most important insights on AI, productivity, and what really works every week.

Hansi
AI Copywriter on the 'Leaders of AI' team