
Dynatrace CTO in front of the spectacular new headquarters in Linz, which will be occupied in 2026.
©Dynatrace // Hermann Wakolbinger

Bernd Greifeneder talks about excessive expectations for AI projects and excessive energy consumption. The Dynatrace CTO believes that better software architects are needed to keep AI under control, and he explains why senior management is taking an interest in monitoring software.
AI has been part of Dynatrace for more than ten years. At that time, the public had little idea of what AI could be. Are you already ten years ahead again today?
We are constantly reinventing ourselves and have now reached the point where we are moving from highly automated IT operations to autonomous ones. With "self-healing", that is, the ability to trigger remediation processes in the software and to provide automated tests, we have already implemented many projects with customers in the past. Until now, this still required people to set up these processes and link them to Dynatrace.
So doctors are still needed to define the therapy. But soon that will no longer be the case?
You could look at it that way. The therapy is now automated as well. But to stay with the analogy, you still need the medical supervisor. What's exciting today is how AI complements the work and how it changes the job.
That's a burning question these days: there's a lot of uncertainty right now between AI fear and AI euphoria.
The truth lies somewhere in the middle. People with little technical expertise will find it more difficult to build high-quality software services. The focus is shifting from simple software development to good software architectures, experts who know how to properly commission and structure AI. Because if you don't do that, you'll quickly end up with the most useless software. AI itself doesn't know what to do. AI is like a summer intern, with little experience and knowledge of how the business works.
Nevertheless, it is already turning job profiles for IT specialists upside down and shifting skills.
There is a clear need for more skilled people who can tell AI what the end result should be. This is a positive development. AI is overtaking people who simply hammer out code. Vibe coders feed AI until it produces some sort of result. However, this is not high-quality code, and it cannot be scaled sustainably for complex systems. Vibe coding works well for a simple website, though. You can't just leave business-critical software, where there's a lot at stake, to AI. You definitely need competent people for that. In a nutshell, this means that the level is shifting up a notch again.
Unreliable language models
How strongly have these changes already impacted the market?
Strongly and profoundly. What makes the "ChatGPT type" of AI so successful is its accessibility to the general public. Every company is experimenting with it. However, 95 percent of pilot projects conducted with this type of AI fail. Everyone is giving it a try, but hardly anyone is getting the hoped-for added value. AI with LLMs is capable, but not reliable. It makes for great demos, but in actual operation many people are realizing that it hallucinates and does things it shouldn't do. That's why we need software engineers and software architects who can work out the added value.
The world has realized that frontier models such as GPT-5, Gemini, and others will no longer make giant leaps forward, but will only improve in the details. It has also realized that a single LLM can never cover all requirements, and that several must be linked together. That's why the world is currently moving toward agentic AI.
The next big hope, but much more complex to implement, as we hear.
A good AI will eventually achieve 95 percent accuracy. But link just ten such conversations together and you get 0.95 to the power of 10, which is roughly 60 percent end-to-end accuracy. Is that good enough to solve a business case? Never! So what do you do? You integrate even more AI and use AI to evaluate the other AI. Exponential amounts of energy and computing power are invested just to gain a few percentage points of quality. That's sheer madness.
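To make the arithmetic concrete, here is a minimal sketch (in Python) of the compounding effect described above, assuming every step in an agent chain succeeds independently with the same probability; the function name and the sample chain lengths are illustrative, not Dynatrace figures.

# Illustrative sketch: end-to-end accuracy of a chain of AI steps,
# assuming each step succeeds independently with probability p.
def chain_accuracy(p: float, n: int) -> float:
    return p ** n

for n in (1, 5, 10, 20):
    print(f"{n:>2} steps at 95% each -> {chain_accuracy(0.95, n):.0%} end to end")
#  1 steps at 95% each -> 95% end to end
#  5 steps at 95% each -> 77% end to end
# 10 steps at 95% each -> 60% end to end (the figure quoted above)
# 20 steps at 95% each -> 36% end to end

The independence assumption is simplistic, but it is enough to show why chaining many 95-percent-reliable steps erodes overall reliability so quickly.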
The full cost analysis seems to be neglected in the AI euphoria: Are electricity and computing power really underestimated?
That AI is cheaper than humans is true in parts. But the exponentially increasing energy consumption quickly leads to a break-even point beyond which humans are needed to keep it under control. At Dynatrace, we are in the comfortable position of having spent more than ten years building AI that does not hallucinate, that is deterministic. ChatGPT is stochastic. Every technology has its areas of application. You could describe it this way: Dynatrace's AI is the scientist (deterministic) – reliable and comprehensible. The AI everyone is talking about today (GPT) is the artist (stochastic) – creative, and you never know exactly what the outcome will be.
On the path to autonomous operation
It stands to reason that AI designed to monitor IT systems must be precise.
Our customers have hundreds of thousands of software systems running, and they need to know what's going on. We combine AI that can do this with stochastic AI, which then says: we have identified the problem, and to solve it you could choose this or that option. We combine our own knowledge base with the general knowledge base so that we have maximum precision on the one hand, but on the other achieve such a degree of autonomy that employees only have to define a specific objective. That could be: "Dear Dynatrace, manage our production environment so that cloud costs remain within budget while the customer experience meets its goals, make optimization suggestions, and send them to the software developers." That's where we're headed.
When do you plan to reach this level of autonomy?
Ten percent of our thousands of customers already use "Preventive Operations", a system that detects problems before they occur. This is a preliminary stage to completely autonomous goal setting. To be fair, it will take a few more years to achieve complete autonomy. But on the way there, AI is already adding value by automating problem detection and security analysis.
Are there any issues in your market segment, observability, that currently require closer attention?
According to reports, 256 billion lines of code were generated by AI last year alone, and estimates suggest the proportion of duplicate code has increased eightfold in that period – with all the advantages and disadvantages that this entails. Not only is the data growing exponentially, but so is the code. It happens time and again that AI models are trained on material that contains security vulnerabilities. A vulnerability from two years ago that has long since been fixed, for example, may be repeatedly regenerated by AI. So everything is becoming even more complex.
I can't just check security at the beginning; I also have to constantly check what is really going on during operation. And this brings us back to the 95 percent of projects that fail. AI is not a magical all-rounder. I can use AI to generate text, but that is only one of its possible uses. I have to integrate AI with other digital services.
Increasing complexity as a major challenge
That sounds as if a loss of control is virtually inevitable.
The IT systems already in place are highly complex, and now AI is being added to the mix. It's not enough to test it once and then roll it out. No, every time you use it, it behaves differently. I have to continuously monitor AI-supported solutions, also for compliance reasons. We need to collect and analyze much more data. These are real challenges that we are facing. Many people are not yet aware that AI projects can only be successful if observability is taken into account.
You once said that companies need to reinvent themselves every seven years. At Dynatrace, innovation work takes place in large cycles, but at the same time it must respond to dynamic market developments such as generative AI. What was the last big breakthrough in product development?
What is really crucial for our future is that we have built a massively parallel processing data lakehouse – in simple terms, a database. The explosion of data and code is generating more and more material that needs to be evaluated: user behavior, causal relationships represented in a graph, metrics. We have customers who now accumulate a petabyte of data per day (!) in their IT operations. Real-time evaluation at this scale can only be achieved with a specialized database; ours is called Grail. No other database on the market can do this. Having all data available in real time enables the AI to deliver results immediately. This is because the quality of AI depends directly on the quality of its training data.
You serve 4,000 customers worldwide: Which industries are the most important for Dynatrace? Which sectors are new customers coming from?
We have traditionally been strong in the financial services market and with software providers. The rest is spread across all industries in the Fortune 15,000. Growth is coming from everywhere because everyone is investing in AI. A major European car manufacturer, for example, has completely revamped its entire modernization area with Dynatrace. Our target groups within companies have changed. Whereas previously it was operations and DevOps, now we are increasingly being used by software development teams on the one hand and the executive level on the other. For more than a year, we have been using business observability to map insights and visualize business processes in real time for the executive suite. This has been very well received by executives. You can immediately see where processes are stuck – and these insights are extremely helpful when a stuck process can cost millions. We have a cargo company that uses Dynatrace to analyze where its containers are located in the port. They optimize their processes based on this information. This creates added value that goes beyond just bugs and shows how broad the audience has become.
About
Bernd Greifeneder co-founded the market leader in observability (according to Gartner) in Linz in 2005 and, as CTO, continues to lead product development for the US company, which has been publicly traded since 2019 and is currently worth $15 billion. Annual revenue was most recently $1.699 billion (fiscal 2024/25), with EBIT at $179.43 million. The group employs 5,200 people, more than a quarter of them in Austria. Around 23 percent of operating expenses go toward research and development. Dynatrace has once again been recognized as the top innovative company in Austria for 2025.
Observability is an IT specialization in an extremely systemically important niche: without these monitoring programs, there would be no digital life, no money transfers, no logistics, and no e-commerce. 4,000 customers around the world work with Dynatrace programs.
