A profound shift is underway in enterprise software delivery.
Developers can now generate, modify, and deploy systems faster than ever using AI. But understanding what those systems are doing in production is getting harder, not easier.
What began as simple prompt-and-response interactions with LLMs has evolved into something far more powerful: a distributed system of humans and AI agents working together to build, run, and adapt software. This isn’t a theoretical future. It’s happening now, and it’s reshaping how organizations must architect, operate, and govern AI at scale.
The industry has moved beyond monolithic LLMs. According to Merlin Yamssi, AI/ML CoE Lead for Partner Engineering at Google, who spoke at Dynatrace Perform this year, the early “LLM + prompt” model broke down quickly in real systems: no context, inconsistent behavior, and no way to act safely. That drove a rapid evolution toward retrieval, tool use, and ultimately agents that can plan, act, and collaborate across systems.
Today, AI systems behave less like single models and more like teams: one agent retrieves context, another writes code, another validates changes, and another evaluates impact in production. This shift is redefining enterprise expectations. AI is no longer here just to respond. It’s here to work.
Three forces reshaping enterprise AI
Three major trends are accelerating this transformation.
- Inputs are no longer just text; systems must interpret complex signals. As Yamssi put it, “You can show an image to a model, and then it will understand the image … and think on the image.”
- Execution is no longer linear; systems coordinate across agents.
- Data is no longer static; systems operate on constantly evolving context. “Essentially, [this turns] all the vast enterprise data into an active conversation,” Yamssi said.
Together, these forces are creating not just better models, but an entirely new AI operating model.
Why traditional AI architectures can’t keep up
Legacy architectures weren’t designed for distributed, autonomous AI systems. Single-model approaches are rigid, difficult to debug, and prone to hallucinations. As organizations adopt multiagent systems, complexity skyrockets.
These are not only model problems; they are also distributed systems problems.
Emerging risks include the following:
- Agents lose context as they hand tasks to one another.
- Infinite loops occur where agents trigger each other endlessly, consuming tokens and budget.
- Opaque decision chains can make it difficult to understand why an agent acted.
- Token usage explodes without visibility or guardrails.
“You cannot really go into production with something that looks like a black box,” Yamssi added.
Enterprises should treat AI like a distributed system, not a chatbot.
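Treating agents as a distributed system means applying distributed-systems guardrails to them. As a minimal sketch of the infinite-loop and token-explosion risks above, here is a hypothetical budget object that caps handoffs and token spend across a multiagent run; the class and its names are illustrative, not any vendor’s API.

```python
# Illustrative guardrail: stop runaway agent-to-agent loops by enforcing
# a shared budget on handoffs and tokens. All names here are hypothetical.

class BudgetExceeded(Exception):
    """Raised when a multiagent run blows past its configured limits."""

class AgentBudget:
    """Tracks handoffs and token spend across one multiagent workflow."""

    def __init__(self, max_handoffs=10, max_tokens=50_000):
        self.max_handoffs = max_handoffs
        self.max_tokens = max_tokens
        self.handoffs = 0
        self.tokens = 0

    def record(self, tokens_used):
        """Call once per agent handoff; raises if any limit is exceeded."""
        self.handoffs += 1
        self.tokens += tokens_used
        if self.handoffs > self.max_handoffs:
            raise BudgetExceeded(f"handoff limit {self.max_handoffs} reached")
        if self.tokens > self.max_tokens:
            raise BudgetExceeded(f"token budget {self.max_tokens} exceeded")

# Two agents triggering each other endlessly: the budget breaks the loop.
budget = AgentBudget(max_handoffs=3, max_tokens=1_000)
try:
    while True:
        budget.record(tokens_used=400)
except BudgetExceeded as e:
    print(f"stopped: {e}")
```

The point of the sketch is that the stop condition lives outside any single agent: no individual agent can see that the pair is looping, but a shared budget can.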
What modern AI applications require
To support an AI workforce, organizations need a vertically integrated stack that spans five foundational layers: infrastructure, data, models, platform, and applications. Google Cloud is one of the few providers offering all five layers in a unified architecture, with hooks for observability at each layer.
On top of this foundation, a new class of agent-specific tooling is emerging:
- Agent development kits (ADKs) for building reasoning-capable agents
- Model Context Protocol (MCP) for standardized access to tools and enterprise systems
- Agent2Agent (A2A) protocols enabling seamless agent-to-agent collaboration across environments
- Agent engines capable of running thousands of agents at scale
This is the new AI application stack, and it calls for a new operational model.
The missing layer: Observability for the AI workforce
Observability is increasingly about decisions, not just systems. As multiagent systems scale, observability can serve as a control plane.
Without deep visibility, enterprises face black box behavior, unpredictable costs, and operational risk. Dynatrace and Google Cloud are working to address this gap, including integrating observability capabilities with Gemini Enterprise, A2A, MCP, and other parts of the AI stack.
Modern AI observability should help reveal:
- How decisions are made
- How agents coordinate
- How costs and behavior evolve in real time
- Where failures originate across reasoning
This is a shift from monitoring applications to monitoring reasoning, decisions, and collaboration. Enterprises often need visibility from the infrastructure all the way to the data, the LLM, the agent, and the application.
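What does it mean, concretely, to observe reasoning rather than just services? One minimal sketch is to record each agent decision as a structured span in a shared trace, so cost and failure can be attributed to a specific step. The record schema below is an assumption for illustration only, not a Dynatrace or Google Cloud API.

```python
# Minimal sketch of decision-level tracing for a multiagent run.
# The record schema is hypothetical, chosen to show the idea: every
# reasoning step gets an ID, an agent, an outcome, and a token cost.

import json
import time
import uuid

def record_decision(trace, agent, action, inputs, outcome, tokens):
    """Append one reasoning step to a shared decision trace."""
    trace.append({
        "span_id": uuid.uuid4().hex[:8],
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "outcome": outcome,
        "tokens": tokens,
    })

trace = []
record_decision(trace, "retriever", "fetch_context",
                {"query": "error spike"}, "found 3 docs", tokens=220)
record_decision(trace, "coder", "propose_fix",
                {"docs": 3}, "patch drafted", tokens=910)

# With decisions captured as data, cost and behavior become queryable.
total_tokens = sum(step["tokens"] for step in trace)
print(json.dumps(trace, indent=2))
print("total tokens:", total_tokens)
```

Even this toy version makes the four bullets above answerable: each span shows how a decision was made, the sequence shows how agents coordinated, the token sums show how cost evolves, and a failed step pinpoints where in the reasoning chain things broke.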
For developers, this changes the job entirely. You’re no longer debugging a service. You’re debugging a system of agents, decisions, and interactions across code, cloud, and runtime behavior.
From insight to action: The path to autonomous operations
According to recent Dynatrace research, 50% of respondents have agentic AI projects in production for limited use cases, and 44% have projects in broad adoption. Further, 72% of respondents have 2-10 agentic AI projects. With agentic AI already in production and growing, observability becomes essential. Once organizations can observe their AI workforce, they can begin to automate operations.
Observability must surface not just what happened, but why. Dynatrace and Google Cloud are already enabling this through integrations with Gemini Cloud Assist, which can recommend infrastructure or application changes based on observed issues.
This unlocks a new operational loop:
- Observe system behavior across code, runtime, and agents
- Diagnose causal relationships, not just symptoms
- Recommend changes grounded in runtime context
- Remediate automatically or in collaboration with developers and agents
The result can be faster recovery, lower cost, and safer AI deployment at scale.
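The four steps above can be sketched as a simple pipeline. Everything in this example, including the signals, the diagnosis rule, and the recommended action, is made up for illustration; real systems would draw on live telemetry and far richer causal analysis.

```python
# Toy sketch of the observe -> diagnose -> recommend -> remediate loop.
# All signals, thresholds, and actions here are hypothetical.

def observe():
    # In practice this would come from telemetry; hardcoded for the sketch.
    return {"service": "checkout", "error_rate": 0.12, "cpu": 0.95}

def diagnose(signals):
    # A stand-in for causal analysis: map symptoms to a probable cause.
    if signals["cpu"] > 0.9 and signals["error_rate"] > 0.05:
        return "cpu_saturation"
    return "unknown"

def recommend(cause):
    # Recommendations grounded in the diagnosed cause, not raw symptoms.
    actions = {"cpu_saturation": "scale out checkout replicas"}
    return actions.get(cause, "escalate to on-call")

def remediate(action, auto_approved=False):
    # Remediation can run automatically or wait for a human/agent reviewer.
    return f"applied: {action}" if auto_approved else f"proposed: {action}"

signals = observe()
cause = diagnose(signals)
action = recommend(cause)
print(remediate(action, auto_approved=False))
```

The `auto_approved` flag marks the design choice the loop hinges on: the same pipeline can remediate automatically or pause for collaboration with developers and agents.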
Accelerate your AI workforce strategy with Dynatrace on Google Cloud
At Perform, this shift was clear: The challenge is no longer generating code or deploying models. It’s understanding and controlling how these systems behave once they’re running.
The organizations that solve this will define the next generation of software delivery. If you want to go deeper, watch the full Perform session: The AI workforce: Advancing agentic collaboration through observability.
Dynatrace and the Dynatrace logo are trademarks of the Dynatrace, Inc. group of companies. All other trademarks are the property of their respective owners. © 2026 Dynatrace LLC