Building trust in agentic AI: An observability‑led 90‑day action plan

Agentic AI is gaining traction quickly in pursuit of autonomous operations. But establishing the trust, reliability, and governance required to derive real business value is proving more challenging. New Dynatrace research suggests ways leaders can pair human oversight with observability as a real‑time control plane for scaling agentic AI safely from pilot to production.

New research from Dynatrace reveals how organizations are adopting agentic AI to drive greater business value through automating operations. But as teams push toward AI‑driven automation at scale, they’re also confronting a core challenge: the variable, context-dependent nature of AI systems makes it difficult to establish the reliability, safety, and governance needed to fully realize ROI.

Context is key for AI systems to avoid losing track of instructions, hallucinating missing dependencies, or misinterpreting evolving system states—especially during extended multi‑step tasks. Some of the technical challenges AI agents present include:

  • Context fragmentation: As tasks grow more complex, agents cannot reliably hold, retrieve, or apply the full operational context they need for accurate decisions.
  • Unpredictable autonomy: Small gaps or inconsistencies in context can cause cascading errors that affect downstream systems, workflows, and data integrity.
  • Lack of verifiable control signals: Without real‑time, fact‑based grounding, agents cannot validate their own assumptions or detect deviations, making it extremely difficult for leaders to operationalize autonomy safely.

These issues explain why agentic AI adoption is accelerating but still struggles to become "production‑ready" without a new foundational layer of observability, governance, and human oversight.

The emerging reality: What the 2026 Pulse of Agentic AI reveals

The 2026 Pulse of Agentic AI is a global survey of 919 senior leaders and decision makers directly involved in or responsible for agentic AI development and implementation. Results show that agentic AI is advancing rapidly but encountering structural barriers on the path to scalable autonomy.

  • Agentic AI is moving quickly from experimentation into real operations. Most organizations (72%) now run 2-10 agentic AI initiatives, and 50% have at least some production deployments. Adoption is strongest where reliability and risk sensitivity are highest: IT operations (70%), data processing (51%), and cybersecurity (49%), where automation can deliver fast, measurable gains.
  • Maturity is uneven. While investment is rising and expectations for ROI are high—44% have projects in broad adoption in select departments—only 23% have projects in mature, enterprise-wide adoption. The primary blocker is not ambition, but trust. Leaders cite security and data privacy (52%), and technical challenges (51%)—especially limited visibility into agent behavior and difficulty defining when agents can act autonomously versus when humans must intervene.
  • AI operations forge a new role for human oversight. Most agentic decisions are reviewed or validated by people (69%), and 44% rely on manual methods to monitor agent interactions—slowing scale and increasing operational risk. These findings make one conclusion clear: agentic AI cannot reach its potential through experimentation alone. Scaling autonomy requires stronger governance, clearer decision boundaries, and real‑time observability that connects AI behavior to system reliability and business outcomes.

From insight to execution: Why AI projects are stalling and how observability enables results

The research makes clear why many agentic AI initiatives stall before delivering full business value.

Leading organizations are already using observability as more than a monitoring tool

Observability is becoming the foundation for scaling agentic AI safely. Nearly seven in ten respondents apply observability during implementation to integrate agents with existing systems, monitor data quality, and detect anomalies. As agentic systems move into production, observability is increasingly used to track agent performance in real time, validate outputs, and correlate AI behavior with reliability, efficiency, and risk.

Observability data alone is not enough

At the same time, the research exposes a clear gap: many teams still rely on manual reviews to understand agent interactions, slowing scale and limiting trust. Respondents consistently point to limited real‑time visibility and weak connections between technical signals and business outcomes as barriers to autonomy.

Observability must become a fact-based control plane for agentic AI

This is the inflection point. Organizations that treat observability as a real‑time control plane—governing decisions, enforcing guardrails, and grounding AI actions in facts—are better positioned to expand autonomy with confidence. The following 90‑day action plan translates these proven practices into practical steps leaders can take now.

A 90‑day action plan for executives and IT leaders

Operationalizing agentic AI requires moving deliberately—from experimentation to governed, observable autonomy. The first 90 days should focus on building AI trust, resilience, and measurable business impact.

Days 1-30

Establish foundations and governance.

First, define clear decision boundaries for when agents can act autonomously versus when human approval is required.
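A decision boundary like this can be expressed as an explicit routing policy rather than an informal convention. The sketch below is purely illustrative: the risk tiers, confidence threshold, and function names are assumptions, not part of the Dynatrace research or any specific product.

```python
from dataclasses import dataclass

# Illustrative sketch only: risk tiers, the 0.9 confidence threshold, and
# field names are hypothetical assumptions for demonstration purposes.

@dataclass
class AgentAction:
    name: str
    risk: str          # "low" | "medium" | "high" (assumed tiers)
    confidence: float  # agent's self-reported confidence, 0.0-1.0

def decide_route(action: AgentAction) -> str:
    """Return 'autonomous' or 'human_review' for a proposed agent action."""
    if action.risk == "high":
        return "human_review"   # high-impact actions always require approval
    if action.risk == "medium" and action.confidence < 0.9:
        return "human_review"   # uncertain medium-risk actions escalate
    return "autonomous"         # low-risk or confident medium-risk actions proceed

print(decide_route(AgentAction("restart_service", "medium", 0.95)))  # autonomous
print(decide_route(AgentAction("delete_volume", "high", 0.99)))      # human_review
```

Codifying the boundary this way makes it auditable: every routing decision can be logged alongside the inputs that produced it.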

Inventory active agentic AI initiatives, assess their business criticality, and identify where visibility gaps exist.

Stand up a baseline observability layer that instruments AI agents, workflows, and data paths, capturing logs, metrics, traces, and contextual signals across agents and infrastructure to support validation and auditability.
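At its simplest, instrumenting an agent means emitting structured, correlatable events for every step it takes. The hand-rolled sketch below only illustrates the shape of such telemetry; in practice you would use an observability SDK (OpenTelemetry, for example) rather than rolling your own, and the field names here are assumptions.

```python
import json
import time
import uuid

# Minimal illustrative sketch of structured agent telemetry.
# Field names ("trace_id", "span_id", "agent", "step") are assumptions
# chosen to mirror common tracing conventions, not a specific schema.

def emit_event(trace_id: str, agent: str, step: str, **attrs) -> dict:
    """Emit one structured, correlatable event for an agent step."""
    event = {
        "trace_id": trace_id,          # ties all steps of one task together
        "span_id": uuid.uuid4().hex[:16],
        "timestamp": time.time(),
        "agent": agent,
        "step": step,
        **attrs,                       # contextual signals: status, latency, targets
    }
    print(json.dumps(event))           # in production: ship to a telemetry backend
    return event

trace_id = uuid.uuid4().hex
emit_event(trace_id, "ticket-triage-agent", "classify", status="ok", latency_ms=120)
emit_event(trace_id, "ticket-triage-agent", "route", status="ok", target="tier-2")
```

Because every step shares a trace ID, a reviewer can reconstruct exactly what an agent did and in what order, which is the raw material for the validation and auditability the plan calls for.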

Days 31-60

Build trust and controlled autonomy.

Define clear roles for human‑in‑the‑loop operations, placing human judgment in the driver's seat for intent and accountability while agents handle task execution.

Set up observability‑driven data‑quality checks, drift detection, and alerts.
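A drift check can start very simply: compare a live window of some numeric signal, such as an agent's output confidence, against a reference baseline and alert when it moves too far. The sketch below is an assumption-laden illustration; the z-score threshold and the choice of signal would need tuning per use case.

```python
import statistics

# Illustrative drift check: compares a live window of a numeric signal
# (e.g., an agent's output confidence) against a reference baseline.
# The z_threshold default of 3.0 is an assumption; tune it per signal.

def drifted(baseline: list[float], window: list[float], z_threshold: float = 3.0) -> bool:
    """Alert when the live window's mean drifts beyond z_threshold
    standard errors from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return statistics.mean(window) != mu    # any change from a flat baseline
    stderr = sigma / len(window) ** 0.5
    z = abs(statistics.mean(window) - mu) / stderr
    return z > z_threshold

baseline = [0.90, 0.92, 0.91, 0.89, 0.93, 0.90, 0.91, 0.92]
print(drifted(baseline, [0.91, 0.90, 0.92, 0.89]))  # False: stable
print(drifted(baseline, [0.70, 0.68, 0.72, 0.69]))  # True: confidence collapsed
```

Wiring a check like this into alerting turns a silent degradation in agent behavior into an actionable signal before it cascades downstream.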

Promote observability from passive monitoring to active control by enforcing rules, detecting anomalous behavior in real time, and correlating agent actions with reliability, cost, and performance outcomes.
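"Active control" means the observability layer can block a proposed action, not just record it. The sketch below shows the idea in miniature; the deny-list entries, rate limit, and function names are hypothetical placeholders, not recommendations for specific values.

```python
# Illustrative sketch of observability as active control: every proposed
# agent action is checked against guardrail rules before execution, and
# blocked actions become anomaly records. All values here are assumptions.

BLOCKED_TARGETS = {"prod-db", "billing-service"}  # assumed deny-list
MAX_ACTIONS_PER_MINUTE = 5                        # assumed rate guardrail

def enforce(action: dict, recent_action_count: int) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    if action.get("target") in BLOCKED_TARGETS:
        return False, "target on deny-list"
    if recent_action_count >= MAX_ACTIONS_PER_MINUTE:
        return False, "rate guardrail exceeded"
    return True, "ok"

print(enforce({"op": "scale", "target": "web-frontend"}, 2))  # (True, 'ok')
print(enforce({"op": "drop", "target": "prod-db"}, 0))        # (False, 'target on deny-list')
```

The reason string matters as much as the boolean: logging why an action was blocked is what lets teams correlate guardrail hits with reliability, cost, and performance outcomes.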

Secure two quick wins: apply these trust controls to two high-criticality use cases, hardening the guardrails and creating a template for other use cases.

Days 61-90

Scale with confidence.

Graduate proven use cases from supervised to higher levels of autonomy, beginning with repeatable, high‑ROI workflows.

Embed AI observability into operational reviews and executive KPIs.

Establish a continuous improvement cycle to safely expand autonomous operations across the business.

The bottom line: Autonomy only scales with trust

Agentic AI is here—and it’s accelerating. The organizations that win the next phase of AI transformation will be those that implement autonomy with control to minimize risk:

  • Build incrementally, moving from supervised to autonomous operations
  • Ground all agent decisions in deterministic observability data
  • Redesign human roles so agents augment, not replace, human judgment
  • Treat reliability, safety, and transparency as business‑critical capabilities

With a well‑structured 90‑day plan, enterprises can convert experimentation into operational advantage—unlocking the resilience, scalability, and efficiency that agentic AI promises, while keeping humans firmly in control of outcomes.

Download the full report for a deeper look into agentic AI adoption trends, maturity criteria, KPI breakdowns, and stage-specific observability priorities.