Organizations are racing to integrate AI agents into developer and operations workflows. Developers are using coding agents to help build new features and even entire applications, while their organizations simultaneously build agents to automate business processes and orchestrate multi-agent workflows. Companies are discovering what works, what breaks, and what moves the needle for developers in the agentic era. Across industries—from telecom to aerospace to finance—a few patterns are emerging.
Based on presentations from customers at Perform 2026, here are five real-world lessons for operationalizing agentic AI in developer workflows.
1. Give agents real-time access to telemetry data
Teams need real‑time, deterministic signals to govern autonomous agents safely. This message came through clearly at Perform.
At TELUS, engineers were doing what most teams do early in their AI journey: copying logs, pasting traces, and moving between dashboards and chat tools. The models could reason, summarize, and even suggest fixes – but they had no direct pathway to telemetry data. Without structured access to Dynatrace Intelligence, GKE, and change events, the agents lacked runtime awareness.
The turning point came when TELUS stopped relying on ad-hoc data pulls and instead enabled agents to access telemetry through the Model Context Protocol (MCP). Instead of manually stitching context together, agents could retrieve live data, including traces, logs, and dependency data, directly through shared access patterns.
The result: AI grounded in the same real-time observability data engineers use to investigate issues end-to-end.
That’s why MCP was pivotal. As TELUS described it, MCP replaced ad-hoc fetches with a consistent model for safely accessing structured infrastructure and application context. AI agents no longer relied on static, point-in-time snapshots; they operated against live state and followed shared rules of engagement.
Lockheed Martin faced a similar challenge. Their teams were navigating multiple tools, each with its own interface and data models. AI could help, but it needed real time, unified access across Dynatrace, ServiceNow, and other tools. As Lockheed Martin Chief Observability Engineer David Walker described it, MCP became “the universal translator” that provided a “universal language” for agents to access all of them – transforming siloed tools into a single context layer.
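The “universal translator” idea can be sketched in a few lines: a thin dispatch layer that exposes heterogeneous backends behind one consistent tool interface, which is the role MCP plays for agents. This is an illustrative sketch, not Lockheed Martin’s or TELUS’s implementation; the tool names and handlers are hypothetical stand-ins for Dynatrace and ServiceNow calls.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    name: str          # stable name the agent sees, e.g. "get_open_problems"
    backend: str       # which system actually serves the call
    handler: Callable[..., Any]

class ContextLayer:
    """Single entry point: agents call tools by name, never raw backend APIs."""
    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def call(self, name: str, **kwargs: Any) -> Any:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name].handler(**kwargs)

# Hypothetical handlers returning stub data in place of real API calls.
def get_open_problems(service: str) -> list[str]:
    return [f"{service}: high error rate"]

def get_change_tickets(service: str) -> list[str]:
    return [f"CHG-1042 touched {service}"]

layer = ContextLayer()
layer.register(Tool("get_open_problems", "dynatrace", get_open_problems))
layer.register(Tool("get_change_tickets", "servicenow", get_change_tickets))

# The agent asks one layer, in one vocabulary, regardless of backend.
print(layer.call("get_open_problems", service="checkout"))
print(layer.call("get_change_tickets", service="checkout"))
```

The point of the pattern is that siloed tools collapse into a single context layer: the agent never needs to know which backend answered, only which tool it is allowed to call.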
2. Meet developers where they are
The organizations moving fastest in agentic AI are treating developer experience as a first-class concern. The pattern is simple: bring AI and telemetry into the tools where developers already spend their time.
At TELUS, observability isn’t something engineers leave their IDE to access. Using local MCP servers in VS Code, agents can retrieve Dynatrace metrics, logs, and problem details directly within the editor.
They extended this further with n8n to automate workflows and embedded AI into Slack through IRIS, their conversational incident assistant. Engineers ask natural language questions — “What’s the status of service X?” — and IRIS routes the request to the appropriate n8n workflow and MCP endpoint.
At Autodesk, more than 80% of developers already use AI‑assisted IDEs. Many were independently wiring MCP servers into tools like Cursor to access telemetry while coding. Autodesk responded by formalizing this approach with a hosted, governed MCP platform integrated directly into those assistants.
Similarly, Lockheed Martin built MCP servers tailored for agentic assistants in VS Code, enabling developers to retrieve metrics, logs, and problem details without leaving their editor.
3. Standardize telemetry and enrich it with domain context
API access is only the starting point. AI is only as effective as the structure and context behind the data it can access.
Lockheed Martin customized MCP servers with internal knowledge, such as which systems were mission‑critical, how their baselines typically behaved, or which team owned specific services. They called these “Lockheed‑Martin’ized MCPs.”
In one comparison, the generic configuration produced about a 60% success rate; the contextualized version increased that to 95%. As David Walker put it, the added domain knowledge was the difference between “noise and signal.”
Autodesk provided one of the clearest examples. After years of acquisitions and organic growth, they found themselves with 20+ observability tools and seven different tracing systems – a patchwork that made incidents slow and painful to unravel. Standardization was the solution, explained Alex Bicalho, Autodesk Senior Director of Engineering, Developer Platform Services, during his presentation. They adopted OpenTelemetry as a universal instrumentation layer and consolidated metrics and traces in Dynatrace.
United Wholesale Mortgage extended this concept further with business events: structured signals extracted from logs via OpenPipeline. This created a telemetry layer that mapped technical anomalies directly to customer-facing processes and business impact.
Instead of asking only, “What broke?” the team could ask, “What business process is impacted, and how do we prevent it?”
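The mapping from technical anomaly to business process can be sketched as a simple lookup over the business-event layer. This is an illustrative sketch of the idea, not UWM’s actual pipeline; the service and process names are invented.

```python
# Hypothetical mapping maintained by the business-event layer:
# which customer-facing process each service supports.
BUSINESS_EVENT_MAP = {
    "loan-docs-svc": "Document upload",
    "rate-lock-svc": "Rate lock",
}

def business_impact(anomaly: dict) -> str:
    """Translate a technical anomaly into a business-impact statement."""
    process = BUSINESS_EVENT_MAP.get(anomaly["service"], "Unknown process")
    return f"{process} impacted: {anomaly['symptom']}"

# "What broke?" becomes "What business process is impacted?"
print(business_impact({"service": "rate-lock-svc", "symptom": "timeouts"}))
```

In practice the mapping is derived from log-extracted business events rather than a hand-maintained table, but the translation step is the same: every anomaly arrives already labeled with the process it threatens.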
4. Observe and govern the AI stack itself
As agents become part of developer workflows, they must be treated with the same security and operational standards as existing systems.
Macquarie Group recently launched a customer-facing agent called Q for its Australian retail banking customers. “We needed total visibility, so we built a Dynatrace dashboard that has usage, effectiveness, cost, and downstream dependency health,” says Phillip Grasso-Nguyen, head of reliability for Macquarie’s Banking and Financial Services division. “That gave us the confidence to release.”
At TELUS, their automation backbone, powered by n8n and MCP servers, had become mission‑critical. To ensure reliability, they instrumented n8n itself with OpenTelemetry, routed metrics into Dynatrace, and correlated that data with Kubernetes and log analytics.
In other words, they treated their AI automation stack like any other production service: observable, measured, and governed.
5. Close the loop from insight to action
When teams put the right structure, data, and guardrails in place, operational feedback loops shrink from hours or days to minutes.
Agents don’t misbehave out of malice. They often misbehave because they try too hard. Autodesk provided a vivid example: As developers began experimenting with MCP servers from across the internet— integrating them into Cursor, Copilot Cloud, and other AI‑assisted IDEs—agents began issuing extremely broad observability queries, hammering GitHub, CI systems, and telemetry backends.
As Bicalho described it, the agents were “trying to find an atom in the solar system.” They were doing exactly what they were asked to do, just without guardrails.
The solution was governance. Autodesk created a hosted, governed MCP platform where each tool and workflow had a defined scope, limits, and predictable behavior.
TELUS demonstrated this clearly with IRIS, their Slack‑based incident assistant. IRIS blends telemetry, logs, change history, and conversation transcripts into real‑time summaries during active incidents.
While engineers troubleshoot on an incident management call, IRIS ingests Dynatrace data and generates a rolling update every 15 minutes: what happened, who did what, what changed, actions taken, open questions, and the most likely root cause.
The impact is practical. Executives get updates without interrupting responders. Engineers don’t repeat themselves across Slack threads. Everyone operates from the same continuously updated view. This is closed-loop automation in practice.
Other companies are leaning into this vision as well. Macquarie has an SRE agent that pulls data from the Dynatrace API to help act on issues before they become incidents.
United Wholesale Mortgage uses Business Flow visualizations that connect performance degradation directly to customer-facing processes. Remediation becomes faster and more aligned with business impact.
What’s next?
Agentic AI is already reshaping how developers build, observe, and deliver software. The organizations moving fastest aren’t just experimenting with models – they’re integrating AI into their telemetry, workflows, and systems, from development through testing and into production.
The organizations succeeding with agentic AI are rebuilding their developer workflows with observability as the real-time control plane that gives agents context, guardrails, and the ability to act safely and autonomously.