Webinar

Implementing AI Observability with Dynatrace

Tuesday, January 20, 2026
10:00 a.m. ET / 16:00 CET

As the complexity of AI implementations increases, how are you orchestrating and moderating reliable agentic AI outcomes across your organization? 

Observability is a key feedback channel for AI. Even early-phase implementations show the need to observe AI, tune the experience, manage costs, provide guardrails, and govern AI responsibly. As complexity grows, so do the risks, making deep, context-rich observability of AI essential.

So let’s get you started with AI observability. In this free, live 60-minute session led by a Dynatrace expert, you will learn key concepts and practices for monitoring AI and large language model (LLM) systems in production. We will explore common challenges such as model drift, token usage, and response quality, and show how observability helps you ensure performance, reliability, and transparency across your AI stack. Using OpenTelemetry as a foundation, we will dive into monitoring across infrastructure, model behavior, orchestration layers, and application interfaces. We will also address ethical considerations and responsible AI practices, providing a well-rounded perspective on deploying and managing AI systems effectively.
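To make the signals above concrete, here is a minimal sketch of the kind of per-call telemetry an LLM monitoring setup might capture: model name, token counts, latency, and an estimated cost. In a real deployment these values would typically be recorded as attributes on OpenTelemetry spans; the model name, token counts, and per-1K-token rates below are illustrative assumptions, not actual pricing or Dynatrace-specific code.

```python
import time
from dataclasses import dataclass


@dataclass
class LLMCallRecord:
    """Telemetry captured for a single LLM call (illustrative)."""
    model: str
    input_tokens: int
    output_tokens: int
    latency_s: float

    def estimated_cost(self, in_rate_per_1k: float, out_rate_per_1k: float) -> float:
        # Rates are hypothetical dollars per 1,000 tokens.
        return (self.input_tokens / 1000) * in_rate_per_1k + \
               (self.output_tokens / 1000) * out_rate_per_1k


# Simulate instrumenting one call; a real implementation would wrap
# the actual LLM client invocation between these timestamps.
start = time.perf_counter()
record = LLMCallRecord(
    model="example-model",          # hypothetical model name
    input_tokens=1200,
    output_tokens=300,
    latency_s=time.perf_counter() - start,
)
print(f"{record.model}: est. cost ${record.estimated_cost(0.5, 1.5):.2f}")
```

Aggregating records like this over time is what lets you spot cost spikes, latency regressions, or shifts in token usage that often accompany model drift.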

In this session, you will learn:

  • Foundational concepts and practices that underpin AI observability
  • Common challenges with AI implementations that observability can help you address
  • Best practices to monitor AI performance and behavior across your stack, so you can tune and optimize your implementations
  • Responsible and ethical AI observability practices

Register now

By submitting this form, I agree to receive email communications from Dynatrace LLC and its local subsidiaries. I understand I can unsubscribe any time. Please refer to Dynatrace's Privacy Notice for more information.