
AI and LLM Observability
Achieve complete visibility and insights across every layer of your AI and LLM ecosystem – from data ingestion and vector stores to agentic frameworks and prompt engineering – ensuring optimal performance, cost efficiency, compliance, and system reliability at scale.
Model providers and platforms
Monitor and gain insights into the performance, consumption, latency, availability, response time, and health of the platforms used for pre-trained foundational models, agentic frameworks, and specialized AI APIs for building, training, and deploying machine learning models.

OpenAI
Monitor your OpenAI and Azure OpenAI services such as GPT, o1, DALL-E, and ChatGPT.

Amazon Bedrock
Observe end-to-end generative AI models provided by Amazon Bedrock.

DeepSeek
End-to-end observability for DeepSeek models on Amazon Bedrock, such as R1 and V3.

Azure AI Foundry
End-to-end observability for GenAI and LLM applications built with Azure AI Foundry.

Anthropic
Monitor your Anthropic models end to end, including Haiku, Sonnet, and Opus.

Gemini
Observe end-to-end multimodal AI models provided by Google Gemini.
Data management and vector stores
Monitor, optimize, and manage data ingestion, preprocessing, and storage for traditional and vector-based workflows.

Pinecone
Gain insight into your Pinecone vector databases to build knowledgeable AI.

LanceDB
Monitor the performance of your multimodal AI database powered by LanceDB.

Chroma
Gain insights into the health of your vector and embedding databases from Chroma.

Milvus
Gain insights into vector database resource utilization and cache behavior.

Weaviate
Observe your semantic cache efficiency to reduce cost and latency for LLM apps.

Qdrant
Gain insights into your Qdrant semantic vector collections.
Orchestration and prompt engineering frameworks
Automate multi-step LLM workflows, manage prompt chaining, agent-based systems, and retrieval-augmented generation (RAG).
Infrastructure and compute resources
Manage and monitor hardware and compute environments for training, fine-tuning, costs, and inference acceleration.

Google Cloud Tensor Processing Units
Observe and monitor your machine learning models built on Tensor Processing Units (TPUs).

TensorFlow Keras
Observe the training progress of your TensorFlow Keras AI models.

NVIDIA GPU
Monitor base parameters of the GPU, including load, memory, and temperature.

vLLM
Monitor your services built with vLLM's inference and LLM serving solution.
Security, governance, and traffic management
Ensure secure, compliant, and well-routed AI traffic with transparent governance and policy enforcement.
More resources
Are you looking for something different?
We have hundreds of apps, extensions, and other technologies to customize your environment.


Deliver secure, safe GenAI apps with Dynatrace

AI and LLM Observability Solution
