Our Hands-On Training (HOT) Day will run on Monday, January 28, from 8:00am to 5:00pm.
During registration you can choose two classes, one morning and one afternoon.
Breakfast will be served from 7:00am to 8:00am, and lunch from noon to 1:00pm.
Please note that HOT Day classes are subject to change. In the event that your class is changed, you will be notified and placed in another class of your choice (if available), or refunded.
Automate Monitoring for AWS
This hands-on session centers on AWS environments and their key elements, such as services, containers, deployments, CloudWatch, CloudTrail, and Lambda. The session focuses on installing OneAgent in an AWS environment and the key full-stack visibility it provides, including the out-of-the-box metrics that are collected and the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. Lastly, the session covers how to use tags and set up management zones to provide different views for different internal teams.
Automate Monitoring for Azure
Azure Service Fabric is an open-source microservices platform for distributed systems, focused on building and deploying highly reliable and scalable applications. This session focuses on deploying a Service Fabric cluster, creating applications, and integrating them with Dynatrace OneAgent. This includes the metrics collected out of the box and the extensions that can be incorporated to understand the health of the applications. Delivered in partnership with Microsoft, the session will also cover a platform overview with real-world customer use cases.
Automate Monitoring for Pivotal Cloud Foundry
This hands-on session centers on Pivotal Cloud Foundry environments and their key elements, such as BOSH add-ons, Gorouters, and Diego Cells. The session focuses on installing OneAgent through the BOSH add-on (among other deployment mechanisms) and the key full-stack visibility it provides, including the out-of-the-box metrics that are collected and the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. Lastly, the session covers how to use tags and set up management zones to provide different views for different internal teams.
Automate Monitoring for OpenShift
This hands-on session centers on OpenShift environments and their key elements, such as containers and services, pods, projects, deployments, and templates. The session focuses on installing OneAgent in an OpenShift environment and the key full-stack visibility it provides, including the out-of-the-box metrics that are collected and the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. Lastly, the session covers how to use tags and set up management zones to provide different views for different internal teams.
Automate Monitoring for Kubernetes
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. This hands-on session covers the process of integrating Dynatrace into a Kubernetes environment, so that instrumentation becomes an automated part of the application deployment process. We will also cover how to incorporate Dynatrace via a YAML-configured DaemonSet.
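To give a feel for what a YAML-configured DaemonSet deployment looks like, the sketch below outlines the general shape. It is illustrative only: the actual manifest is provided by Dynatrace, and the namespace, image reference, environment variable, and secret names here are placeholders, not the real values.

```yaml
# Illustrative sketch only - the real manifest comes from Dynatrace and
# its image, env vars, and security settings will differ.
apiVersion: apps/v1
kind: DaemonSet                  # one agent pod per cluster node
metadata:
  name: oneagent
  namespace: dynatrace           # assumed namespace
spec:
  selector:
    matchLabels:
      name: oneagent
  template:
    metadata:
      labels:
        name: oneagent
    spec:
      hostNetwork: true          # host-level visibility for full-stack monitoring
      hostPID: true
      containers:
      - name: oneagent
        image: dynatrace/oneagent            # placeholder image reference
        env:
        - name: ONEAGENT_INSTALLER_TOKEN     # hypothetical variable name
          valueFrom:
            secretKeyRef:
              name: dynatrace-secret         # assumed pre-created secret
              key: paasToken
        securityContext:
          privileged: true       # required for host-level instrumentation
```

Because a DaemonSet schedules one pod per node, every new node that joins the cluster is instrumented automatically, which is what makes the deployment-time automation possible.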
Getting Started with Dynatrace
Getting Started with Dynatrace is for users who are relatively new to the Dynatrace platform and want to become more familiar with the major components. Users will get hands-on experience installing OneAgents, understanding the full-stack metrics that are captured, and reviewing key use cases covered by the platform.
Automate your mornings with Dynatrace Davis!
BizDevOps: Bridging the Gap to Business
BizDevOps drives continuous experimentation with new business ideas, automatically deployed through a DevOps pipeline and culture. Dynatrace provides the feedback loop for businesses to see how these continuous experiments impact end-user experience and, therefore, the bottom line: business success! In this hands-on session, we learn how to leverage Dynatrace Real User Monitoring for business analytics: how DevOps can automatically tag different features, versions, or A/B tests, and how business teams can analyze changes in user behavior or application performance based on these deployed changes. We will also learn how Dynatrace Session Replay can be used to better understand and optimize user journeys, how it can influence the next iteration of your experiments, and how it gives you even more detailed business insights.
Advanced Real User Monitoring & Synthetic Monitoring
Dynatrace can capture a replay of user sessions to show actual user experiences, providing insights into application performance from within the user’s browser or mobile device. This includes third-party services, CDNs, frontend processing, and the impact of requests on backend services. This hands-on session focuses in more detail on: configuration of Real User Monitoring - multi-tagging and key performance metrics - advanced Session Replay configuration and analysis - the Dynatrace Session Query Language
Power Dashboarding with Dynatrace
Power Dashboarding with Dynatrace is all about the new dashboarding and reporting features of the Dynatrace platform and how best to leverage them. New tile types and dashboarding workflows will be covered interactively and in detail. Key topics, such as management zones and their impact on dashboarding and reporting, will also be covered.
Web Performance Optimization with Dynatrace
Dynatrace provides insights into application performance from within the user’s browser or mobile device, including third-party services, CDNs, frontend processing, and the impact of requests on backend services. This hands-on session explores how to analyze application performance across user communities and devices to identify optimization opportunities.
SLA Monitoring with Synthetic
Synthetic monitoring is the best way to understand application performance consistently, without the noise of a given user’s environment, network bandwidth, or physical location. This hands-on session explores how to set up complex synthetic tests and deploy them to multiple geographic locations, as well as advanced features such as testing backend APIs. Automation of both scenarios, with public and private deployments, will be covered in the session.
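The SLA logic that such a synthetic setup enforces can be sketched in a few lines. The field names, thresholds, and pass criteria below are illustrative assumptions for this sketch, not Dynatrace’s actual evaluation model:

```python
from dataclasses import dataclass

@dataclass
class SyntheticResult:
    """One execution of a synthetic test from a given location."""
    location: str
    status_code: int
    response_ms: float

def meets_sla(results, sla_ms=2000.0, required_availability=0.99):
    """Evaluate a batch of synthetic results against a simple SLA:
    enough probes must succeed (HTTP 2xx/3xx) and the average
    response time must stay under the threshold."""
    ok = [r for r in results if 200 <= r.status_code < 400]
    availability = len(ok) / len(results)
    avg_ms = sum(r.response_ms for r in results) / len(results)
    return availability >= required_availability and avg_ms <= sla_ms
```

Running the same check from multiple geographic locations is what separates a real availability problem from a regional network issue.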
Dynatrace Managed for Administrators
Dynatrace Managed for Administrators is designed to provide a hands-on understanding of the installation, configuration, and management of Dynatrace Managed clusters. Topics include: cluster sizing - cluster node management (adding, removing, and modifying) - configuration APIs for cluster management and configuration - Cluster ActiveGate best practices - cluster failover and load balancing
Dynatrace for AppMon Users
Dynatrace for AppMon Users is designed to help experienced AppMon users understand the key similarities and foundational differences between Dynatrace and AppMon. Common questions from AppMon users, such as “Where is my PurePath?” and “Where is my web request dashlet?”, will be covered in detail. This hands-on session will cover Dynatrace features such as full-stack support, machine learning, and AI, and their impact on new and traditional application workloads in a pre-built environment provided by Dynatrace.
Network and Infrastructure Performance Monitoring of your Enterprise Cloud: It’s not just packets anymore
Network performance monitoring, and even infrastructure monitoring (load balancers, WAN accelerators, firewalls/proxies), has long been the domain of DC RUM within the Dynatrace portfolio. Wire data is still a good source of information for certain applications in on-premises data centers, but as companies move to hybrid-cloud or cloud-only deployments of their applications and services, packet data isn’t always an option. Dynatrace provides a rich set of network- and infrastructure-centric capabilities, including host-based views, port-to-port conversation discovery, API data, log analytics, and synthetics, while retaining the option of analyzing wire data, all backed by our AI and root-cause analytics engine. We will examine several of these use cases, with simple hands-on exercises you can use to deploy Dynatrace network and infrastructure performance visibility within your own environment. The need to prove it’s not the network still exists, regardless of whether your applications are on premises, cloud-based, or both.
Extending Dynatrace AI through Plugins and APIs
Extending Dynatrace AI through Plugins and APIs is designed to help users understand the key mechanisms for adding external metrics to the Dynatrace ecosystem. Topics include: extending environments via JMX and WMI plugins - extending AWS monitoring via CloudWatch metric ingestion - adding entities and entity metrics via the Dynatrace API - adding custom metrics via remote plugins - how Dynatrace AI operates against extended metrics
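To give a feel for metric ingestion via API, here is a minimal sketch that builds a custom-metric payload. The field names (`timeseriesId`, `dimensions`, `dataPoints`) follow the general shape of Dynatrace’s custom device metric API, but treat them as assumptions and consult the API documentation for the exact schema of your version:

```python
import time

def build_custom_device_payload(series_id, dimensions, value, ts_ms=None):
    """Build a metric payload in the general shape used by Dynatrace's
    custom device ingestion endpoint (illustrative sketch - verify the
    exact schema against the API docs for your environment version)."""
    if ts_ms is None:
        ts_ms = int(time.time() * 1000)  # epoch milliseconds
    return {
        "series": [
            {
                "timeseriesId": series_id,       # e.g. "custom:queue.depth"
                "dimensions": dimensions,        # key/value metric dimensions
                "dataPoints": [[ts_ms, value]],  # [timestamp_ms, value] pairs
            }
        ]
    }
```

The resulting dictionary would then be POSTed, with an API token, to the environment’s custom device endpoint; once ingested, the metric participates in baselining and AI analysis like any built-in metric.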
Migrating applications and services to the cloud
This hands-on session walks you through using Dynatrace to plan, execute, and validate a cloud migration. We will explore how to use Smartscape to evaluate your current on-premises deployments and inform your decisions about re-deploying, re-platforming, or re-architecting. In this session, we will move apps from one environment to another and then use Dynatrace to validate the success of the migration.
Unbreakable Pipeline for Azure DevOps (VSTS)
In this hands-on session, we will build an end-to-end continuous delivery pipeline that pushes a new code change through multiple deployment stages. We will implement Dynatrace-driven quality gates (“shift left”) that promote only good changes into a higher environment, and Dynatrace deployment events (“shift right”) to enable automated handling of bad deployments. Lastly, we will implement automated rollbacks for bad deployments in higher-level environments that require a stable code base, e.g. staging or production. This session uses VSTS and Azure Functions to implement the unbreakable delivery pipeline.
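At its core, a shift-left quality gate compares a build’s key metrics against thresholds before promotion. A minimal sketch, with illustrative metric names and thresholds (not an actual Dynatrace schema):

```python
def evaluate_quality_gate(metrics, thresholds):
    """Shift-left quality gate: compare a build's metrics against
    per-metric upper bounds. Returns (passed, violations), where
    violations maps each failing metric to (actual, allowed).
    Metric names here are illustrative, not a Dynatrace schema."""
    violations = {
        name: (value, thresholds[name])
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    }
    return (len(violations) == 0, violations)

# A pipeline stage would promote the build only when the gate passes,
# and otherwise fail the stage (triggering rollback in higher environments).
```

Hooking this decision into the pipeline after each stage is what makes the pipeline “unbreakable”: a bad change is stopped or rolled back automatically instead of reaching production.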
Self-Healing through Ansible Tower & ServiceNow
In this hands-on session, we will connect Dynatrace to different Ansible Tower playbooks to self-heal an environment exposed to conditions that destabilize it, e.g. high load resulting in bad performance, a bad deployment leading to a high failure rate, or unstable infrastructure leading to crashing systems. This session will also teach you how to leverage the auto-detected problem details and root-cause information in other common self-healing scenarios.
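The dispatch logic that maps a detected problem to a remediation playbook can be sketched as follows. The problem fields and playbook names are assumptions for illustration, not actual Dynatrace notification fields or session materials:

```python
def choose_playbook(problem):
    """Map a detected problem (shape loosely modeled on a problem
    notification - the "rootCause" field name is an assumption) to an
    Ansible Tower playbook name. Returns None when no automated
    remediation is known, so a human can be paged instead."""
    root_cause = problem.get("rootCause", "").lower()
    if "cpu" in root_cause or "load" in root_cause:
        return "scale-out.yml"           # add capacity under high load
    if "deployment" in root_cause or "failure rate" in root_cause:
        return "rollback-deployment.yml" # undo a bad release
    if "host unavailable" in root_cause or "crash" in root_cause:
        return "restart-service.yml"     # recover crashed infrastructure
    return None
```

In practice, the problem notification would arrive via webhook, and the chosen playbook would be launched through Tower’s job-template API, with the problem details passed along as extra variables for auditing.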
Full Stack Self-Service Diagnostics with Dynatrace
One of the challenges of any monitoring project is enabling external teams to serve themselves; monitoring teams can get overwhelmed by the pace and complexity of demands from internal monitoring customers. This hands-on session focuses on how to level up colleagues to become self-sufficient, with deep dives into database, CPU, memory, hotspot analysis, and log analytics use cases.
Continuous Performance in a Jenkins Pipeline
This hands-on session focuses on elevating the classic Performance Center of Excellence from a manual to a fully automated process. The session includes: integrating the Performance Signature plug-in in Jenkins - tagging load test requests - analyzing load test results in the Dynatrace UI - automating test comparison across environments.
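The automated test comparison step can be sketched as a simple regression check between two runs. The metric names and tolerance below are illustrative assumptions, not the Performance Signature plug-in’s actual logic:

```python
def compare_runs(baseline, current, tolerance_pct=10.0):
    """Compare a current load-test run against a baseline run, flagging
    any shared metric that regressed by more than tolerance_pct percent.
    Higher values are treated as worse (e.g. response time, failure rate)."""
    regressions = {}
    for name, base in baseline.items():
        if name not in current or base == 0:
            continue  # no comparable data point for this metric
        delta_pct = (current[name] - base) / base * 100.0
        if delta_pct > tolerance_pct:
            regressions[name] = round(delta_pct, 1)
    return regressions
```

Run after each load test in the Jenkins job, a non-empty result would mark the build unstable, turning the manual “compare the reports” step into an automated pass/fail signal.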
Serverless with Dynatrace
Serverless deployment models, sometimes referred to as Function as a Service (FaaS), let developers focus on writing code without worrying about the underlying application or infrastructure, and multiple cloud providers support them. This hands-on session focuses on the instrumentation and deployment of AWS Lambda and Azure Functions, in Node.js and .NET Core respectively. It also covers how serverless functions interact with traditional services and how those interactions are represented in Dynatrace.