Perform 2019 Las Vegas
 

60+ Sessions

Transform into the digital future at light speed

 

Our Hands-On Training (HOT) Day will run on Monday, January 28, from 8:00am to 5:00pm.

Breakfast will be served from 7:00am to 8:00am, and lunch from noon to 1:00pm.

During registration you can choose two classes: one in the morning and one in the afternoon.

Please note that HOT Day classes are subject to change.
In the event that your class is changed, you will be notified and placed in another class of your choice (if available), or refunded.

 

Beginner

Getting Started with Dynatrace

Getting Started with Dynatrace is for users who are relatively new to the Dynatrace platform and want to become more familiar with the major components. Users will get hands-on experience installing OneAgents, understanding the full-stack metrics that are captured, and reviewing key use cases covered by the platform.

Intermediate

Automate Monitoring for your Enterprise Stack

This hands-on session centers around traditional enterprise stacks and environments, e.g. WebSphere, JBoss, .NET, PHP, DataPower, IBM IIS, etc. The session will focus on installation of the OneAgent and the key full-stack visibility provided. This includes the out-of-the-box metrics that are collected, plus the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. And lastly, the session will cover how to use tags and set up management zones to provide different views for different internal teams.

Automate Monitoring for AWS

This hands-on session centers around AWS environments and their key elements, e.g. services, containers, deployments, CloudWatch, CloudTrail, Lambda, etc. The session will focus on installation of the OneAgent in an AWS environment and the key full-stack visibility provided. This includes the out-of-the-box metrics that are collected, plus the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. And lastly, the session will cover how to use tags and set up management zones to provide different views for different internal teams.

Automate Monitoring for Azure

This hands-on session centers around Azure environments and their key elements, e.g. Azure App Services, Web Apps, Service Fabric, Redis cache, functions, etc. The session will focus on installation of the OneAgent on Azure infrastructure, both directly and through the Azure CLI, and the key full-stack visibility provided. This includes the out-of-the-box metrics that are collected, plus the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. And lastly, the session will cover how to use tags and set up management zones to provide different views for different internal teams.

Automate Monitoring for Google Cloud

This hands-on session centers around Google Cloud environments and their key elements, e.g. Google App Engine, Docker containers, Redis cache, functions, etc. The session will focus on installation of the OneAgent on Google Cloud infrastructure. This includes the out-of-the-box metrics that are collected, plus the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. And lastly, the session will cover how to use tags and set up management zones to provide different views for different internal teams.

Automate Monitoring for OpenShift

This hands-on session centers around OpenShift environments and their key elements, e.g. containers and services, pods, projects, deployments, templates, etc. The session will focus on installation of the OneAgent in an OpenShift environment and the key full-stack visibility provided. This includes the out-of-the-box metrics that are collected, plus the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. And lastly, the session will cover how to use tags and set up management zones to provide different views for different internal teams.

Automate Monitoring for Pivotal Cloud Foundry

This hands-on session centers around Pivotal Cloud Foundry environments and their key elements, e.g. BOSH Add-ons, Gorouters, Diego Cells, etc. The session will focus on installation of the OneAgent through the BOSH Add-on (among other deployment mechanisms) and the key full-stack visibility provided. This includes the out-of-the-box metrics that are collected, plus the custom extensions that can be incorporated to understand the health of the infrastructure. Additionally, service naming, process grouping, and tagging will be discussed in detail. And lastly, the session will cover how to use tags and set up management zones to provide different views for different internal teams.

BizDevOps: Bridging the Gap to Business

BizDevOps drives continuous experimentation of new business ideas, automatically deployed through a DevOps pipeline and culture. Dynatrace provides the feedback loop for businesses to see how these continuous experiments impact end-user experience and therefore the bottom line: business success! In this hands-on session, we learn how to leverage Dynatrace Real-User Monitoring for Business Analytics. We learn how DevOps can automatically tag different features, versions, or A/B tests, and how Biz can analyze changes in user behavior or application performance based on these deployed changes. We will also learn how Dynatrace Session Replay can be used to better understand and optimize user journeys, how it can influence the next iteration of your experiments, and how it gives you even more detailed business insights.

Advanced Real User Monitoring

Dynatrace provides the ability to capture a replay of user sessions to see actual user experiences, providing insights into application performance from within the user’s browser or mobile device. This includes third-party services, CDNs, frontend processing, and the impact of requests into backend services. This hands-on session focuses in more detail on:
- Configuration of Real-User Monitoring
- Multi-tagging and key performance metrics
- Advanced Session Replay configuration and analysis
- Dynatrace Session Query Language (see the sketch below)
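For a feel of the Session Query Language before the class, below is a minimal Python sketch of querying it over the environment REST API. The endpoint path, parameters, field names, and response shape are assumptions to verify against your environment's API documentation; the environment URL and token are placeholders.

```python
# Minimal sketch: query Dynatrace RUM data with the Session Query Language API.
# Endpoint path, parameters, field names, and response shape are assumptions;
# verify against your environment's API documentation.
import requests

DT_ENV = "https://abc12345.live.dynatrace.com"  # placeholder environment URL
HEADERS = {"Authorization": "Api-Token your-api-token"}  # placeholder token

query = "SELECT country, AVG(duration) FROM usersession GROUP BY country"

resp = requests.get(
    f"{DT_ENV}/api/v1/userSessionQueryLanguage/table",
    headers=HEADERS,
    params={"query": query},
)
resp.raise_for_status()
result = resp.json()

# Assumed response shape: column names plus row values.
for row in result.get("values", []):
    print(dict(zip(result.get("columnNames", []), row)))
```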

Power Dashboarding with Dynatrace

Power Dashboarding with Dynatrace is all about the new dashboarding and reporting features of the Dynatrace Platform and how to best leverage them. New tile types and dashboarding workflows will be covered interactively and in detail. Key topics, such as management zones and their impact on dashboarding and reporting, will also be covered.

Web Performance Optimization with Dynatrace

Dynatrace provides insights into application performance from within the user’s browser or mobile device, including third-party services, CDNs, frontend processing, and the impact of requests into backend services. This hands-on session explores how to analyze application performance across user communities and devices to understand optimization opportunities.

Log Analytics in a Containerized World

The Log Analytics in a Containerized World session is designed to provide a rich, hands-on understanding of how log analytics can be used in the ephemeral world of containers and microservices. The features and capabilities of log analytics will be covered, including:
- Log ingestion configuration
- Log analytics use cases
- Log events for AI and root-cause analysis

SLA Monitoring with Synthetic

Synthetic monitoring is the best way to understand application performance in a consistent way, without the noise of a given user’s environment, network bandwidth, or physical location. This hands-on session will explore how to set up complex synthetic tests and deploy them to multiple geographic locations, as well as advanced features such as testing backend APIs. Automation of both scenarios will be covered in the session with public and private deployments.

Dynatrace Managed for Administrators

Dynatrace Managed for Administrators is designed to provide a hands-on understanding of the installation, configuration, and management of Dynatrace Managed clusters. Topics include:
- Cluster sizing
- Cluster node management (adding, removing, and modifying)
- Configuration APIs for cluster management and configuration
- Cluster ActiveGate best practices
- Cluster failover and load balancing

Dynatrace for AppMon Users

Dynatrace for AppMon Users is designed to help experienced AppMon users understand the key similarities and foundational differences between Dynatrace and AppMon. Common questions from AppMon users, such as “Where is my PurePath?” and “Where is my web request dashlet?”, will be covered in detail. This hands-on session will cover Dynatrace features such as full-stack support, machine learning, and AI, and their impact on new and traditional application workloads in a pre-built environment provided by Dynatrace.

Dynatrace for DC-RUM Users

Dynatrace for DC RUM Users is designed to help experienced DC RUM users understand the key similarities and foundational differences between Dynatrace and DC RUM. Common questions from DC RUM users, such as “How do I integrate DC RUM wire data into Dynatrace?” and “What incremental value does Dynatrace provide to DC RUM wire data?”, will be covered in detail. This hands-on session will cover Dynatrace features such as full-stack support, machine learning, and AI, and their impact on new and traditional application workloads in a pre-built environment provided by Dynatrace.

Dynatrace for Classic Synthetic Users

Dynatrace for Classic Synthetic Users is designed to help experienced Classic Synthetic users understand the key similarities and foundational differences between Dynatrace Synthetics and Classic Synthetics. Common questions from Classic Synthetic users, such as “How do I integrate Classic Synthetics into Dynatrace?”, “How do I run multi-step tests in Dynatrace?”, and “How do I set up server-side API tests?”, will be covered in detail with hands-on product interactions. This hands-on session will cover Dynatrace features such as full-stack support, machine learning, and AI, and their impact on new and traditional application workloads in a pre-built environment provided by Dynatrace.

OpenKit for IoT and Non-Web Systems

Dynatrace OpenKit provides a set of open-source libraries that enable instrumentation of non-traditional digital endpoints in your environment. This hands-on session will cover the process of using OpenKit to instrument IoT endpoints, as well as the analysis of the captured endpoint metrics. Examples include rich client applications, smart IoT applications, and even Alexa skills.

Advanced

Extending Dynatrace AI through Plugins and APIs

Extending Dynatrace AI through Plugins and APIs is designed to help users understand the key mechanisms for adding external metrics to the Dynatrace ecosystem. Topics include:
- Extending environments via JMX and WMI plugins
- Extending AWS monitoring via CloudWatch metric ingestion
- Adding entities and entity metrics via the Dynatrace API (see the sketch below)
- Adding custom metrics via remote plugins
- How Dynatrace AI operates against extended metrics
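As a taste of the API-based extension mechanisms covered here, below is a minimal Python sketch that registers a custom metric and reports a data point for a custom device through the Dynatrace environment API v1. The endpoint paths, payload fields, and metric/device names are assumptions for illustration and should be checked against the API documentation.

```python
# Hypothetical sketch: register a custom metric and report a data point for a
# custom device via the Dynatrace environment API v1. Endpoint paths, payload
# fields, and the metric/device names are assumptions to verify against the docs.
import time
import requests

DT_ENV = "https://abc12345.live.dynatrace.com"   # placeholder environment URL
HEADERS = {"Authorization": "Api-Token your-api-token"}

METRIC_ID = "custom:queue.depth"                 # hypothetical custom metric

# Register (or update) the custom metric definition.
requests.put(
    f"{DT_ENV}/api/v1/timeseries/{METRIC_ID}",
    headers=HEADERS,
    json={"displayName": "Queue depth", "unit": "Count", "types": ["MessageBroker"]},
).raise_for_status()

# Report one data point for a custom device, creating the device if needed.
now_ms = int(time.time() * 1000)
requests.post(
    f"{DT_ENV}/api/v1/entity/infrastructure/custom/my-broker-01",
    headers=HEADERS,
    json={
        "type": "MessageBroker",
        "series": [{"timeseriesId": METRIC_ID, "dataPoints": [[now_ms, 42]]}],
    },
).raise_for_status()
```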

Modern Deployment Strategies with Dynatrace

This hands-on session centers around automating monitoring and feedback loops for Blue/Green & Canary Releases. You will learn more details about these deployment concepts, how to use tags and request attributes to monitor these deployments with Dynatrace, and how to leverage the Dynatrace API to control the rollout or rollback of these deployments. The goal is to increase the overall success of your production deployments with the combination of modern deployment strategies and Dynatrace!
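To make the API-driven feedback loop concrete, here is a hedged Python sketch that pushes a deployment event onto a canary-tagged service and later checks the problem feed before promoting the canary. Endpoint paths, payload fields, and the "canary" tag are assumptions for illustration, not the exact workflow taught in the session.

```python
# Hypothetical sketch: annotate a canary rollout with a deployment event and
# gate the rollout on open problems, using the Dynatrace environment API v1.
# Endpoint paths, payload fields, and tag names are assumptions to verify.
import requests

DT_ENV = "https://abc12345.live.dynatrace.com"   # placeholder environment URL
HEADERS = {"Authorization": "Api-Token your-api-token"}

# 1. Push a deployment event onto the canary service (tagged "canary").
requests.post(
    f"{DT_ENV}/api/v1/events",
    headers=HEADERS,
    json={
        "eventType": "CUSTOM_DEPLOYMENT",
        "attachRules": {"tagRule": [{"meTypes": ["SERVICE"], "tags": ["canary"]}]},
        "deploymentName": "orders-service canary",
        "deploymentVersion": "1.2.3",
        "source": "ci-pipeline",
    },
).raise_for_status()

# 2. Later in the pipeline: check for open problems before promoting the canary.
feed = requests.get(
    f"{DT_ENV}/api/v1/problem/feed",
    headers=HEADERS,
    params={"relativeTime": "hour", "status": "OPEN"},
)
feed.raise_for_status()
open_problems = feed.json().get("result", {}).get("problems", [])

if open_problems:
    print(f"{len(open_problems)} open problem(s) found - trigger rollback")
else:
    print("No open problems - promote canary to full rollout")
```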

Large Scale Dynatrace Deployments

The Large Scale Dynatrace Deployments session centers around the best practices associated with larger deployments, for both SaaS and Managed environments. Topics include:
- Deployment sizing
- ActiveGate deployment strategies and execution
- System configuration
- Cluster configuration (for Managed deployments)

Migrating applications and services to the cloud

This hands-on session walks you through the process of using Dynatrace to plan, execute, and validate a cloud migration. We will explore how to use Smartscape to evaluate your current on-premises deployments and inform your decisions about re-deploying, re-platforming, or re-architecting. In this session, we will move apps from one environment to another and then use Dynatrace to validate the success of that migration.

Breaking the Monolith into Microservices

This hands-on session walks us through the multi-step process of analyzing a monolith, breaking it apart, and validating the new microservice architecture. Using a well-known monolithic application, we will go through different best practices for re-platforming and re-architecting.

Mastering Kubernetes with Dynatrace

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. This hands-on session covers the process of integrating Dynatrace into a Kubernetes environment, such that instrumentation is automated into the application deployment process. We will also cover the process of incorporating Dynatrace via a YAML-configured DaemonSet.
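As a rough illustration of the DaemonSet approach, the sketch below applies a heavily simplified, hypothetical agent DaemonSet with the Kubernetes Python client. It is not the official OneAgent manifest; a real rollout should use the manifest and image arguments from the Dynatrace documentation.

```python
# Rough sketch: roll out a node-level agent as a DaemonSet using the Kubernetes
# Python client. The manifest below is heavily simplified and hypothetical; a real
# OneAgent rollout should follow the manifest from the Dynatrace documentation.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

manifest = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "oneagent", "namespace": "dynatrace"},
    "spec": {
        "selector": {"matchLabels": {"name": "oneagent"}},
        "template": {
            "metadata": {"labels": {"name": "oneagent"}},
            "spec": {
                "hostPID": True,        # host-level visibility (simplified)
                "hostNetwork": True,
                "containers": [{
                    "name": "oneagent",
                    "image": "dynatrace/oneagent",  # placeholder; see official manifest for args/env
                    "securityContext": {"privileged": True},
                }],
            },
        },
    },
}

# Creates one agent pod per node in the "dynatrace" namespace.
client.AppsV1Api().create_namespaced_daemon_set(namespace="dynatrace", body=manifest)
```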

Unbreakable Pipeline for AWS

In this hands-on session, we will build an end-to-end, continuous delivery pipeline that pushes a new code change through multiple deployment stages. We will implement Dynatrace-driven quality gates (“Shift-left”) that only promote good changes into a higher environment. We will implement Dynatrace deployment events (“Shift-right”) to enable automated handling of bad deployments. Lastly, we will implement automated rollbacks in case of a bad deployment in a higher-level environment that requires a stable code base, e.g. Staging, or Production! This session will be using AWS CodeDeploy, AWS CodePipeline, and AWS Lambda functions to implement the Unbreakable Delivery Pipeline.
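For a flavor of the "shift-left" quality gate, here is a hypothetical AWS Lambda handler that a CodePipeline stage could invoke: it runs a stubbed Dynatrace check and reports success or failure back to the pipeline. Function names and pipeline wiring are assumptions, not the exact lab setup.

```python
# Hypothetical sketch of a "quality gate" Lambda invoked from AWS CodePipeline.
# The Dynatrace check itself is stubbed out (see the deployment-event example above);
# function and pipeline names are assumptions.
import boto3

codepipeline = boto3.client("codepipeline")

def quality_gate_passed() -> bool:
    # Hypothetical: query the Dynatrace problem feed (or your own metrics
    # comparison) and return False if the new deployment looks bad.
    return True

def handler(event, context):
    job_id = event["CodePipeline.job"]["id"]
    if quality_gate_passed():
        # Promote the change to the next stage.
        codepipeline.put_job_success_result(jobId=job_id)
    else:
        # Stop the pipeline so a rollback can be triggered.
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": "Dynatrace quality gate failed"},
        )
```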

Unbreakable Pipeline for Concourse

In this hands-on session, we will build an end-to-end, continuous delivery pipeline that pushes a new code change through multiple deployment stages. We will implement Dynatrace-driven quality gates (“Shift-left”) that only promote good changes into a higher environment. We will implement Dynatrace deployment events (“Shift-right”) to enable automated handling of bad deployments. Lastly, we will implement automated rollbacks in case of a bad deployment in a higher-level environment that requires a stable code base, e.g. Staging, or Production! This session will be using Cloud Foundry, Concourse, and Python scripts to implement the Unbreakable Delivery Pipeline.

Unbreakable Pipeline for Jenkins

In this hands-on session, we will build an end-to-end, continuous delivery pipeline that pushes a new code change through multiple deployment stages. We will implement Dynatrace-driven quality gates (“Shift-left”) that only promote good changes into a higher environment. We will implement Dynatrace deployment events (“Shift-right”) to enable automated handling of bad deployments. Lastly, we will implement automated rollbacks in case of a bad deployment in a higher-level environment that requires a stable code base, e.g. Staging, or Production! This session will be using Ansible, Jenkins, and Python scripts to implement the Unbreakable Delivery Pipeline.

Unbreakable Pipeline for VSTS

In this hands-on session, we will build an end-to-end, continuous delivery pipeline that pushes a new code change through multiple deployment stages. We will implement Dynatrace-driven quality gates (“Shift-left”) that only promote good changes into a higher environment. We will implement Dynatrace deployment events (“Shift-right”) to enable automated handling of bad deployments. Lastly, we will implement automated rollbacks in case of a bad deployment in a higher-level environment that requires a stable code base, e.g. Staging, or Production! This session will be using VSTS and Azure Functions to implement the Unbreakable Delivery Pipeline.

Self-Healing with Ansible Tower

In this hands-on session, we will connect Dynatrace to different Ansible Tower Playbooks to self-heal an environment that is exposed to different conditions leading to an unstable application environment, e.g. high load resulting in bad performance, a bad deployment leading to high failure rate, or unstable infrastructure leading to crashing systems. This session will also teach you how to leverage the auto-detected problem details and root-cause information for other common self-healing scenarios.
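To illustrate the wiring, below is a hypothetical Python webhook receiver that turns a Dynatrace problem notification into an Ansible Tower job launch. The payload field names, Tower URL, and job template ID are assumptions; the actual session uses its own integration setup.

```python
# Hypothetical sketch: a small webhook receiver that turns a Dynatrace problem
# notification into an Ansible Tower job launch. Payload field names, the Tower
# URL, and the job template ID are assumptions for illustration.
import requests
from flask import Flask, request

app = Flask(__name__)

TOWER_URL = "https://tower.example.com"          # hypothetical Ansible Tower host
TOWER_TOKEN = "your-tower-token"                 # hypothetical OAuth token
REMEDIATION_TEMPLATE_ID = 42                     # hypothetical job template

@app.route("/dynatrace-problem", methods=["POST"])
def on_problem():
    problem = request.get_json(force=True)
    # Only react to newly opened problems (field names depend on how the
    # notification payload is configured in Dynatrace).
    if problem.get("State") == "OPEN":
        requests.post(
            f"{TOWER_URL}/api/v2/job_templates/{REMEDIATION_TEMPLATE_ID}/launch/",
            headers={"Authorization": f"Bearer {TOWER_TOKEN}"},
            json={"extra_vars": {"problem_id": problem.get("ProblemID"),
                                 "problem_title": problem.get("ProblemTitle")}},
        ).raise_for_status()
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```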

Self-Healing with ServiceNow

In this hands-on session, we will connect Dynatrace to different ServiceNow Workflows to self-heal an environment that is exposed to different conditions leading up to an unstable application environment, e.g. high load resulting in bad performance, a bad deployment leading to high failure rate, or unstable infrastructure leading to crashing systems. This session will also teach you how to leverage the auto-detected problem details and root-cause information for other common self-healing scenarios.

Full Stack Self-Service Diagnostics with Dynatrace

One of the challenges of any monitoring project is to enable external teams to serve themselves. Monitoring teams can get overwhelmed by the pace and complexity of demands from other internal monitoring customers. This hands-on session will focus on how to level up colleagues to become self-sufficient and deep dive into Database, CPU, Memory, Hotspot Analysis, and Log Analytics use cases.

Continuous Performance as a Service

This hands-on session is focused on how to elevate the classic Performance Center of Excellence process from a manual to a fully automated process. The session includes:
- Tagging load test requests
- Analyzing load test results in the Dynatrace UI
- Comparing data across tests and across environments
- Extracting test result metrics from Dynatrace at the end of a test through the REST API (see the sketch below)
- Automating the comparison of tests through the REST APIs
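As a small example of the REST-API-driven comparison, here is a hedged Python sketch that pulls an average service response time for a load-test window and compares it with a stored baseline. The metric ID, query parameters, units, and response shape are assumptions to verify against the timeseries API documentation.

```python
# Hypothetical sketch: pull an average response-time series for a load-test window
# through the Dynatrace timeseries API v1 and compare it with a baseline value.
# Metric ID, query parameters, units, and response shape are assumptions to verify.
import requests

DT_ENV = "https://abc12345.live.dynatrace.com"   # placeholder environment URL
HEADERS = {"Authorization": "Api-Token your-api-token"}

def avg_response_time(start_ms: int, end_ms: int) -> float:
    resp = requests.get(
        f"{DT_ENV}/api/v1/timeseries/com.dynatrace.builtin:service.responsetime",
        headers=HEADERS,
        params={
            "includeData": "true",
            "aggregationType": "AVG",
            "startTimestamp": start_ms,
            "endTimestamp": end_ms,
        },
    )
    resp.raise_for_status()
    points = resp.json()["dataResult"]["dataPoints"]
    values = [v for series in points.values() for _, v in series if v is not None]
    return sum(values) / len(values) if values else 0.0

# Compare the current test window against a stored baseline (values are illustrative).
current = avg_response_time(1_548_600_000_000, 1_548_603_600_000)
baseline = 120_000.0   # assumed unit from a previous run
print("PASS" if current <= baseline * 1.1 else "FAIL")
```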

Serverless with Dynatrace

Serverless deployment models, sometimes also referred to as Function as a Service (FaaS), let developers focus on writing code without worrying about the underlying application or infrastructure. Multiple cloud providers support serverless deployment models. This hands-on session focuses on the instrumentation and deployment of AWS Lambda and Azure Functions, in Node.js and .NET Core respectively. It also covers how serverless functions interact with traditional services and how those interactions are represented in Dynatrace.

Mobile App Monitoring with Dynatrace

Dynatrace provides facilities to automate the instrumentation of Android and iOS native mobile applications. Once instrumented, all user sessions are captured, including performance, device health, and crash analytics. This hands-on session is designed to cover the instrumentation and deployment process for applications on both platforms, as well as analysis of the captured mobile application metrics. Instrumentation techniques covered in the session will include CocoaPods for iOS and Gradle for Android.

Automate your mornings with Dynatrace Davis!

Learn to automate your DevOps workflow and streamline your mornings with Davis. In this session you will learn to use webhooks and the Davis API to do things like override default Davis functionality, integrate Davis into your tooling, automate tasks like JIRA ticket creation, and perform remediation actions. This is a technical session taught by the developers of Davis. Some development experience is recommended, especially with Node.js and/or JavaScript.
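For a taste of the kind of automation covered, below is a hypothetical sketch that turns a Davis/Dynatrace webhook payload into a JIRA ticket, shown in Python for brevity even though the session itself focuses on Node.js/JavaScript. The JIRA URL, credentials, project key, and payload field names are assumptions.

```python
# Hypothetical sketch: create a JIRA ticket from a Davis/Dynatrace webhook payload,
# shown in Python for brevity (the session itself uses Node.js/JavaScript).
# JIRA URL, project key, credentials, and payload field names are assumptions.
import requests

JIRA_URL = "https://jira.example.com"            # hypothetical JIRA instance
JIRA_AUTH = ("automation-user", "api-token")     # hypothetical credentials

def create_ticket(event: dict) -> None:
    requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        auth=JIRA_AUTH,
        json={
            "fields": {
                "project": {"key": "OPS"},       # hypothetical project key
                "summary": event.get("title", "Dynatrace problem"),
                "description": event.get("description", ""),
                "issuetype": {"name": "Task"},
            }
        },
    ).raise_for_status()

# Example: called from whatever webhook receiver handles Davis notifications.
create_ticket({"title": "High failure rate on checkout service",
               "description": "Root cause: bad deployment of version 1.2.3"})
```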

Join us at Perform 2019

Conference price: $895 | HOT Day: $800 | Get 50% OFF the conference price now with the Early Bird offer!

Early Bird offer expires November 15, 2018.

Register now