PurePerformance

The brutal truth about digital performance engineering and operations

Pure Performance

Andreas (aka Andi) Grabner and Brian Wilson are veterans of the digital performance world. Between them they have seen too many applications fail to scale and perform up to expectations. With more rapid deployment models made possible through continuous delivery, and a mentality shift sparked by DevOps, they feel it’s time to share their stories. In each episode, they and their guests discuss topics ranging from common performance problems on specific technology platforms to best practices in developing, testing, deploying and monitoring software performance and user experience. Be prepared to learn a lot about metrics.

Andi & Brian both work at Dynatrace, where they get to witness more real-world customer performance issues than they can TPS report at.

Subscribe to the PurePerformance RSS feed

Subscribe on iTunes

Tweet feedback using “@pure_dt”

Your Hosts: @grabnerandi and @emperorwilson

Make sure to also check out the PurePerformance Cafe

PurePerformance Cafes are short interviews with practitioners and thought leaders from around the globe. We found them a great way to get introduced to a new topic, or just to learn what others are doing in their day-to-day jobs to contribute to better-quality, high-performing software.

Subscribe to the PurePerformance Cafe RSS feed

Subscribe to PurePerformance Cafe on iTunes

Episode 36 - Baking Functional, Performance and Security into your DevOps Best Practices

Listen to “036 Baking Functional, Performance and Security into your DevOps Best Practices” on Spreaker.

Todd DeCapua (Twitter, LinkedIn) has been a performance evangelist for many years. His recent work and publications, which include Effective Performance Engineering as well as several pieces on outlets such as TechBeacon, introduce DevOps best practices to improve the five S-dimensions: Speed, Stability, Scalability, Security and Savings. In our discussion with Todd we focused largely on security, as it has become a more prominent topic in our industry recently: how to bake security into the delivery pipeline, and why it is such an important aspect. Automation seems to be the key, and that includes automating functional checks, performance checks and – as we said – security!

Related Links:

Episode 35 - When Multi-Threading, Micro Services and Garbage Collection Turn Sour

Listen to “035 When Multi-Threading, Micro Services and Garbage Collection Turn Sour” on Spreaker.

For our one-year anniversary episode, we go “back to basics” – or, better said, “back to problem patterns”. We picked three patterns that have come up frequently in recent “Share Your PurePath” sessions with our global user base and try to give some advice on how to identify, analyze and mitigate them:

Related Links:

Double Header! Episode 33 & 34 with guest Goranka Bjedov, Capacity Engineer at Facebook

Episode 33: Performance Engineering at Facebook with Goranka Bjedov

Listen to “033 Performance Engineering at Facebook with Goranka Bjedov” on Spreaker.

Goranka Bjedov keeps an eye on the performance of thousands of servers spread across Facebook’s data centers. Her infrastructure supports applications such as the Facebook social network, WhatsApp, Instagram and Messenger. We wanted to learn from her how to manage performance at such scale, how Facebook engineers bring new ideas to market, and what role performance and monitoring play.

Episode 34: Monitoring at Facebook & How DevOps Works with Goranka Bjedov

Listen to “034 Monitoring at Facebook & How DevOps Works with Goranka Bjedov” on Spreaker.

In this second episode with Goranka Bjedov from Facebook, we learn how Facebook monitors its infrastructure, services, applications and end users: why they built certain tooling, and how – and by whom – that data is analyzed. We then shifted gears to development, where we learned how developer onboarding works and that Goranka herself made her first production deployment within her first week of employment. Join us and learn a lot about the culture that drives Facebook engineering.

Episode 32 - Agile Performance Engineering with Rick Boyd

Listen to “032 Agile Performance Engineering with Rick Boyd” on Spreaker.

Guest Star : Rick Boyd - Application Performance Engineer at IBM Watson Health

In this second episode with Rick Boyd, we talk about how performance engineering has evolved over time – especially in an agile and DevOps setting. It’s about evolving your traditional performance testing towards injecting performance engineering into your organizational DNA: providing performance engineering as a service and making it easily accessible to developers whenever they need performance feedback. Rick gives us insight into how he is currently transforming performance engineering at IBM Watson. We also gave a couple of shout-outs to Mark Tomlinson and his take on performance in a DevOps world!

Related Links: Rick’s Github Repo

Episode 31 - Continuous Performance Testing with Rick Boyd

Listen to “031 Continuous Performance Testing with Rick Boyd” on Spreaker.

Guest Star : Rick Boyd - Application Performance Engineer at IBM Watson Health

We got Rick Boyd on the mic and elaborated on what Continuous Performance Testing is all about. We all concluded it’s about faster feedback to developers within the development cycle – integrated into your delivery pipeline, as opposed to delivering performance feedback only at the end of a release cycle. We discussed different approaches to “shifting performance left” and the benefit of continuous performance feedback!

Related Links: Rick’s Github Repo

Episode 30 - DevOps From the Frontlines – Lessons Learned

Listen to “030 DevOps From the Frontlines – Lessons Learned” on Spreaker.

Guest Star : Brett Hofer @brett_solarch - Global DevOps Practice Lead at Dynatrace

Brett Hofer has been engaged in numerous DevOps transformation projects, mainly for very large enterprises. In this episode we talk with him to learn how he assesses the status quo when he walks into an organization, what the top blockers for a successful transformation are, and what the best approaches are for implementing the recommended changes. Spoiler alert: we talked a lot about IT ops automation, building cross-functional teams, and understanding and defining responsibilities and roles.

Related Links:

If you want to learn more about what Brett is doing, check out his blogs about DevOps.

Episode 29 - What is Metrics Driven NetOps?

Listen to “029 What is Metrics Driven NetOps” on Spreaker.

Guest Star : Thomas McGonagle @mcgonagle - Field System Engineer at F5 Networks

Thomas McGonagle just had his 10-year DevOps anniversary, as it was 10 years ago that he was first exposed to Infrastructure as Code through Puppet. He is currently working at F5, helping BIG-IP network teams around the world automate the network as part of their DevOps transformations.

We met Tom at a recent DevOps meetup in Boston, which sparked this conversation on what “Metrics-Driven Continuous Delivery” could mean for network operations engineers. What are the metrics to look at? How do you engage with application teams to provision better, automated network resources? How do you bake this into the Continuous Delivery cycle?

Related Links:

Besides NetOps, Thomas is also passionate about CI/CD. He runs the largest Jenkins user group in the world out of Boston, MA. If you happen to be around, check out their next meetups and dojos.

Episode 28 - Mainframe: Must knows especially for distributed and Cloud Native folks

Listen to “028 Mainframe: Must knows especially for distributed and Cloud Native folks” on Spreaker.

Guest Star : Mike Horwitz - Senior Software Architect at Dynatrace

Mike Horwitz has been working with mainframes since the mid-80s. In this podcast he explains basic terminology and the challenges that come with interaction between the mainframe and the distributed, cloud-native world. End-to-end monitoring is a critical capability, especially when it comes to cost savings and to including mainframe components in a CI/CD/DevOps environment.

Related Links:

Mainframe Performance and Monitoring Challenges Performance Clinic

Live from Dynatrace Perform 2017 in Las Vegas

If you missed the live stream from Perform 2017 with our friends Mark Tomlinson & James Pulley of PerfBytes, you can catch up on the episodes. The episode below is our farewell live podcast, featuring Josh McKenty @jmckenty of Pivotal. Click through to the Spreaker page for more episodes from Perform 2017.

Listen to “Dynatrace Perform 2017 Wednesday Farewell” on Spreaker.

Episode 27 - Essential things to know about Kubernetes, Docker, Mesos, Swarm, Marathon

Listen to “027 Essential things to know about Kubernetes, Docker, Mesos, Swarm, Marathon” on Spreaker.

Guest Star : Eric Wright @discoposse - Principal Solutions Engineer/Technology Evangelist at Turbonomic and DiscoPosse, Host of GCOnDemand Podcast

Eric Wright (@discoposse) is a “veteran” and expert when it comes to virtualization and cloud technologies. He introduces us to the field of containers and container orchestration, the vendors in the space, the pros and cons, and the key capabilities he thinks have to be considered when evaluating the next-generation virtualization platform for your enterprise.

Related Links:

GCOnDemand Podcast

DiscoPosse

Blogs on Turbonomic

Episode 26: Love your Data and Tear Down Walls between Ops and Test

Listen to “026 Love your Data and Tear Down Walls between Ops and Test” on Spreaker.

Guest Star: Brian Chandler

How often have you deployed an application that was supposedly load tested well but then crashed in production? One of the reasons might be that you never took the time to really analyze real-life load patterns and distributions. Brian Chandler – Performance Engineer at Raymond James – has worked with their operations team to start loving application-specific performance data captured in production. They started breaking down the DevOps walls from right to left: sharing this data with testers to create more realistic load tests, and educating developers based on real-life production issues.

We hope you enjoy this one, as we learned a lot of cool techniques, metrics and dashboards that Brian uses at Raymond James. If you want to see it live, check out our webinar where he presented their approach as well: Starting Your DevOps Journey: Practical Tips for Ops

Graph displaying API call distribution in production (left) vs. the load model (right). Leverage production data feedback to create a better load model: although overall load matched production, the flawed API distribution model made the load test look like there was a performance regression.
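The comparison Brian describes – production traffic mix versus the load model – can be sketched in a few lines. This is a minimal illustration with hypothetical endpoints and traffic counts, not the tooling used at Raymond James:

```python
from collections import Counter

def distribution_gap(production_calls, load_test_calls):
    """Compare the relative frequency of each API endpoint in production
    versus a load test; returns per-endpoint gap in percentage points."""
    prod, test = Counter(production_calls), Counter(load_test_calls)
    prod_total, test_total = sum(prod.values()), sum(test.values())
    return {
        ep: round(100 * test.get(ep, 0) / test_total
                  - 100 * prod.get(ep, 0) / prod_total, 1)
        for ep in set(prod) | set(test)
    }

# Hypothetical samples: production is search-heavy,
# but the load script hammers checkout instead.
prod_sample = ["/search"] * 70 + ["/checkout"] * 30
test_sample = ["/search"] * 30 + ["/checkout"] * 70

gap = distribution_gap(prod_sample, test_sample)
# /checkout is over-represented by 40 percentage points in the test,
# /search under-represented by 40 -- same total load, wrong mix.
```

Even though both samples contain 100 calls (overall load matches), the skewed mix is exactly the kind of flaw that makes a load test report a phantom regression.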

Episode 25: Evolution of Load Testing: The Past, The Present, The Future with Daniel Freij

Listen to “025 Evolution of Load Testing: The Past, The Present, The Future with Daniel Freij” on Spreaker.

Guest Star: Daniel Freij @DanielFreij

HAPPY NEW YEAR and welcome back!

Daniel Freij (@DanielFreij) – Senior Performance Engineer and Community Manager at Apica – has run hundreds of load tests in his career. Five to ten years ago, performance engineers used the “well-known” load testing tools such as LoadRunner. But things have changed: we have seen both a shift-left and a shift-right of performance engineering, away from the classical performance and load testing teams, and tools have become easier, automatable and cloud-ready. In this session we discuss the changes of recent years, what they mean for today’s engineering teams, and what might happen 5-10 years from now. We also want to give a shout-out to a performance clinic Daniel and Andi are doing on January 25th, 2017, where they walk you through a modern cloud-based pipeline using AWS CodePipeline, Jenkins, Apica and Dynatrace. Register here.

Related Link:

ZebraTester Community

Episode 24: What the hell is “Continuous Acceleration of Performance”?

Listen to “024 What the hell is “Continuous Acceleration of Performance”?” on Spreaker.

Guest Star: Mark Tomlinson @mark_on_task

Mark Tomlinson, still a veteran and performance god, enlightens us on his concept of Continuous Acceleration of Performance. Continuous Delivery is all about getting faster feedback on code changes as code gets deployed to end users faster and in smaller increments. One aspect that is often left out is feedback on performance metrics and behavior. In the “old days”, performance feedback came very late – either in the load testing phase at the end of the project lifecycle, or even as late as when the code hit production. That can be too late, and it makes the root cause hard to fix. Listen to our conversation on how to accelerate performance-related feedback loops without getting overwhelmed by too much data!

Episode 23: Is DevOps the Killer of traditional Performance Engineering?

Listen to “023 Is DevOps the Killer of traditional Performance Engineering?” on Spreaker.

Guest Star: Mark Tomlinson @mark_on_task

Mark Tomlinson, “a veteran” in performance engineering, discusses how DevOps is a big opportunity for performance engineering – but also a threat to many who have been in the business for a long time. The big question is: are “traditional performance engineers”, using their LoadRunner or SilkPerformer at the end of the project lifecycle, ready to change? Ready to learn new tools? Ready to think about automating performance engineering into the delivery pipeline, and doing so in collaboration with the rest of the engineering team? Ready to “check your ego at the door”? Listen to our conversation, where we also discuss how these roles have changed in organizations we recently interacted with.

Double Header! Episode 21 & 22 with guest Finn Lorbeer, Senior QA Consultant at Thoughtworks

Episode 21: How Thoughtworks helped Otto.de transform into a real DevOps Culture

Listen to “021 How Thoughtworks helped Otto.de transform into a real DevOps Culture” on Spreaker.

Guest Star: Finn Lorbeer, Senior QA Consultant at Thoughtworks

Finn Lorbeer (@finnlorbeer) is a quality enthusiast working for Thoughtworks Germany. I met Finn earlier this year at the German Testing Days, where he presented the transformation story of Otto.de. He helped transform one of their 14 “line of business” teams by changing the way QA was seen by the organization. Instead of a WALL between Dev and Ops, the teams started to work as a real DevOps team. Further architectural and organizational changes ultimately allowed them to increase deployment speed from 2-3 deployments per week to up to 200 per week for the best-performing teams.

Episode 22: Latest trends in Software Feature Development: A/B Tests, Canary Releases, Feedback Loops

Listen to “022 Latest trends in Software Feature Development: A/B Tests, Canary Releases, Feedback Loops” on Spreaker.

Guest Star: Finn Lorbeer, Senior QA Consultant at Thoughtworks

In Part II with Finn Lorbeer (@finnlorbeer) from Thoughtworks, we discuss some of the new approaches to implementing new software features. How can we build the right thing the right way for our end users? Feature development should start with UX wireframes, to get feedback from end users before writing a single line of code. Feature teams then need to define and implement feedback loops to understand how features operate and are used in production. We also discuss the power of A/B testing and canary releases, as they allow teams to “experiment” with new ideas and – thanks to tight feedback loops – quickly learn how end users are accepting them.

Related Links:

Process Automation and Continuous Delivery at OTTO.de

Are we only Test Manager?

Sind wir wirklich nur Testmanagerinnen? (Are we really just test managers?)

Das Leben ist hasselhoff (Life is hasselhoff)

Three part interview with Gene Kim, co-author of the new book “The DevOps Handbook”, co-author of “The Phoenix Project”, founder of IT Revolution and host of the DevOps Enterprise Summit conferences.

Gene Kim @RealGeneKim has been promoting many of the great DevOps transformation stories from the Unicorns (innovators), but even more so from “the Horses” (early adopters). The next DOES (DevOps Enterprise Summit) is just getting underway, furthering his mission to increase DevOps adoption across the IT world.

In our 3 podcast sessions we discussed the success factors of DevOps adoption, the reasons that lead to resistance as well as how to best measure success and enforce feedback loops.

Thanks, Gene, for allowing us to be part of transforming our IT world.

Episode 18: DevOps Stories, Practices and Outlooks with Gene Kim: Part 1

Listen to “018 DevOps Stories, Practices and Outlooks with Gene Kim: Part 1” on Spreaker.

Episode 19: DevOps Stories, Practices and Outlooks with Gene Kim: Part 2

Listen to “019 DevOps Stories, Practices and Outlooks with Gene Kim: Part 2” on Spreaker.

Episode 20: DevOps Stories, Practices and Outlooks with Gene Kim: Part 3

Listen to “020 DevOps Stories, Practices and Outlooks with Gene Kim: Part 3” on Spreaker.

Related Links:

Get a free digital copy of the first 160 pages of the DevOps Handbook

Double Header! Episode 16 & 17 with guest Anita Engleder, DevOps Manager at Dynatrace

As a follow-up to our podcast with Bernd Greifeneder, CTO and founder of Dynatrace, who talked about his 2012 mission statement to the engineering team – “We go from 6-month to 2-week release cycles” – we now have Anita Engleder, DevOps Lead at Dynatrace, on the mic.

Episode 16: Transforming 6 Months Release Cycles to 1hr Code Deploys

Listen to “016 Transforming 6 Months Release Cycles to 1h Code Deploys” on Spreaker.

Guest Star: Anita Engleder - DevOps Manager at Dynatrace

Anita has been part of that transformation team. In this first episode she talks about what happened from 2012 until 2016, with the engineering team now deploying a feature release every other week, making 170 production deployment changes per day, and able to push a code change into production within an hour if necessary. She gives us insights into the processes and the tools, but more importantly into the change that happened within the organization, the people and the culture. She also tells us what she and her “DevOps” team actually contribute to the rest of the organization. Are they just another new silo? Or are they an enabler for engineering to push code faster through the pipeline?

Episode 17: Features and Feedback Loops @ Dynatrace

Listen to “017 Features and Feedback Loops @ Dynatrace” on Spreaker.

Guest Star: Anita Engleder - DevOps Manager at Dynatrace

In this second part of our podcast, Anita gives us more insight into how new features actually get developed, how their success is measured, and how to ensure the pipeline keeps up with the ever-increasing number of builds pushed through it. We learn more about day-to-day life at Dynatrace engineering, and especially about the “lifecycle of a feature, its feedback loop, and what the stakeholders are doing to make it a success”.

Related Links:

Dynatrace UFO

Episode 15 - Leading the APM Market from Enterprise into Cloud Native

Listen to “015 Leading the APM Market from Enterprise into Cloud Native” on Spreaker.

Guest Star : Bernd Greifeneder (@berndgreif) - Founder and CTO of Dynatrace

We got to talk with Bernd Greifeneder, founder and CTO of Dynatrace, who recently gave a talk, “From 0 to NoOps in 80 Days”, explaining the digital transformation story of Dynatrace – the product as well as the company.

The transformation started in 2012, when Dynatrace was still shipping two major releases of its Dynatrace AppMon & UEM product per year. The incubation of the startup Ruxit within Dynatrace allowed engineering, marketing and sales to come up with new ways and ideas that enable continuous innovation. In 2016 the incubated team was brought back into Dynatrace to accelerate the go-to-market of all these innovations. A new version of the Dynatrace SaaS and Managed offering is now released every 2 weeks, with 170 production updates per day. Many of these practices were also applied to the other product lines and engineering teams, which boosted output and raised the quality of these enterprise products.

Double Header! Episode 13 & 14 with guest Pat Meenan, Chrome engineer at Google and creator of WebPageTest.org

Episode 13: Pat Meenan (Google and WebPageTest) on Correlating Performance with Bounce Rates

Pat Meenan (@patmeenan) is a veteran when it comes to Web Performance Optimization. Besides being the creator of WebPageTest.org he has also done a lot of work recently on the Google Chrome team to make the browser better and faster.

During his recent Velocity presentation, “Using Machine Learning to determine drivers for bounce and conversion”, he presented some very controversial findings about what really impacts end-user happiness: it was not rendering time but rather DOM load time that correlated with conversion and bounce rates. In this session we dig a bit deeper into which metrics you can capture from your website and present to your business side as an argument for investing in faster websites. Find out which metric you really need to optimize in order to “move the needle”.
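Pat’s analysis used machine learning over real traffic, but the underlying question – does a metric move with bounce rate? – starts with simple correlation. Here is a hedged sketch with entirely made-up per-session samples, just to show the mechanics:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-session samples: DOM load time in seconds,
# and whether the session bounced (1) or not (0).
dom_load = [1.2, 1.5, 2.0, 3.5, 4.1, 5.0]
bounced  = [0,   0,   0,   1,   1,   1]

r = pearson(dom_load, bounced)
# strongly positive here: slower DOM load goes with more bounces
```

Correlation on a toy sample proves nothing by itself, of course – which is exactly why Pat reached for machine learning over large real-world datasets.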

Related Links:

Using machine learning to determine drivers of bounce and conversion - Velocity 2016

WebPagetest

WPO-Foundation Github repository for machine learning

Are there new web performance rules since Steve Souders started the WPO movement about 10 years ago? Do we still optimize for round trips, or does HTTP/2 change the game? How do we deal with the “mobile only” users we find in emerging geographies? How does Google itself optimize its search pages, and what can we learn from it? In this session we cover a lot of the presentation Pat Meenan (@patmeenan) gave at Velocity this year.

Related Links:

Scaling frontend performance - Velocity 2016

WebPagetest

Google AMP Project

Google AMP Github Repository

Episode 12 - Automating Performance into the Capital One Delivery Pipeline

Guest Star : Adam Auerbach (@Bugman31) - Senior Director of Technology at Capital One

Adam Auerbach (@Bugman31) has helped Capital One transform its development and testing practices for the digital delivery age. Practicing ATDD and DevOps allows them to deploy high-quality software continuously. One of their challenges has been the rather slow performance testing stage in their pipeline. Breaking performance tests up into smaller units, using Docker to let developers run concurrency and scalability tests early on, and automating these tests into the pipeline are some of the actions they have taken to level up their performance engineering practices. Listen to this podcast to learn how Capital One pushes code through the pipeline, what they have already achieved in their transformation, and where the road is heading.

Related Links:

Hygea Delivery Pipeline Dashboard

Capital One Labs

Capital One DevExchange

Episode 11 - Demystifying Database Performance Optimizations

Guest Star 1: Sonja Chevre (@SonjaChevre) - Product Manager of Database & Test Automation at Dynatrace

Guest Star 2: Harald Zeitlhofer (@HZeitlhofer) - Dynatrace Innovation Lab

Do you speak SQL? Do you know what an execution plan is? Are you aware that large numbers of unique queries will impact database server CPU as well as the efficiency of the execution plan and data caches? These are all learnings from this episode, in which Sonja Chevre (@SonjaChevre) and Harald Zeitlhofer (@HZeitlhofer) – both database experts at Dynatrace – point out database performance hotspots and optimizations that many of us have probably never heard about.
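The “many unique queries” hotspot usually comes from building SQL text with string concatenation instead of bind variables: every distinct literal produces a distinct statement text that the server must parse and plan separately. A minimal sketch, using Python’s sqlite3 as a stand-in for a real database server and an invented users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "ada"), (2, "grace"), (3, "linus")])

user_ids = [1, 2, 3]

# Anti-pattern: string-built SQL. Every statement text is unique, so the
# server parses and plans each one separately (and it invites injection).
literal_sql = {f"SELECT name FROM users WHERE id = {uid}" for uid in user_ids}

# Bind variables: one statement text, one cacheable execution plan.
bound_sql = "SELECT name FROM users WHERE id = ?"
names = [conn.execute(bound_sql, (uid,)).fetchone()[0] for uid in user_ids]

print(len(literal_sql), "unique statement texts vs. 1 parameterized")
print(names)
```

Three lookups produce three distinct statement texts in the literal variant but a single reusable one with bind variables; scale that to millions of requests and the difference shows up directly in server CPU and plan-cache efficiency.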

Related Links:
Watch the Online Performance Clinic - Database Diagnostics Use Cases with Dynatrace

Episode 10 - RESToring the work/life balance with Matt Eisengruber

Guest Star: Matt Eisengruber - Dynatrace Guardian

Are you still exporting load testing reports to Excel to compare different runs manually? Matt Eisengruber – Guardian at Dynatrace – walks us through the life-changing transformation story of one of his former clients, a business analyst who used to spend an entire business day analyzing LoadRunner results.

Through automation, they managed to have the results ready when she walks into the office in the morning – giving her more time to do “real” business analyst work instead of manual number crunching. Matt shares some insights into exactly what they did to automate Dynatrace load test comparison, how they created the reports, and which metrics they ended up looking at.
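The core of any automated run comparison is simple: take per-transaction timings from a baseline run and the current run, and flag what regressed beyond a threshold. This is a hedged sketch with invented transaction names, timings and threshold – not the Dynatrace comparison feature itself:

```python
def compare_runs(baseline, current, threshold_pct=15.0):
    """Flag transactions whose average response time regressed by more
    than threshold_pct versus the baseline run."""
    regressions = {}
    for name, base_ms in baseline.items():
        cur_ms = current.get(name)
        if cur_ms is None:
            continue  # transaction missing from the new run; skip here
        change = 100 * (cur_ms - base_ms) / base_ms
        if change > threshold_pct:
            regressions[name] = round(change, 1)
    return regressions

# Hypothetical average response times (ms) per transaction.
baseline = {"login": 180.0, "search": 420.0, "checkout": 650.0}
current  = {"login": 185.0, "search": 610.0, "checkout": 640.0}

print(compare_runs(baseline, current))  # {'search': 45.2}
```

Run nightly against the latest results, a report like this is exactly what lets the analyst start her morning with the regressions already highlighted instead of a day of Excel.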

KNOW-PRIZE Tweet answer using “#pureperformance @dynatrace”

Question: Real Genius premiered in theatres 36 years ago as of August 7th. It featured characters who loved experimenting and learning. Though they were portrayed in the movie as nerds – people who were very smart but didn’t fit in with society very well – today they’d be Silicon Valley celebrities. Today’s question is: what is the connection between Real Genius and Napoleon? Answer: Winner:

PurePerformance Guest Host Series 01: Alois Reitbauer presents From Monolith to Microservices at Prep Sportswear

Guest Host: Alois Reitbauer @AloisReitbauer

Guest Star: Mike Jones, Director of Technology at Prep Sportswear (LinkedIn)

Alois Reitbauer (@AloisReitbauer) guest hosts. Mike Jones takes us on a journey of how his team moved a monolithic application, built by a remote team, to a microservice architecture. Learn how they manage a couple of million lines of code with only 5 people while improving performance and availability. Mike also shares lessons learned along the way, and strategies for making the transition to microservices while keeping the lights on for day-to-day business.

Episode 9 - Proactive Performance Engineering in ASP.NET with Scott Stocker

Guest Star: Scott Stocker @sestocker

Scott Stocker (@sestocker), Solution Architect at Perficient, tells us about a recent load testing engagement on an ASP.NET app running on Sitecore. It turns out that even apps on this popular Microsoft platform suffer from the same architectural and implementation anti-patterns we see everywhere else. Bypassing the caching layer through FastQuery resulted in excessive SQL, which caused the system not just to fail to scale, but to crumble. Scott tells us how they identified the issue, and what his approach as an architect is to proactively identify the most common performance and scalability problems.
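The damage done by bypassing a caching layer is easiest to see by counting what reaches the database. A small, platform-agnostic sketch (in Python rather than ASP.NET, with an invented lookup_item standing in for a Sitecore content query) shows how many backend calls a working cache absorbs:

```python
from functools import lru_cache

calls = {"db": 0}

@lru_cache(maxsize=1024)
def lookup_item(item_id):
    """Stand-in for an expensive content/database query; the cache
    layer absorbs repeated lookups for the same item."""
    calls["db"] += 1
    return {"id": item_id, "title": f"item-{item_id}"}

# A page that renders the same 3 items 100 times each over many requests:
for _ in range(100):
    for item_id in (1, 2, 3):
        lookup_item(item_id)

print(calls["db"])  # 3 backend queries -- bypassing the cache would mean 300
```

With the cache in place, 300 logical lookups cost 3 real queries; a FastQuery-style bypass turns every one of them into SQL, which is exactly the “excessive SQL” pattern that made Scott’s system crumble under load.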

Related Links:
Diagnosing Sitecore Performance Problems

Episode 8 - A Cloudy Story: Why You Should Worry About Performance in PaaS vs IaaS or Containers

Guest Star: Mike Villiger @mikevilliger

The initial idea of the cloud – IaaS – has long since become a commodity. Containers are the current hype, but they still require you to take care of correctly configuring the container that will run your code.

Mike Villiger (@mikevilliger) – a veteran and active member of the cloud community – explains why it is really PaaS that should be at the top of your list, and why monitoring performance, architecture and resource consumption is more important than ever if your PaaS adventure is not to fail.

KNOW-PRIZE Tweet answer using “#pureperformance @dynatrace”

Question: How many gallons of beer, on average per month, has the Dynatrace Waltham office consumed in the last 12 months? Answer: Winner:

Related Links:
The incestuous relations among containers orchestration tools
Online Perf Clinic - Metrics-Driven Continuous Delivery with Dynatrace Test Automation

Double Header! Episode 6 & 7 with guest Richard Dominguez, Developer in Operations for Prep Sportswear

Episode 6: How to sell performance to Marketing with Richard Dominguez

Have you ever wondered how to argue with a marketer about not releasing a new feature or running a campaign? Or, on the contrary: how can you show a marketer that performance engineering and monitoring are as critical to the success of a campaign as the marketing campaign itself? Richard Dominguez, Developer in Operations at Prep Sportswear, enlightens us on how his DevOps team cooperates with marketing to build a better shared understanding of business and technical goals!

KNOW-PRIZE Tweet answer using “#pureperformance @dynatrace”

Question: What computer performance related action figure does Brian regret not getting when he had the chance? Answer: Winner:

Related Link:
Using heat maps to obtain actionable application-user insights

Episode 7: Attack of the Bots & Spiders from Mars with Richard Dominguez

In Part II, Richard Dominguez, Developer in Operations at Prep Sportswear, explains the significance of understanding and dealing with bot and spider traffic on their e-commerce site. He explains why they route search bot traffic to dedicated servers, how to better serve good bots, and how to block the bad ones. Most importantly, we learn about the many metrics he provides to the DevOps and marketing teams to run a better online experience!

Episode 5 - Top .NET Performance Problems

Microsoft does a good job of shielding us from the complexity of what goes on in the CLR. To date, Microsoft has taken care of optimizing the garbage collector and tries to ship good defaults for thread and connection pool sizes. The problem, though, is that even the best optimizations from Microsoft are not good enough if your application suffers from poor architectural decisions or simply bad coding.

Listen to this podcast to learn about the top problems your .NET application may suffer from. We walk through many examples and discuss how you can do a quick sanity check on your own code to detect bad database access patterns, memory leaks, thread contention, or simply bad code that results in high CPU usage, synchronization issues, or even crashes!

KNOW-PRIZE Tweet answer using “#pureperformance @dynatrace”

Question: What is the first programming language Andi Grabner used? Answer: Winner:

Related Links:
C# Performance Mistakes – Top Problems Solved in December

Episode 4 - Top Java Performance Problems

The Java runtime has become so fast that it shouldn’t be the first thing to blame when looking at performance problems. We agree: the runtime is great, and JIT compilation and garbage collection are amazing. But bad code on a fast runtime is still bad code. And it is not only your code, but the 80-90% of code you do not control, such as Hibernate, Spring, app-server-specific implementations, or the Java core libraries.

Listen to this podcast to learn about the top Java performance problems we have seen in recent months. Learn how to detect bad database access patterns, memory leaks, thread contention and – well – simply bad code resulting in high CPU utilization, synchronization issues, or even crashes!
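The most common “bad database access pattern” we discuss is N+1: one query to fetch a list, then one more query per row. The pattern is language-agnostic, so here it is sketched in Python for brevity, with QueryCounter as an invented stand-in for what a monitoring probe would count per request:

```python
class QueryCounter:
    """Minimal stand-in for a monitoring probe: counts queries per
    request, so an N+1 access pattern shows up as query count ~ rows."""
    def __init__(self):
        self.count = 0

    def execute(self, sql):
        self.count += 1
        return []  # results don't matter for the illustration

def load_orders_n_plus_one(db, order_ids):
    db.execute("SELECT id FROM orders")  # 1 query for the list...
    for oid in order_ids:
        # ...plus 1 query per row -- what an ORM's lazy loading often does
        db.execute(f"SELECT * FROM lines WHERE order_id = {oid}")

db = QueryCounter()
load_orders_n_plus_one(db, range(50))
print(db.count)  # 51 queries for one logical page view -- the N+1 smell
```

Whether it is Hibernate lazy loading in Java or hand-rolled loops anywhere else, a query count that grows with the number of rows displayed is the sanity check to run before blaming the runtime.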

KNOW-PRIZE Tweet answer using “#pureperformance @dynatrace”

Question: What is the first computer co-host Brian ever used? Answer: Winner:

Related Links:
Top Tomcat Performance Problems: Database, Micro-Services and Frameworks
Top Tomcat Performance Problems Part 2: Bad Coding, Inefficient Logging & Exceptions
Tomcat Performance Problems Part 3: Exceptions, Pools, Queues, Threads & Memory Leaks

Episode 3 - Performance Testing in Continuous Delivery / DevOps

How can you performance test an application when a new build arrives with every code check-in? Is performance testing as we know it still relevant in a DevOps world, or do we just monitor performance in production and fix things as problems come up?

Continuous Delivery talks about breaking an application into smaller components that can be tested in isolation and deployed independently. Performance testing is more relevant than ever in a world where we deploy more frequently – however, the approach to executing these tests has to change. Instead of executing hours-long performance tests against the whole application, we also need to break these tests down into smaller units. These tests need to be executed automatically with every build, providing fast feedback on whether a code change potentially jeopardizes performance and scalability.

Listen to this podcast to get some new insights and ideas on how to integrate your performance tests into your Continuous Delivery process. We discuss tips & tricks we have seen from engineering teams that made the transition to a more “agile/devopsy” way of executing tests.
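The “small performance test per build” idea can be reduced to a build-gating check: time a focused operation a number of times and fail the build if the median exceeds a budget. A minimal sketch – perf_gate, the budget value and the unit under test are all invented for illustration, not any particular tool’s API:

```python
import time

def perf_gate(fn, budget_ms, runs=20):
    """Run a small, focused performance check; return (passed, median_ms).
    Using the median rather than a single run damps timing noise."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    median = sorted(samples)[len(samples) // 2]
    return median <= budget_ms, round(median, 2)

# Hypothetical unit under test: a tiny in-process operation with a
# deliberately generous 50 ms budget.
ok, median_ms = perf_gate(lambda: sum(range(10_000)), budget_ms=50)
assert ok, f"performance budget exceeded: {median_ms} ms"
```

Wired into the build (e.g. as a failing test), a check like this gives the per-check-in performance feedback the episode argues for – without waiting for an hours-long end-of-cycle load test.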

Episode 2 - What is a load vs performance vs stress test?

Guest Star: Mark Tomlinson @mark_on_task

Have you ever wondered what other people mean when they talk about a performance, load, or stress test? What about a penetration test? There are many definitions floating around, and things sometimes get confused. Listen to this podcast and let us clarify things by giving you our opinion on the different types of tests necessary when testing an application. In the end, you can make up your own mind about which term to use.

Episode 1 - Performance 101: Key Metrics beyond Response Time and Throughput

If you are running load tests, it is not enough to just look at response time and throughput. As a performance engineer you also have to look at the key components that impact performance: CPU, memory, network and disk utilization should be obvious; connection pools (database, web services), thread pools and message queues have to be part of your monitoring as well. On top of that, you want to understand how the individual components you test (frontend server, backend services, database, middleware, …) communicate with each other. You need to identify communication bottlenecks caused by too-chatty components (how many calls between tiers) and too-heavyweight conversations (bandwidth requirements).

Listen to this podcast and learn which metrics you should look at while running your load tests. As a performance engineer you should not only report that the app is slow under a certain load, but also give recommendations on which components are to blame.
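One concrete habit behind “key metrics beyond response time”: report percentiles, not just averages, because a handful of outliers can make an average lie in either direction. A small illustration with made-up response-time samples:

```python
def percentile(samples, pct):
    """Nearest-rank percentile -- report p50/p90/p99, not just the mean."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

# Hypothetical response times (ms) from a load test: nine fast requests
# and one 900 ms outlier (say, a GC pause or a cold connection pool).
response_ms = [120, 130, 125, 140, 135, 900, 128, 132, 127, 131]

p50 = percentile(response_ms, 50)            # 130 -- the typical request
p90 = percentile(response_ms, 90)            # 140 -- still healthy
avg = sum(response_ms) / len(response_ms)    # ~207 -- dragged up by one outlier
```

Here the average alone would suggest every request takes ~207 ms, while the percentiles show a healthy system with one outlier worth investigating separately – which is exactly the kind of component-level diagnosis the episode asks performance engineers to deliver.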
