What is Azure Functions?
Azure Functions is a serverless compute service by Microsoft that can run code in response to predetermined events or conditions (triggers), such as an order arriving on an IoT system, or a specific queue receiving a new message. It automatically manages all the computing resources those processes require.
Although Azure Functions can offer major benefits to organizations adopting serverless computing, the increasing reliance on multiple cloud environments, open source technologies, and containerized microservices adds complexity and can create an observability problem for the DevOps teams tasked with monitoring application performance and end-user experience.
The growth of Azure cloud computing
Azure is a large and growing cloud computing ecosystem that empowers its users to access databases, launch virtual servers, create websites or mobile applications, run a Kubernetes cluster, and train machine learning models, to name a few examples. Numerous serverless options let you build almost anything in the cloud, and offerings on Azure now match AWS nearly one-to-one with dedicated and on-demand resources. The platform reserves a base number of virtual machines and automatically adds instances as needed during periods of heavy use.
With so many features, Azure continues to gain popularity among corporations and government agencies. As early as 2015, the Canadian Broadcasting Corporation used Azure App Services, the managed platform for building web apps, to scale its real-time election-night website to handle requests from millions of users. In 2019, the US Department of Defense chose Azure for its $10 billion cloud computing project, JEDI.
While AWS and Azure promote the same capabilities — and perhaps a similar ultimate vision — their architectures are not equivalent. Managing your applications requires knowing the intricacies of your chosen provider.
How Azure Functions works
The Azure Functions serverless platform enables teams to build event-driven apps that run code when triggered by preset system conditions or events. The platform automatically manages all the computing resources required in those processes, freeing up DevOps teams to focus on developing and delivering features and functions.
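To make the trigger model concrete, here is a minimal Python sketch of the event-driven pattern described above: handlers are registered against trigger types, and the runtime invokes them when a matching event arrives. The `FunctionHost` class and its methods are hypothetical stand-ins for illustration; a real function app declares its triggers and bindings through the Azure Functions SDK rather than code like this.

```python
# Toy simulation of the event-driven model behind Azure Functions.
# All names here (FunctionHost, on_trigger, fire) are hypothetical;
# the real platform wires triggers via declared bindings.
from typing import Callable, Dict, List

class FunctionHost:
    """Stand-in for the serverless runtime that routes events to functions."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], str]]] = {}

    def on_trigger(self, trigger: str):
        """Decorator that binds a function to a trigger type (e.g. a queue)."""
        def register(fn: Callable[[dict], str]):
            self._handlers.setdefault(trigger, []).append(fn)
            return fn
        return register

    def fire(self, trigger: str, event: dict) -> List[str]:
        """Simulate the platform delivering an event to every bound function."""
        return [fn(event) for fn in self._handlers.get(trigger, [])]

host = FunctionHost()

@host.on_trigger("queue")
def process_order(event: dict) -> str:
    # In a real function app, only this business logic is yours to write;
    # scaling and resource management are handled by the platform.
    return f"processed order {event['order_id']}"

results = host.fire("queue", {"order_id": 42})
print(results)  # ['processed order 42']
```

The key point of the pattern: the function body contains only business logic, while everything around it (listening for events, scaling out, tearing down) belongs to the platform.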
Making the best use of Azure Functions
Like AWS Lambda, Azure Functions works at the edge of the cloud and is well-suited for smaller apps that can work independently of other websites. Some common tasks it performs well include processing orders and IoT data, sending emails, messages, and notifications, and scheduling tasks, such as starting backups and database cleanups.
You can create web apps and APIs on Azure App Service in a function app, using functions for routine tasks, such as setting up application users or querying a database. Microsoft also lets you deploy serverless code within many of its individual product-based clouds; Azure Functions can run on Azure IoT Edge, for instance, to process requests on edge devices.
When not to use Azure Functions
Although it works well for routine tasks, Azure Functions is not well suited to computationally intensive workloads, as running constant CPU-heavy processes in the cloud can get very expensive. Since cost will surely be a consideration when an organization chooses how best to apply Azure Functions, it's important to closely examine what you plan to use it for and whether the convenience warrants the price tag.
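A quick back-of-the-envelope calculation shows why CPU-heavy workloads get expensive: on a consumption plan, cost scales with execution time multiplied by memory. The rates below are illustrative assumptions, not current Azure prices (check the official pricing page before relying on them), but the shape of the result holds.

```python
# Rough cost sketch for a consumption-style serverless plan.
# Both rates are ASSUMED for illustration; real Azure pricing differs
# by region and includes monthly free grants.
GB_SECOND_RATE = 0.000016               # assumed $ per GB-second
PER_EXECUTION_RATE = 0.20 / 1_000_000   # assumed $ per execution

def monthly_cost(executions: int, seconds_each: float, memory_gb: float) -> float:
    """Estimated monthly bill: GB-seconds consumed plus a per-execution fee."""
    gb_seconds = executions * seconds_each * memory_gb
    return gb_seconds * GB_SECOND_RATE + executions * PER_EXECUTION_RATE

# Compare a 100 ms routine task with a 60 s CPU-heavy job,
# each using 1 GB of memory and running 1 million times a month.
light = monthly_cost(1_000_000, 0.1, 1.0)
heavy = monthly_cost(1_000_000, 60.0, 1.0)
print(f"light: ${light:.2f}/mo, heavy: ${heavy:.2f}/mo")
```

Under these assumed rates, the long-running job costs hundreds of times more than the quick one at the same invocation volume, which is why sustained CPU-bound work usually belongs on virtual machines instead.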
Azure Functions is also not recommended for infrequent, time-sensitive tasks. When a container cold starts (spins up for the first time to handle a new request), there is a slight delay beyond the normal response time. At scale, these small delays can add up to seconds that are perceptible to internal IT teams and end users, ultimately impacting productivity and business outcomes.
Multiple large dependencies between your function and other services only make matters worse. Consider using virtual machines or specialized frameworks for these types of tasks.
The observability problem of the serverless approach
Beyond compute cost and cold start delays, there is another caveat for teams looking into this technology: monitoring functions within Azure is limited to Azure apps and any on-premises services they immediately interact with. You can enable and search through logs per function and per service, but working through alerts and bugs requires manually navigating the logs for each.
Azure Monitor functions do not extend to distributed traces, which include start-to-finish records of all the events that occur along the path of a given request inside and outside of the Azure ecosystem and its immediate supporting infrastructure.
Distributed tracing enables teams to map and understand dependencies throughout their software stack. Since modern IT environments employ many technologies, including other cloud environments and open-source components, teams have to rely on multiple monitoring solutions, which adds complexity and creates blind spots for DevOps and SRE teams.
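The mechanism that stitches a distributed trace together is context propagation: every hop forwards a trace identifier so that spans recorded by different services can be joined into one end-to-end record. The sketch below shows the idea using the W3C Trace Context `traceparent` header format; it is a simplified illustration, and real systems use an instrumentation SDK such as OpenTelemetry rather than hand-rolled helpers like these.

```python
# Simplified illustration of W3C Trace Context propagation.
# A traceparent header looks like: 00-<32 hex trace id>-<16 hex span id>-<flags>.
# The helper names below are hypothetical, for illustration only.
import secrets

def new_traceparent() -> str:
    """Start a new trace: fresh trace ID and root span ID."""
    trace_id = secrets.token_hex(16)  # 16 random bytes -> 32 hex chars
    span_id = secrets.token_hex(8)    # 8 random bytes -> 16 hex chars
    return f"00-{trace_id}-{span_id}-01"

def child_traceparent(parent: str) -> str:
    """Propagate to a downstream call: keep the trace ID, mint a new span ID."""
    version, trace_id, _parent_span, flags = parent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

root = new_traceparent()          # e.g. set by the first service a request hits
downstream = child_traceparent(root)  # forwarded on the outgoing HTTP call

# Both headers share one trace ID, so a tracing backend can join the spans:
assert root.split("-")[1] == downstream.split("-")[1]
```

When any service in the chain drops this header (as happens at the boundary of a siloed monitoring tool), the trace breaks at that hop, which is exactly the blind spot described above.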
Today’s cloud-native applications rely on many different microservices that perform processes like pushing event data to an analytics service, establishing database connections, sending push notifications or other messages — the list goes on. Media streaming giant Netflix, for example, estimates that it uses as many as 700 individual APIs in its microservices architecture.
Maintaining end-to-end observability of your entire software stack is absolutely critical to identifying and resolving performance issues as (or even before) they occur.
How to get the most out of Azure Functions without sacrificing observability
Azure empowers organizations to create dynamic serverless applications built on functions that run on the edge of the cloud without having to think about managing the required infrastructure.
Still, optimizing functions, monitoring performance, and catching errors throughout the full application workflow is impossible when you rely only on Azure’s internal logging services and other disparate insights. A unified view of your organization’s full cloud stack is critical for understanding dependencies between all microservices at play throughout increasingly complex multicloud environments.
To get the most out of Azure Functions and the systems they interact with, teams need end-to-end observability that uses automation and AI assistance to extend beyond metrics, logs, and traces and include data from open-source initiatives and additional context from the end-user perspective.
The Dynatrace Software Intelligence Platform provides this automatic and intelligent observability and supports both the 2.x and 3.x runtime versions of Azure Functions to give teams deep visibility into any code running in Azure Functions. This visibility extends to the processes up- and downstream from Azure Functions, for full context of the user experience and business outcomes.
Teams also need to see why a request is slow before users are impacted. Dynatrace's automatic service flows enable teams to instantly see and understand application transactions end to end by examining the full sequence of service calls made when an Azure function is triggered. The Dynatrace Davis AI engine monitors triggers, requests, errors, and cold starts in their full context, regardless of programming language, and automatically flags issues, giving DevOps teams a clear path to action: remediating problems, setting up automation, and optimizing workflows.
With a single view across all services that leverage and interact with Azure Functions applications, teams can find out which functions experience the highest failure rate or processing time, and which are executed the most. This enables teams to focus on providing smooth, efficient applications that benefit the end-user experience and improve business outcomes.
To learn more about how the Dynatrace platform can help your organization get the most out of its serverless computing initiative, read about Azure Functions monitoring.