
From monitoring to software intelligence for Flask applications

Python is the fastest-growing major programming language today, with web development and data science as its two main application areas. The two most popular web frameworks among Python developers are Django and Flask.

When comparing Django and Flask, developers like to highlight that Django provides an all-inclusive experience: you get an admin panel, database interfaces, an ORM, and a directory structure for your applications and projects out of the box. This is great when working on a straightforward application but can become too heavyweight when developing microservices. Flask, by contrast, is described as a microframework for building web applications; it’s designed to be simpler and more flexible, and to allow for more fine-grained control during development. This is why Flask has become the framework of choice for microservices.

In this blog post, I’ll cover how to implement distributed tracing in Flask applications with the OneAgent SDK. The OneAgent SDK enables you to extend Dynatrace, including our AI-based root cause analysis, Smartscape, and service flow, to monitor Python-based applications.

Full-stack monitoring of a Flask application in Dynatrace

Implementing distributed tracing for Flask applications

Dynatrace OneAgent allows you to track each request from end to end, up to the individual database statements. This enables Davis®, the Dynatrace AI causation engine, to automatically identify the root causes of detected problems and analyze transactions using powerful analysis features like service flow and the service-level backtrace.

The OneAgent SDK is an extension of OneAgent. Whenever OneAgent can’t instrument your application automatically, as is currently the case with Python, you can use the OneAgent SDK to manually instrument your code. Let’s walk through how.

All the sample code, including instrumentation, is available on GitHub. Here we’ll cover:

- Tracing incoming web requests
- Tracing database requests
- Tracing custom services
- Tracing outgoing web requests
- Defining custom request attributes

Note that this sample code is considered educational and not supported by Dynatrace.

Flaskr: A simple blog application

The application I want to monitor is called Flaskr. This is a simple blog application—open source, and part of the online tutorial to learn how to develop with the Flask framework.

Flaskr: A simple blog application using Flask

Adding the OneAgent SDK to your project

The OneAgent SDK is available as a package called oneagent-sdk in the PyPI repository. You can directly install the latest version in your environment using the following command:

python -m pip install --upgrade oneagent-sdk

Or if you’re already using a setup script, as is the case with my sample application, you can simply add the oneagent-sdk package in the list of dependencies needed to run your project:

# ...
install_requires=["flask", "oneagent-sdk"],
extras_require={"test": ["pytest", "coverage"]},

Dynatrace OneAgent needs to be deployed on the host running the Python application. In case of issues, take a look at our documentation.

Tracing incoming web requests

The most convenient way to trace incoming web requests is with a WSGI middleware component. This is a Python application that handles requests by delegating to other WSGI applications. A middleware component can perform functions such as routing requests to different application objects, load balancing, and content postprocessing.
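As an SDK-independent illustration (the class and parameter names below are hypothetical, not part of the OneAgent SDK), the delegation pattern such a middleware follows can be sketched like this:

```python
import contextlib

class TracingMiddleware:
    ''' Minimal WSGI middleware sketch (for illustration only): wraps
        another WSGI application and opens a trace around each request.
        A real tracing middleware would start an SDK tracer here instead
        of the no-op context manager. '''

    def __init__(self, app, tracer=None):
        self.app = app
        # Default to a no-op tracer so the middleware stays transparent.
        self.tracer = tracer or (lambda environ: contextlib.nullcontext())

    def __call__(self, environ, start_response):
        # Open the trace, then delegate to the wrapped WSGI application.
        with self.tracer(environ):
            return self.app(environ, start_response)
```

Wrapping `app.wsgi_app` with such a component traces every request without touching any view function.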

To easily trace all incoming web requests, our team (thanks Christian Neumüller!) implemented a sample middleware component (see dynatrace_middleware.py). The middleware uses the SDK function trace_incoming_web_request.

To add the middleware to the application, the following lines of code were added to the file __init__.py:

app.wsgi_app = DynatraceWSGIMiddleware(
    app.wsgi_app, app.name)

Once the code change is deployed, Dynatrace starts monitoring your application, including its response time, failure rate, and throughput:

Flaskr application monitoring details

Dynatrace automatically shows you the top requests that a service processed during the selected analysis time frame. With this approach, you can identify unexpected requests and see whether a specific request received more load than usual.

After creating the clean URL rule, all delete and update requests are grouped together

In my Flask application, the requests to update and delete an article each contain the article ID in the URL (for example, http://www.server.com/132/update). To group these requests together, I have defined a clean URL rule for the Flaskr service by selecting Web request naming rules.

Create clean URL rule

After that, all the update and delete requests are grouped together, which allows us to track the performance of those requests over time.
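The effect of such a rule can be illustrated in plain Python (this regex is only a local illustration; the actual grouping is configured in the Dynatrace UI, not in code):

```python
import re

def clean_url(path):
    ''' Collapse numeric path segments so that, e.g., /132/update and
        /7/update are reported under one logical request name. '''
    return re.sub(r'/\d+/', '/<id>/', path)
```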

Tracing database requests

The sample application uses a SQLite database to store users and posts. SQLite is convenient because it doesn’t require setting up a separate database server and is built into Python. However, if concurrent requests try to write to the database at the same time, they will slow down as each write happens sequentially. SQLite is good enough for small applications. Once an application becomes big, you may want to switch to a different database.
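For reference, the parameterized-query style Flaskr uses with Python’s bundled sqlite3 module looks like this (the in-memory database and schema here are illustrative only):

```python
import sqlite3

# Illustrative in-memory database standing in for flaskr.sqlite.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT)')
conn.execute("INSERT INTO post (title) VALUES ('Hello, Flaskr')")

# The ? placeholder binds parameters safely, without string interpolation.
row = conn.execute('SELECT title FROM post WHERE id = ?', (1,)).fetchone()
```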

To trace database requests, a database info object has to be created. This is done once in the dynatrace.py helper file, as the database meta-information stays the same for each request we are going to trace:

def getdbinfo():
    ''' Get Database Info '''
    dbinfo = oneagent.get_sdk().create_database_info(
        'flaskr.sqlite', oneagent.sdk.DatabaseVendor.SQLITE,
        oneagent.sdk.Channel(oneagent.sdk.ChannelType.IN_PROCESS))
    return dbinfo

We then use the trace_sql_database_request method for each database request we want to trace, for example, retrieving a specific post by its id in the file blog.py:

query = ('SELECT p.id, title, body, created, author_id, username'
         ' FROM post p JOIN user u ON p.author_id = u.id'
         ' WHERE p.id = ?')
dbinfo = getdbinfo()
with dbinfo:
    with getsdk().trace_sql_database_request(dbinfo, query):
        post = get_db().execute(query, (pid,)).fetchone()

Once the code change has been deployed, Dynatrace starts monitoring all the database calls, including response time, failure rate, and throughput:

Database monitoring, including response time, failure rate, and throughput

Response time, failure rate, and throughput, split by database statements

Tracing custom services

Looking at the service flow of my application, I can now clearly see my main application flaskr making calls to the embedded database.

Flask application calls to embedded database

I have added a new functionality in my application to inform editors that they should review blog posts after they have been created. I would like to monitor that functionality as a separate service. This is done by defining a custom service using the function trace_custom_service in blog.py.

def inform_editors():
    ''' Inform Editors '''
    sdk = getsdk()
    role = get_user_role()
    if role == "author":
        with sdk.trace_custom_service('review', 'BlogReview'):
            send_notification()

Once the code change has been deployed, BlogReview appears in the service flow and is monitored by Dynatrace as a separate service:

Custom service tracing

Tracing outgoing web requests

The BlogReview service makes a web request to a Java server to simulate sending a notification to the editors. To trace the outgoing request, I’m using the function trace_outgoing_web_request in blog.py. To link the trace on the receiving side, it’s important to make sure that the Dynatrace tag is added to the request headers.

def send_notification():
    ''' Send Notification '''
    url = 'http://localhost:8123/send'
    with getsdk().trace_outgoing_web_request(url, 'GET') as tracer:
        # Get the Dynatrace tag.
        tag = tracer.outgoing_dynatrace_string_tag

        # Send the web request, attaching the tag with the header name
        # expected by Dynatrace OneAgent.
        response = requests.get(
            url,
            headers={oneagent.sdk.DYNATRACE_HTTP_HEADER_NAME: tag})


Once the code change has been deployed, end-to-end tracing is extended to the NotificationServer.

Tracing outgoing web requests

Defining custom request attributes

Request attributes are key/value pairs that are associated with a particular request. For example, in my blog application, I want to set up a user role attribute to differentiate each request by its role (such as admin, author, etc.).

In the helper file dynatrace.py, we use the method add_custom_request_attribute to set the custom request before each web request:

def set_custom_request_attributes():
    ''' Set Custom Request Attributes '''
    sdk = getsdk()
    role = get_user_role()
    sdk.add_custom_request_attribute('user role', role)

The get_user_role method is just a dummy method that returns admin when the user sonja is logged in, editor for the user inanna, author for all the other logged-in users, and guest when a user isn’t logged in.
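Based on that description, the dummy might look like the following (a hypothetical sketch; the actual helper in the sample reads the logged-in user from the Flask session):

```python
def get_user_role(username=None):
    ''' Hypothetical sketch of the dummy role lookup described above;
        username=None represents a visitor who isn't logged in. '''
    if username is None:
        return 'guest'
    if username == 'sonja':
        return 'admin'
    if username == 'inanna':
        return 'editor'
    return 'author'
```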

To configure the request attribute in Dynatrace, go to Settings > Server-side monitoring > Request attributes and select Create new request attribute. In the Data sources section, select SDK custom attribute as the source and enter the Attribute name you have defined in your code (in my example, user role).

Defining a custom request attribute in settings

Once you’ve defined your request attributes, you can use them to build your own custom analysis charts and filter your monitoring data.

For example, you can compare response times when publishing a new blog post, split by user role. The chart below, showing the slowest 10% of response times, reveals that authors sometimes have to wait up to 5.1 seconds.

Custom chart based on custom request attributes

These filters allow you to analyze response times filtered by the create request and the author user role to find out where the issue comes from (which, in this case, is the BlogReview service, where we added a delay for demonstration purposes):

Filtering based on custom request attributes

Full stack and AI included

Dynatrace doesn’t stop at the application layer; we go deep into your infrastructure as well. Dynatrace maps your dynamic environment in real time. It automatically discovers and monitors all the relationships and interdependencies of your entire stack, from the application down to the underlying infrastructure.

All this data is used to feed Davis, our AI engine for automatic anomaly detection and root-cause analysis.

What’s next?

As the popularity of Python grows steadily, we are preparing to take our Python support to the next level. You might have already read that we have joined the OpenTelemetry project. Stay tuned for an upcoming blog post sharing our vision and efforts toward supporting OpenTelemetry for Python in Dynatrace and asking for your feedback and use cases.


Thank you Inanna Hess, Christian Gusenbauer, and Christian Neumüller for your help with creating and reviewing the sample!