Deploy OneAgent as Lambda extension

For AWS Lambda, monitoring consumption is based on Davis data units. See Serverless monitoring for details.

AWS Lambda lets you run code without provisioning or managing servers. This deployment model is sometimes referred to as "serverless" or "Function as a Service" (FaaS).

The Dynatrace OneAgent extension supports AWS Lambda functions written in Node.js, Python, or Java running on an Amazon Linux 2 runtime.

Requirement for Java Lambda functions

To monitor a Java Lambda function with OneAgent, you need to allocate at least 1.5 GB of RAM to the Lambda function. To configure memory, in the AWS Lambda console, go to General > Basic settings and set Memory to a value of at least 1.5 GB.


Dynatrace provides you with a dedicated AWS Lambda layer that contains the Dynatrace OneAgent extension for AWS Lambda. You just need to add the publicly available layer for your runtime and region to your function and configure it based on your preferred configuration method (see details below).

Use the AWS Lambda configuration page to select how you deploy your Lambda function and create the corresponding configuration snippets to copy and paste.
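For example, if you choose an Infrastructure as Code method such as the Serverless Framework, the generated snippet typically amounts to adding the layer ARN and the generated environment variables to serverless.yml. The sketch below is illustrative only — the function name is made up, and the ARN and variable values are placeholders; copy the exact values generated for your environment:

```yaml
functions:
  myHandler:                # hypothetical function name
    handler: index.handler
    layers:
      # placeholder - use the layer ARN generated for your region and runtime
      - arn:aws:lambda:us-east-1:725887861453:layer:Dynatrace_OneAgent_1_207_6_20201127-103507_nodejs:1
    environment:
      # placeholders - copy the variables generated on the deployment page
      DT_TENANT: abcd1234
      DT_CLUSTER_ID: "1234567890"
```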

On the OneAgent Deployment page for AWS Lambda, you will:

  1. Choose a configuration method
  2. (Optional) Specify a Dynatrace endpoint
  3. (Optional) Enable Real User Monitoring
  4. Define an AWS layer name

Based on your configuration method, Dynatrace will provide a template or configuration for your AWS Lambda function.

To get started

  1. Select Deploy Dynatrace from the Dynatrace navigation menu.
  2. On the Dynatrace Hub page, search for AWS Lambda.
  3. Select AWS Lambda and then select Activate AWS Lambda.
  4. Follow the instructions to enable monitoring of AWS Lambda functions.

Note: If you're using the Deploy Dynatrace page, select Start installation. On the Install OneAgent page, select AWS Lambda. This displays the Enable Monitoring for AWS Lambda Functions page.

Choose a configuration method

The Dynatrace Lambda agent is distributed as a layer that can be enabled and configured manually or via well-known Infrastructure as Code (IaC) solutions.

On the Enable Monitoring for AWS Lambda Functions page, use the How will you configure your AWS Lambda functions? list to select your preferred method, and then make sure you set all properties for the selected method before copying the generated configuration snippets.

Specify a Dynatrace API endpoint


This is an optional step that enables you to specify a Dynatrace API endpoint to which monitoring data will be sent.

The typical scenario is to deploy a Dynatrace ActiveGate in close proximity (same region) to the Lambda functions that you want to monitor in order to reduce network latency, which can impact the startup time of your Lambda functions.
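For example, with the environment-variable configuration method, the endpoint appears as connection settings similar to the following sketch (values are placeholders; copy the exact variables generated on the deployment page):

```
# placeholders - copy the generated values; the URL points to your ActiveGate
DT_CONNECTION_BASE_URL=https://my-activegate.example.com:9999/e/abcd1234
DT_CONNECTION_AUTH_TOKEN=dt0a01.sample.token
```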

Enable Real User Monitoring


This is an optional step to use Real User Monitoring (RUM), which provides you with deep insights into user actions and performance via the browser or in mobile apps.

Define an AWS layer name

Select the AWS region and the runtime of the Lambda function to be monitored. These settings are required to provide the correct layer ARN.
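The layer ARN encodes the AWS region, the Dynatrace account, the layer name (which includes the agent version and the runtime), and the layer version. As a rough sketch of how these parts fit together (the helper name is made up, and the account ID and name pattern follow the example ARN shown later on this page — always copy the exact ARN generated by the deployment page rather than building it yourself):

```javascript
// Sketch: compose a Dynatrace layer ARN from its parts.
// The account ID and layer-name pattern follow the example ARN used
// elsewhere on this page; always use the generated ARN in practice.
function buildLayerArn(region, layerName, version) {
  return `arn:aws:lambda:${region}:725887861453:layer:${layerName}:${version}`;
}

console.log(buildLayerArn(
  'us-east-1',
  'Dynatrace_OneAgent_1_207_6_20201127-103507_nodejs',
  1,
));
// arn:aws:lambda:us-east-1:725887861453:layer:Dynatrace_OneAgent_1_207_6_20201127-103507_nodejs:1
```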


Copy the configuration snippets into your deployment and use your deployment method of choice to enable the layer and set the configuration for your Lambda functions.

Configure the AWS API Gateway

If inbound (non-XHR) requests to your Lambda functions are not connected to the calling application, configure the API Gateway to pass through the Dynatrace tag. To do this, enable Use Lambda Proxy Integration on the Integration Request configuration page of the API Gateway.

AWS Lambda also supports non-proxy integration, which, without some additional configuration, prevents Dynatrace from doing the following:

  • Tracing calls from other monitored applications
  • RUM detection

To make tracing of calls from other monitored applications and RUM detection work in this scenario, create a custom mapping template in the integration request configuration.

  1. In the AWS API Gateway Console, go to Resources and select a request method (for example, GET).
  2. Select Mapping Templates and then select Add mapping template.
  3. Add the following content to the template:
    #set($path = $context.stage + $context.resourcePath)
    {
        "path": "$path",
        "httpMethod": "$context.httpMethod",
        "headers": {
            #foreach($param in ["X-dynaTrace", "traceparent", "tracestate"])
            "$param": "$util.escapeJavaScript($input.params().header.get($param))"#if($foreach.hasNext),#end
            #end
        },
        "requestContext": {
            "stage": "$context.stage"
        }
    }
  4. Select Save to save your configuration.
  5. Redeploy your API.

Note: This configuration method works only for Node.js and Python. Mapping templates currently aren't supported for Java.
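With the mapping template above, a non-proxy integration delivers the tracing headers inside the event payload rather than as real HTTP headers. The following sketch only illustrates the event shape the template produces for a Node.js function — the helper and sample values are made up, and no handler code is actually required, since the OneAgent extension reads these fields itself:

```javascript
// Illustrative only: the event shape produced by the mapping template above.
function getTraceHeaders(event) {
  const headers = event.headers || {};
  return {
    traceparent: headers.traceparent,
    tracestate: headers.tracestate,
    dynatraceTag: headers['X-dynaTrace'],
  };
}

// Hypothetical mapped event, following the template's structure
const sampleEvent = {
  path: 'prod/orders',
  httpMethod: 'GET',
  headers: {
    traceparent: '00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01',
    tracestate: 'sample-state',
  },
  requestContext: { stage: 'prod' },
};

console.log(getTraceHeaders(sampleEvent).traceparent);
// 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```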

Dynatrace AWS integration

While not mandatory, we recommend that you set up Dynatrace AWS integration (SaaS, Managed). This allows data ingested via AWS integration to be seamlessly combined with the data collected by the OneAgent Lambda code module.

AWS Lambda metrics

Filter cold starts

One of the important metrics for Lambda is the frequency of cold starts. A cold start happens when a function is invoked but no initialized instance is available, so a new instance must be started. Such cold starts take longer and add latency to your requests.

A high cold start frequency can indicate errors or an uneven load pattern that can be mitigated using provisioned concurrency. Dynatrace OneAgent reports such cold starts as a property on the PurePath.

To analyze cold starts, select View all requests on the Lambda service details page.

Service details page for AWS Lambda function

In the request filter, select Function cold start in the Request property section.

This displays a page that you can filter by invocations containing Only cold start or No cold start.

Screen to filter invocations by Only cold start or No cold start

Known limitations

  • The Dynatrace AWS Lambda extension does not support the capture of request attributes.

  • Dynatrace does not support monitoring of database calls from Node.js Lambda functions.

  • The Dynatrace AWS Lambda extension relies on an AWS Lambda extension mechanism that is currently available for Lambda functions where the runtime is deployed on Amazon Linux 2.

  • To detect and trace invocations through Lambda functions written in Java, your function needs to use the Lambda events library for event attribute mapping, which also includes HTTP tag extraction. For details, see AWS Lambda Java Events.

  • The Dynatrace AWS Lambda extension doesn't capture IP addresses of outgoing HTTP requests. This results in unmonitored hosts if the called service isn't monitored with Dynatrace OneAgent.

  • Incoming calls: Dynatrace can only monitor incoming calls that are invoked via AWS SDK or an API gateway.

  • Outgoing requests to another AWS Lambda function: In a monitored AWS Lambda function, only the following libraries are supported for outgoing requests to another AWS Lambda function:

    • For Java - AWS SDK for Java
    • For Node.js - AWS SDK for JavaScript in Node.js
    • For Python - AWS SDK for Python (Boto3)
  • Outgoing HTTP requests: In a monitored AWS Lambda function, only the following libraries/HTTP clients are supported for outgoing HTTP requests:

    • For Java - Apache HTTP Client 3.x, 4.x
    • For Node.js - Built-in http.request
    • For Python - Requests, aiohttp-client, urllib3



To get extensive log output from the Lambda extension, add the environment variables below.

  • For Node.js
    DT_LOGGING_NODEJS_FLAGS: Exporter=true,LambdaSensor=true
  • For Python
  • For Java

Note: logOpenTelemetryUtils=true is required for use-inmemory-exporter (for debugging span-related problems).

Error messages

  • WARNING […] Unexpectedly got HTTP response with Content-Length (...)

This error message appears if you don't have port 9999 enabled for your ActiveGate. Go to AWS PrivateLink and VPC endpoints and set up a VPC that allows outbound communication on port 9999 to the ActiveGate endpoint.

Deploy OneAgent to container image packaged functions

In addition to deploying functions as ZIP files, AWS Lambda supports deploying functions as container images.

The container image must include the files and configuration required to run the function code. The same applies to the files and configuration of Dynatrace OneAgent if monitoring is to be enabled for the containerized Lambda function.

In a ZIP file function deployment, Dynatrace OneAgent code modules are attached to the function with an AWS Lambda extension (which is a Lambda layer with an extension-specific folder layout).

A Lambda layer, like a function bundle, is a ZIP file extracted at function cold start time to the /opt folder of the AWS Lambda function instance.

Thus, the process to enable Dynatrace monitoring for a containerized Lambda function requires the following:

  1. Provision Dynatrace OneAgent configuration
  2. Add contents of the OneAgent extension to the Lambda container image

Provision Dynatrace OneAgent configuration

Retrieve OneAgent configuration as described above.

Select configuration type Configure with environment variables and complete the remaining configuration steps.

Open the project's Dockerfile in an editor and copy the environment variables from the deployment screen. Prefix each line with ENV and remove the spaces around the equal signs.

ENV DT_TENANT=abcd1234
ENV DT_CLUSTER_ID=1234567890
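The transformation described above is mechanical: prefix each variable with ENV and strip the spaces around the equal sign. A throwaway sketch of that rule (the helper name is made up):

```javascript
// Sketch: convert "KEY = value" lines from the deployment screen
// into Dockerfile "ENV KEY=value" lines.
function toDockerfileEnv(lines) {
  return lines.map((line) => {
    const [key, ...rest] = line.split('=');
    return `ENV ${key.trim()}=${rest.join('=').trim()}`;
  });
}

console.log(toDockerfileEnv(['DT_TENANT = abcd1234', 'DT_CLUSTER_ID = 1234567890']));
// [ 'ENV DT_TENANT=abcd1234', 'ENV DT_CLUSTER_ID=1234567890' ]
```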

Add OneAgent extension to container image

  1. Download the contents of the Dynatrace OneAgent extension. There are two ways to do this:
    • via dt-awslayertool—this is a utility for downloading or cloning AWS Lambda layers to the local file system
    • via AWS CLI—see below for instructions

Example command:

dt-awslayertool pull arn:aws:lambda:us-east-1:725887861453:layer:Dynatrace_OneAgent_1_207_6_20201127-103507_nodejs:1 --extract DynatraceOneAgentExtension

This will download the layer arn:aws:lambda:us-east-1:725887861453:layer:Dynatrace_OneAgent_1_207_6_20201127-103507_nodejs:1 and extract its contents to the local folder DynatraceOneAgentExtension.
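With the AWS CLI, the equivalent is a two-step download: resolve the layer version to a short-lived download URL, then fetch and extract it. A sketch using the same example ARN (commands require configured AWS credentials):

```shell
# Resolve the layer's download URL, then fetch and extract it.
ARN=arn:aws:lambda:us-east-1:725887861453:layer:Dynatrace_OneAgent_1_207_6_20201127-103507_nodejs:1
URL=$(aws lambda get-layer-version-by-arn --arn "$ARN" \
      --query Content.Location --output text)
curl -sSL "$URL" -o layer.zip
unzip -q layer.zip -d DynatraceOneAgentExtension
```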

  2. Use the following Dockerfile commands to copy the downloaded extension content into the container image and ensure that the shell script /opt/dynatrace is executable.
COPY DynatraceOneAgentExtension/ /opt/
RUN chmod +x /opt/dynatrace

Sample Dockerfile with Dynatrace OneAgent monitoring enabled

This sample project creates a containerized Node.js Lambda function. The project folder contains the following files and folders:

├── Dockerfile
├── DynatraceOneAgentExtension
└── index.js

The contents of the Dynatrace OneAgent extension are assumed to be downloaded and extracted (as outlined above) to the folder DynatraceOneAgentExtension.

The handler function is exported by the index.js file:

exports.handler = async () => {
    return "hello world";
};

The Dockerfile with the modifications applied to deploy Dynatrace OneAgent to the containerized function:



# base image (assumption: one of the AWS base images for Lambda; see the note below the Dockerfile)
FROM public.ecr.aws/lambda/nodejs:14

# copy function code into the image
COPY index.js ${LAMBDA_TASK_ROOT}/

# --- Begin of enable Dynatrace OneAgent monitoring section

# environment variables copied from Dynatrace AWS Lambda deployment screen
# (prefix with ENV and remove spaces around equal signs)
ENV DT_TENANT=abcd1234
ENV DT_CLUSTER_ID=1234567890

# copy the Dynatrace OneAgent extension (downloaded and extracted to local disk) into the container image
COPY DynatraceOneAgentExtension/ /opt/

# make /opt/dynatrace shell script executable
RUN chmod +x /opt/dynatrace

# --- End of enable Dynatrace OneAgent monitoring section

CMD [ "index.handler" ]


OneAgent monitoring is only supported for container images created from an AWS base image for Lambda.


OneAgent overhead

Enabling monitoring unavoidably induces overhead to the monitored function execution. Overhead depends on several factors, such as function runtime technology, configuration, and concrete function characteristics, such as code size or execution duration and complexity.

The amount of memory configured for a function directly impacts the compute resources assigned to the function instance. The worst-case scenario for OneAgent overhead is a function with an empty function handler and minimum memory configuration.

Cold start overhead

  • Typical cold start overhead takes about 600 ms.
  • For Node.js, cold start overhead may take much less than 600 ms.
  • For Java, cold start overhead may exceed 600 ms.

For the minimum memory configuration requirement, see Requirement for Java Lambda functions.

Response time latency

Response time latency depends on the function implementation, but is typically less than 10% of the function's execution time.

Code space overhead

Runtime   Code space (MB)
Node.js   ~6
Python    6.3
Java      4.5