Trace Python, Node.js, and Java Lambda functions

Dynatrace provides you with a dedicated AWS Lambda layer that contains the Dynatrace extension for AWS Lambda. You need to add the publicly available layer for your runtime and region to your function. Then, based on your configuration method, Dynatrace provides a template or configuration for your AWS Lambda function.

On this page:

  • Activate AWS Lambda
  • Choose a configuration method
  • Specify a Dynatrace API endpoint
  • Enable Real User Monitoring
  • Define an AWS layer name
  • Deployment
  • Configuration options
  • Dynatrace AWS integration

Prerequisites

  • The Dynatrace extension supports AWS Lambda functions written in Node.js, Python, or Java. Both 64-bit ARM (AWS Graviton2 processors) and 64-bit x86 architectures are supported.
  • To monitor a Java Lambda function, your function requires additional memory. 1.5 GB of RAM is recommended.
    • To configure memory, in the AWS Lambda console, go to General > Basic settings and set Memory to a value of at least 1.5 GB. For a scripted alternative, see the sketch after this list.
    • Note that the RAM requirements for Node.js and Python Lambda functions might be significantly lower.
  • Activate the Forward Tag 4 trace context extension OneAgent feature. In the Dynatrace menu, go to Settings > Preferences > OneAgent features.
    • Note: For this setting to take effect, all OneAgents participating in a trace must be version 1.193 or later.
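
If you manage function settings with scripts rather than the console, the memory requirement can also be applied with the AWS SDK for Python (Boto3). This is a minimal sketch; the function name is a placeholder.

  python
  import boto3

  lambda_client = boto3.client("lambda")

  # Raise the memory limit to 1.5 GB (1536 MB), the recommended minimum for
  # monitoring Java Lambda functions. The function name is a placeholder.
  lambda_client.update_function_configuration(
      FunctionName="my-java-lambda-function",
      MemorySize=1536,
  )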

Activate AWS Lambda

To get started

  1. In the Dynatrace menu, select Deploy Dynatrace.
  2. On the Dynatrace Hub page, search for AWS Lambda.
  3. Select AWS Lambda and then select Activate AWS Lambda.
  4. Follow the instructions to enable monitoring of AWS Lambda functions.

Note: If you're using the Deploy Dynatrace page, select Start installation. On the Install OneAgent page, select AWS Lambda. This displays the Enable Monitoring for AWS Lambda Functions page.

Choose a configuration method

The Dynatrace Lambda agent is distributed as a layer that can be enabled and configured manually or through well-known Infrastructure as Code (IaC) solutions.

On the Enable Monitoring for AWS Lambda Functions page, use the How will you configure your AWS Lambda functions? list to select your preferred method, and then make sure you set all properties for the selected method before copying the generated configuration snippets.

Configure with JSON file

If you select this method, Dynatrace provides you with:

  • An environment variable to add to your AWS Lambda function
  • A JSON snippet that you need to copy into the dtconfig.json file in the root folder of your Lambda deployment
  • Lambda layer ARN

When using this method, make sure that you add the Dynatrace Lambda layer to your function. You can do this through the AWS console (Add layer > Specify an ARN and paste the ARN displayed on the deployment page) or by using an automated solution of your choice.

Enter environment variables via the AWS Console

Lambda environment variables

Enter the Lambda layer ARN via the AWS Console

Specify a layer by providing the ARN
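
If you prefer an automated approach to attaching the layer, the following minimal sketch uses the AWS SDK for Python (Boto3). The function name and layer ARN are placeholders; use the ARN displayed on the deployment page.

  python
  import boto3

  # Placeholders: your function name and the layer ARN from the deployment page.
  FUNCTION_NAME = "my-lambda-function"
  DYNATRACE_LAYER_ARN = "arn:aws:lambda:us-east-1:123456789012:layer:Dynatrace:1"

  lambda_client = boto3.client("lambda")

  # Read the current configuration so that existing layers are preserved;
  # update_function_configuration replaces the whole layer list.
  current = lambda_client.get_function_configuration(FunctionName=FUNCTION_NAME)
  layers = [layer["Arn"] for layer in current.get("Layers", [])]

  if DYNATRACE_LAYER_ARN not in layers:
      layers.append(DYNATRACE_LAYER_ARN)

  lambda_client.update_function_configuration(
      FunctionName=FUNCTION_NAME,
      Layers=layers,
  )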

Configure with environment variables

When using this method, make sure that you add the Dynatrace Lambda layer to your function. The layer, as well as the environment variables, can be set either manually through the AWS console (Add layer > Specify an ARN and paste the ARN displayed on the deployment page) or by using an automated solution of your choice.

Note: Client-side decryption of environment variables (Security in Transit) is not supported.

If you select this method, Dynatrace provides you with:

  • Values to define environment variables for the AWS Lambda functions that you want to monitor

    Lambda environment variables

  • Lambda layer ARN

    Specify a layer by providing the ARN
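
The environment variables can likewise be applied with a script. The sketch below uses the AWS SDK for Python (Boto3); the variable names shown are placeholders for the exact names and values generated on the deployment page, and the layer is attached as shown in the previous section.

  python
  import boto3

  # Placeholders: your function name and the environment variables generated on
  # the Dynatrace deployment page (the variable names below are not real).
  FUNCTION_NAME = "my-lambda-function"
  DYNATRACE_VARIABLES = {
      "DT_EXAMPLE_VARIABLE_1": "value-from-deployment-page",
      "DT_EXAMPLE_VARIABLE_2": "value-from-deployment-page",
  }

  lambda_client = boto3.client("lambda")

  # Merge with any environment variables the function already defines.
  current = lambda_client.get_function_configuration(FunctionName=FUNCTION_NAME)
  variables = current.get("Environment", {}).get("Variables", {})
  variables.update(DYNATRACE_VARIABLES)

  lambda_client.update_function_configuration(
      FunctionName=FUNCTION_NAME,
      Environment={"Variables": variables},
  )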

Configure and deploy using Terraform

Terraform is a popular Infrastructure as Code (IaC) solution. If you select this method, Dynatrace provides you with:

  • A template to define the AWS Lambda function. This includes all the configuration that you need to deploy and configure the Dynatrace AWS Lambda extension together with your functions.
  • Lambda layer ARN

Configure and deploy using AWS SAM

The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications.

If you select this method, Dynatrace provides you with a template to define the AWS Lambda function. This includes all the configuration that you need to integrate the Dynatrace AWS Lambda extension.

Configure and deploy using the Serverless Framework

The Serverless Application option uses the Serverless Framework, a popular solution for deploying serverless stacks.

If you select this method, Dynatrace provides you with a template to define the AWS Lambda function. This includes all the configuration that you need to integrate the Dynatrace AWS Lambda extension.

Configure and deploy using AWS CloudFormation

AWS CloudFormation is an IaC solution that enables provisioning of a wide range of AWS services.

If you select this method, Dynatrace provides you with a template to define the AWS Lambda function. This includes all the configuration that you need to integrate the Dynatrace AWS Lambda extension.

Specify a Dynatrace API endpoint (optional)

This is an optional step that enables you to specify a Dynatrace API endpoint to which monitoring data will be sent.

The typical scenario is to deploy a Dynatrace ActiveGate in close proximity (the same region) to the Lambda functions that you want to monitor, which reduces the network latency that can impact the startup time of your Lambda functions.

Enable Real User Monitoring (optional)

This is an optional step to use Real User Monitoring (RUM), which provides you with deep insights into user actions and performance via the browser or in mobile apps.

Enable the RUM header for calls to your monitored Lambda functions

RUM for Lambda functions requires a specific header (x-dtc) to be sent with XHR calls into AWS. To enable this, the CORS settings of your AWS deployment must allow this header during preflight (OPTIONS) requests.

Please refer to the AWS documentation for instructions on how to configure CORS and allow the x-dtc header for your specific setup.
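
As an illustration, if your function itself answers preflight requests (for example, behind an HTTP API with Lambda proxy integration and payload format 2.0), the OPTIONS response must list x-dtc among the allowed headers. This is a minimal sketch, not a complete CORS setup; the origin and header list are placeholders to adjust for your deployment.

  python
  def handler(event, context):
      # Answer CORS preflight requests and allow the x-dtc header.
      method = event.get("requestContext", {}).get("http", {}).get("method")
      if method == "OPTIONS":
          return {
              "statusCode": 204,
              "headers": {
                  "Access-Control-Allow-Origin": "https://example.com",
                  "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
                  "Access-Control-Allow-Headers": "content-type,x-dtc",
              },
          }

      # Regular request handling goes here.
      return {"statusCode": 200, "body": "ok"}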

Once this is set up, the application needs to be configured in Dynatrace to set this header for calls to your Lambda functions.

  1. In the Dynatrace menu, select Web, Mobile, Frontend, or Custom applications, depending on your application type.
  2. Select the application you want to connect with your Lambda function.
  3. Select the browse menu (…) in the upper-right corner and select Edit.
  4. Select Capturing > Async web requests and SPAs.
  5. Make sure that your framework of choice is enabled. If your framework is not listed, enable Capture XmlHttpRequest (XHR) for generic support of XHR.
  6. Select Capturing > Advanced setup.
  7. Scroll down to the Enable Real User Monitoring for cross-origin XHR calls section and enter a pattern that matches the URL to your Lambda functions. For example: TheAwsUniqueId.execute-api.us-east-1.amazonaws.com
  8. Select Save. After a few minutes, the header will be attached to all calls to your Lambda function and requests from your browser will be linked to the backend.

Failed requests

If requests start failing after enabling this option, review your CORS settings. Please refer to the AWS documentation for instructions on how to configure CORS.

Service Flow for AWS Lambda function

Define an AWS layer name

Select the AWS region and the runtime of the Lambda function to be monitored. These settings are required to provide the correct layer ARN.

Deployment

Copy the configuration snippets into your deployment and use your deployment method of choice to enable the layer and set the configuration for your Lambda functions.

Configuration options

Configure the AWS API Gateway

If inbound (non-XHR) requests to your Lambda functions are not connected to the calling application, configure the API Gateway to pass through the Dynatrace tag. To do this, enable Use Lambda Proxy Integration on the Integration Request configuration page of the API Gateway.

Integration Request configuration page

If the API Gateway is configured from the Lambda configuration page, this setting will be enabled by default.

Proxy configuration screen
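
If you manage the API Gateway with scripts, Lambda proxy integration can also be enabled with the AWS SDK for Python (Boto3). This is a minimal sketch that assumes an existing REST API, resource, and method; all IDs and ARNs are placeholders.

  python
  import boto3

  # Placeholders for an existing REST API, resource, and Lambda function.
  REST_API_ID = "abc123"
  RESOURCE_ID = "def456"
  REGION = "us-east-1"
  LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:my-lambda-function"

  apigateway = boto3.client("apigateway", region_name=REGION)

  # Switch the GET method to Lambda proxy integration (type AWS_PROXY) so that
  # the Dynatrace tag headers are passed through to the function unchanged.
  apigateway.put_integration(
      restApiId=REST_API_ID,
      resourceId=RESOURCE_ID,
      httpMethod="GET",
      type="AWS_PROXY",
      integrationHttpMethod="POST",
      uri=(
          f"arn:aws:apigateway:{REGION}:lambda:path/2015-03-31/functions/"
          f"{LAMBDA_ARN}/invocations"
      ),
  )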

AWS Lambda also supports non-proxy integration, which, without some additional configuration, prevents Dynatrace from:

  • Tracing calls from other monitored applications
  • RUM detection (web and mobile)

To enable tracing of calls from other monitored applications and RUM detection in this scenario, create a custom mapping template in the Integration Request configuration.

  1. In the AWS API Gateway Console, go to Resources and select a request method (for example, GET).

  2. Select Mapping Templates and then select Add mapping template.

  3. Add the following content to the template:

    json
    {
      "path": "$context.path",
      "httpMethod": "$context.httpMethod",
      "headers": {
        #foreach($param in ["x-dynatrace", "traceparent", "tracestate", "x-dtc", "referer", "host", "x-forwarded-proto", "x-forwarded-for", "x-forwarded-port"])
        "$param": "$util.escapeJavaScript($input.params().header.get($param))"#if($foreach.hasNext),#end
        #end
      },
      "requestContext": {
        "stage": "$context.stage"
      }
    }

    Note: The x-dtc header is specific to tracing RUM scenarios, whereas the remaining headers are generally needed to link traces together and extract relevant information, such as web request metadata.

  4. Select Save to save your configuration.

  5. Redeploy your API.

Note: This configuration method works only for Node.js and Python. Mapping templates currently aren't supported for Java.

Filter cold starts

One of the important metrics for Lambda is the frequency of cold starts. A cold start happens when an invocation requires a new instance of a Lambda function to be initialized. Such invocations take longer and add latency to your requests.

A high cold-start frequency can indicate errors or an uneven load pattern that can be mitigated using provisioned concurrency. Dynatrace reports such cold starts as a property on the distributed trace.

To analyze cold starts, select View all requests on the Lambda service details page.

Service details page for AWS Lambda function

In the request filter, select Function cold start in the Request property section.

This displays a page where you can filter invocations by Only cold start or No cold start.

Screen to filter invocations by Only cold start or No cold start

Monitoring overhead

Enabling monitoring unavoidably adds overhead to the execution of the monitored function. The overhead depends on several factors, such as the function's runtime technology, its configuration, and characteristics such as code size, execution duration, and complexity.

The amount of memory configured for a function directly impacts the compute resources assigned to the function instance. The worst case for measured overhead is a function with an empty handler and the minimum memory configuration.

Cold start overhead

  • For Python, cold start overhead is about 1,000 ms.
  • For Node.js, cold start overhead is about 700 ms.
  • For Java, cold start overhead may exceed 1,000 ms.

For the minimum memory configuration requirement, see Requirement for Java Lambda functions.

Response time latency

Latency depends on the function implementation, but is typically less than 10%.

Code space overhead

Runtime     Code space
Node.js     ~6 MB
Python      6.3 MB
Java        4.5 MB

Dynatrace AWS integration

While not mandatory, we recommend that you set up Dynatrace Amazon CloudWatch integration. This allows data ingested via AWS integration to be seamlessly combined with the data collected by the Dynatrace AWS Lambda extension.

AWS Lambda metrics: Invocations

Known limitations

  • The Dynatrace AWS Lambda extension relies on an AWS Lambda extension mechanism that is currently available for Lambda functions with an Amazon Linux 2 runtime. These runtimes are:
    • For Node.js
      • Node.js 18 (OneAgent version 1.257+)
      • Node.js 16 (OneAgent version 1.251+)
      • Node.js 14
      • Node.js 12
    • For Python
      • Python 3.9 (OneAgent version 1.229+)
      • Python 3.8
    • For Java
      • Java 11
      • Java 8 (amazon-corretto-8 JDK)

See Lambda runtimes for details.

  • The Dynatrace AWS Lambda extension does not support the capture of method-level request attributes.

  • To detect and trace invocations through Lambda functions written in Java, your function needs to use the Lambda events library for event attribute mapping, which also includes HTTP tag extraction. For details, see AWS Lambda Java Events. Specifically, this limits the supported handler function event types to:

    • APIGatewayProxyRequestEvent
    • APIGatewayV2HTTPEvent
  • The Dynatrace AWS Lambda extension doesn't capture IP addresses of outgoing HTTP requests. This results in unmonitored hosts if the called service isn't monitored with Dynatrace.

  • Incoming calls: Dynatrace can monitor incoming calls that are invoked via:

    • AWS SDK
    • API Gateway
    • AWS SQS (Node.js and Python)
    • AWS SNS (Node.js and Python)
  • Outgoing requests to another AWS Lambda function: In a monitored AWS Lambda function, the following libraries are supported for outgoing requests to another AWS Lambda function:

    • For Java - AWS SDK for Java
    • For Node.js - AWS SDK for JavaScript in Node.js:
      • version 2
      • version 3 (OneAgent version 1.263+)
    • For Python - AWS SDK for Python (Boto3)
  • Outgoing HTTP requests: In a monitored AWS Lambda function, the following libraries/HTTP clients are supported for outgoing HTTP requests (see the sketch after this list):

    • For Java - Apache HTTP Client 3.x, 4.x
    • For Node.js - The built-in http.request
    • For Python - requests, aiohttp-client, urllib3
  • Java only: The configured handler class has to implement the handler method (usually handleRequest(...)) itself. If the handler method is only defined in a base class, you have to add an override in the handler class, calling the base handler method within (usually super.handleRequest(...)).
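
For illustration, the sketch below shows outgoing calls from a monitored Python function that fall into the supported categories above: an invocation of another Lambda function through Boto3 and an outgoing HTTP request through the requests library. The function name and URL are placeholders, and requests must be bundled with your deployment package.

  python
  import json

  import boto3
  import requests  # must be bundled with the deployment package

  lambda_client = boto3.client("lambda")

  def handler(event, context):
      # Outgoing call to another Lambda function via the AWS SDK for Python (Boto3).
      lambda_client.invoke(
          FunctionName="my-other-function",  # placeholder
          InvocationType="Event",
          Payload=json.dumps({"source": "traced-function"}),
      )

      # Outgoing HTTP request via the requests library.
      response = requests.get("https://example.com/api/status", timeout=5)

      return {"statusCode": 200, "body": response.text}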

Related topics
  • Set up Dynatrace on Amazon Web Services

    Set up and configure monitoring for Amazon Web Services.

  • Limit API calls to AWS using tags

    Add and configure AWS tags to limit AWS resources.