Trace Python, Node.js, and Java Lambda functions
Dynatrace provides a dedicated AWS Lambda layer that contains the Dynatrace extension for AWS Lambda. You need to add the publicly available layer for your runtime and region to your function. Dynatrace then provides a template or configuration snippet for your AWS Lambda function, based on the configuration method you choose.
Activate AWS Lambda
Choose a configuration method
Specify a Dynatrace API endpoint
Enable Real User Monitoring
Define an AWS layer name
Deployment
Configuration options
Dynatrace AWS integration
Prerequisites
- The Dynatrace extension supports AWS Lambda functions written in Node.js, Python, or Java. Both 64-bit ARM (AWS Graviton2 processors) and 64-bit x86 architectures are supported.
- To monitor a Java Lambda function, your function requires additional memory; at least 1.5 GB of RAM is recommended.
- To configure memory, in the AWS Lambda console, go to General > Basic settings and set Memory to at least 1.5 GB.
- Note that the RAM requirements for Node.js and Python Lambda functions are typically significantly lower.
- Activate the Forward Tag 4 trace context extension OneAgent feature. In the Dynatrace menu, go to Settings > Preferences > OneAgent features.
- Note: All OneAgents participating in a trace must meet the minimum OneAgent version of 1.193 for this setting.
Activate AWS Lambda
To get started
- In the Dynatrace menu, select Deploy Dynatrace.
- On the Dynatrace Hub page, search for AWS Lambda.
- Select AWS Lambda and then select Activate AWS Lambda.
- Follow the instructions to enable monitoring of AWS Lambda functions.
Note: If you're using the Deploy Dynatrace page, select Start installation. On the Install OneAgent page, select AWS Lambda. This displays the Enable Monitoring for AWS Lambda Functions page.
Choose a configuration method
The Dynatrace Lambda agent is distributed as a layer that can be enabled and configured manually or via well-known Infrastructure as Code (IaC) solutions.
On the Enable Monitoring for AWS Lambda Functions page, select your preferred method from the How will you configure your AWS Lambda functions? list, and make sure you set all properties for that method before copying the generated configuration snippets.
Specify a Dynatrace API endpoint (optional)
This optional step lets you specify the Dynatrace API endpoint to which monitoring data is sent.
The typical scenario is to deploy a Dynatrace ActiveGate in close proximity (same region) to the Lambda functions you want to monitor. This reduces network latency, which can otherwise increase the startup time of your Lambda functions.
Enable Real User Monitoring (optional)
This optional step enables Real User Monitoring (RUM), which provides deep insights into user actions and performance in the browser or in mobile apps.
Define an AWS layer name
Select the AWS region and the runtime of the Lambda function to be monitored. These settings are required to provide the correct layer ARN.
Deployment
Copy the configuration snippets into your deployment and use your deployment method of choice to enable the layer and set the configuration for your Lambda functions.
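As an illustration, if you deploy with the Serverless Framework, the generated snippets typically map to a layer reference and a set of environment variables on the function. The layer ARN and all environment variable names and values below are placeholders; copy the exact values from the snippets generated on the deployment page.

```yaml
# serverless.yml -- illustrative sketch only, not the authoritative configuration.
# Replace every <placeholder> with the values Dynatrace generates for you.
functions:
  my-function:
    handler: index.handler
    layers:
      # Placeholder ARN -- the real one depends on your region, runtime, and layer version
      - arn:aws:lambda:<region>:<dynatrace-account-id>:layer:<dynatrace-layer-name>:<version>
    environment:
      # Variable names and values come from the generated snippet; shown here only
      # to indicate where they belong in the deployment configuration.
      DT_TENANT: <your-environment-id>
      DT_CONNECTION_BASE_URL: <generated-endpoint-url>
      DT_CONNECTION_AUTH_TOKEN: <generated-token>
```

The same two pieces of information (layer ARN plus environment variables) apply regardless of deployment method; only the surrounding syntax changes for Terraform, CloudFormation, or manual console configuration.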
Configuration options
Configure the AWS API Gateway
If inbound (non-XHR) requests to your Lambda functions are not connected to the calling application, configure the API Gateway to pass through the Dynatrace tag. To do this, enable Use Lambda Proxy Integration on the Integration Request configuration page of the API Gateway.
AWS Lambda also supports non-proxy integration which, without additional configuration, prevents Dynatrace from
- Tracing calls from other monitored applications
- Detecting RUM (web and mobile)
To enable both tracing of calls from other monitored applications and RUM detection in this scenario, create a custom mapping template in the Integration Request configuration.
1. In the AWS API Gateway console, go to Resources and select a request method (for example, GET).
2. Select Mapping Templates and then select Add mapping template.
3. Add the following content to the template:

```
{
  "path": "$context.path",
  "httpMethod": "$context.httpMethod",
  "headers": {
    #foreach($param in ["x-dynatrace", "traceparent", "tracestate", "x-dtc", "referer", "host", "x-forwarded-proto", "x-forwarded-for", "x-forwarded-port"])
    "$param": "$util.escapeJavaScript($input.params().header.get($param))"
    #if($foreach.hasNext),#end
    #end
  },
  "requestContext": {
    "stage": "$context.stage"
  }
}
```

Note: The x-dtc header is specific to tracing RUM scenarios, whereas the remaining headers are generally needed to link traces together and extract relevant information, such as web request metadata.

4. Select Save to save your configuration.
5. Redeploy your API.
Note: This configuration method works only for Node.js and Python. Mapping templates currently aren't supported for Java.
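With the mapping template in place, a non-proxy integration delivers an event whose shape mirrors the template above, so the forwarded tracing headers are available to the function. The following Python sketch is illustrative only; the handler name, return shape, and example values are assumptions, not part of the Dynatrace configuration.

```python
# Sketch of a Python handler behind a non-proxy integration, assuming the
# mapping template above is configured. Handler name and logic are illustrative.

def handler(event, context):
    # Headers forwarded by the mapping template; absent headers are simply
    # missing from the dict, so use .get() with a default.
    headers = event.get("headers", {})
    traceparent = headers.get("traceparent", "")
    stage = event.get("requestContext", {}).get("stage", "")
    return {
        "statusCode": 200,
        "body": f"stage={stage} traceparent={traceparent}",
    }

# Example event as produced by the mapping template for a GET request
example_event = {
    "path": "/orders",
    "httpMethod": "GET",
    "headers": {"traceparent": "00-abc-def-01", "x-dtc": ""},
    "requestContext": {"stage": "prod"},
}
```

The function code itself does not need to read these headers for tracing to work; the sketch only shows that the template makes them reach the event payload, where the Dynatrace extension can pick them up.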
Filter cold starts
One of the important metrics for Lambda is the frequency of cold starts. A cold start happens when a Lambda function is invoked and no initialized instance is available, so a new instance must be started. Such invocations take longer and add latency to your requests.
A high cold-start frequency can indicate errors or an uneven load pattern that can be mitigated using provisioned concurrency. Dynatrace reports such cold starts as a property on the distributed trace.
To analyze cold starts, select View all requests on the Lambda service details page.
In the request filter, select Function cold start in the Request property section.
You can then filter invocations by Only cold start or No cold start.
Monitoring overhead
Enabling monitoring unavoidably adds overhead to the monitored function's execution. Overhead depends on several factors, such as the function's runtime technology, configuration, and characteristics such as code size or execution duration and complexity.
The amount of memory configured for a function directly determines the compute resources assigned to the function instance. The worst case for measured overhead is a function with an empty handler and the minimum memory configuration.
Cold start overhead
- For Python, cold start overhead is about 1,000 ms.
- For Node.js, cold start overhead is about 700 ms.
- For Java, cold start overhead may exceed 1,000 ms.
For the minimum memory configuration requirement, see Requirement for Java Lambda functions.
Response time latency
Latency depends on the function implementation, but is typically less than 10%.
Code space overhead
| Runtime | Code space |
|---|---|
| Node.js | ~6 MB |
| Python | 6.3 MB |
| Java | 4.5 MB |
Dynatrace AWS integration
While not mandatory, we recommend that you set up Dynatrace Amazon CloudWatch integration. This allows data ingested via AWS integration to be seamlessly combined with the data collected by the Dynatrace AWS Lambda extension.
Known limitations
- The Dynatrace AWS Lambda extension relies on an AWS Lambda extension mechanism that is currently available for Lambda functions with an Amazon Linux 2 runtime. These runtimes are:
  - For Node.js
    - Node.js 18 (OneAgent version 1.257+)
    - Node.js 16 (OneAgent version 1.251+)
    - Node.js 14
    - Node.js 12
  - For Python
    - Python 3.9 (OneAgent version 1.229+)
    - Python 3.8
  - For Java
    - Java 11
    - Java 8 (amazon-corretto-8 JDK)

  See Lambda runtimes for details.
- The Dynatrace AWS Lambda extension does not support the capture of method-level request attributes.
- To detect and trace invocations through Lambda functions written in Java, your function needs to use the Lambda events library for event attribute mapping, which also includes HTTP tag extraction. For details, see AWS Lambda Java Events. Specifically, this limits the supported handler function event types to:
  - APIGatewayProxyRequestEvent
  - APIGatewayV2HTTPEvent
- The Dynatrace AWS Lambda extension doesn't capture IP addresses of outgoing HTTP requests. This results in unmonitored hosts if the called service isn't monitored with Dynatrace.
- Incoming calls: Dynatrace can monitor incoming calls that are invoked via:
  - AWS SDK
  - API Gateway
  - AWS SQS (Node.js and Python)
  - AWS SNS (Node.js and Python)
- Outgoing requests to another AWS Lambda function: In a monitored AWS Lambda function, the following libraries are supported for outgoing requests to another AWS Lambda function:
- Outgoing HTTP requests: In a monitored AWS Lambda function, the following libraries/HTTP clients are supported for outgoing HTTP requests:
  - For Java: Apache HTTP Client 3.x and 4.x
  - For Node.js: the built-in http.request
  - For Python: requests, aiohttp-client, urllib3
- Java only: The configured handler class has to implement the handler method (usually handleRequest(...)) itself. If the handler method is only defined in a base class, add an override in the handler class that calls the base handler method (usually super.handleRequest(...)).
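For the supported incoming-call sources above, the function code needs no Dynatrace-specific changes; for example, an SQS-triggered invocation (supported for Node.js and Python) arrives as a batch of records that the handler processes as usual. The sketch below is illustrative; the handler name and the orderId field are assumptions, not a Dynatrace requirement.

```python
import json

def handler(event, context):
    # An SQS-triggered invocation delivers a batch of messages under
    # event["Records"]; Dynatrace links the invocation to the sending
    # trace without any changes to this code.
    processed = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        processed.append(body.get("orderId"))
    return {"processed": processed}

# Example SQS event, trimmed to the fields used above
example_event = {
    "Records": [
        {"body": json.dumps({"orderId": 1})},
        {"body": json.dumps({"orderId": 2})},
    ]
}
```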