Deploying and Monitoring a Lambda Function in less than 8 Minutes

The complexity and plethora of features of AWS can be overwhelming at times, and as simple as the principle of running a Lambda function is, setting one up – including API endpoints and security rules – can be daunting.

On the other hand, AWS provides a rich set of APIs. By leveraging them, tools can help you to create deterministic deployments and turn infrastructure as code (IaC) into reality.

In this blog post, I’ll show you how you can deploy a fully monitored AWS Lambda function in less than 8 minutes using the Serverless framework. If you prefer to watch a video, then check out the Performance Clinic on Deploying and Monitoring a Lambda in less than 8 Minutes.


Prerequisites

  • An AWS account with administrative privileges
  • Node.js installed on your local machine
  • AWS CLI installed on your local machine

Installing the Serverless framework

The Serverless framework is a platform-agnostic toolkit for deploying and operating serverless architectures. With Node.js installed, you can install it by typing npm install -g serverless.

Creating and setting credentials for AWS CLI

First we need a dedicated user that lets us access AWS programmatically.

To create these credentials:

  1. Sign in to the AWS Console
  2. Go to IAM / Users and click on Add User
  3. Enter a user name like <yourname>-cli-user
  4. Check Programmatic access and click Next: Permissions
  5. Select Attach existing policies directly, check AdministratorAccess, click Next: Review, and click Create User on the following screen
  6. Finally, copy the Access key ID and the Secret access key and store them for later

We are now done with the AWS console. From now on we can use the CLI.
On your command line, type aws configure and enter the newly created credentials in the dialogue:

AWS Access Key ID [None]: <YOUR_ACCESS_KEY_ID>
AWS Secret Access Key [None]: <YOUR_SECRET_ACCESS_KEY>
Default region name [None]: us-east-1 # Or any other region you want to use
Default output format [None]: 

Congratulations! Your system is now set up.

Create a basic Lambda function

We will deploy a simple Node.js Lambda function that sends a request to a weather API and returns the result as JSON.
For that, create a directory lambda-sample and, inside it, create a file index.js.

In this directory, run npm init -y to create a basic package.json file for Node.js.

Next, copy / paste the code for the Lambda function into the index.js file:

As you can see, we are using the axios module here to make an outbound request to a weather service.
To make this work, we have to add axios to the project, so we run npm install -S axios.

This function will simply return HTTP status 200 along with a JSON message.

Add Dynatrace

Dynatrace provides an npm module for monitoring Node.js in environments that don’t support installing our full agent. This module supports Lambda out of the box.

To add it to your Lambda function, run npm install -S @dynatrace/oneagent.

Important: If you created your Dynatrace environment only very recently, run npm install -S @dynatrace/oneagent@next to get the very latest version of the module.

Reduce the Module Size

The Dynatrace npm module contains instrumentation code for a variety of Node.js versions.
As a Lambda function is always configured to run a specific version of Node.js, it makes sense to only bundle those parts of the agent that are applicable to this version.
To do this, the npm module comes with a helper script that strips the module down to only the parts needed for a given Lambda function.
Run npx dt-oneagent-tailor --AwsLambdaV8 from the root directory of your Lambda function. The console output should indicate that the script finished successfully.
(npx ships with all recent versions of npm, but it can also be installed separately by running npm install -g npx.)

Securely store the Dynatrace Credentials on AWS

Now with Dynatrace in place, we have to get and set the credentials to connect the module with your Dynatrace environment. For that, log in to your Dynatrace environment and click on Deploy Dynatrace.

Dynatrace Deployment Dialogue for Serverless

Click on Set up Serverless Integration.
If you are missing this button, your environment has not been enabled for Lambda yet. Please request access to this Early Access feature by filling out this form.

On the following screen, select Node.js and copy the DT_LAMBDA_OPTIONS from the last text box.

We don’t want to deploy the credentials with our function, so we will use AWS Systems Manager to securely store the data.

To do this, paste this configuration into a new file .dynatrace-aws.json. (The location does not matter – the file will be deleted afterwards.)

Now from the same directory run

aws ssm put-parameter --name "/dynatrace/lambda/sample/DT_LAMBDA_OPTIONS" \
--value "file://.dynatrace-aws.json" --type String

This command will store the Dynatrace credentials in AWS and return a JSON with a version if executed successfully.
If you want to change the settings, simply append --overwrite to the command.

You can now delete .dynatrace-aws.json again.

Configure Serverless

Now all that is left is the final Serverless project configuration.
For that, create a file serverless.yml in your Lambda directory and paste the following configuration into it:
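A configuration along the following lines matches this setup – the runtime, memory size, function name, and SSM parameter path reflect values used elsewhere in this post, but treat it as a sketch rather than the exact original file:

```yaml
service: dynatrace-lambda-sample

provider:
  name: aws
  runtime: nodejs8.10
  memorySize: 256
  environment:
    # Resolve the Dynatrace credentials from AWS Systems Manager at deploy time
    DT_LAMBDA_OPTIONS: ${ssm:/dynatrace/lambda/sample/DT_LAMBDA_OPTIONS}

functions:
  hello:
    handler: index.hello
    events:
      - http:
          path: hello
          method: get
```

The ${ssm:...} variable tells Serverless to look the value up in the Systems Manager Parameter Store, so the credentials never end up in your repository.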


Now run serverless deploy from your command line.

Now – utilizing the AWS APIs – Serverless does all the heavy lifting for us, as the console output shows:

$ serverless deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...

Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (7.39 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...

Serverless: Stack update finished...
Service Information
service: dynatrace-lambda-sample
stage: dev
region: us-east-1
stack: dynatrace-lambda-sample-dev
api keys:
GET - https://*****
hello: dynatrace-lambda-sample-dev-hello

Here are a few more useful serverless commands:

  • If you want to see what’s going on within Lambda, run serverless logs -f hello -t – it will tail the CloudWatch logs and show them in your terminal.
  • If you want to change anything, simply run serverless deploy again.
  • If you want to roll back and clean up everything you have done, run serverless remove.

See it in Dynatrace

Now the big moment has come: what will this look like in Dynatrace?
Let’s hit the endpoint a few times in the browser and then open Dynatrace. I used ab -c 10 -n 1000 for this.
In Technologies / Node, you should see the AWS Lambda function.

Dynatrace Technology Screen with Lambda

This screen already gives us interesting insights. Besides the CPU usage on the left, the right Y-axis shows the number of instances that are currently running. In our case, between 3 and 4 instances handle our concurrency of 10.

If we drill deeper into the service flow, we see the outbound request to the weather API and that it contributes almost 90% of our response time.

Dynatrace Lambda Service Flow

This also means that 90% of the execution cost of the Lambda is caused by the outbound call. If you just measured the response time of the Lambda function without instrumenting it to see outbound calls, this information would be completely hidden in a black box.

PurePath view of a single Lambda Request

Of course, things get way more interesting if the Lambda call is part of a larger transaction.

The following service flow shows the polling service of Dynatrace Davis.

The PurePath shows how the different calls are processed and in which sequence. It also shows to what extent each call contributes to the execution time.

Davis Polling Service PurePath

Monitoring Memory and CPU

This gives you a great angle for optimization – especially on platforms like Node.js, where run time can be reduced by running tasks asynchronously.

Do you want to know how the CPU time is spent? Simply drill into the details of a specific Lambda process.

Lambda CPU Details

In this case – surprisingly – we are spending way too much time in some logging function. I’ll have to discuss that with the team.

Memory utilization is an important metric for Lambda functions as it’s part of the AWS Lambda pricing. The less memory you assign to a function, the cheaper it gets.
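To make that concrete, here is a rough back-of-the-envelope calculation. The per-GB-second price is an assumption based on published AWS pricing at the time of writing – check the AWS pricing page for current numbers:

```javascript
// Rough AWS Lambda compute-cost estimate.
// PRICE_PER_GB_SECOND is an assumption -- verify against the AWS pricing page.
const PRICE_PER_GB_SECOND = 0.00001667;

function monthlyComputeCost(memoryMb, avgDurationSeconds, invocationsPerMonth) {
  // Lambda compute is billed in GB-seconds: allocated memory x execution time
  const gbSeconds = (memoryMb / 1024) * avgDurationSeconds * invocationsPerMonth;
  return gbSeconds * PRICE_PER_GB_SECOND;
}

// One million invocations at an average duration of 300 ms:
console.log(monthlyComputeCost(256, 0.3, 1e6).toFixed(2)); // → 1.25
console.log(monthlyComputeCost(128, 0.3, 1e6).toFixed(2)); // → 0.63
```

Halving the memory allocation halves the compute portion of the bill (AWS also charges per request, which this sketch ignores), so right-sizing based on the memory utilization chart pays off directly.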

Lambda Memory Utilization

We can see that our function roughly scratches the 128 MB mark. We could try to reduce the configured memory size from 256 MB to 128 MB and see if that would actually be enough.


Conclusion

  • While AWS can be complex, third-party tools like Serverless can make deployment a breeze.
  • Dynatrace is a perfect match for such scenarios as it requires minimal initial setup and zero code changes.
  • With Dynatrace in place, you instantly get valuable deep-level insights into your Lambda function. This information can help you optimize your code and Lambda setup.
  • Lambda Monitoring is an early access feature. To join our EAP program for Lambda, please fill out this form.
