Monitor dockerized apps with AppMon - Kubernetes and OpenShift

The page How to monitor dockerized apps with AppMon describes how to monitor Dockerized apps in vanilla Docker environments with AppMon.

This page elaborates on monitoring Dockerized applications in Kubernetes and Red Hat OpenShift (v3) environments. If you are unsure how these two container platform technologies relate to each other, think of OpenShift as an enterprise distribution of Kubernetes.

As described in How to monitor dockerized apps with AppMon, depending on your particular situation, you may find one of the following approaches more suitable. Pros and cons of each are listed.

Option A: inheritance-based approach

The goal of the inheritance-based approach is to put the AppMon agent into your Docker base images. Since Kubernetes and OpenShift are both container platforms, this approach allows you to reuse your monitoring-enabled images on either of them. However, because OpenShift is a security-focused container platform, it prohibits running container processes as root by default (which is how most Docker images are built). See the OpenShift Container Image Guidelines to learn how to prepare your Docker images for OpenShift, and see How to monitor dockerized apps with AppMon for information on how to apply this approach to your Docker images.

Example: Java

Since the particular technologies inside your base images are already instrumented by AppMon, you only need a few runtime configuration settings to bind the agent to the AppMon Collector.

The following example defines a ReplicationController with a container named my-app inside an inline Pod template. The environment variables DT_AGENT_NAME and DT_AGENT_COLLECTOR (as defined in How to monitor dockerized apps with AppMon) override the respective values provided in a fictitious acmeco/my-app base image.

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: acmeco/my-app
        env:
        - name: DT_AGENT_NAME
          value: "my-app"
        - name: DT_AGENT_COLLECTOR
          value: "dtappmon-collector.acmeco.com:9998"
        ports:
        - containerPort: 8080
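
As a sketch of how these overrides typically take effect: the base image's entrypoint can read the DT_* variables and assemble the Java agent option from them. The agent path and the name=/collector= option syntax below follow the common AppMon Java agent convention, but the exact entrypoint is an assumption about the fictitious acmeco/my-app image.

```shell
# Hypothetical entrypoint fragment of acmeco/my-app: translate the
# DT_AGENT_NAME / DT_AGENT_COLLECTOR overrides into the -agentpath option
# that loads the AppMon Java agent. The install path is an assumption.
DT_AGENT_NAME="${DT_AGENT_NAME:-my-app}"
DT_AGENT_COLLECTOR="${DT_AGENT_COLLECTOR:-dtappmon-collector.acmeco.com:9998}"
JAVA_OPTS="-agentpath:/opt/dynatrace/agent/lib64/libdtagent.so=name=${DT_AGENT_NAME},collector=${DT_AGENT_COLLECTOR}"
echo "$JAVA_OPTS"
```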

Example: NGINX

Since the particular technologies inside your base images are already instrumented by AppMon, you only need a few runtime configuration settings to bind the agent to the AppMon Collector.

The following example defines a ReplicationController with a container named my-app inside an inline Pod template. The environment variables DT_WSAGENT_NAME and DT_WSAGENT_COLLECTOR (as defined in How to monitor dockerized apps with AppMon) override the respective values provided in a fictitious acmeco/my-app base image.

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: acmeco/my-app
        env:
        - name: DT_WSAGENT_NAME
          value: "my-app"
        - name: DT_WSAGENT_COLLECTOR
          value: "dtappmon-collector.acmeco.com:9998"
        ports:
        - containerPort: 80
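
For the web server agent, a monitoring-enabled NGINX base image would typically apply the DT_WSAGENT_* overrides to the agent's .ini file at startup. The sketch below rewrites a temporary stand-in file; the Name and Server keys and the file location are assumptions about the agent's configuration format, not guaranteed specifics.

```shell
# Hypothetical entrypoint step of a monitoring-enabled NGINX base image:
# patch the web server agent's .ini so it reports under the right name and
# connects to the right Collector. Key names are assumptions.
DT_WSAGENT_NAME="${DT_WSAGENT_NAME:-my-app}"
DT_WSAGENT_COLLECTOR="${DT_WSAGENT_COLLECTOR:-dtappmon-collector.acmeco.com:9998}"
ini=$(mktemp)
printf 'Name dummy\nServer localhost\n' > "$ini"   # stand-in for dtwsagent.ini
sed -e "s|^Name .*|Name ${DT_WSAGENT_NAME}|" \
    -e "s|^Server .*|Server ${DT_WSAGENT_COLLECTOR}|" "$ini" > "$ini.patched"
cat "$ini.patched"
```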

Analysis

  • On the pros side, once the agent has been put into your Docker base images, it matters little which container platform runs your applications. This approach also reduces AppMon integration to less frequent, preparatory work that adds no overhead to the frequent process of building, shipping, and running Dockerized applications.

  • On the cons side, depending on your particular use case and on the technologies you use, you must manually integrate each of these technologies. And because this approach tightly couples the agent to a particular technology in the same base image, these base images may have to be rebuilt altogether when switching to a new version of either the technology or AppMon.

Option B: composition-based approach

With the composition-based approach, you use the dynatrace/agent Docker image, which includes all variants of the AppMon agent and can be configured to attach to your existing Docker containers. For information on how this approach actually works, refer to Monitor dockerized apps with AppMon.

Examples

The following examples integrate the agent Docker image with a fictitious container acmeco/my-app inside a ReplicationController. Because Kubernetes-based platforms lack Docker's built-in capability of exchanging volumes between containers (as described in Monitor dockerized apps with AppMon), a few manual steps are required to achieve the same behavior. The following shows what the ReplicationController looks like without the agent in place.

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: acmeco/my-app
        ports:
        - containerPort: 8080

Make the agent available inside the application container

To make the agent available inside the application container, declare a shared Volume inside the Pod definition. In the following example, the volume is backed by an empty directory (emptyDir) on the container's underlying host.

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: acmeco/my-app
        ports:
        - containerPort: 8080
      volumes:
      - name: dtappmon-agent-volume
        emptyDir: {}

Next, add the agent Docker image to the list of containers, have it mount the volume, and use the postStart Container Lifecycle Hook to copy the agent installation from /dynatrace inside the dynatrace/agent image to /srv/dynatrace (which equals the mount path of the volume).

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: dtappmon-agent
        image: dynatrace/agent
        volumeMounts:
        - name: dtappmon-agent-volume
          mountPath: /srv/dynatrace
        lifecycle:
          postStart:
            exec:
              command: [cp, -R, /dynatrace, /srv]
      - name: my-app
        image: acmeco/my-app
        ports:
        - containerPort: 8080
      volumes:
      - name: dtappmon-agent-volume
        emptyDir: {}
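
The effect of the postStart hook can be dry-run locally: cp -R /dynatrace /srv places the whole installation directory at /srv/dynatrace, i.e. directly under the volume's mount path. The directory layout below mirrors the example; the file contents are fake placeholders.

```shell
# Local dry run of the postStart hook semantics (`cp -R /dynatrace /srv`),
# rooted in a temp dir so it is safe to execute anywhere.
root=$(mktemp -d)
mkdir -p "$root/dynatrace/agent/lib64" "$root/srv"
touch "$root/dynatrace/agent/lib64/libdtagent.so"   # placeholder agent library
cp -R "$root/dynatrace" "$root/srv"
# The agent now appears under .../srv/dynatrace, matching the mountPath.
ls "$root/srv/dynatrace/agent/lib64"
```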

The dtappmon-agent container must be defined before any application containers that want to make use of it. If this order is not maintained, the application container may not be able to pick up the agent once it has started. Finally, the application container mounts the agent installation directory from the shared volume.

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: dtappmon-agent
        image: dynatrace/agent
        volumeMounts:
        - name: dtappmon-agent-volume
          mountPath: /srv/dynatrace
        lifecycle:
          postStart:
            exec:
              command: [cp, -R, /dynatrace, /srv]
      - name: my-app
        image: acmeco/my-app
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: dtappmon-agent-volume
          mountPath: /srv/dynatrace
      volumes:
      - name: dtappmon-agent-volume
        emptyDir: {}

Load the agent into the application's process: Java

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: dtappmon-agent
        image: dynatrace/agent
        volumeMounts:
        - name: dtappmon-agent-volume
          mountPath: /srv/dynatrace
        lifecycle:
          postStart:
            exec:
              command: [cp, -R, /dynatrace, /srv]
      - name: my-app
        image: acmeco/my-app
        env:
        - name: DT_AGENTNAME
          value: "my-app"
        - name: DT_COLLECTOR
          value: "dtappmon-collector.acmeco.com:9998"
        - name: JAVA_OPTS
          value: "-agentpath:/srv/dynatrace/agent/lib64/libdtagent.so"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: dtappmon-agent-volume
          mountPath: /srv/dynatrace
      volumes:
      - name: dtappmon-agent-volume
        emptyDir: {}
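
For the agent to load, the image's startup command must actually expand JAVA_OPTS on the java command line; agent name and Collector are then picked up from the DT_* environment variables when the agent initializes. The entrypoint fragment and the /app/my-app.jar path below are assumptions about the fictitious acmeco/my-app image.

```shell
# Hypothetical acmeco/my-app entrypoint fragment: append JAVA_OPTS so the
# -agentpath option pointing into the shared volume takes effect.
JAVA_OPTS="-agentpath:/srv/dynatrace/agent/lib64/libdtagent.so"
launch="java ${JAVA_OPTS} -jar /app/my-app.jar"   # jar path is hypothetical
echo "$launch"
```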

Load the agent into the application's process: NGINX

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: dtappmon-agent
        image: dynatrace/agent
        volumeMounts:
        - name: dtappmon-agent-volume
          mountPath: /srv/dynatrace
        lifecycle:
          postStart:
            exec:
              command: [cp, -R, /dynatrace, /srv]
      - name: my-app
        image: acmeco/my-app
        env:
        - name: DT_AGENT_NAME
          value: "my-app"
        - name: DT_AGENT_COLLECTOR
          value: "dtappmon-collector.acmeco.com:9998"
        - name: DT_WSAGENT_BIN64
          value: "/srv/dynatrace/agent/lib64/dtwsagent"
        - name: DT_WSAGENT_INI
          value: "/srv/dynatrace/agent/conf/dtwsagent.ini"
        - name: LD_PRELOAD
          value: "/srv/dynatrace/agent/lib64/libdtagent.so"
        command: ["/bin/sh", "-c", "/srv/dynatrace/run-wsagent.sh \"${DT_WSAGENT_BIN64}\" \"${DT_WSAGENT_INI}\" && nginx -g 'daemon off;'"]
        ports:
        - containerPort: 80
        volumeMounts:
        - name: dtappmon-agent-volume
          mountPath: /srv/dynatrace
      volumes:
      - name: dtappmon-agent-volume
        emptyDir: {}
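
The container command above first runs run-wsagent.sh with the agent binary and .ini paths as positional arguments, then keeps nginx in the foreground while LD_PRELOAD injects the agent library into its processes. The real script ships with the agent image; the function below is only a sketch of the argument handling, with the actual background launch left as a comment.

```shell
# Hypothetical sketch of run-wsagent.sh: take the master agent binary and
# its .ini path as arguments and start the agent before the web server runs.
run_wsagent() {
  wsagent_bin="$1"
  wsagent_ini="$2"
  # In the real container this would background the master agent, e.g.:
  #   "$wsagent_bin" ... &
  printf 'would start %s with config %s\n' "$wsagent_bin" "$wsagent_ini"
}
run_wsagent /srv/dynatrace/agent/lib64/dtwsagent /srv/dynatrace/agent/conf/dtwsagent.ini
```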

This completes the composition-based approach.

Analysis

  • On the pros side, this approach neatly contributes to a clean separation of concerns, which is a design principle in the containers world. Also, you don't have to care about getting agents into your base images; a simple runtime configuration is all you need to get your containers monitored.

  • On the cons side, while the Docker runtime has great support for exchanging volumes between containers, doing so on a container orchestration platform such as Kubernetes or OpenShift can render your application configurations overly complex.