Managing report server farms

RUM Console ► Deployment ► Manage devices, Farms tab

You can view the configuration and connection status of a farm and perform basic actions for every component within a farm.

Farm synchronization

All nodes and clusters within a farm must synchronize their configurations and settings with the primary node and primary cluster. The primary node of every cluster monitors the configuration status of all other nodes of the same type in that cluster. Any changes to the primary node configuration or settings are broadcast to the remaining nodes. Changes to settings are detected and applied as the nodes receive data batches for analysis. If a node receives no analysis data, synchronization occurs approximately every 10 minutes.

Note: Synchronization occurs only after all components have been upgraded.
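As a mental model only, the fallback behavior can be pictured as a polling loop. The Python sketch below is purely illustrative; the receive_batch, apply_settings, and synchronize_with_primary names are invented for the example and do not correspond to any exposed API.

  import time

  FALLBACK_INTERVAL = 10 * 60  # approximately 10 minutes, in seconds

  def settings_sync_loop(node):
      """Illustrative model: settings updates normally arrive with incoming
      data batches; if no data arrives, a timed fallback sync runs."""
      last_sync = time.monotonic()
      while node.is_running():
          batch = node.receive_batch(timeout=30)       # hypothetical call
          if batch is not None:
              node.apply_settings(batch.settings)      # update rides along with data
              last_sync = time.monotonic()
          elif time.monotonic() - last_sync >= FALLBACK_INTERVAL:
              node.synchronize_with_primary()          # timed fallback
              last_sync = time.monotonic()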

If your maintenance tasks require you to stop the report servers, the recommended order of stopping and starting your deployment is as follows (an illustrative script sketch follows each list):

Before you begin stopping individual farm components of your deployment, make sure that none of the components are performing automated jobs such as maintenance or nightly tasks.

Stopping active farm deployments

  1. Stop the primary node in your primary cluster.
  2. (Optional) If present, stop the failover node of the primary node in the primary cluster.
  3. Stop each secondary node and its failover node (if present) in the primary cluster.
  4. Repeat steps 1 through 3 for each secondary cluster (if present).
  5. Stop the RUM Console and the Dynatrace Enterprise Portal.
  6. Stop the Microsoft SQL Server.
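If you script this procedure, the stop order can be captured in a small wrapper. The following Python sketch is illustrative only: the service names are hypothetical placeholders, and for simplicity it issues local net stop commands, whereas in a real farm the nodes run on separate hosts and each command would be executed on the server that hosts the service.

  import subprocess

  # Hypothetical service names -- substitute the names used in your deployment.
  STOP_ORDER = [
      "CAS-ClusterA-Primary",           # 1. primary node in the primary cluster
      "CAS-ClusterA-Primary-Failover",  # 2. its failover node, if present
      "CAS-ClusterA-Secondary1",        # 3. each secondary node and its failover
      # ... steps 1-3 again for each secondary cluster ...
      "RUMConsole",                     # 5. RUM Console and Enterprise Portal
      "DynatraceEnterprisePortal",
      "MSSQLSERVER",                    # 6. Microsoft SQL Server last
  ]

  for name in STOP_ORDER:
      # 'net stop' blocks until the service reports that it has stopped.
      subprocess.run(["net", "stop", name], check=True)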

The starting order of the components is the reverse of the stopping order; however, the recommended order of starting the report servers within a cluster is the same as for stopping them:

Starting already configured farm deployments

  1. Start the Microsoft SQL Server.
  2. Start the RUM Console and the Dynatrace Enterprise Portal.
  3. Start the primary node in your primary cluster.
  4. (Optional) If present, start the failover node of the primary node in the primary cluster.
  5. Start each secondary node and its failover node (if present) in the primary cluster.
  6. Repeat steps 3 through 5 for each secondary cluster (if present).
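The same sketch works for starting; note that within a cluster the report servers start in the same order as they stop, rather than in reverse. Again, the service names below are hypothetical placeholders.

  import subprocess

  START_ORDER = [
      "MSSQLSERVER",                    # 1. Microsoft SQL Server first
      "RUMConsole",                     # 2. RUM Console and Enterprise Portal
      "DynatraceEnterprisePortal",
      "CAS-ClusterA-Primary",           # 3. primary node in the primary cluster
      "CAS-ClusterA-Primary-Failover",  # 4. its failover node, if present
      "CAS-ClusterA-Secondary1",        # 5. each secondary node and its failover
      # ... steps 3-5 again for each secondary cluster ...
  ]

  for name in START_ORDER:
      subprocess.run(["net", "start", name], check=True)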

Farm status

The tooltip for the primary node of each cluster displays the connection and configuration status for all primary nodes in the clusters with which it shares a configuration, based on the synchronization options.

You can examine the connection and synchronization status at a glance. Hover the cursor over any node to view the connection and configuration status for that particular node. The tooltip indicates the status only for the connections and configurations directly related to that node.

  • The tooltip for a secondary node within a cluster displays the connection and configuration status only for the primary node of that cluster, because the primary node is the only directly related node with which the secondary node exchanges data. Every secondary node within a cluster therefore has a single status line, for the connection and configuration to the primary node of that cluster.
  • The tooltip for the primary node of each cluster lists status lines for all nodes within its own cluster, plus the connection and configuration status to the primary node of the primary cluster, as modeled in the sketch below.
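To make the relationship concrete, this small Python sketch models which status lines a node's tooltip shows based on the node's role. The dict structure, function, and host names are invented for illustration; the product exposes no such API.

  def tooltip_status_lines(node, cluster, primary_cluster):
      """Illustrative only: which status lines a node's tooltip shows."""
      if node == cluster["primary"]:
          # A cluster primary lists every other node in its own cluster...
          lines = [f"status of {n}" for n in cluster["nodes"] if n != node]
          # ...plus its link to the primary node of the primary cluster.
          if cluster is not primary_cluster:
              lines.append(f"status of {primary_cluster['primary']}")
          return lines
      # A secondary node shows a single line: its link to the cluster primary.
      return [f"status of {cluster['primary']}"]

  cluster_a = {"primary": "casA-1", "nodes": ["casA-1", "casA-2"]}
  cluster_b = {"primary": "casB-1", "nodes": ["casB-1", "casB-2"]}
  print(tooltip_status_lines("casA-2", cluster_a, cluster_a))  # one status line
  print(tooltip_status_lines("casB-1", cluster_b, cluster_a))  # own cluster + primary cluster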

In this example, the tooltip displays the connection and configuration status of the primary node in the primary cluster (Cluster A). Because that primary node shares its configuration with all other primary servers in the farm, its tooltip displays the largest number of status lines and can be used to view the configuration and connection status of the entire farm.

Farm connection and configuration tooltip

Farm management

The only administrative action you can perform that affects the entire farm is to rename it. Click the default Farm 1 name to rename the farm.

Farm rename

Individual components of the farm can be administered independently via the action icon.

Removing a farm definition

To completely remove a farm, delete all clusters belonging to that farm one by one.

As you delete clusters, you modify the farm definition stored on the primary node of the primary cluster, which creates a draft configuration for that primary node. Although all nodes from the deleted clusters automatically become standalone servers in the device list, they cannot be configured until the draft configuration of the primary node in the primary cluster is published. It is good practice to publish draft configurations immediately after making any changes to the farm definition.

Applying patches to farm deployments

The patch description should state the order in which the patch must be applied in farm deployments. If no such information is provided, follow these general rules:

  • Apply patches in the order below.
  • Keep all nodes on the same version.

Patch application order (an illustrative sequencing sketch follows the list):

  1. Apply the patch to the primary node in the primary cluster.
  2. Apply the patch to each secondary node in the primary cluster.
  3. Apply the patch to the primary node in a secondary cluster.
  4. Apply the patch to each secondary node in that secondary cluster.
  5. Repeat steps 3 and 4 for each remaining secondary cluster (if applicable).
  6. Apply the patch to all the failover nodes in the primary cluster and secondary clusters (if applicable).
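If you drive patch rollout from a script, the required sequencing can be expressed directly. The Python sketch below is illustrative only: apply_patch() is a hypothetical placeholder for whatever mechanism actually installs the patch on a host, and the host names are invented.

  # apply_patch() is a hypothetical placeholder for your patch installer.
  def apply_patch(host):
      print(f"patching {host} ...")

  def patch_farm(primary_cluster, secondary_clusters):
      # Steps 1-2: the primary cluster, primary node first.
      apply_patch(primary_cluster["primary"])
      for node in primary_cluster["secondaries"]:
          apply_patch(node)
      # Steps 3-5: each secondary cluster, primary node first.
      for cluster in secondary_clusters:
          apply_patch(cluster["primary"])
          for node in cluster["secondaries"]:
              apply_patch(node)
      # Step 6: all failover nodes last, across all clusters.
      for cluster in [primary_cluster, *secondary_clusters]:
          for node in cluster.get("failovers", []):
              apply_patch(node)

  patch_farm(
      {"primary": "casA-1", "secondaries": ["casA-2"], "failovers": ["casA-1f"]},
      [{"primary": "casB-1", "secondaries": ["casB-2"], "failovers": []}],
  )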

Database server in farm deployments

Each node within the cluster can use a centralized database instance; however, this configuration generates additional traffic between the nodes and may impact the performance of data processing and reporting. We recommend that each node within the farm use its own database instance, stored locally on that node.
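To illustrate the difference, the two hypothetical configurations below contrast the recommended local layout with a centralized one. All keys and connection strings are invented for the example.

  # Recommended: each node uses a SQL Server instance on its own host.
  local_database_per_node = {
      "cas-node-1": "Server=localhost;Database=CAS;Trusted_Connection=yes",
      "cas-node-2": "Server=localhost;Database=CAS;Trusted_Connection=yes",
  }

  # Possible but discouraged: all nodes share one central instance, which
  # adds inter-node network traffic on every read and write.
  centralized_database = {
      "cas-node-1": "Server=db-central;Database=CAS_node1;Trusted_Connection=yes",
      "cas-node-2": "Server=db-central;Database=CAS_node2;Trusted_Connection=yes",
  }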

Cluster administration

As with renaming the farm, you can click the cluster name to rename the cluster. Click the action icon to perform one of the basic cluster operations:

Farm cluster actions

Add node

Enables you to select an additional node from a list of standalone report servers and add it to this cluster.

Open configuration

Enables you to define options that affect all nodes assigned to the edited cluster. For more information, see Cluster Configuration.

Delete cluster

Enables you to remove the cluster and all of its nodes from the farm. All nodes attached to a deleted cluster automatically become standalone report servers.

Note

Report servers removed from an active cluster retain the data and options that were set while operating within a farm.

Node administration

Each node supports three basic actions: add failover, delete node, and reset node. Secondary nodes also offer an additional option to set the selected node as primary.

Node actions

Add failover

Use this option to select a standalone report server of the same type and attach it as a failover server. For more information, see Report server failover overview.

Set as Primary

Use this option to set the selected node as the primary node for this cluster. This demotes the current primary node to a regular node within the cluster and promotes the selected node to primary. The configuration of the new primary node is distributed to the rest of the nodes attached to the cluster within approximately 5 to 10 minutes.

Delete node

Use this option to remove this node from the cluster and from the farm. This action resets the node as a standalone report server, but it retains the data and options used while operating within the cluster. Removing a node from a monitoring cluster creates an incomplete data set because the analysis is based on equal distribution of monitored traffic between all nodes within the cluster.

If the removed node did not have a failover server, the historical data stored on that node will not be available for reporting and will be missing from any historical reports generated by that cluster.

Note

When a node is removed from a cluster, the load is not redistributed until the next execution of the nightly tasks, and all historical data remains with that node.

There is no way to preserve the historical data stored on a node if you plan to permanently remove that node from the cluster.

Reset node (available only in draft mode)

Use this option to reset the node, which purges all data from its database and prepares the node to be synchronized within the cluster.

Note

Resetting a node deletes all data in the report server's database. We recommend that you reset a node only if it has previously been part of another cluster, or has been monitoring traffic as a standalone report server and contains monitoring data.

Failover administration

Each failover node supports two basic actions: Swap failover with primary and Delete failover.

Failover actions

Swap failover with primary

After a failover node is attached to an active node (its parent node), you can swap their places, so that the old failover node becomes the new active node and the old active node becomes the failover node for the new active node. Because the active and failover nodes have identical configurations and contain the same data, the swap occurs seamlessly, and the new active and failover nodes become operational as soon as you publish the configuration.

Delete failover

You can use this option to remove a failover node from its primary node. This action changes the failover node's role to a standalone report server, but it retains the data and options it used while operating as a failover node.