Configuring a report server farm

A farm is a collection of clusters, where each cluster is a collection of nodes (report servers). To create a farm, you must first group at least two nodes (report servers) into a single primary cluster.

By creating a farm and selecting at least two nodes, you automatically create the primary cluster for the new farm. After the primary cluster is defined and the single-cluster farm is created, you can add, remove, and manage additional nodes and clusters. For more information, see Managing report server farms.

Before you begin

Before creating a farm, make sure:

  • All devices that you plan to include in the farm are present in the device list of the RUM Console. All report servers on the device list indicate the current role they are performing within the deployment.
  • All participating devices have valid licenses. For more information, see Licensing a report server farm.

There are four roles that a report server can indicate: Standalone, Primary, Node, and Failover. For more information on each of the roles, see Report Server Farm Concept.
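The relationships above (a farm contains clusters, a cluster contains nodes, each node performs one role) can be sketched as a simple data model. This is an illustrative sketch only; the class and function names are assumptions, not part of the RUM Console API:

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    STANDALONE = "Standalone"  # not part of any farm
    PRIMARY = "Primary"        # shares configuration with its cluster
    NODE = "Node"              # load-balancing member of a cluster
    FAILOVER = "Failover"      # backup for a primary node

@dataclass
class ReportServer:
    name: str
    role: Role = Role.STANDALONE

@dataclass
class Cluster:
    primary: ReportServer
    members: list = field(default_factory=list)

@dataclass
class Farm:
    clusters: list = field(default_factory=list)

def create_farm(primary: ReportServer, second: ReportServer) -> Farm:
    """Creating a farm requires grouping at least two nodes into a
    single primary cluster; the selected nodes change roles."""
    primary.role = Role.PRIMARY
    second.role = Role.NODE
    return Farm(clusters=[Cluster(primary=primary, members=[second])])
```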

Configuration

Open RUM Console ► Deployment ► Manage devices.

On the Devices panel, switch to the Farms tab and click Create Farm to start the farm creation wizard.


Select the farm template that best suits your needs.


You must define the initial structure of the new farm. The structure depends on the purpose of the farm you are creating. Typically, you create a report server farm for one of the following three reasons:

  • You would like multiple report servers to share the workload while analyzing the same traffic. Select Create an initial farm with a single load balanced cluster.

    When you add a new load-balancing node, how soon it can begin analyzing data depends on the node type and on whether the primary node of the edited cluster already contains data. If you are adding a node to a cluster with a newly installed primary node, load balancing is active immediately. If the primary node already operated in this cluster and contains data in its database, the new node begins load balancing data starting at midnight. This delay applies when adding CAS nodes. For more information, see Report server load balancing.

  • You would like to monitor and analyze the same traffic but would like special reporting or alerting configurations based on software services, data sources, or other criteria. Select Create an initial farm with multiple clusters.

  • You would like to have a backup or redundant report server. Select Create an initial farm with a single failover cluster.

Click Next to continue.
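The activation timing for a newly added load-balancing node, as described for the load-balanced-cluster template above, can be sketched as a simple decision. The function name is illustrative and not part of the product:

```python
from datetime import datetime, time, timedelta

def load_balancing_start(now: datetime, primary_has_data: bool) -> datetime:
    """When a newly added node can start load balancing (illustrative sketch).

    A freshly installed primary node allows immediate load balancing;
    a primary that already holds data delays the new node until midnight.
    """
    if not primary_has_data:
        return now  # newly installed primary: active immediately
    # Primary already operated in this cluster: start at the next midnight.
    return datetime.combine(now.date() + timedelta(days=1), time.min)
```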

Select the primary node for the new cluster.

If your deployment contains a large number of available nodes, use the search box to narrow down the list of available nodes.


Each cluster within a farm contains one primary node. That node is responsible for sharing its configurations and settings with the rest of the nodes (report servers) attached to that cluster.

If the primary node you selected for the new cluster already has client or server ranges defined, the new cluster and all secondary nodes within it will limit their monitoring to the accepted IP address ranges defined on the primary node. Monitoring limits defined by client or server ranges propagate to all nodes within a given cluster. If you want the new cluster to monitor all available traffic, you must remove the ranges from the primary node. For more information, see Accepted client or server IP address ranges.
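The propagation rule above amounts to: every node in the cluster monitors an address only if the primary's accepted ranges allow it, and an empty range list means all traffic is monitored. A minimal sketch, assuming CIDR-style range notation (the function names are illustrative):

```python
import ipaddress

def effective_ranges(primary_ranges):
    """Monitoring scope that propagates from the primary node to every
    node in its cluster (illustrative; CIDR notation is an assumption)."""
    if not primary_ranges:
        return None  # no ranges defined: monitor all available traffic
    return [ipaddress.ip_network(r) for r in primary_ranges]

def node_monitors(ip: str, primary_ranges) -> bool:
    """Would a node in this cluster monitor traffic for this address?"""
    ranges = effective_ranges(primary_ranges)
    if ranges is None:
        return True
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ranges)
```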

You can select only one node. This primary node will represent the entire cluster.

Click Next to continue.

Select additional nodes for the new cluster.

This step varies depending on the farm template you selected earlier.

  • If you selected the Create an initial farm with a single load balanced cluster template, you can select any available node and add it to the same cluster.
  • If you selected the Create an initial farm with multiple clusters template, you must select a primary node for the secondary cluster. Because there is only one primary node per cluster, you can select only one node.
  • If you selected the Create an initial farm with a single failover cluster template, you can select only one failover for the primary node selected in the previous step.

Click Next to continue.

Optional: Select nodes to be reset.

Nodes operating as standalone servers may contain data and configurations that are incompatible with the current cluster assignment. Such data can produce inaccurate results, and such configurations can lead to undesired monitoring behavior. Resetting a node purges all records from the node's database and restores its configuration to installation defaults. We recommend that all nodes, whether newly added or moved from another farm or cluster, be reset before being assigned to a new cluster.

Nodes automatically selected for a database reset have site and user option settings in the Central Analysis Server Configuration that are incompatible with the cluster; for such a node to be successfully attached to the cluster, its database must be reset. For more information, see Report Server Nodes.

If you are adding a freshly installed report server containing no monitoring data, or if you are certain that the data or configuration present in the node is compatible or required for current analysis of this new cluster, you can skip this step by leaving the nodes unselected and clicking Finish.
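The pre-selection logic described above can be sketched as follows. The field names and the exact comparison are assumptions for illustration; the product compares site and user option settings internally:

```python
def needs_reset(node_options: dict, cluster_options: dict,
                freshly_installed: bool) -> bool:
    """Would this node be pre-selected for a database reset?

    Illustrative sketch: a freshly installed node holds no data or custom
    configuration to purge; an existing node is flagged when its site/user
    option settings differ from those of the cluster it is joining.
    """
    if freshly_installed:
        return False
    return node_options != cluster_options
```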


Select the nodes to be reset and click Finish to continue.

Confirm that you want to delete the database records on selected report servers.

After the initial farm has been defined, all participating devices indicate their new roles when listed in the device list of the RUM Console. The Farms view will list the new farm with a default farm name.

What to do next

You must publish the configuration to complete the farm definition. After it is published, you can manage the farm by adding and deleting nodes, clusters, and failovers.