Managing and migrating a legacy report server farm

In earlier releases, a number of slave servers analyzed data and reported to a single master server. This legacy concept differs significantly from the currently supported farm-cluster-node hierarchy.

Note on product and component renaming
DC RUM 2017 May release → Dynatrace NAM 2018 release
RUM Console → NAM Console
Central Analysis Server (CAS) → NAM Server
Advanced Diagnostic Server (ADS) → Advanced Diagnostics on Demand feature of NAM Server
Agentless Monitoring Device (AMD) → NAM Probe

The current NAM Console can read and display legacy farm definitions, but because the farm concept implementation has changed, legacy farm components are converted and their place in the farm hierarchy changes.

Important

To take full advantage of the new farm-cluster-node hierarchy and any automatic configuration features associated with it, you should consider a database reset when migrating a farm deployment from a previous version to the current version. While it may be possible to migrate from the previous farm structure to the new hierarchy without a database reset, the definitions, configurations, and historical data may not be compatible with the new scheme.

Migrating legacy report server farm

The preferred upgrade procedure is to upgrade the components in the following order:

  1. Upgrade the CSS.

     This step is applicable only to upgrades to DC RUM 12.4.x.

     Starting with DC RUM 2017, the NAM Console takes on all NAM user access and security management duties, replacing the CSS.

  2. Upgrade the RUM Console.

  3. Upgrade all of the report servers participating in the farm.

  4. Upgrade all of the data sources participating in the farm.

  5. If needed, add all of the components to the device list in the RUM Console.

After the upgrade, there is no need to perform any post-upgrade reconfiguration. The upgrade applies the new farm concept in the following way:

Legacy Farm Definitions → Current Farm Definitions
Master CAS server → Primary node in a primary cluster
Slave CAS server → Primary node in a secondary cluster
Slave ADS server → Primary node in a secondary cluster
CAS/ADS failover server for a master CAS/ADS server → CAS/ADS failover server for a primary node in a primary cluster
CAS/ADS failover server for a slave CAS/ADS server → CAS/ADS failover server for a primary node of a secondary cluster

Logical conversion of the legacy master-slave concept

Figure: Farm migration

While all of the legacy farm definitions will upgrade to the new farm-cluster-node hierarchy automatically, some upgraded farm deployments should be restructured to take advantage of the new farm concept.

If you used the legacy master-slave concept to load balance the report servers, the upgraded farm will contain single-node clusters that use identical configurations. To obtain a valid configuration with the proper farm hierarchy, consolidate all single-node clusters with the same configuration into one cluster. The clusters that are converted into nodes will balance the workload and acquire their configuration and settings from the cluster's primary node.

Consolidating the nodes into one cluster involves removing a node from a single-node cluster and adding that node to the load balancing cluster. This procedure includes a step where you can select a database reset for the node that you are adding. If the new node's site and user option settings differ from those of the primary node of the edited cluster, that node is automatically marked for a database reset on the Clear nodes database screen of the wizard. Because this action is required, the marked option is disabled. Both the Site options and the User options in the Central Analysis Server configuration must match those of the primary node. If you are tracking user IP addresses from selected ranges and these ranges differ between the primary node and the node being added, we recommend that you remove the User IP address ranges in the configuration options of the node being added. For more information, see CAS Configuration.

Post-upgrade reconfiguration

Figure: Migration reconfiguration

Important

In your legacy deployment, if you used the client and server range settings, or application and service names, to load balance the incoming traffic, we recommend that you review these settings, as they may cause a portion of incoming traffic to be ignored.

To modify these settings in the NAM 12.3 farm configuration:

  1. Log in to the primary node in the load balancing cluster and go to http://<CAS_URL>/diagconsole.

  2. Under the Configuration Management section, click Advanced Properties Editor.

  3. Using the search box, find the RtmJob properties.

  4. Modify or clear the RtmJob.client, RtmJob.server, RtmJob.service_name, or RtmJob.appl_name settings, as shown in the example after these steps.
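
As a minimal illustration, clearing all four properties in the Advanced Properties Editor would leave entries like the following. The property names come from the steps above; the commented values are hypothetical examples of legacy load-balancing settings, and whether you clear or merely adjust them depends on your deployment:

   # Hypothetical legacy values, e.g. RtmJob.client=10.1.0.0/16, RtmJob.appl_name=Intranet
   # Cleared here so this node no longer ignores traffic outside those ranges or names:
   RtmJob.client=
   RtmJob.server=
   RtmJob.service_name=
   RtmJob.appl_name=

If you keep any of these settings instead of clearing them, verify that the configured ranges and names still cover all of the traffic you expect the cluster to process.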

Failover report server migration

The legacy farm concept permitted the assignment of multiple failover nodes to a single report server. The current farm concept permits only one failover node per node within the cluster. Upgrading multiple failover nodes for a single report server is not supported. If you have such a deployment, detach all additional failover nodes from the report server, leaving only the one failover node that you want included in the upgraded farm structure.

Note

If you leave multiple failover nodes attached during migration, the upgrade will complete properly, but the upgraded farm hierarchy will contain only the failover node that was listed first in the configuration database. The automatic upgrade does not let you select which device to use as the failover node in the upgraded farm.