# Tuning ActiveMQ

## Multi-tenancy with Meridian and ActiveMQ

The queue names Meridian uses are prefixed with a constant value. If many Meridian instances are configured to use the same broker, these queues end up being shared among the instances, which is usually not desired. To isolate multiple instances on the same broker, customize the prefix by setting the `org.opennms.instance.id` system property to a value that is unique per instance.
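The isolation works purely by name: every destination an instance creates carries its instance ID as the first dot-separated segment, so two instances with distinct IDs never share a queue. The sketch below illustrates that idea with a hypothetical `queue_name` helper and segment layout; it is not a reproduction of Meridian's internal naming code.

```python
def queue_name(instance_id, module, location=None):
    """Build a broker destination name prefixed with the instance ID.

    Hypothetical layout: location-keyed queues gain an extra segment
    between the prefix and the module name.
    """
    parts = [instance_id]
    if location is not None:
        parts.append(location)
    parts.append(module)
    return ".".join(parts)

# Two instances with distinct IDs never collide on the same broker.
print(queue_name("MyNMS", "RPC", location="Office"))     # MyNMS.Office.RPC
print(queue_name("OtherNMS", "RPC", location="Office"))  # OtherNMS.Office.RPC
```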

Set the property in `${OPENNMS_HOME}/etc/opennms.properties.d/instance-id.properties`:

```
org.opennms.instance.id=MyNMS
```

Update the Minion's instance ID to match the Meridian instance by setting the same property in `${MINION_HOME}/etc/custom.system.properties`:

```
org.opennms.instance.id=MyNMS
```
If you change the instance ID setting when using the embedded broker, you must also update the authorization section in the broker's configuration to reflect the new prefix. Edit `${OPENNMS_HOME}/etc/opennms-activemq.xml`:
```xml
<authorizationPlugin>
  <map>
    <authorizationMap>
      <authorizationEntries>
        <!-- Users in the minion role can write/create queues that are not keyed by location -->
        <authorizationEntry queue="MyNMS.*.*" write="minion" admin="minion" />
        <!-- Users in the minion role can read/create from queues that are keyed by location -->
        <authorizationEntry queue="MyNMS.*.*.*" read="minion" admin="minion" />
      </authorizationEntries>
      <!-- Allow all users to read/write/create temporary destinations (by omitting a <tempDestinationAuthorizationEntry>) -->
    </authorizationMap>
  </map>
</authorizationPlugin>
```

## Tuning the RPC client in OpenNMS

The following system properties can be used to tune the thread pool used to issue RPCs:

| Name | Default | Description |
| --- | --- | --- |
| `org.opennms.ipc.rpc.threads` | 10 | Number of threads that are always active. |
| `org.opennms.ipc.rpc.threads.max` | 20 | Maximum number of threads that can be active. Threads above the core count exit after remaining unused for some period of time. |
| `org.opennms.ipc.rpc.queue.max` | 1000 | Maximum number of requests to queue. Set to `-1` for an unlimited queue. |

Use the `opennms:stress-rpc` Karaf shell command to help evaluate and tune performance.
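For example, these properties can be placed in a file under `${OPENNMS_HOME}/etc/opennms.properties.d/` (the filename `rpc.properties` below is arbitrary; pick any name ending in `.properties`):

```
# ${OPENNMS_HOME}/etc/opennms.properties.d/rpc.properties (example filename)
# Allow more queued RPC requests before new requests are rejected (default: 1000).
org.opennms.ipc.rpc.queue.max=5000
```

System properties are read at startup, so restart Meridian for the change to take effect.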

## Troubleshooting RPC failures

Symptoms of RPC failures include missed polls, missed data collection attempts, and the inability to provision or rescan existing nodes. For these reasons, it is important to ensure that RPC-related communication with the Minions at the various monitoring locations remains healthy.

If you want to verify that a specific location is operating correctly, make sure that:

1. Nodes exist and were automatically provisioned for all of the Minions at the location

2. The `Minion-Heartbeat`, `Minion-RPC` and `JMX-Minion` services are online for one or more Minions at the location

3. Response time graphs for the `Minion-RPC` service are populated and contain reasonable values

   - These response time graphs can be found under the `127.0.0.1` response time resource on the Minion node

   - Values should typically be under 100 ms but may vary based on network latency

4. Resource graphs for the `JMX-Minion` service are populated and contain reasonable values

To interactively test RPC communication with a remote location use the `opennms:poll` command from the Karaf shell:

```
opennms:poll -l LOCATION -c org.opennms.netmgt.poller.monitors.IcmpMonitor 127.0.0.1
```

Replace `LOCATION` in the command above with the name of the monitoring location you want to test.