Set up Message Broker

Distributing Meridian components like Minions and Sentinel requires a messaging infrastructure. This section describes how to set up your Meridian Core instance to use an existing messaging infrastructure based on ActiveMQ or Apache Kafka.

Meridian Core comes with an embedded ActiveMQ system that, by default, cannot be reached from external networks. To get started quickly, this guide describes how to enable and configure the embedded ActiveMQ instance.

We recommend using Apache Kafka in production, as it is scalable to process high workloads. The embedded ActiveMQ instance is for convenience to get started quickly in a test environment. It is not suitable for production workloads.

Objectives

  • Configure Meridian Core instance to use a message broker for communication

  • Create a Minion user to authenticate the communication channels

Create a Minion user

Credentials secure the communication between the Meridian Core instance and the Minion component. The examples use the name my-minion-user with password my-minion-password. Make sure to use a different, more secure password.

  1. Log in to the web UI as an administrative user.

  2. Click the gears icon and click Configure Users, Groups and On-Call Roles → Configure Users.

  3. Click Add new user.

  4. Type a login name (my-minion-user) and password (my-minion-password) and click OK.

    The new Minion user appears in the user list.

  5. Click the edit icon beside the new user.

  6. In the Security Roles area, assign the ROLE_MINION security role.

    1. Optional: fill in a comment for the Minion user’s location and purpose.

  7. Click Finish.

Configure message broker

Configuration takes place in the Meridian ${OPENNMS_HOME}/etc directory. We reference etc relative to the Meridian Core home directory, which depends on your operating system: /usr/share/opennms for Debian/Ubuntu or /opt/opennms for CentOS/RHEL.

  • Kafka

  • ActiveMQ

  • Embedded ActiveMQ

  • gRPC

Create a configuration file for Kafka settings
sudo vi etc/opennms.properties.d/kafka.properties
Configure Kafka
org.opennms.activemq.broker.disable=true(1)
org.opennms.core.ipc.strategy=kafka(2)
org.opennms.core.ipc.sink.initialSleepTime=60000(3)
org.opennms.core.ipc.kafka.bootstrap.servers=my-kafka-ip-1:9092,my-kafka-ip-2:9092(4)
1 Disable the embedded ActiveMQ broker.
2 Use Kafka for remote procedure calls (RPC).
3 Ensure that messages are not consumed from Kafka for Sink until the system has fully initialized. Default is 60 seconds.
4 Connect to the listed Kafka nodes. Adjust the IP addresses or FQDNs and the Kafka port (9092) accordingly.
Enabling Kafka broker settings requires that you have a Kafka cluster installed and running.
If you set more than one Kafka node in bootstrap.servers, the client attempts to connect to the first entry. If that succeeds, it discovers the whole broker topology from that node. The other entries are used only if the connection to the first entry fails.

Any valid Kafka configuration property can be set with the org.opennms.core.ipc.kafka prefix.

Example config using SASL/SCRAM with TLS.
org.opennms.core.ipc.strategy=kafka
org.opennms.core.ipc.sink.initialSleepTime=60000
org.opennms.core.ipc.kafka.bootstrap.servers=my-kafka-ip-1:9096,my-kafka-ip-2:9096
org.opennms.core.ipc.kafka.security.protocol=SASL_SSL
org.opennms.core.ipc.kafka.sasl.mechanism=SCRAM-SHA-512
org.opennms.core.ipc.kafka.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="opennms-ipc" password="kafka";
You can set module-specific config for modules sink, rpc, and twin with prefixes org.opennms.core.ipc.sink.kafka, org.opennms.core.ipc.rpc.kafka, and org.opennms.core.ipc.twin.kafka, respectively. Module-specific config takes precedence over common config with prefix org.opennms.core.ipc.kafka.
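As a hypothetical illustration of this precedence, the following fragment keeps the common cluster for sink and twin but routes RPC traffic to a dedicated cluster; the second host list is an assumption for this example:

```properties
# Common Kafka settings, used by sink, rpc, and twin unless overridden
org.opennms.core.ipc.kafka.bootstrap.servers=my-kafka-ip-1:9092,my-kafka-ip-2:9092
# Module-specific override: RPC calls use a dedicated cluster instead
org.opennms.core.ipc.rpc.kafka.bootstrap.servers=my-rpc-kafka-ip-1:9092
```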
Meridian Core requires the Kafka broker configuration option auto.create.topics.enable to be set to true.
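On the Kafka side, this corresponds to the following line in each broker's server.properties (true is also Kafka's default, so it usually needs no change):

```properties
# Kafka broker setting: let Meridian create its topics on first use
auto.create.topics.enable=true
```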
Restart the Meridian Core instance to apply the changes
sudo systemctl restart opennms

Configure Meridian components to use your existing ActiveMQ instance.

Create a properties file for the ActiveMQ settings
sudo vi etc/opennms.properties.d/activemq.properties
Disable embedded ActiveMQ and set the credentials for your ActiveMQ endpoints
org.opennms.activemq.broker.disable=true(1)
org.opennms.activemq.broker.url=failover:tcp://my-activemq:61616(2)
org.opennms.activemq.broker.username=my-activemq-user(3)
org.opennms.activemq.broker.password=my-activemq-password(4)
org.opennms.activemq.client.max-connections=8(5)
org.opennms.activemq.client.concurrent-consumers=10(6)
1 Disable embedded ActiveMQ in Meridian Core instance.
2 Set the URL endpoint to your dedicated ActiveMQ instance. Replace my-activemq:61616 accordingly. If you use ActiveMQ with SSL, replace tcp with ssl.
3 Set a user name for ActiveMQ authentication.
4 Set the password for ActiveMQ authentication.
5 By default, we allow a maximum of 8 connections. Increase this depending on the size of your deployment.
6 By default, we allow a maximum of 10 concurrent consumers. Increase this depending on the size of your deployment.
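For example, if your ActiveMQ broker is TLS-enabled, the broker URL shown above would use the ssl scheme instead of tcp; the hostname and port here are placeholders:

```properties
# TLS-enabled ActiveMQ endpoint: ssl replaces tcp in the failover URL
org.opennms.activemq.broker.url=failover:ssl://my-activemq:61616
```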
Restart Meridian Core instance to apply the changes
sudo systemctl restart opennms

Meridian Core has an embedded ActiveMQ instance for convenience that you can enable. It uses the same credentials configured from the web UI for users in the ROLE_MINION role.

Edit ActiveMQ configuration file
sudo vi etc/opennms-activemq.xml
Uncomment the transport connector listening on 0.0.0.0 and save the file
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?useJmx=false&amp;maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
Restart Meridian Core instance
sudo systemctl restart opennms
Verify that ActiveMQ port is available on public network interface
ss -lnpt sport = :61616
Verify listening 61616/tcp on all interfaces
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
LISTEN  0       128     *:61616             *:*   users:(("java",pid=1,fd=706))
If you run a host firewall, allow port 61616/tcp, as in the following example with firewalld:
sudo firewall-cmd --permanent --add-port=61616/tcp
sudo firewall-cmd --reload
Create a configuration file for gRPC settings
sudo vi etc/opennms.properties.d/grpc.properties
Set OSGi as IPC strategy
org.opennms.core.ipc.strategy=osgi
Create a file to install gRPC features on startup
sudo vi etc/featuresBoot.d/grpc.boot
Add the gRPC server features
opennms-core-ipc-grpc-server
Apply the changes with Meridian Core instance restart
sudo systemctl restart opennms

The gRPC server listens on port 8990/tcp by default. Use ss -lnpt sport = :8990 to verify that the port is listening.

Optional: to enable TLS for gRPC, you must provide certificate files and enable TLS with the following commands.

Connect to the Karaf shell
ssh -p 8101 admin@localhost
Configure TLS and certificate parameters
config:edit org.opennms.core.ipc.grpc.server
config:property-set tls.enabled true(1)
config:property-set server.cert.filepath /custom-path/server.crt(2)
config:property-set server.private.key.filepath /custom-path/server.pem(3)
config:property-set trust.cert.filepath /custom-path/ca.crt(4)
config:update(5)
1 Enable TLS for the gRPC server.
2 Set the path to your server certificate file.
3 Set the path to the server certificate's private key file.
4 Set the path to your CA certificate file.
5 Save and update the configuration.

Optionally, you can set a maximum message size for gRPC. The value must match the setting on the Minion. The default maximum message size is 10 MiB.
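The value 10485760 used below is 10 MiB expressed in bytes, which you can verify with shell arithmetic:

```shell
# 10 MiB in bytes, the value passed to max.message.size
echo $((10 * 1024 * 1024))
# prints 10485760
```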

Configure maximum message size for gRPC in the Karaf shell
config:edit org.opennms.core.ipc.grpc.client
config:property-set max.message.size 10485760
config:update
Apply the changes with Meridian Core instance restart
sudo systemctl restart opennms