Set up Message Broker
Distributed Meridian components like Minions and Sentinel require a messaging infrastructure. This section describes how to set up your Meridian Core instance to use an existing messaging infrastructure based on ActiveMQ or Apache Kafka.
Meridian Core comes with an embedded ActiveMQ system, which by default an external network cannot reach. To get started quickly, this guide describes how to enable and configure the embedded ActiveMQ instance.
We recommend Apache Kafka for production use because it scales to handle high workloads. The embedded ActiveMQ instance is a convenience to get started quickly in a test environment; it is not suitable for production workloads.
Objectives
- Configure Meridian Core instance to use a message broker for communication
- Create a Minion user to authenticate the communication channels
Create a Minion user
Credentials secure the communication between the Meridian Core instance and the Minion component. This example uses the user name my-minion-user with the password my-minion-password. Make sure to use a different, more secure password.
- Log in to the web UI as an administrative user.
- Click the gears icon and click Configure Users, Groups and On-Call Roles → Configure Users.
- Click Add new user.
- Type a login name (my-minion-user) and password (my-minion-password) and click OK. The new Minion user appears in the user list.
- Click the edit icon beside the new user.
- In the Security Roles area, assign the ROLE_MINION security role.
- Optional: fill in a comment for the Minion user’s location and purpose.
- Click Finish.
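Optionally, you can check that the user exists through the REST API. The admin credentials and the default web port 8980 in this example are assumptions; adjust them to match your setup:
curl -u admin:admin http://localhost:8980/opennms/rest/users/my-minion-user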
Configure message broker
Configuration occurs in the Meridian etc directory. We reference etc relative to the OpenNMS Meridian Core home directory. Depending on your operating system, the home directory is /usr/share/opennms for Debian/Ubuntu or /opt/opennms for CentOS/RHEL.
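For example, on a CentOS/RHEL system the commands in this section run from /opt/opennms; adjust the path on Debian/Ubuntu:
cd /opt/opennms
ls etc/opennms.properties.d/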
Configure your Meridian Core instance to use Kafka:
sudo vi etc/opennms.properties.d/kafka.properties
org.opennms.core.ipc.strategy=kafka(1)
org.opennms.core.ipc.sink.initialSleepTime=60000(2)
org.opennms.core.ipc.kafka.bootstrap.servers=my-kafka-ip-1:9092,my-kafka-ip-2:9092(3)
1 | Use Kafka for remote procedure calls (RPC). |
2 | Ensure that Sink messages are not consumed from Kafka until the system has fully initialized. The default is 60 seconds (60000 ms). |
3 | Connect to the specified Kafka nodes. Adjust the IPs or FQDNs and the Kafka port (9092) accordingly. |
If you set more than one Kafka node as bootstrap.servers, the driver attempts to connect to the first entry. If that is successful, the client discovers and knows the whole broker topology. The other entries are used only if the connection to the first entry fails.
You can still set module-specific configuration for Sink IPC with the prefix org.opennms.core.ipc.sink.kafka; the same applies to RPC and Twin. Module-specific configuration takes precedence over the common configuration with the prefix org.opennms.core.ipc.kafka.
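As a sketch, a module-specific override could look like the following. The broker addresses are placeholders; only the Sink module would use the dedicated servers, while all other modules keep the common settings:
org.opennms.core.ipc.kafka.bootstrap.servers=my-kafka-ip-1:9092,my-kafka-ip-2:9092
org.opennms.core.ipc.sink.kafka.bootstrap.servers=my-sink-kafka-ip-1:9092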
sudo systemctl restart opennms
Alternatively, configure your Meridian Core instance to use your existing ActiveMQ instance:
sudo vi etc/opennms.properties.d/activemq.properties
org.opennms.activemq.broker.disable=true(1)
org.opennms.activemq.broker.url=failover:tcp://my-activemq:61616(2)
org.opennms.activemq.broker.username=my-activemq-user(3)
org.opennms.activemq.broker.password=my-activemq-password(4)
org.opennms.activemq.client.max-connections=8(5)
org.opennms.activemq.client.concurrent-consumers=10(6)
1 | Disable embedded ActiveMQ in Meridian Core instance. |
2 | Set the URL endpoint to your dedicated ActiveMQ instance. Replace my-activemq:61616 accordingly. If you use ActiveMQ with SSL, replace tcp with ssl. |
3 | Set a user name for ActiveMQ authentication. |
4 | Set the password for ActiveMQ authentication. |
5 | By default, a maximum of 8 connections is allowed. Increase this value depending on the size of your environment. |
6 | By default, a maximum of 10 concurrent consumers is allowed. Increase this value depending on the size of your environment. |
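For example, if your ActiveMQ broker accepts only SSL connections, the broker URL could look like the following; the host name and port 61617 are placeholders for your broker's SSL endpoint:
org.opennms.activemq.broker.url=failover:ssl://my-activemq:61617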
sudo systemctl restart opennms
For convenience, Meridian Core has an embedded ActiveMQ instance that you can enable. It uses the credentials that you configured in the web UI for users with the ROLE_MINION role.
sudo vi etc/opennms-activemq.xml
Enable the following transportConnector so that the broker listens on all interfaces:
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?useJmx=false&amp;maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
sudo systemctl restart opennms
ss -lnpt sport = :61616
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:61616 *:* users:(("java",pid=1,fd=706))
If you run a host firewall, allow port 61616/tcp, as in the following example with firewalld:
sudo firewall-cmd --permanent --add-port=61616/tcp
sudo firewall-cmd --reload
Alternatively, you can use gRPC for the communication between your Meridian Core instance and Minions:
sudo vi etc/opennms.properties.d/grpc.properties
org.opennms.core.ipc.strategy=osgi
Add the gRPC server feature to the Karaf boot features:
sudo vi etc/featuresBoot.d/grpc.boot
opennms-core-ipc-grpc-server
sudo systemctl restart opennms
The gRPC server listens on port 8990/tcp by default. Use ss -lnpt sport = :8990 to verify that the port is listening.
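You can also confirm from the Karaf shell that the gRPC server feature is installed; this assumes the Karaf shell is reachable on the default port 8101 as admin:
ssh -p 8101 admin@localhost
feature:list | grep opennms-core-ipc-grpc-server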
Optional: To enable TLS for gRPC, you must provide certificate files and enable TLS in the gRPC server configuration, as shown in the following commands.
ssh -p 8101 admin@localhost
config:edit org.opennms.core.ipc.grpc.server
config:property-set tls.enabled true(1)
config:property-set server.cert.filepath /custom-path/server.crt(2)
config:property-set server.private.key.filepath /custom-path/server.pem(3)
config:property-set trust.cert.filepath /custom-path/ca.crt(4)
config:update(5)
1 | Enable TLS for the gRPC server. |
2 | Set the path to your server certificate file. |
3 | Set the path to the server certificate private key file. |
4 | Set the path to your CA certificate file. |
5 | Save and update the configuration. |
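If you only want to test TLS, you could generate a self-signed server certificate, for example with openssl; the path /custom-path and the common name are placeholders taken from the commands above. In production, use certificates issued by your certificate authority and point trust.cert.filepath at that CA's certificate:
sudo mkdir -p /custom-path
sudo openssl req -x509 -newkey rsa:2048 -nodes -keyout /custom-path/server.pem -out /custom-path/server.crt -days 365 -subj "/CN=my-meridian-core"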
Optional: You can set a maximum message size for gRPC. The maximum size must be the same on the Minion. The default maximum message size is 10 MiB (10485760 bytes).
config:edit org.opennms.core.ipc.grpc.server
config:property-set max.message.size 10485760
config:update
sudo systemctl restart opennms
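For reference, a matching change on each Minion could look like the following sketch. It assumes the Minion's Karaf shell listens on port 8201 and that the gRPC client uses the org.opennms.core.ipc.grpc.client configuration; check the Minion documentation for the authoritative steps:
ssh -p 8201 admin@localhost
config:edit org.opennms.core.ipc.grpc.client
config:property-set max.message.size 10485760
config:update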