Set up Message Broker

Distributing Meridian components such as Minions and Sentinel requires a messaging infrastructure. This section describes how to set up your Meridian Core instance to use an existing messaging infrastructure based on ActiveMQ or Apache Kafka. Meridian Core also ships with an embedded ActiveMQ system, which by default is not reachable from external networks. To get you started quickly, we also describe how to enable and configure the embedded ActiveMQ instance.

We recommend Apache Kafka for production use, as it can be scaled to process high workloads. The embedded ActiveMQ instance is a convenience to get you started quickly in a test environment. It is not suitable for production workloads.

Objectives

  • Configure a Meridian Core instance to use a message broker for communication

  • Create a Minion user that is used to authenticate the communication channels

Requirements

Configuring a Minion requires the following information:

  • Web UI URL for your OpenNMS Meridian Core instance

  • SSH access to the Karaf Shell

  • ActiveMQ or Apache Kafka server IP addresses or FQDNs to configure the broker URL endpoints

  • Credentials for the message broker, if you use an existing messaging environment

Create a Minion user

Communication between the Meridian Core instance and the Minion component is secured with credentials. The examples here use the user my-minion-user with the password my-minion-password.

  1. Log in to the web UI as an administrative user

  2. Click the gears icon and choose Configure Users, Groups and On-Call Roles → Configure Users

  3. Add a new user with the login name my-minion-user and the password my-minion-password, and click OK

  4. In the Security Roles area, assign the ROLE_MINION security role

    1. Optional: fill in a comment for the Minion user’s location and purpose

  5. Click Finish

The newly created Minion user should now appear in the user list.

Replace at least my-minion-password with a secure password.

Configure Message Broker

Configuration changes must be made in the Meridian etc directory, referenced here relative to the OpenNMS Meridian Core home directory. Depending on your operating system, the home directory is /usr/share/opennms on Debian/Ubuntu or /opt/opennms on CentOS/RHEL.
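If you run the commands in this section from elsewhere, it can help to capture the home directory in a variable first. A small sketch using the ${OPENNMS_HOME} convention referenced later in this section; pick the path that matches your platform:

```shell
# Pick the home directory that matches your platform; the etc directory
# referenced throughout this section lives directly below it.
OPENNMS_HOME=/usr/share/opennms    # Debian/Ubuntu
# OPENNMS_HOME=/opt/opennms        # CentOS/RHEL
echo "$OPENNMS_HOME/etc"
```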
  • Kafka

  • ActiveMQ

  • embedded ActiveMQ

  • gRPC

  • AWS SQS

Create a configuration file for Kafka settings
sudo vi etc/opennms.properties.d/kafka.properties
Configure Kafka
org.opennms.core.ipc.rpc.strategy=kafka(1)
org.opennms.core.ipc.rpc.kafka.bootstrap.servers=my-kafka-ip-1:9092,my-kafka-ip-2:9092(2)
org.opennms.core.ipc.sink.strategy=kafka(3)
org.opennms.core.ipc.sink.initialSleepTime=60000(4)
org.opennms.core.ipc.sink.kafka.bootstrap.servers=my-kafka-ip-1:9092,my-kafka-ip-2:9092(5)
1 Use Kafka for remote procedure calls (RPC)
2 Connect to the following Kafka nodes and adjust the IPs or FQDNs with the Kafka port (9092) accordingly.
3 Use Kafka as message sink
4 Ensure that messages are not consumed from Kafka until the system has fully initialized. The default used here is 60 seconds (60000 ms)
5 Connect to the following Kafka nodes and adjust the IPs or FQDNs with the Kafka port (9092) accordingly.
If you set more than one Kafka node in bootstrap.servers, the client attempts to connect to the first entry. If that connection succeeds, the entire broker topology is discovered and known to the client. The remaining entries are used only if the connection to an earlier entry fails.
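Before restarting, you may want to check that each configured broker is reachable. The following sketch splits a bootstrap.servers value into host/port pairs (using the placeholder hostnames from the example above); each pair could then be probed with a tool such as nc -vz HOST PORT:

```shell
# Placeholder value from kafka.properties above
servers="my-kafka-ip-1:9092,my-kafka-ip-2:9092"

# Split the comma-separated list into individual host:port entries and
# print each pair; feed these to a reachability check such as `nc -vz`
for server in $(printf '%s' "$servers" | tr ',' ' '); do
  echo "host=${server%:*} port=${server#*:}"
done
```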
Apply the changes by restarting the Meridian Core instance
sudo systemctl restart opennms

Configure Meridian components to use your existing ActiveMQ instance.

Create a properties file for the ActiveMQ settings
sudo vi etc/opennms.properties.d/activemq.properties
Disable embedded ActiveMQ and set the credentials for your ActiveMQ endpoints
org.opennms.activemq.broker.disable=true(1)
org.opennms.activemq.broker.url=failover:tcp://my-activemq:61616(2)
org.opennms.activemq.broker.username=my-activemq-user(3)
org.opennms.activemq.broker.password=my-activemq-password(4)
org.opennms.activemq.client.max-connections=8(5)
org.opennms.activemq.client.concurrent-consumers=10(6)
1 Disable embedded ActiveMQ in Meridian Core instance
2 Set the URL endpoint to your dedicated ActiveMQ instance. Replace my-activemq:61616 accordingly. If you use ActiveMQ with SSL, replace tcp with ssl.
3 Set a user name for ActiveMQ authentication
4 Set the password for ActiveMQ authentication
5 As a sane default, we allow a maximum of 8 connections. Depending on your deployment size, this may need to be increased.
6 As a sane default, we allow a maximum of 10 concurrent consumers. Depending on your deployment size, this may need to be increased.
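The failover transport in the broker URL above also supports multiple endpoints. If you run more than one ActiveMQ broker, you can list them all; a sketch assuming two hypothetical brokers my-activemq-1 and my-activemq-2:

```properties
org.opennms.activemq.broker.url=failover:(tcp://my-activemq-1:61616,tcp://my-activemq-2:61616)
```

The client connects to the first reachable endpoint and fails over to another entry if the connection is lost.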
Restart the Meridian Core instance to apply the changes
sudo systemctl restart opennms

Meridian Core includes an embedded ActiveMQ instance for convenience. It can be enabled and uses the same credentials configured in the web user interface for users with the ROLE_MINION role.

Edit ActiveMQ configuration file
sudo vi etc/opennms-activemq.xml
Remove the comments around the transport connector listening on 0.0.0.0 and save
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?useJmx=false&amp;maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
Restart the Meridian Core instance
sudo systemctl restart opennms
Verify that the ActiveMQ port is available on the public network interface
ss -lnpt sport = :61616
The output should show 61616/tcp listening on all interfaces
State   Recv-Q  Send-Q  Local Address:Port  Peer  Address:Port
LISTEN  0       128     *:61616             *:*   users:(("java",pid=1,fd=706))
If you run a host firewall, allow port 61616/tcp. Here is an example with firewalld
sudo firewall-cmd --permanent --add-port=61616/tcp
sudo systemctl reload firewalld
Create a configuration file for gRPC settings
sudo vi etc/opennms.properties.d/grpc.properties
Set OSGi as IPC strategy
org.opennms.core.ipc.strategy=osgi
Create a file to install gRPC features on startup
sudo vi etc/featuresBoot.d/grpc.boot
Add the gRPC server features
opennms-core-ipc-grpc-server
Apply the changes with restarting Meridian Core instance
sudo systemctl restart opennms
The gRPC server listens on port 8990/tcp by default. You can verify that the port is listening with ss -lnpt sport = :8990
This step is optional: if you want to enable TLS for gRPC, you must provide certificate files and enable it. The commands for TLS are described below.
Connect to the Karaf Shell
ssh -p 8101 admin@localhost
Configure TLS and certificate parameters
config:edit org.opennms.core.ipc.grpc.server
config:property-set tls.enabled true(1)
config:property-set server.cert.filepath /custom-path/server.crt(2)
config:property-set server.private.key.filepath /custom-path/server.pem(3)
config:property-set trust.cert.filepath /custom-path/ca.crt(4)
config:update(5)
1 Enable TLS for the gRPC server
2 Set the path to your server certificate file
3 Set the path to the server certificate's private key file
4 Set the path to your CA (trust) certificate file
5 Save and update the configuration
This is optional: you can set a maximum message size for gRPC. The maximum size must be the same on the Minion as well. If you don't set a maximum message size, the default is 10 MiB.
Configure maximum message size for gRPC in the Karaf Shell
config:edit org.opennms.core.ipc.grpc.client
config:property-set max.message.size 10485760
config:update
Apply the changes by restarting the Meridian Core instance
sudo systemctl restart opennms
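The value 10485760 used above is simply 10 MiB expressed in bytes; a quick sketch of the arithmetic:

```shell
# max.message.size is given in bytes: 10 MiB = 10 * 1024 * 1024
max_message_size=$((10 * 1024 * 1024))
echo "$max_message_size"   # prints 10485760
```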
Create a configuration file for AWS SQS settings
sudo vi etc/opennms.properties.d/aws-sqs.properties
Configure AWS SQS
org.opennms.core.ipc.rpc.strategy=sqs(1)
org.opennms.core.ipc.sink.strategy=sqs(2)
org.opennms.core.ipc.sink.initialSleepTime=60000(3)
org.opennms.core.ipc.aws.sqs.sink.FifoQueue=false(4)

org.opennms.core.ipc.aws.sqs.aws_region=us-east-1(5)
org.opennms.core.ipc.aws.sqs.aws_access_key_id=my-access-key(6)
org.opennms.core.ipc.aws.sqs.aws_secret_access_key=my-secret-access-key(7)
1 Use AWS SQS for remote procedure calls (RPC)
2 Use AWS SQS as message sink
3 Ensure that messages are not consumed from SQS until the system has fully initialized. The default used here is 60 seconds (60000 ms)
4 If consistent ordering of incoming messages is required, FIFO queues can be used. The default is false, and the setting must match the Minion configuration
5 Set AWS SQS region
6 The AWS SQS access key
7 The AWS SQS secret for the access key
The default credential provider chain looks for credentials in the following order:
  1. Environment Variables (i.e. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)

  2. Java system properties (i.e. aws.accessKeyId and aws.secretKey; these keys can be added to ${OPENNMS_HOME}/etc/opennms.conf)

  3. Default credential profiles file (i.e. ~/.aws/credentials)

  4. Amazon ECS container credentials (i.e. AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)

  5. Instance profile credentials (i.e. through the metadata service when running on EC2)
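For option 1 above, the credentials can be provided to the runtime environment before starting Meridian. A sketch using the placeholder values from earlier in this section:

```shell
# Placeholder credentials -- replace with your own AWS values
export AWS_ACCESS_KEY_ID=my-access-key
export AWS_SECRET_ACCESS_KEY=my-secret-access-key
echo "$AWS_ACCESS_KEY_ID"   # prints my-access-key
```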

Apply the changes by restarting the Meridian Core instance
sudo systemctl restart opennms
When running OpenNMS Meridian inside AWS, you can use the default provider chain with an IAM role to avoid hard-coding the AWS credentials in a configuration file. The following shows an example of the role that should be associated with the EC2 instance on which OpenNMS runs.
AWS IAM role
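The role's policy document itself is not reproduced here. As an illustration only, a minimal sketch of a policy granting access to SQS; the broad sqs:* action and the wildcard queue ARN are assumptions and should be restricted to the specific queues Meridian uses:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:*",
      "Resource": "arn:aws:sqs:us-east-1:*:*"
    }
  ]
}
```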
You can find available configuration parameters in the Amazon Simple Queue Service reference section.