Set up Message Broker
Distributing Meridian components like Minions and Sentinel requires a messaging infrastructure. This section describes how to set up your Meridian Core instance to use an existing messaging infrastructure based on ActiveMQ or Apache Kafka.
Meridian Core comes with an embedded ActiveMQ system, which by default an external network cannot reach. To get started quickly, this guide describes how to enable and configure the embedded ActiveMQ instance.
We recommend using Apache Kafka in production, as it scales to process high workloads. The embedded ActiveMQ instance is for convenience to get started quickly in a test environment. It is not suitable for production workloads.
Objectives
- Configure the Meridian Core instance to use a message broker for communication
- Create a Minion user to authenticate the communication channels
Create a Minion user
Credentials secure the communication between the Meridian Core instance and the Minion component. The example uses the name my-minion-user with password my-minion-password. Make sure to use a different, more secure password in your environment.
- Log in to the web UI as an administrative user.
- Click the gear icon and click Configure Users, Groups and On-Call Roles → Configure Users.
- Click Add new user.
- Type a login name (my-minion-user) and password (my-minion-password), then click OK.
  The new Minion user appears in the user list.
- Click the edit icon beside the new user.
- In the Security Roles area, assign the ROLE_MINION security role.
- Optional: fill in a comment describing the Minion user's location and purpose.
- Click Finish.
Configure message broker
Configuration occurs in the Meridian etc directory. We reference etc relative to the OpenNMS Meridian Core home directory. Depending on your operating system, the home directory is /usr/share/opennms for Debian/Ubuntu or /opt/opennms for CentOS/RHEL.
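As a minimal sketch, you can select the home directory for your distribution in a shell variable before editing files under etc (the paths are the defaults named above; adjust them if you installed elsewhere):

```shell
# Pick the Meridian Core home directory for this distribution (sketch only;
# adjust if your installation uses a non-default path).
if [ -d /usr/share/opennms ]; then
  OPENNMS_HOME=/usr/share/opennms   # Debian/Ubuntu default
else
  OPENNMS_HOME=/opt/opennms         # CentOS/RHEL default
fi
echo "Using OPENNMS_HOME=$OPENNMS_HOME"
```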
sudo vi etc/opennms.properties.d/kafka.properties
org.opennms.core.ipc.rpc.strategy=kafka (1)
org.opennms.core.ipc.rpc.kafka.bootstrap.servers=my-kafka-ip-1:9092,my-kafka-ip-2:9092 (2)
org.opennms.core.ipc.sink.strategy=kafka (3)
org.opennms.core.ipc.sink.initialSleepTime=60000 (4)
org.opennms.core.ipc.sink.kafka.bootstrap.servers=my-kafka-ip-1:9092,my-kafka-ip-2:9092 (5)
1 | Use Kafka for remote procedure calls (RPC). |
2 | Connect to the following Kafka nodes and adjust the IPs or FQDNs with the Kafka port (9092) accordingly. |
3 | Use Kafka as message sink. |
4 | Ensure that messages are not consumed from Kafka until the system has fully initialized. Default is 60 seconds. |
5 | Connect to the following Kafka nodes and adjust the IPs or FQDNs with the Kafka port (9092) accordingly. |
If you set more than one Kafka node in bootstrap.servers, the driver attempts to connect to the first entry. If that succeeds, the client discovers and knows the whole broker topology. The other entries are used only if the connection to the first entry fails.
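The same bootstrap servers appear in both the RPC and Sink settings, and a mismatch is an easy copy-and-paste mistake. As a sketch (not an official tool), you can compare the two properties before restarting:

```shell
# Sanity check (sketch only): the RPC and Sink settings should point at the
# same Kafka bootstrap servers. The default home path is an assumption.
PROPS="${OPENNMS_HOME:-/opt/opennms}/etc/opennms.properties.d/kafka.properties"
if [ -f "$PROPS" ]; then
  rpc=$(grep '^org.opennms.core.ipc.rpc.kafka.bootstrap.servers=' "$PROPS" | cut -d= -f2-)
  sink=$(grep '^org.opennms.core.ipc.sink.kafka.bootstrap.servers=' "$PROPS" | cut -d= -f2-)
  if [ "$rpc" = "$sink" ]; then
    echo "bootstrap servers match: $rpc"
  else
    echo "WARNING: RPC and Sink bootstrap servers differ" >&2
  fi
else
  echo "no kafka.properties found at $PROPS"
fi
```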
sudo systemctl restart opennms
Configure Meridian components to use your existing ActiveMQ instance.
sudo vi etc/opennms.properties.d/activemq.properties
org.opennms.activemq.broker.disable=true (1)
org.opennms.activemq.broker.url=failover:tcp://my-activemq:61616 (2)
org.opennms.activemq.broker.username=my-activemq-user (3)
org.opennms.activemq.broker.password=my-activemq-password (4)
org.opennms.activemq.client.max-connections=8 (5)
org.opennms.activemq.client.concurrent-consumers=10 (6)
1 | Disable embedded ActiveMQ in Meridian Core instance. |
2 | Set the URL endpoint to your dedicated ActiveMQ instance. Replace my-activemq:61616 accordingly. If you use ActiveMQ with SSL, replace tcp with ssl. |
3 | Set a user name for ActiveMQ authentication. |
4 | Set the password for ActiveMQ authentication. |
5 | By default, a maximum of 8 connections is allowed. Increase this to suit the size of your environment. |
6 | By default, a maximum of 10 concurrent consumers is allowed. Increase this to suit the size of your environment. |
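If you run more than one ActiveMQ broker, the failover transport can list several endpoints so the client reconnects automatically. This is a hedged sketch with placeholder host names, using ActiveMQ's standard failover URI syntax:

```
org.opennms.activemq.broker.url=failover:(tcp://my-activemq-1:61616,tcp://my-activemq-2:61616)
```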
sudo systemctl restart opennms
Meridian Core has an embedded ActiveMQ instance for convenience that you can enable. It uses the same credentials configured from the web UI for users in the ROLE_MINION role.
sudo vi etc/opennms-activemq.xml
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?useJmx=false&amp;maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
sudo systemctl restart opennms
ss -lnpt 'sport = :61616'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:61616 *:* users:(("java",pid=1,fd=706))
If you run a host firewall, allow port 61616/tcp, as in the following example with firewalld:
sudo firewall-cmd --permanent --add-port=61616/tcp
sudo firewall-cmd --reload
sudo vi etc/opennms.properties.d/grpc.properties
org.opennms.core.ipc.strategy=osgi
sudo vi etc/featuresBoot.d/grpc.boot
opennms-core-ipc-grpc-server
sudo systemctl restart opennms
The gRPC server listens on port 8990/tcp by default. Use ss -lnpt 'sport = :8990' to verify the port is listening.
Optional: to enable TLS for gRPC, provide certificate files and enable TLS with the following commands.
ssh -p 8101 admin@localhost
config:edit org.opennms.core.ipc.grpc.server
config:property-set tls.enabled true (1)
config:property-set server.cert.filepath /custom-path/server.crt (2)
config:property-set server.private.key.filepath /custom-path/server.pem (3)
config:property-set trust.cert.filepath /custom-path/ca.crt (4)
config:update (5)
1 | Enable TLS for the gRPC server. |
2 | Set the path to your server certificate file. |
3 | Set the path to the server certificate's private key file. |
4 | Set the path to your CA (trust) certificate file. |
5 | Save and update the configuration. |
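For a quick test environment, you can generate a self-signed certificate and key with OpenSSL. This is a sketch only (the file names and subject are placeholders); production deployments should use certificates issued by your own CA:

```shell
# Generate a self-signed certificate and private key for testing TLS
# (sketch only; use CA-issued certificates in production).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.pem -out server.crt \
  -subj "/CN=meridian-core" -days 365
```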
Optionally, you can set a maximum message size for gRPC. The maximum size must match the setting on the Minion. The default maximum message size is 10 MiB.
config:edit org.opennms.core.ipc.grpc.client
config:property-set max.message.size 10485760
config:update
sudo systemctl restart opennms
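The value 10485760 used for max.message.size above is 10 MiB expressed in bytes; you can verify the arithmetic in a shell:

```shell
# 10 MiB in bytes, matching the max.message.size value above
echo $((10 * 1024 * 1024))
```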
sudo vi etc/opennms.properties.d/aws-sqs.properties
org.opennms.core.ipc.rpc.strategy=sqs (1)
org.opennms.core.ipc.sink.strategy=sqs (2)
org.opennms.core.ipc.sink.initialSleepTime=60000 (3)
org.opennms.core.ipc.aws.sqs.sink.FifoQueue=false (4)
org.opennms.core.ipc.aws.sqs.aws_region=us-east-1 (5)
org.opennms.core.ipc.aws.sqs.aws_access_key_id=my-access-key (6)
org.opennms.core.ipc.aws.sqs.aws_secret_access_key=my-secret-access-key (7)
1 | Use AWS SQS for remote procedure calls (RPC). |
2 | Use AWS SQS as message sink. |
3 | Ensure that messages are not consumed from AWS SQS until the system has fully initialized. Default is 60 seconds. |
4 | If you require consistent ordering of incoming messages, you can use FIFO queues. Default is false and must match the Minion setting. |
5 | Set AWS SQS region. |
6 | The AWS SQS access key. |
7 | The AWS SQS secret for the access key. |
The default credential provider chain looks for credentials in the following order:

- Environment variables (such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
- Java system properties (such as aws.accessKeyId and aws.secretKey). Add these keys to ${OPENNMS_HOME}/etc/opennms.conf.
- Default credential profiles file (for example, ~/.aws/credentials).
- Amazon ECS container credentials (for example, AWS_CONTAINER_CREDENTIALS_RELATIVE_URI).
- Instance profile credentials (such as through the metadata service when running on EC2).
sudo systemctl restart opennms
When OpenNMS Meridian runs inside AWS, you can use the default provider chain with an IAM role to avoid hard-coding the AWS credentials in a configuration file. The following shows an example of the role to associate with the EC2 instance on which OpenNMS runs.
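As a hedged sketch (the region, account wildcard, and queue-name prefix are placeholder assumptions), the permission policy attached to such a role might look like this. The broad sqs:* action keeps the example short; in production, scope it down to the specific SQS actions and queues your deployment uses:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:*",
      "Resource": "arn:aws:sqs:us-east-1:*:OpenNMS-*"
    }
  ]
}
```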
You can find the available configuration parameters in the Amazon Simple Queue Service documentation.