Setting up Flow Processing
Objectives
- Install the features required to process network flow messages with Sentinel and persist them to Elasticsearch
- Consume flow messages from Minions through a message broker, either ActiveMQ or Apache Kafka
- Allow Sentinel to generate events and send them to the Meridian Core instance via the message broker
Requirements
- PostgreSQL, Elasticsearch, and the REST endpoint of the Meridian Core instance are running and reachable from the Sentinel node
- A message broker (ActiveMQ or Apache Kafka) is running and reachable from the Sentinel node
- Credentials for authentication are configured for the REST endpoint of the Meridian Core instance, the message broker, Elasticsearch, and the PostgreSQL database
Configuration changes are made in the etc directory relative to the Meridian Sentinel home directory.
Depending on your operating system, the home directory is /usr/share/sentinel on Debian/Ubuntu or /opt/sentinel on CentOS/RHEL.
Configure Access to PostgreSQL Database
Connect to the Karaf shell via SSH
ssh -p 8301 admin@localhost
Configure access to the PostgreSQL database
config:edit org.opennms.netmgt.distributed.datasource
config:property-set datasource.url jdbc:postgresql://postgres-ip:postgres-port/opennms-db-name(1)
config:property-set datasource.username my-db-user(2)
config:property-set datasource.password my-db-password(3)
config:property-set datasource.databaseName opennms-db-name(4)
config:update
1 | JDBC connection string; replace postgres-ip, postgres-port, and opennms-db-name accordingly |
2 | PostgreSQL user name with read/write access to the opennms-db-name database |
3 | PostgreSQL password for my-db-user user |
4 | Database name of your Meridian Core instance database |
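After config:update, Karaf persists these settings to etc/org.opennms.netmgt.distributed.datasource.cfg. With the example values above, the file should look roughly like the following sketch:
datasource.url = jdbc:postgresql://postgres-ip:postgres-port/opennms-db-name
datasource.username = my-db-user
datasource.password = my-db-password
datasource.databaseName = opennms-db-name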
Configure Access to Elasticsearch
Connect to the Karaf shell via SSH
ssh -p 8301 admin@localhost
Configure access to persist flows to Elasticsearch
config:edit org.opennms.features.flows.persistence.elastic
config:property-set elasticUrl http://elastic-ip:9200(1)
config:property-set elasticIndexStrategy hourly(2)
config:property-set settings.index.number_of_replicas 0(3)
config:property-set connTimeout 30000(4)
config:property-set readTimeout 60000(5)
config:update
1 | URL of your Elasticsearch cluster |
2 | Select an index strategy |
3 | Set the number of replicas; 0 is only a default, and in production you should have at least 1 |
4 | Timeout in milliseconds Sentinel waits when connecting to the Elasticsearch cluster |
5 | Read timeout in milliseconds when fetching data from the Elasticsearch cluster |
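As with the datasource, this configuration is persisted to etc/org.opennms.features.flows.persistence.elastic.cfg; with the example values above it should contain roughly:
elasticUrl = http://elastic-ip:9200
elasticIndexStrategy = hourly
settings.index.number_of_replicas = 0
connTimeout = 30000
readTimeout = 60000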
Setting up Message Broker
Using Apache Kafka as the message broker, create a file in etc/featuresBoot.d/flows.boot
sudo vi etc/featuresBoot.d/flows.boot
Add the following features to Sentinel on startup
sentinel-jsonstore-postgres
sentinel-blobstore-noop
sentinel-kafka
sentinel-flows
Connect to the Karaf shell via SSH
ssh -p 8301 admin@localhost
Configure Sentinel tracing and REST endpoint
config:edit org.opennms.sentinel.controller
config:property-set location SENTINEL(1)
config:property-set id 00000000-0000-0000-0000-000000ddba11(2)
config:property-set http-url http://core-instance-ip:8980/opennms(3)
config:update
1 | A location string is required and is used only for tracing |
2 | A unique identifier, used as the service name only for tracing |
3 | Base URL of the web UI, which provides the REST endpoints |
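To confirm the settings were applied, you can inspect the configuration from the Karaf shell; config:list with an LDAP filter is a standard Karaf command:
config:list "(service.pid=org.opennms.sentinel.controller)"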
Configure Sentinel as a Kafka consumer for flow messages
config:edit org.opennms.core.ipc.sink.kafka.consumer(1)
config:property-set bootstrap.servers my-kafka-ip-1:9092,my-kafka-ip-2:9092(2)
config:update
1 | Edit the configuration for the flow consumer from Kafka |
2 | Comma-separated list of Kafka nodes to connect to; adjust the IPs or FQDNs and the Kafka port (9092) accordingly |
Configure Sentinel to be able to generate and send events
config:edit org.opennms.core.ipc.sink.kafka(1)
config:property-set bootstrap.servers my-kafka-ip-1:9092,my-kafka-ip-2:9092(2)
config:update
1 | Edit the configuration to send generated events from Sentinel via Kafka |
2 | Comma-separated list of Kafka nodes to connect to; adjust the IPs or FQDNs and the Kafka port (9092) accordingly |
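Both PIDs are persisted as separate files under etc; with the example brokers above they should look roughly like this sketch:
# etc/org.opennms.core.ipc.sink.kafka.consumer.cfg
bootstrap.servers = my-kafka-ip-1:9092,my-kafka-ip-2:9092

# etc/org.opennms.core.ipc.sink.kafka.cfg
bootstrap.servers = my-kafka-ip-1:9092,my-kafka-ip-2:9092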
If you want to use a Kafka cluster with multiple Meridian instances, the topic prefix can be customized by setting group.id, which defaults to OpenNMS.
You can set a different topic prefix for each instance with config:property-set group.id my-group-id in both the consumer and sink configurations, as shown below.
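A sketch of setting a custom prefix for both PIDs; my-group-id is a placeholder you choose:
config:edit org.opennms.core.ipc.sink.kafka.consumer
config:property-set group.id my-group-id
config:update
config:edit org.opennms.core.ipc.sink.kafka
config:property-set group.id my-group-id
config:update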
Configure the credentials and exit Karaf shell
opennms:scv-set opennms.http my-sentinel-user my-sentinel-password(1)
1 | Set the credentials for the REST endpoint created in your Meridian Core instance |
The credentials are encrypted on disk in ${SENTINEL_HOME}/etc/scv.jce.
Exit the Karaf Shell with Ctrl+d
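Before restarting, you can check from the Sentinel node that the Meridian Core instance accepts the configured credentials; a quick test with curl against the REST info endpoint, using the example values from above:
curl -u my-sentinel-user:my-sentinel-password http://core-instance-ip:8980/opennms/rest/info
A small JSON document with version information indicates that the URL and credentials are correct.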
Restart the Sentinel to apply the configuration
sudo systemctl restart sentinel
Verify the configuration by running the health check
opennms:health-check
Ensure features are installed and work properly
Verifying the health of the container
Verifying installed bundles [ Success ]
Connecting to Kafka from Sink Producer [ Success ]
Connecting to Kafka from Sink Consumer [ Success ]
Retrieving NodeDao [ Success ]
Connecting to ElasticSearch ReST API (Flows) [ Success ]
Connecting to OpenNMS ReST API [ Success ]
=> Everything is awesome
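In addition to the health check, you can confirm that the features from flows.boot are installed; from the Karaf shell, using standard Karaf commands filtered to the Sentinel features:
feature:list -i | grep sentinel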
Using ActiveMQ as the message broker, create a file in etc/featuresBoot.d/flows.boot
sudo vi etc/featuresBoot.d/flows.boot
Add the following features to Sentinel on startup
sentinel-jsonstore-postgres
sentinel-blobstore-noop
sentinel-jms
sentinel-flows
Connect to the Karaf shell via SSH
ssh -p 8301 admin@localhost
Configure Sentinel tracing, REST and ActiveMQ endpoints
config:edit org.opennms.sentinel.controller
config:property-set location SENTINEL(1)
config:property-set id 00000000-0000-0000-0000-000000ddba11(2)
config:property-set http-url http://core-instance-ip:8980/opennms(3)
config:property-set broker-url failover:tcp://my-activemq-ip:61616(4)
config:update
1 | A location string is required and is used only for tracing |
2 | A unique identifier, used as the service name only for tracing |
3 | Base URL of the web UI, which provides the REST endpoints |
4 | URL that points to the ActiveMQ broker |
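With these example values, the persisted etc/org.opennms.sentinel.controller.cfg should look roughly like this sketch:
location = SENTINEL
id = 00000000-0000-0000-0000-000000ddba11
http-url = http://core-instance-ip:8980/opennms
broker-url = failover:tcp://my-activemq-ip:61616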
Configure the credentials and exit Karaf shell
opennms:scv-set opennms.http my-sentinel-user my-sentinel-password(1)
opennms:scv-set opennms.broker my-sentinel-user my-sentinel-password(2)
1 | Set the credentials for the REST endpoint created in your Meridian Core instance |
2 | Set the credentials for the ActiveMQ message broker |
The credentials are encrypted on disk in ${SENTINEL_HOME}/etc/scv.jce.
Exit the Karaf Shell with Ctrl+d
Restart the Sentinel to apply the configuration
sudo systemctl restart sentinel
Verify the configuration by running the health check
opennms:health-check
Ensure features are installed and work properly
Verifying the health of the container
Verifying installed bundles [ Success ]
Retrieving NodeDao [ Success ]
Connecting to JMS Broker [ Success ]
Connecting to ElasticSearch ReST API (Flows) [ Success ]
Connecting to OpenNMS ReST API [ Success ]
=> Everything is awesome
Enable Flow Processing Protocols
Connect to the Karaf shell via SSH
ssh -p 8301 admin@localhost
config:edit --alias netflow5 --factory org.opennms.features.telemetry.adapters
config:property-set name Netflow-5(1)
config:property-set adapters.0.name Netflow-5-Adapter(2)
config:property-set adapters.0.class-name org.opennms.netmgt.telemetry.protocols.netflow.adapter.netflow5.Netflow5Adapter(3)
config:update
1 | Queue name from which Sentinel fetches messages; by default, Meridian components use the queue name Netflow-5 |
2 | Set a name for the Netflow v5 adapter |
3 | Assign an adapter to enrich Netflow v5 messages |
If you want to process multiple protocols rather than just one, increase the index 0 in the adapter name and class name accordingly for each additional protocol.
The configuration is persisted with the suffix specified as the alias in etc/org.opennms.features.telemetry.adapters-netflow5.cfg.
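With the values above, that file should contain roughly the following sketch; an additional adapter for the same queue would use the adapters.1.* prefix:
name = Netflow-5
adapters.0.name = Netflow-5-Adapter
adapters.0.class-name = org.opennms.netmgt.telemetry.protocols.netflow.adapter.netflow5.Netflow5Adapter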
Verify the adapter configuration by running the health check
opennms:health-check
Ensure the configured flow adapters work properly
Verifying the health of the container
...
Verifying Adapter Netflow-5-Adapter (org.opennms.netmgt.telemetry.protocols.netflow.adapter.netflow5.Netflow5Adapter) [ Success ]
config:edit --alias netflow9 --factory org.opennms.features.telemetry.adapters
config:property-set name Netflow-9(1)
config:property-set adapters.0.name Netflow-9-Adapter(2)
config:property-set adapters.0.class-name org.opennms.netmgt.telemetry.protocols.netflow.adapter.netflow9.Netflow9Adapter(3)
config:update
1 | Queue name from which Sentinel fetches messages; by default, Meridian components use the queue name Netflow-9 |
2 | Set a name for the Netflow v9 adapter |
3 | Assign an adapter to enrich Netflow v9 messages |
If you want to process multiple protocols rather than just one, increase the index 0 in the adapter name and class name accordingly for each additional protocol.
The configuration is persisted with the suffix specified as the alias in etc/org.opennms.features.telemetry.adapters-netflow9.cfg.
Verify the adapter configuration by running the health check
opennms:health-check
Ensure the configured flow adapters work properly
Verifying the health of the container
...
Verifying Adapter Netflow-9-Adapter (org.opennms.netmgt.telemetry.protocols.netflow.adapter.netflow9.Netflow9Adapter) [ Success ]
config:edit --alias sflow --factory org.opennms.features.telemetry.adapters
config:property-set name SFlow(1)
config:property-set adapters.0.name SFlow-Adapter(2)
config:property-set adapters.0.class-name org.opennms.netmgt.telemetry.protocols.sflow.adapter.SFlowAdapter(3)
config:update
1 | Queue name from which Sentinel fetches messages; by default, Meridian components use the queue name SFlow |
2 | Set a name for the sFlow adapter |
3 | Assign an adapter to enrich sFlow messages |
If you want to process multiple protocols rather than just one, increase the index 0 in the adapter name and class name accordingly for each additional protocol.
The configuration is persisted with the suffix specified as the alias in etc/org.opennms.features.telemetry.adapters-sflow.cfg.
Verify the adapter configuration by running the health check
opennms:health-check
Ensure the configured flow adapters work properly
Verifying the health of the container
...
Verifying Adapter SFlow-Adapter (org.opennms.netmgt.telemetry.protocols.sflow.adapter.SFlowAdapter) [ Success ]
config:edit --alias ipfix --factory org.opennms.features.telemetry.adapters
config:property-set name IPFIX(1)
config:property-set adapters.0.name IPFIX-Adapter(2)
config:property-set adapters.0.class-name org.opennms.netmgt.telemetry.protocols.netflow.adapter.ipfix.IpfixAdapter(3)
config:update
1 | Queue name from which Sentinel fetches messages; by default, Meridian components use the queue name IPFIX |
2 | Set a name for the IPFIX adapter |
3 | Assign an adapter to enrich IPFIX messages |
If you want to process multiple protocols rather than just one, increase the index 0 in the adapter name and class name accordingly for each additional protocol.
The configuration is persisted with the suffix specified as the alias in etc/org.opennms.features.telemetry.adapters-ipfix.cfg.
Verify the adapter configuration by running the health check
opennms:health-check
Ensure the configured flow adapters work properly
Verifying the health of the container
...
Verifying Adapter IPFIX-Adapter (org.opennms.netmgt.telemetry.protocols.netflow.adapter.ipfix.IpfixAdapter) [ Success ]