Sentinel Features
The following table lists features which may be installed manually:
| Feature | Required | Description |
|---|---|---|
| sentinel-core | true | Base feature, installing all required bundles. |
| sentinel-jms | false | Provides connectivity to the Meridian ActiveMQ broker. |
| sentinel-kafka | false | Provides connectivity to Kafka. |
| sentinel-flows | false | Starts all dependencies required to process flows. |
| sentinel-newts | false | Provides functionality to persist measurement data to Newts. |
| sentinel-telemetry-nxos | false | Lets you use the NxosGpbAdapter. |
| sentinel-telemetry-jti | false | Lets you use the JtiGpbAdapter. |
| sentinel-telemetry-bmp | false | Lets you use the BMP telemetry adapter. |
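For example, a feature from the table can be installed manually from the Sentinel's Karaf Shell using the standard Karaf feature:install command (a minimal sketch; the feature names are taken from the table above):
$ ssh -p 8301 admin@localhost
admin@sentinel> feature:install sentinel-jms
admin@sentinel> feature:install sentinel-flows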
Auto install
In some cases it is desired to automatically configure the Sentinel instance and also start the required features/bundles.
Since Sentinel is based on Apache Karaf, which supports auto deployment by simply copying artifacts to the deploy folder, Sentinel can use that mechanism for auto or hot deployment.
In most cases it is sufficient to copy a features.xml file to ${SENTINEL_HOME}/deploy.
This can be done even while the container is running.
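For example, a prepared features.xml could be deployed with a simple copy (a sketch; assumes the file exists in the current working directory):
cp features.xml ${SENTINEL_HOME}/deploy/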
The chapter Configure Flow Processing contains an example of how to automatically start the required features with Sentinel.
Auto Start
In some cases it might not be sufficient to auto-deploy/configure the container with a features.xml file.
If more flexibility is required, it is suggested to modify or copy *.cfg and *.properties files directly in the ${SENTINEL_HOME}/etc directory.
To automatically start features with the container, create a file in ${SENTINEL_HOME}/etc/featuresBoot.d/ that contains the desired features:
echo "sentinel-jms" | sudo tee -a \${SENTINEL_HOME}/etc/featuresBoot.d/features.boot (1)
echo "sentinel-flows" | sudo tee -a ${SENTINEL_HOME}/etc/featuresBoot.d/sentinel-features.boot (2)
(1) Install and start the JMS communication feature.
(2) Install and start the Sentinel flows feature.
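After a restart of the container, the installed features can be verified from the Sentinel's Karaf Shell with the standard Karaf feature:list command (an optional check, not required for operation):
$ ssh -p 8301 admin@localhost
admin@sentinel> feature:list -i | grep sentinel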
Auto configure flow processing for Sentinel
The following examples illustrate a features.xml file which configures the Sentinel instance and automatically starts all required features to consume messages via either JMS (ActiveMQ) or Kafka.
Simply copy it to ${SENTINEL_HOME}/deploy/.
<?xml version="1.0" encoding="UTF-8"?>
<features
name="opennms-${project.version}"
xmlns="http://karaf.apache.org/xmlns/features/v1.4.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://karaf.apache.org/xmlns/features/v1.4.0 http://karaf.apache.org/xmlns/features/v1.4.0"
>
<!-- Bootstrap feature to start all flow related features automatically -->
<feature name="autostart-sentinel-flows" version="${project.version}" start-level="100" install="auto">
<!-- Configure the controller itself -->
<config name="org.opennms.sentinel.controller">
location = SENTINEL
id = 00000000-0000-0000-0000-000000ddba11
broker-url = failover:tcp://127.0.0.1:61616
</config>
<!-- Configure datasource connection -->
<config name="org.opennms.netmgt.distributed.datasource">
datasource.url = jdbc:postgresql://localhost:5432/opennms
datasource.username = postgres
datasource.password = postgres
datasource.databaseName = opennms
</config>
<!--
Starts the Netflow5Adapter to process Netflow5 Messages.
Be aware that this requires a listener named "Netflow-5" on the Minion side for messages to be processed properly.
-->
<config name="org.opennms.features.telemetry.adapters-netflow5">
name = Netflow-5
class-name = org.opennms.netmgt.telemetry.adapters.netflow.v5.Netflow5Adapter
</config>
<!-- Point sentinel to the correct elastic endpoint -->
<config name="org.opennms.features.flows.persistence.elastic">
elasticUrl = http://elasticsearch:9200
</config>
<!-- Install JMS related features -->
<feature>sentinel-jms</feature>
<!-- Install Flow related features -->
<feature>sentinel-flows</feature>
</feature>
</features>
<?xml version="1.0" encoding="UTF-8"?>
<features
name="opennms-${project.version}"
xmlns="http://karaf.apache.org/xmlns/features/v1.4.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://karaf.apache.org/xmlns/features/v1.4.0 http://karaf.apache.org/xmlns/features/v1.4.0"
>
<!-- Bootstrap feature to start all flow related features automatically -->
<feature name="autostart-sentinel-telemetry-flows" version="${project.version}" start-level="200" install="auto">
<!-- Configure the controller itself -->
<config name="org.opennms.sentinel.controller">
location = SENTINEL
id = 00000000-0000-0000-0000-000000ddba11
broker-url = failover:tcp://127.0.0.1:61616
</config>
<!-- Configure datasource connection -->
<config name="org.opennms.netmgt.distributed.datasource">
datasource.url = jdbc:postgresql://localhost:5432/opennms
datasource.username = postgres
datasource.password = postgres
datasource.databaseName = opennms
</config>
<!--
Starts the Netflow5Adapter to process Netflow5 Messages.
Be aware that this requires a listener named "Netflow-5" on the Minion side for messages to be processed properly.
-->
<config name="org.opennms.features.telemetry.adapters-netflow5">
name = Netflow-5
class-name = org.opennms.netmgt.telemetry.adapters.netflow.v5.Netflow5Adapter
</config>
<!-- Point sentinel to the correct elastic endpoint -->
<config name="org.opennms.features.flows.persistence.elastic">
elasticUrl = http://elasticsearch:9200
</config>
<!--
Configure as Kafka Consumer.
All properties described at https://kafka.apache.org/0100/documentation.html#newconsumerconfigs are supported.
-->
<config name="org.opennms.core.ipc.sink.kafka.consumer">
group.id = OpenNMS
bootstrap.servers = localhost:9092
</config>
<!--
Configure as Kafka Producer for sending Events from Sentinel.
All properties described at https://kafka.apache.org/0100/documentation.html#producerconfigs are supported.
-->
<config name="org.opennms.core.ipc.sink.kafka">
bootstrap.servers = localhost:9092
</config>
<!-- Install Kafka related features -->
<feature>sentinel-kafka</feature>
<!-- Install flow related features -->
<feature>sentinel-flows</feature>
</feature>
</features>
Kafka Configuration
Each Minion works as a producer and must be configured beforehand. Please refer to the section Minion Kafka Producer Configuration on how to configure Minion as a Kafka producer.
Each Sentinel works as a consumer and can be configured in the file ${SENTINEL_HOME}/etc/org.opennms.core.ipc.sink.kafka.consumer.cfg, either manually or via the config:edit org.opennms.core.ipc.sink.kafka.consumer statement.
For supported properties, see the Kafka consumer configuration documentation linked in the example above.
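For example, a minimal consumer configuration could be applied from the Sentinel's Karaf Shell (a sketch; the broker address 127.0.0.1:9092 is an assumption and must be adjusted to your environment):
$ ssh -p 8301 admin@localhost
admin@sentinel> config:edit org.opennms.core.ipc.sink.kafka.consumer
admin@sentinel> config:property-set bootstrap.servers 127.0.0.1:9092
admin@sentinel> config:property-set group.id OpenNMS
admin@sentinel> config:update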
By default, each Kafka consumer starts consuming messages immediately after the feature has been started.
It is possible to set the property org.opennms.core.ipc.sink.initialSleepTime to define an initial sleep time in milliseconds before any messages are consumed.
To set this up, add an entry to the end of the file ${SENTINEL_HOME}/etc/system.properties:
# Initial delay in milliseconds before message consumption starts (here: 5 seconds)
org.opennms.core.ipc.sink.initialSleepTime=5000
Persisting Collection Sets to Newts
The previous chapter describes how to set up Meridian, Minion and Sentinel in order to distribute the processing of flows.
However, it only covers flow processing adapters; there are more, e.g. the NxosGpbAdapter, which can also run on a Sentinel.
The configuration of Newts for Sentinel uses the same properties as for Meridian.
The only difference is that the properties for Sentinel are stored in /opt/sentinel/etc/org.opennms.newts.config.cfg instead of *.properties files.
The name of each property is the same as for Meridian, but without the org.opennms.newts.config prefix.
The following example shows a custom Newts configuration using the Sentinel's Karaf Shell.
$ ssh -p 8301 admin@localhost
admin@sentinel> config:edit org.opennms.newts.config
admin@sentinel> config:property-set hostname localhost
admin@sentinel> config:property-set port 9042
admin@sentinel> config:property-set cache.strategy org.opennms.netmgt.newts.support.GuavaSearchableResourceMetadataCache
admin@sentinel> config:update
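The commands above should result in a file /opt/sentinel/etc/org.opennms.newts.config.cfg roughly similar to the following sketch:
hostname = localhost
port = 9042
cache.strategy = org.opennms.netmgt.newts.support.GuavaSearchableResourceMetadataCache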
Adapters
This chapter describes the various adapters that can run on a Sentinel and whose data may be persisted to a persistence storage. At the moment only Newts is supported as persistence storage. See chapter [ga-sentinel-configure-newts] on how to configure Newts.
Note that for an adapter to work properly, an appropriate listener must also be configured on the Minion, and the listener on the Minion must use the same name as configured on Sentinel.
SFlowTelemetryAdapter
In order to use this adapter, the features sentinel-flows and sentinel-newts must be installed.
In addition, either sentinel-jms or sentinel-kafka should be installed and configured properly.
See the previous Flow Processing chapter for more details.
If only sample data should be persisted, the following commands can be run on the Sentinel's Karaf Shell:
$ ssh -p 8301 admin@localhost
admin@sentinel> config:edit --alias sflow --factory org.opennms.features.telemetry.adapters
admin@sentinel> config:property-set name SFlow-Telemetry
admin@sentinel> config:property-set class-name org.opennms.netmgt.telemetry.adapters.netflow.sflow.SFlowTelemetryAdapter
admin@sentinel> config:property-set parameters.script /opt/sentinel/etc/sflow-host.groovy
admin@sentinel> config:update
If SFlow flows and the sample data should be processed, multiple adapters can be configured:
config:edit --alias sflow-telemetry --factory org.opennms.features.telemetry.adapters
config:property-set name SFlow
config:property-set adapters.1.name SFlow-Adapter
config:property-set adapters.1.class-name org.opennms.netmgt.telemetry.adapters.netflow.sflow.SFlowAdapter
config:property-set adapters.2.name SFlow-Telemetry
config:property-set adapters.2.class-name org.opennms.netmgt.telemetry.adapters.netflow.sflow.SFlowTelemetryAdapter
config:property-set adapters.2.parameters.script /opt/sentinel/etc/sflow-host.groovy
config:update
Please note that in both cases the file /opt/sentinel/etc/sflow-host.groovy must be provided manually, e.g. by copying it over from Meridian.
NxosGpbAdapter
In order to use this adapter, the features sentinel-telemetry-nxos and sentinel-newts must be installed.
In addition, either sentinel-jms or sentinel-kafka should be installed and configured properly.
See the previous Flow Processing chapter for more details.
Besides this, configuration files from Meridian must be copied to /opt/sentinel/etc on Sentinel.
The following files and directories are required (a copy sketch follows the list):
- ${OPENNMS_HOME}/etc/datacollection
- ${OPENNMS_HOME}/etc/datacollection-config.xml
- ${OPENNMS_HOME}/etc/resource-types.d
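A possible way to copy them (a sketch; sentinel is a placeholder host name and SSH access to the Sentinel machine is assumed):
scp -r ${OPENNMS_HOME}/etc/datacollection sentinel:/opt/sentinel/etc/
scp ${OPENNMS_HOME}/etc/datacollection-config.xml sentinel:/opt/sentinel/etc/
scp -r ${OPENNMS_HOME}/etc/resource-types.d sentinel:/opt/sentinel/etc/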
Afterwards the adapter can be set up:
$ ssh -p 8301 admin@localhost
admin@sentinel> config:edit --alias nxos --factory org.opennms.features.telemetry.adapters
admin@sentinel> config:property-set name NXOS
admin@sentinel> config:property-set class-name org.opennms.netmgt.telemetry.protocols.nxos.adapter.NxosGpbAdapter
admin@sentinel> config:property-set parameters.script /opt/sentinel/etc/cisco-nxos-telemetry-interface.groovy
admin@sentinel> config:update
Please note that the file /opt/sentinel/etc/cisco-nxos-telemetry-interface.groovy must also be provided manually, e.g. by copying it over from Meridian.
JtiGpbAdapter
In order to use this adapter, the features sentinel-telemetry-jti and sentinel-newts must be installed.
In addition, either sentinel-jms or sentinel-kafka should be installed and configured properly.
See the previous Flow Processing chapter for more details.
Besides this, configuration files from Meridian must be copied to /opt/sentinel/etc on Sentinel.
The following files and directories are required:
- ${OPENNMS_HOME}/etc/datacollection
- ${OPENNMS_HOME}/etc/datacollection-config.xml
- ${OPENNMS_HOME}/etc/resource-types.d
Afterwards the adapter can be set up:
$ ssh -p 8301 admin@localhost
admin@sentinel> config:edit --alias jti --factory org.opennms.features.telemetry.adapters
admin@sentinel> config:property-set name JTI
admin@sentinel> config:property-set class-name org.opennms.netmgt.telemetry.protocols.jti.adapter.JtiGpbAdapter
admin@sentinel> config:property-set parameters.script /opt/sentinel/etc/junos-telemetry-interface.groovy
admin@sentinel> config:update
Please note that the file /opt/sentinel/etc/junos-telemetry-interface.groovy must also be provided manually, e.g. by copying it over from Meridian.
Configure Sentinel via confd
Mount
When starting the Sentinel container, mount a YAML file to the following path: /opt/sentinel/sentinel-config.yaml.
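A minimal sketch of such a mount with Docker (the image name opennms/sentinel and the tag are assumptions; adjust them to the image you actually use):
docker run -d \
  -v $(pwd)/sentinel-config.yaml:/opt/sentinel/sentinel-config.yaml \
  opennms/sentinel:latest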
Any configuration provided to confd will overwrite configuration specified as environment variables. Direct overlay of specific configuration files will overwrite the corresponding config provided by confd.
Contents
The following describes the keys available in sentinel-config.yaml to configure the Sentinel via confd.
Sentinel controller config
broker-url: "<broker url>"
id: "<id>"
location: "<location>"
Config specified will be written to etc/org.opennms.sentinel.controller.cfg.
User/password
Supplying the broker username/password via YAML file for configuration via confd is not supported.
Sentinel Elasticsearch config
elasticsearch:
url: "http://elastic-ip:9200"
index-strategy: "hourly"
replicas: 0
conn-timeout: 30000
read-timeout: 60000
Config specified will be written to etc/org.opennms.features.flows.persistence.elastic.cfg.
Sentinel datasource config
datasource:
url: "jdbc:postgresql://localhost:5432/opennms"
username: "postgres"
password: "postgres"
database-name: "opennms"
Config specified will be written to etc/org.opennms.netmgt.distributed.datasource.cfg.
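For reference, the resulting etc/org.opennms.netmgt.distributed.datasource.cfg should contain the same properties as shown in the features.xml examples above, e.g.:
datasource.url = jdbc:postgresql://localhost:5432/opennms
datasource.username = postgres
datasource.password = postgres
datasource.databaseName = opennms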
Sentinel Kafka config
ipc:
kafka:
bootstrap.servers: "my-kafka-ip-1:9092,my-kafka-ip-2:9092"
group.id: "OpenNMS"
Config specified will be written to etc/org.opennms.core.ipc.sink.kafka.cfg and etc/org.opennms.core.ipc.sink.kafka.consumer.cfg.
Telemetry flow adapters
You can configure individual flow adapters and any number of uniquely named listeners. See the example below for how to specify parameters and parsers.
telemetry:
flows:
adapters:
NetFlow-5:
class-name: "org.opennms.netmgt.telemetry.protocols.netflow.adapter.netflow5.Netflow5Adapter"
parameters:
some-key: "some-value"
Config specified will be written to deploy/confd-flows-feature.xml.