This section describes ways to troubleshoot flows when the feature does not work or performs suboptimally.
Telemetryd receives and decodes flows on Horizon. Run the following checks to verify that it works as expected:
Are routers sending data?
Minion health check
OpenNMS health check
Are sink consumer graphs populated?
Is SNMP available on the routers that provide Netflow?
Review OpenNMS and Minion logs
The Troubleshoot Telemetryd article on Discourse provides details on how to run these checks.
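Several of these checks run from the Karaf shell. One way to reach it, assuming the default shell port of 8101 on the OpenNMS host (adjust host and port for your deployment), is:

```shell
# Connect to the Karaf shell on the OpenNMS server (default port 8101).
ssh -p 8101 admin@localhost

# Then, inside the shell, run the built-in health check:
# admin@opennms()> opennms:health-check
```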
If you see no data or incorrect data, view the state and parameters of the telemetry listeners, and check whether they are processing data, with the following command:

admin@opennms()> opennms:telemetry-listeners
If you store flows in Elasticsearch, you can use Kibana to check if flow documents (raw and/or aggregated) are written to Elasticsearch. You must know your endpoint address and API key.
Run a curl command against the Elasticsearch _cat/indices endpoint, for example:

curl -H "Authorization: ApiKey <api-key>" "https://<elasticsearch-endpoint>/_cat/indices?v"
The query returns a list of indices. Indices whose names start with a period (.) are system indices; all others are regular indices. Regular indices appear only when Elasticsearch is receiving flows.
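As a quick sanity check of the returned list, you can filter out system indices with grep. The index names below are placeholder examples, not actual OpenNMS index names:

```shell
# Write a made-up list of index names (placeholders for _cat/indices output).
cat > /tmp/indices.txt <<'EOF'
.security-7
netflow-raw-2023-01
.kibana_1
netflow-agg-2023-01
EOF

# Keep only regular indices: drop names that start with a period.
grep -v '^\.' /tmp/indices.txt
```

If no regular indices remain after filtering, Elasticsearch is not receiving flows.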
If you have persisted flows but they do not appear in Helm, check the configuration in etc/org.opennms.features.flows.persistence.elastic.cfg, in particular the index strategy settings.
If you are using aggregated flows, make sure that aggregate.elasticIndexStrategy matches the index strategy you configured in the streaming analytics tool.
To persist only raw flows or only aggregated flows in Elasticsearch, set the corresponding persistence properties in etc/org.opennms.features.flows.persistence.elastic.cfg.
(For more information on troubleshooting Elasticsearch, refer to the Elasticsearch documentation.)
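As a sketch, the relevant configuration file might contain entries like the following; the values shown here are illustrative assumptions, not defaults:

```
# etc/org.opennms.features.flows.persistence.elastic.cfg (illustrative sketch)
elasticUrl=https://elasticsearch.example.org:9200
elasticIndexStrategy=daily
aggregate.elasticIndexStrategy=daily
```

The two index strategy settings must agree with how the indices are actually created on the Elasticsearch side.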
In the UI, use the Nodes page to assess flow performance for specific devices. (Choose Info > Nodes to view nodes.) The flows indicator icon shows flow data for each device, including SNMP details and flow direction.
To help debug flow processing, use the following Karaf shell commands to replay flows from a packet capture:
List the available listeners and parsers:
Replay the packet capture (.pcap) to the target parser from the output above:
opennms:telemetry-replay-pcap <listener> <parser> <path-to-pcap-file>
Here’s an example that replays a .pcap with Netflow 9 flows to the Netflow 9 parser:
admin@opennms()> opennms:telemetry-listeners
Name = Multi-UDP-9999
Description = UDP *:9999
Properties:
  Max Packet Size = 8096
  Port = 9999
Parsers:
  - Multi-UDP-9999.Netflow-5-Parser
  - Multi-UDP-9999.Netflow-9-Parser
  - Multi-UDP-9999.IPFIX-TCP-Parser
  - Multi-UDP-9999.SFlow-Parser

admin@opennms()> opennms:telemetry-replay-pcap Multi-UDP-9999 Multi-UDP-9999.Netflow-9-Parser /tmp/flows.pcap
Processing packets from '/tmp/flows.pcap'.
Processing packet #100.
Processing packet #200.
Processing packet #300.
Processing packet #400.
Processing packet #500.
Done processing 515 packets.
admin@opennms()>
Replayed flows pass through the same ingest and processing pipeline as flows received directly from devices. For the results to be associated with a node, nodes with interfaces that match the IP addresses in the .pcap must already exist.
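To obtain a .pcap for replay, you can capture the incoming flow traffic on the wire with tcpdump. The interface name and port below are assumptions; match them to your environment and listener configuration:

```shell
# Capture flow packets arriving on the listener port (here UDP 9999)
# on interface eth0 and write them to a file for later replay.
# Requires root privileges; stop the capture with Ctrl-C.
tcpdump -i eth0 -w /tmp/flows.pcap 'udp port 9999'
```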