Help with Softflowd and exporting netflow data to PRTG

I'm lost...perhaps you accidentally replied to me.

I'm not sure why you boldfaced this. Let's all look at the script included with OpenWrt:

        procd_open_instance
        procd_set_param command /usr/sbin/softflowd -d $args${pid_file:+ -p $pid_file}
        procd_set_param respawn
        procd_close_instance

As you see, -d is used.

Yes, I replied to the wrong post.

It was simply copy/pasted from the online docs, where it is boldfaced.

The post was a comment to @RuralRoots that a procd service isn't meant to be run in the background anyway, since he's been talking about it not working when run in the background.

I was prompting this reply from you...once I realized. :wink:

From my references and thus my understanding (thanks):

-d  Don't daemonise (run in foreground)
-D  Debug mode: adds verbosity and tracks v6 flows
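
For a quick sanity check, it can be run by hand in the foreground; a minimal sketch, assuming br-lan as the monitored interface and an example collector address:

    # Run in the foreground (-d) so output stays on the terminal;
    # the interface and collector below are placeholders.
    /usr/sbin/softflowd -d -v 9 -i br-lan -n 192.168.1.10:2055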

Strace - Thanks. Reminds me of the process of stepping through my code in the sandbox.
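
In that spirit, a hedged strace sketch (the interface and collector are placeholders again): run the daemon in the foreground and capture its syscalls to a file for later reading:

    # Follow forks (-f) and write the trace to a file
    strace -f -o /tmp/softflowd.trace /usr/sbin/softflowd -d -i br-lan -n 192.168.1.10:2055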

Thanks, it was clearly addressed at me and I did take note.

I'm just going to sit back at this point and ruminate on all this until I can digest it all. Thanks to you all. Time for due diligence on my part.

I believe I found the cause of my issues.

Just for my own information, I wanted to see the impact of adding additional flows, so I added another flow to rc.local and rebooted. Only the first flow shows up in 'top'. Why?

I added echo commands to rc.local to track the process, and logread -e "echo-value" clearly indicates both instances were invoked, but again only the first instance appears in the process tree.

Finally, logread -e "instance2 interface-name" returns that "instance2 interface-name" doesn't exist (yet).
Adding sleep 1m before instance2 in rc.local fixed that, and all is good.
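
For anyone following along, a minimal sketch of that rc.local workaround (interfaces and collector addresses are placeholders; without -d, softflowd daemonises itself so rc.local doesn't block):

    # /etc/rc.local (sketch)
    /usr/sbin/softflowd -i br-lan -n 10.10.1.100:5555 -v 9   # instance1
    sleep 1m    # give the second interface time to come up
    /usr/sbin/softflowd -i tun0 -n 10.10.1.100:5556 -v 9     # instance2
    exit 0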

So I turned my attention to the init.d script, and the first thing that hit me was that it had a start priority of '50': it was running before the target interface existed and thus failing. I changed the start priority to '99' and all is good; I see both instances in 'top' and both collectors are now receiving data.
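
For reference, the standard OpenWrt way to apply that change is to edit the START value in the script and regenerate the rc.d symlink:

    # after changing START=50 to START=99 in /etc/init.d/softflowd
    /etc/init.d/softflowd disable   # removes the old /etc/rc.d/S50softflowd link
    /etc/init.d/softflowd enable    # creates /etc/rc.d/S99softflowd from the new value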

So, my original issue is solved - thank you all for adding to my personal KB.

Now, I’ve come across another issue - same subject.

Occasionally the instance2 interface restarts on 'some' trigger, and instance2 terminates. Is there any way to re-invoke instance2 on an IFUP of the instance2 interface? For this particular application at least, it would make sense to start flow acquisition whenever the target interface comes up.
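
From poking around, a minimal interface-hotplug sketch might do it (the interface name vpn0 is a placeholder; hotplug passes ACTION and INTERFACE in the environment):

    # /etc/hotplug.d/iface/40-softflowd (sketch)
    [ "$ACTION" = ifup ] && [ "$INTERFACE" = vpn0 ] && /etc/init.d/softflowd restart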

I am also coming up with a dearth of information on script interpreters; the softflowd script, for example, uses rc.common as its interpreter. Any hints on where I can find out more about how parameters are passed, how placeholder variables get their values, interpreter syntax, . . .

Effectively, I would like to learn how to "step through" the process flow, if that makes sense.
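
From what I can piece together, the "interpreter" is just the shebang line; when an action is invoked, the chain looks like this:

    # /etc/init.d/softflowd begins with:
    #!/bin/sh /etc/rc.common
    # so running
    /etc/init.d/softflowd start
    # is executed by the kernel as
    /bin/sh /etc/rc.common /etc/init.d/softflowd start
    # rc.common sources the named init script, pulls in /lib/functions.sh
    # (and, with USE_PROCD=1, /lib/functions/procd.sh), then dispatches
    # the "start" argument to the script's start()/start_service().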

Well, it turns out I made a big newbie mistake: I changed the init.d script's start priority but failed to clear out the rc.local script. The init.d script instances both go into a crash loop, but rc.local runs, which gave me false assurance the issue was resolved.

I apparently don’t read so well either. @dl12345 gave me what I needed to answer most of what my last post requested, soooooo off I went down the rabbit hole.

I found service_trigger(), the RESPAWN parameter, procd.sh, rc.common, functions.sh, PROCD_DEBUG=1, and INIT_TRACE=1 /etc/init.d/softflowd $action all involved in the process.

Putting it all together, I was able to determine the softflowd init script was properly populating the command line, and the rc.common wrapper was properly responding to the /etc/init.d/softflowd $action command.

I added PROCD_DEBUG=1 to the script, and as sparse as its output is, it reinforced the conclusion from my step-through of the softflowd init script, config, and rc.common that the fault wasn't arising there.

So, I tried INIT_TRACE=1 /etc/init.d/softflowd start to see if I could glean anything from that. Way beyond my abilities!!!!!

So I configured a third interface in the softflowd config, echoed the procd_set_param command values, and did a straight copy/paste into rc.local. It all works; big sigh.

Then I remembered the inference that it could be the snapshot build I was using.

I went back to my previous build on the other partition, installed softflowd, copied the config file from my latest build, and ran it. All three interfaces loaded without a hitch. Going a little further, I ran diff against the config, init script, rc.common, procd.sh, and functions.sh, and verified the checksums match on the binaries. Identical between the builds.

Arghhhh!!!

[quote="lleachii, post:13, topic:67755"]
I've only had issues like that on snapshots
[/quote]. It isn’t the SOLUTION it’s the CAUSE

All in all, it's been a good exercise. I got good advice, helpful insights, and challenges. I've learned a lot and have a much better perspective on the whole process, but I won't be writing any init scripts anytime soon. rc.local works in the interim, and I'm attempting to use @dl12345's script as a foundation to try to accommodate 2 instances. Thanks, all. It's been enlightening.

On a final note, I was surprised at the router CPU utilization impact even with 4 flows running (I like to push the envelope at times): significant spikes when flows expired, but otherwise little overall. The monitor station, on the other hand, was brought to its knees trying to handle the flow data from 4 flows.

As well, the softflowd init script appends what appears to be a superfluous repetition of the -p pid parameter at the end of the generated command:
softflowd -d -v 5 -i tun0 -n 10.10.1.100:5556 -c /var/run/tun0.ctl -p /var/run/tun0.pid -T full -p /var/run/tun0.pid
with
procd_set_param command /usr/sbin/softflowd -d $args${pid_file:+ -p $pid_file}
Removing ${pid_file:+ -p $pid_file} has zero effect, which doesn't make sense, at least to me.
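
For readers unfamiliar with that idiom, ${pid_file:+ -p $pid_file} is plain POSIX parameter expansion: the trailing term only appears when pid_file is set and non-empty. A tiny sketch:

    pid_file=""
    echo "softflowd -d${pid_file:+ -p $pid_file}"   # -> softflowd -d
    pid_file="/var/run/tun0.pid"
    echo "softflowd -d${pid_file:+ -p $pid_file}"   # -> softflowd -d -p /var/run/tun0.pid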

I'm using OpenWrt 19.07.3, r11063-85e04e9f46, and noticed that it will only start in debug mode.

I figured out where the issues are in the procd implementation and expanded it to include most of the softflowctl functionality. It’s been running tickety-boo feeding PRTG for a couple of months now. I’m trying to figure out how to issue a PR ATM, but if you want to give it a test, I can post my script and config.

Put this into /etc/init.d/softflowd

#! /bin/sh /etc/rc.common
#  Copyright (C) 2007-2011 OpenWrt.org/Copyright (C) 2020 RuralRoots

USE_PROCD=1
START=50

Run_Dir="/var/run/softflowd"
Ctl_Dir="/usr/sbin/softflowctl"

EXTRA_COMMANDS="statistics dump pause resume expire delete timeouts active shutdown update"
EXTRA_HELP="
			SOFTFLOWCTL COMMANDS
	Syntax: /etc/init.d/softflowd <Command> <Control File Name>

	statistics  ->  Show Interface Statistics
	dump        ->  Dump Interface Flows
	pause       ->  Pause Interface Flow Monitoring
	resume      ->  Resume Interface Flow Monitoring
	expire      ->  Expire Interface Flows
	delete      ->  Delete All Interface Flows
	timeouts    ->  Show Interface Timeout Settings
	active      ->  Show All Active Interfaces
	shutdown    ->  Exit Gracefully & Close softflowd
	update      ->  Enable/Disable An Interface & Restart softflowd Monitoring"

# Append a bare flag to $args when the UCI bool option is set
append_bool() {
	local section="$1"
	local option="$2"
	local value="$3"
	local _val
	config_get_bool _val "$section" "$option" '0'
	[ "$_val" -gt 0 ] && append args "$value"
}

# Append "flag value" to $args when the UCI string option is non-empty
append_string() {
	local section="$1"
	local option="$2"
	local value="$3"
	local _val
	config_get _val "$section" "$option"
	[ -n "$_val" ] && append args "$value $_val"
}

# Run a softflowctl command against the named instance's control socket
run_cmd() {
	echo "" && echo "        $msg " && echo ""
	$Ctl_Dir -c $Run_Dir/$iface.ctl $command
}

start_instance() {
	local section="$1"
	config_get_bool enabled "$section" 'enabled' '0'
	[ "$enabled" -gt 0 ] || return 1

	args=""
	append args "-c /var/run/softflowd/$section.ctl"
	append_string "$section" 'interface' '-i'
	append_string "$section" 'pcap_file' '-r'
	append_string "$section" 'timeout1' '-t'
	append_string "$section" 'timeout2' '-t'
	append_string "$section" 'timeout3' '-t'
	append_string "$section" 'timeout4' '-t'
	append_string "$section" 'timeout5' '-t'
	append_string "$section" 'timeout6' '-t'
	append_string "$section" 'timeout7' '-t'
	append_string "$section" 'timeout8' '-t'
	append_string "$section" 'max_flows' '-m'
	append_string "$section" 'host_port' '-n'
	append_string "$section" 'export_version' '-v'
	append_string "$section" 'hoplimit' '-L'
	append_string "$section" 'tracking_level' '-T'
	append_string "$section" 'sampling_rate' '-s'
	append_bool "$section" track_ipv6 '-6'

	procd_open_instance
	procd_set_param command /usr/sbin/softflowd -d $args
	procd_set_param respawn
	procd_close_instance
}

start_service() {
	mkdir -p /var/run/softflowd
	config_load 'softflowd'
	config_foreach start_instance
}

statistics(){
	command="statistics" && iface=$1 && msg="Showing $iface Statistics" && run_cmd
}

dump(){
	command="dump-flows" && iface=$1 && msg="Dumping $iface Flows" && run_cmd
}

pause(){
	command="stop-gather" && iface=$1 && msg="Pausing $iface Flow Monitoring" && run_cmd
}

resume(){
	command="start-gather" && iface=$1 && msg="Resuming $iface Flow Monitoring" && run_cmd
}

expire(){
	command="expire-all" && iface=$1 && msg="Expiring All $iface Flows" && run_cmd
}

delete(){
	command="delete-all" && iface=$1 && msg="Immediately Deleting All $iface Flows" && run_cmd
}
	
timeouts(){
	command="timeouts" && iface=$1 && msg="Showing Current $iface Timeout Values" && run_cmd
}
		
active(){
	echo "" && echo "	Showing All Active Control Sockets: " && echo ""
	ls $Run_Dir 2> /dev/null
}

shutdown(){
	echo "" && echo "	Shutting Down All Instances" && echo ""
	config_load softflowd
	config_foreach cleanup
	/etc/init.d/softflowd stop
	echo "" && echo "	Cleaning Up Run Environment" && echo "" && echo "" echo ""
	rm -r $Run_Dir 2> /dev/null
	echo "" && echo "		  D O N E" && echo "" && echo "" && echo "	. . . . Goodbye" && echo ""
}

cleanup() {
	$Ctl_Dir -c $Run_Dir/$1.ctl shutdown 2> /dev/null
}

update() {
	local socket=$1
	local updown=$2
	shutdown
	uci set softflowd.$socket.enabled=$updown
	uci commit softflowd	# assumed fix: persist the change across reboots
	echo "	Restarting softflowd With Updated Configuration"
	/etc/init.d/softflowd start
}

Put this into /etc/config/softflowd

config	ctlsock	lan					# Control Socket File Name
	option	enabled		'1'			# Interface enabled=1/disabled=0
	option	interface	'br-lan'		# Interface to be monitored
	option 	pcap_file	''			# Read/process/exit pcap packet capture file
	option 	timeout1	'expint=90s'		# Flow timeout override values
	option 	timeout2	'udp=600s'		# Valid entries with defaults:
	option 	timeout3	''			#   expint=60s      udp=300s
	option 	timeout4	''			#   tcp=3600s       tcp.rst=120s
	option 	timeout5	''			#   tcp.fin=300s    icmp=300s
	option 	timeout6	''			#   general=3600s   maxlife=604800s
	option 	timeout7	''
	option 	timeout8	''
	option 	max_flows	'8192'
	option 	host_port	'10.10.1.100:5555'	# Collector IP:Port
	option 	export_version	'9'			# NetFlow export version 1/5/9
	option 	hoplimit	''
	option 	tracking_level	'full'
	option 	track_ipv6	'1'			# Track ipv6 regardless enabled=1
	option 	sampling_rate	'100'

config	ctlsock	vpn0
	option 	enabled		'1'
	option 	interface	'vpn0'
	option 	pcap_file	''
	option 	timeout1	''
	option 	timeout2	''
	option 	timeout3	''
	option 	timeout4	''
	option 	timeout5	''
	option 	timeout6	''
	option 	timeout7	''
	option 	timeout8	''
	option 	max_flows	'8192'
	option	host_port	'10.10.1.100:5556'
	option 	export_version	'9'
	option 	hoplimit	''
	option 	tracking_level	'full'
	option 	track_ipv6	'1'
	option 	sampling_rate	'100'

Make /etc/config/softflowd changes for your setup - interface and host_port at least.
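
If you prefer UCI over editing the file directly, something like this should work (the section name lan matches the config above; the values are placeholders):

    uci set softflowd.lan.interface='br-lan'
    uci set softflowd.lan.host_port='10.10.1.100:5555'
    uci commit softflowd
    /etc/init.d/softflowd restart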

Start softflowd: /etc/init.d/softflowd start

Enter /etc/init.d/softflowd to see a command list.

Enter ps -A | grep softflowd to see which configured softflowd instances are running.

If you want to try my updated custom version, you can find it at https://github.com/ruralroots/softflowd.

It is all script based. Just overlay your existing /etc/config/softflowd and /etc/init.d/softflowd, and add /etc/hotplug.d/iface/40-softflowd.

I've added a README as well.
This version also includes the runtime commands of softflowctl.

A while back, I wrote several blog posts about Netflow collectors (programs that receive information from Netflow exporters, like softflowd).

You can read them at: RandomNeuronsFiring.com/netflow-collectors-for-home-networks/ They're likely out of date (I'm astonished to see that they were from 2017/2018...) but still might be useful.

This is the equivalent UCI syntax in /etc/config/softflowd - I hope this helps someone:

config softflowd                    
        option enabled '1'          
        option interface 'eth0.2'    
        option host_port '192.168.xxx.xxx:xxxxx'
        option export_version '9'          
                                 
config softflowd               
        option enabled '1'                 
        option interface '6in4-henet'      
        option host_port '192.168.xxx.xxx:xxxxx'
        option export_version '9'

This works in OpenWrt 21.

tags: @dl12345 @anon50098793

Remember this. Did you miss post 23 and/or post 24?

/usr/sbin/softflowd -d "-c /var/run/softflowd/wan.ctl -i wan -t expint=120s -t tcp=1800s -m 8192 -n 10.10.1.100:5555 -v 9 -T full -s 100 -6"
/usr/sbin/softflowd -d "-c /var/run/softflowd/lan.ctl -i br-lan -m 8192 -n 10.10.1.100:5556 -v 9 -T full -s 100"
/usr/sbin/softflowd -d "-c /var/run/softflowd/vpn0.ctl -i vpn0 -m 8192 -n 10.10.1.100:5557 -v 9 -T ip -s 100 -6"
/usr/sbin/softflowd -d "-c /var/run/softflowd/WG.ctl -i WGMon -t tcp=1200s -t expint=60s -m 8192 -n 10.10.1.100:5558 -v 9 -T proto -s 100 -6"

Theoretically, it can support as many instances as ifconfig reports as up. Don't try it.

It complies fully with the softflowd(8) and softflowctl(8) man pages, and includes hotplug support.

What Netflow collector are you guys using?
I am trying to wrap my head around how the collector is listening on all those ports (5555, 5556, 5557, 5558) at the same time.
I am using ntopng, and that requires nprobe, which is commercial.

I use Paessler PRTG Network Monitor to collect Netflow data. It's a commercial product, but they provide a free license provided you don't exceed 50 sensors. SolarWinds, ManageEngine, and Elastic Network provide free Netflow collectors, IIRC.

Just multiple collector instances based on ifconfig, i.e. the Lan interface sends Lan Netflow data to collector port 5555, the Wan interface sends Wan Netflow data to collector port 5556, . . .

Allow me to ask a dumb question: is it illegal to configure softflowd to monitor multiple interfaces and send the flows to the same collector, like below?

config  ctlsock lan                                     # Control Socket File Name
        option enabled          '1'                     # Interface enabled=1/disabled=0
        option interface        'br-lan'                # Interface to be monitored
        option pcap_file        ''                      # Read & process pcap file & Exit
        option timeout1         ''                      # # # # # # # # # # # # # # #
        option timeout2         ''                      # Flow timeout override values
        option timeout3         ''                      #  Valid  entries with Defaults:
        option timeout4         ''                      # expint=60s    udp=300s
        option timeout5         ''                      # tcp=3600s             tcp.rst=120s
        option timeout6         ''                      # tcp.fin=300s  icmp=300s
        option timeout7         ''                      # general=3600s maxlife=604800s
        option timeout8         ''                      # # # # # # # # # # # # # # #
        option max_flows        '8192'                  # Maximum Flows to process
        option host_port        '172.16.17.106:2055'    # Collector IP:Port
        option export_version   '9'                     # NetFlow export version 1/5/9 supported
        option hoplimit         ''                      # Set ipv4 TTL or ipv6 hoplimit
        option tracking_level   'full'                  # Full, proto, or ip
        option track_ipv6       '0'                     # Track ipv6  enabled=1
        option sampling_rate    '1'                     # Periodic Sample Rate
        option bpf_filter       ''                      # Berkeley Packet Filter

config  ctlsock vpn0
        option  enabled         '1'
        option  interface       'tun0'
        option  pcap_file       ''
        option  timeout1        ''
        option  timeout2        ''
        option  timeout3        ''
        option  timeout4        ''
        option  timeout5        ''
        option  timeout6        ''
        option  timeout7        ''
        option  timeout8        ''
        option  max_flows       '8192'
        option  host_port       '172.16.17.106:2055'
        option  export_version  '9'
        option  hoplimit        ''
        option  tracking_level  'full'
        option  track_ipv6      '1'
        option  sampling_rate   '1'

config  ctlsock vpn1
        option  enabled         '1'
        option  interface       'wgc0'
        option  pcap_file       ''
        option  timeout1        ''
        option  timeout2        ''
        option  timeout3        ''
        option  timeout4        ''
        option  timeout5        ''
        option  timeout6        ''
        option  timeout7        ''
        option  timeout8        ''
        option  max_flows       '8192'
        option  host_port       '172.16.17.106:2055'
        option  export_version  '9'
        option  hoplimit        ''
        option  tracking_level  'full'
        option  track_ipv6      '1'
        option  sampling_rate   '1'

I use nfsen.

https://nfsen.sourceforge.net/

Listening... Netflow transmits data to the IP/port you specify in the config, so you simply configure your collector for it. Since you're configuring it, what's perplexing you?

Definitely not a dumb question.

It seems you're looking at my fork, so theoretically all Netflow data from br-lan, tun0, and wgc0 should land on your Netflow collector at 172.16.17.106, port 2055.

A couple of caveats go with that though.

  • Netflow version must be identical for all interfaces you configure.
  • Tracking level must be identical for all interfaces you configure.
  • Be aware that this runs on your router. Setting your sample rate to 1:1 could severely impact performance. What is your use case?

Yes, I am using your fork.
And I am using a free NetFlow collector hooked to ntopng.
Thank you for the caveats. My router has a quad-core 2.2 GHz CPU with 1 GB RAM, so I don't think it will get easily overwhelmed, but I am watching it with luci-app-statistics.
My use case is minor, probably even stupid, since I am not getting what I want; this might not be the right tool. I am running a Pi4B which mainly runs HA (supervised), besides some airplane monitoring for piaware, fr24feed, radarbox, and adsbexchange-feed. I have been looking at its status and I see a crazy amount of data transfer. I am trying to figure out where this data is going and what it is. I have Netlink Bandwidth Monitor installed on the router, but it doesn't show me the same traffic details I am seeing on the Pi monitor. That is what drove me to ntopng and Netflow. It's also a learning experience, because it already looks like I am one of those people :slight_smile:

Don’t overlook the pcap option.