Configuration for ulogd and logstash to generate and collect IPFIX data

Since ulogd recently got working IPFIX support (upstream commit), I created a configuration example for ulogd on OpenWrt and used logstash to collect the generated data.

ulogd
On your OpenWrt device you need the following packages built from master: ulogd ulogd-mod-nfct ulogd-mod-extra
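If the packages are available in your snapshot feed, installing them should look roughly like this (a sketch, assuming an image with opkg):

opkg update
opkg install ulogd ulogd-mod-nfct ulogd-mod-extra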

Edit /etc/ulogd.conf and use the following config:

[global]
plugin="/usr/lib/ulogd/ulogd_inpflow_NFCT.so"
plugin="/usr/lib/ulogd/ulogd_output_IPFIX.so"
stack=ct1:NFCT,ipfix1:IPFIX

[ct1]
hash_enable=0

[ipfix1]
oid=1
proto="udp"
host="192.168.1.30" # your logstash instance
port=4739
send_template="always" # could be set to "once", but I hat problems when restarting logstash - "always" seems to work better

After that, run /etc/init.d/ulogd start.
Since ulogd-mod-nfct is used, there is no need to add iptables rules for ulogd.
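To confirm that flow records actually leave the router, you can watch the export port for outgoing UDP packets; a quick check, assuming tcpdump is installed on the device:

tcpdump -ni any udp port 4739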

logstash
I won't give instructions on how to install logstash and elasticsearch here; there are plenty of guides available online (I started with this one).
On my logstash instance I used the following config (/etc/logstash/conf.d/netflow.conf on my Debian installation):

input {
  udp {
    port  => 4739
    codec => netflow
    workers => 4 # depending on the number of CPU cores to use
  }
}

filter {
  cidr {
    address => [ "%{[netflow][sourceIPv4Address]}" ]
    network => [ "127.0.0.1/24", "10.0.0.0/8", "192.168.0.0/16", "::1/24", "fd00::/8" ]
    add_tag => [ "sourcePrivate" ]
  }

  cidr {
    address => [ "%{[netflow][destinationIPv4Address]}" ]
    network => [ "127.0.0.1/24", "10.0.0.0/8", "192.168.0.0/16", "::1/24", "fd00::/8" ]
    add_tag => [ "destinationPrivate" ]
  }

  mutate { add_field => { "remoteHostname" => "none" } }

  if "sourcePrivate" not in [tags] {
    mutate { replace => { "remoteHostname" => "%{[netflow][sourceIPv4Address]}" } }
  }

  if "destinationPrivate" not in [tags] {
    mutate { replace => { "remoteHostname" => "%{[netflow][destinationIPv4Address]}" } }
  }

  if [remoteHostname] != "none" {
    geoip {
      source => "[remoteHostname]"
    }

    dns {
      reverse => [ "remoteHostname" ]
      action => "replace"
      hit_cache_size => 8192
      hit_cache_ttl => 900
      failed_cache_size => 2048
      failed_cache_ttl => 900
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "netflow-%{+YYYY.MM.dd}"
  }
}

The filters add reverse DNS and geoip information. If you just want to have plain netflow information, you can remove the filters section.
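If the index stays empty, it can help to temporarily print the decoded events instead of (or in addition to) sending them to elasticsearch; a minimal debug output for that would be:

output {
  stdout { codec => rubydebug }
}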

This setup works quite well. The only downside is that logstash + elasticsearch are really resource-heavy and not something I want to run permanently on a small home server. But in the absence of a performant, free-software IPFIX analyzer (I didn't find one, at least), this is currently the best solution for me.

ulogd, on the other hand, is extremely efficient. I only tested it on my x86 router, but in theory it should be possible to generate IPFIX data on pretty low-end devices as well.


Today I'm configuring my newly installed OpenWrt on a TP-Link Archer C50 v4
OpenWrt SNAPSHOT r11616-291d79935e / LuCI Master git-19.333.26981-88cdda4

After configuring the /etc/ulogd.conf file, I decided to check the configuration using netcat. When I start

ulogd -vc /etc/ulogd.conf

netcat shows this error:

root@OpenWrt:~# netcat -vvvvvv -l -p 4739 -u
Listening on any address 4739
Received packet from 127.0.0.1:53096 -> 127.0.0.1:4739 (local)
Trace/breakpoint trap
root@OpenWrt:~#

What could be the reason?

Sounds like a netcat bug to me.
Can you test it with netcat or wireshark on a non-OpenWrt device?
Just put the IP of your workstation/whatever in your ulogd.conf
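For example, on a Linux workstation you could check with tcpdump that UDP packets from the router actually arrive on the export port (a sketch; the interface name eth0 is just an example):

tcpdump -ni eth0 udp port 4739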

The incoming data looks like trash, and I configured logstash, but it doesn't create an index in elasticsearch; I get errors like "handle data error".

Great post. I'm having the issue that /usr/lib/ulogd/ulogd_output_IPFIX.so is not available in the latest version of ulogd (after installing ulogd-mod-nfct and ulogd-mod-extra). Any clue about where it could be?
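For reference, one way to see which files the ulogd packages actually installed (assuming opkg is available on the device):

opkg files ulogd-mod-extra
ls /usr/lib/ulogd/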

I built this setup with Elastiflow as the IPFIX destination. It works quite well, with one major limitation: bytes and packets are transmitted as "total", not as "delta". So, with the default config, bytes and packets are all zeros, as Elastiflow expects "delta".
A dirty workaround is to add the following lines in 20_filter_30_ipfix.logstash.conf:

} else if [ipfix][octetTotalCount] {
  mutate {
    replace => {
      "[network][bytes]" => "%{[ipfix][octetTotalCount]}"
    }
  }

and

} else if [ipfix][packetTotalCount] {
  mutate {
    replace => {
      "[network][packets]" => "%{[ipfix][packetTotalCount]}"
    }
  }

And use hash_enable=1 in /etc/ulogd.conf so that the connection log is transmitted when connections are terminated.
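Applied to the configuration from the first post, that means changing the [ct1] section like this:

[ct1]
hash_enable=1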

Another issue with IPv6: ulogd/IPFIX transmits IPv6 connections with an ipv4_addr type for the src and dst IP (only the first 4 bytes)... Any idea how to fix that?


Hello, I'm also facing the 0 bytes issue.
Can you please point me to the lines you changed in 20_filter_30_ipfix.logstash.conf? It's quite a fat file with lots of rules.
Thanks.

Hi @alexmac
You can put the following condition at line 357 of 20_filter_30_ipfix.logstash.conf:

} else if [ipfix][octetTotalCount] {
...

Then, line 503:

} else if  [ipfix][packetTotalCount] {
...

Thanks!
I've already switched to monitoring with softflowd; now it contains the byte and packet counts.
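For reference, a minimal softflowd invocation that exports IPFIX (protocol version 10) to the same collector could look like this (a sketch; interface and collector address are examples):

softflowd -i br-lan -n 192.168.1.30:4739 -v 10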
