Bottom line is that an APU2C4 appears to be able to bridge and run tcpdump
at GigE rates without breaking a sweat.
DUT is an APU2C4 running FreeBSD 11.2-RELEASE. Test-path Ethernet in/out is connected on igb0 and igb1, with ssh management access over igb2. I ssh in and run htop to watch CPU utilization. Bridge created as per https://www.freebsd.org/doc/handbook/network-bridging.html
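For reference, the bridge setup from the handbook page boils down to a few ifconfig commands on the APU; a minimal sketch, assuming the igb0/igb1 names seen in the capture output below:

```shell
# Create bridge0, add both test-path NICs as members, and bring everything up.
# No IP addresses are assigned to the bridged interfaces themselves.
ifconfig bridge create
ifconfig bridge0 addm igb0 addm igb1 up
ifconfig igb0 up
ifconfig igb1 up
```

This is runtime-only; the handbook also shows the equivalent rc.conf lines to make it persist across reboots.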
Test harness is two FreeBSD 11.2-RELEASE machines with Intel NICs: one an i3-7100T on a "normal" motherboard using the NIC in the PCH, the other a Xeon E3-1265 v2 in a Lanner FW7582A with discrete Intel NICs. Test software is netperf-2.7.1.p20170921.
Cabling is CAT6, 1-2 m in length.
No interface configuration beyond setting an IPv4 address and netmask on the test-harness NICs, or beyond adding them to a bridge and bringing them up on the APU.
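On the harness machines that configuration is a single command per box; a sketch, where .102 is the netperf target seen in the runs below and .101 and the em0 name are assumptions for the sending side:

```shell
# On the sender (lanner); the receiver gets 192.168.100.102 the same way.
ifconfig em0 inet 192.168.100.101 netmask 255.255.255.0 up
```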
Ethernet Cable
[jeff@lanner ~]$ netperf -H 192.168.100.102 -I 99,2
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.100.102 () port 0 AF_INET : +/-1.000% @ 99% conf. : histogram : interval : dirty data : demo
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
65536 32768 32768 10.04 941.47
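That 941 Mb/s figure is effectively TCP line rate for GigE. A back-of-the-envelope check, assuming a standard 1500-byte MTU and TCP timestamps:

```shell
# Payload per frame: 1500 - 20 (IP header) - 32 (TCP header w/ timestamps) = 1448 bytes.
# On-wire cost per frame: 1500 + 14 (Ethernet) + 4 (FCS) + 8 (preamble) + 12 (gap) = 1538 bytes.
awk 'BEGIN{printf "%.2f Mb/s\n", 1448 * 1000 / 1538}'   # prints 941.48 Mb/s
```

So the cable-only baseline is already at the theoretical maximum, and the interesting question is whether the bridge drops below it.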
Bridge Only
No "tap" -- one or two CPU cores (of four) engaged, typically under 40% utilization on each.
[jeff@lanner ~]$ netperf -H 192.168.100.102 -I 99,2
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.100.102 () port 0 AF_INET : +/-1.000% @ 99% conf. : histogram : interval : dirty data : demo
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
65536 32768 32768 10.07 941.10
Bridge, tcpdump -w /dev/null
tcpdump consumes 60-90% of one core, with the remaining cores at relatively low utilization.
[jeff@apu-too ~]$ sudo tcpdump -i igb0 -w /dev/null
tcpdump: listening on igb0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C7364551 packets captured
7362628 packets received by filter
0 packets dropped by kernel
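If that 60-90% of a core ever became a problem, tcpdump's own knobs can cut the per-packet work; a sketch, where the snaplen value and filter are illustrative, not tested here:

```shell
# -s 96 truncates each packet to 96 bytes, and the BPF filter keeps only the
# test stream -- both reduce copying, at the cost of losing full payloads and
# non-matching traffic.
tcpdump -i igb0 -s 96 -w /dev/null host 192.168.100.102
```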
[jeff@lanner ~]$ netperf -H 192.168.100.102 -I 99,2
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.100.102 () port 0 AF_INET : +/-1.000% @ 99% conf. : histogram : interval : dirty data : demo
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
65536 32768 32768 10.04 940.96
Bridge, tcpdump Pumping Everything Over ssh
ssh swamps a core when trying to pump 1 Gbps through encryption and over the wire. The remaining cores run in the 30-60% range.
[jeff@miniup ~]$ ssh jeff@apu-too tcpdump -i igb0 -w - > /dev/null
[jeff@lanner ~]$ netperf -H 192.168.100.102 -I 99,2
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.100.102 () port 0 AF_INET : +/-1.000% @ 99% conf. : histogram : interval : dirty data : demo
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
65536 32768 32768 10.04 939.62
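Since ssh is the bottleneck here, cipher choice might move the needle; a sketch, where the cipher pick is an assumption (the APU's GX-412TC supports AES-NI, but only a measurement would confirm a win):

```shell
# See which ciphers this OpenSSH build offers, then retry the capture pipe
# with an AES-GCM cipher that AES-NI can accelerate.
ssh -Q cipher
ssh -c aes128-gcm@openssh.com jeff@apu-too tcpdump -i igb0 -w - > /dev/null
```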
ssh Throughput
[jeff@apu-too ~]$ dd if=/dev/random of=random.1000m bs=1m count=1000
[jeff@apu-too ~]$ dd if=random.1000m of=/dev/null bs=1m
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 2.549130 secs (411346640 bytes/sec)
[jeff@miniup ~]$ scp jeff@apu-too:random.1000m /dev/null
random.1000m 100% 1000MB 24.4MB/s 00:41
Disk read is over 400 MB/s, so the 24.4 MB/s transfer is almost certainly limited by ssh, not the disk.
Call it 200 Mbps or so, encrypted over the wire, for incompressible data.
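Converting the scp figure to wire terms as a quick check:

```shell
# 24.4 MB/s from scp, times 8 bits per byte:
awk 'BEGIN{printf "%.0f Mb/s\n", 24.4 * 8}'   # prints 195 Mb/s
```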