I did additional tests on trunk and got this
The blue line is the default NAT performance, while the green line is what I got after running "/etc/init.d/odhcpd stop".
What does odhcpd do that limits the throughput?
Sounds strange. AFAIK odhcpd is an ipv6 RA/DHCPv6/relay server and plays no role in actual traffic routing. But it adds more ipv6 info and possibly thus complicates routing inside the kernel/iproute2/whatever...
It only adds more ipv6 support. I am wondering if your traffic now gets passed through an ipv6 tunnel, or something like that, which could cause more protocol overhead. Do you have ipv6 in use?
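A quick way to check that on the router itself (a sketch, assuming the iproute2/busybox `ip` tool available on OpenWrt):

```shell
# any global ipv6 addresses assigned to the interfaces?
ip -6 addr show scope global

# any ipv6 default route (native, or via a tunnel such as 6in4/6rd)?
ip -6 route show default
```

If both come back empty, ipv6 is not actually carrying your test traffic.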
Could you please verify that result:
I would like to see separate
a) odhcpd stopped normally via the service stop. That will probably remove the ipv6 prefix info from the routing tables etc.
b) the odhcpd process killed forcefully via the kill command. The routing rules etc. should then still be there and continue affecting performance.
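The two variants above could look like this on the router (a sketch for OpenWrt; run the speed test right after each step):

```shell
# a) normal service stop - odhcpd cleans up, likely removing its
#    ipv6 prefix routes/rules from the routing tables
/etc/init.d/odhcpd stop

# ...restart the service before trying test b)
/etc/init.d/odhcpd start

# b) forceful kill - no cleanup, routes/rules should stay in place
kill -9 "$(pidof odhcpd)"

# snapshot the ipv6 routing state before/after each step to compare
ip -6 route show
ip -6 rule show
```

Comparing the `ip -6 route`/`ip -6 rule` output between a) and b) should show whether the routing entries are really what differs.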
My guess is that you will see a speed difference between a) and b). The odhcpd process itself should have no effect, but the routing rules etc. may have. The extra ipv6 support may add just enough additional routing work that it becomes visible at high speeds.
(The ipv6 routing information has a lifespan of roughly 10-30 minutes, I think, so please test the speed right after killing the process.)