Iptables to simulate latency spikes

Hello.

#!/bin/sh

WANIF="eth1"
iptables -N FUZZLIMIT
iptables -A OUTPUT -o $WANIF -j FUZZLIMIT

while true
do
	RAND1=`awk -v min=2 -v max=14 'BEGIN{srand(); echo int(min+rand()*(max-min+1))}'`
	sleep 1
	RAND2=`awk -v min=6 -v max=10 'BEGIN{srand(); echo int(min+rand()*(max-min+1))}'`
	iptables -F FUZZLIMIT
	iptables -A FUZZLIMIT -m limit --limit $RAND1/second --limit-burst $RAND2 -j ACCEPT
done

This is a shell script I found on the forums that is supposed to simulate latency spikes. When I try to run it with sh, I get this error: "bad rate '/sec'" or "bad rate --limit-burst".

Is this script outdated? If so, how can I update it? Thank you for your help.

I'm using OpenWrt stable release 18.06.4
Kernel 4.9.184

Use alternate shell commands in the script to generate the RAND1 and RAND2 values. The awk one-liners fail because `echo` is not an awk statement (awk uses `print`), so RAND1 and RAND2 come back empty and iptables is handed a bare "/second".

e.g.

RAND1=`cat /dev/urandom | tr -dc 0-9 | dd bs=1 count=1 2>/dev/null`
RAND2=`cat /dev/urandom | tr -dc 0-9 | dd bs=1 count=1 2>/dev/null`

Note that this yields a single digit 0-9, so it ignores the original min/max ranges.
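If you would rather keep the original 2-14 and 6-10 ranges, the awk calls themselves can be repaired; the only change in this sketch is `print` in place of `echo`:

```shell
# awk has no `echo` statement, which is why RAND1/RAND2 were empty;
# `print` emits the random value and preserves the intended ranges.
RAND1=$(awk -v min=2 -v max=14 'BEGIN{srand(); print int(min+rand()*(max-min+1))}')
RAND2=$(awk -v min=6 -v max=10 'BEGIN{srand(); print int(min+rand()*(max-min+1))}')
echo "$RAND1 $RAND2"
```

Since srand() with no argument seeds from the current time, successive runs within the same second will repeat values, but the once-per-second loop in the original script is slow enough for that not to matter.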

Thank you for your answer 🙂

Don't hold your breath. This snippet will just accept a random amount of packets per second; what happens to the rest is not decided. Also, the random value is chosen while installing the rule (once), not while evaluating it.

This will simulate a random transfer speed with a random-sized buffer, not latency spikes.


Using tc netem would be better.
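For reference, a minimal netem sketch (untested here; assumes eth1 is the WAN interface and that the tc and netem kernel module packages are installed on OpenWrt, and that you run it as root):

```shell
# Add 100ms of delay with +/-50ms random variation on egress of eth1:
tc qdisc add dev eth1 root netem delay 100ms 50ms
# Adjust it on the fly without removing the qdisc:
tc qdisc change dev eth1 root netem delay 300ms 100ms
# Remove it when done:
tc qdisc del dev eth1 root
```

Unlike the iptables limit match, netem actually delays packets, so this produces real latency rather than dropped or rate-limited traffic.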


Alright, I guess I will use netem then, thanks.

Ok, thanks.