Securing Guest Zone (techniques to limit abuse)

Hi!
How can I prevent abuse of guest zones?

//edit: new approach
Make use of the extra option and the hashlimit module.

/etc/config/firewall

config rule
	option name 'Guest-Accept-Input-DNS'
	option family 'ipv4'
	option proto 'tcpudp'
	option src 'guest'
	option dest_ip 'x.x.x.x'
	option dest_port '53'
	option extra '-m hashlimit --hashlimit-upto 1/sec --hashlimit-burst 50 --hashlimit-mode srcip --hashlimit-name guest_input_dns'
	option target 'ACCEPT'

config rule
	option name 'Guest-Accept-Input-DHCP'
	option family 'ipv4'
	option proto 'udp'
	option src 'guest'
	option dest_port '67'
	option extra '-m hashlimit --hashlimit-upto 1/sec --hashlimit-mode srcip --hashlimit-name guest_input_dhcp'
	option target 'ACCEPT'

config rule
	option name 'Guest-Accept-Input-IGMP'
	option family 'ipv4'
	option proto 'igmp'
	option src 'guest'
	option extra '-m hashlimit --hashlimit-upto 1/sec --hashlimit-mode srcip --hashlimit-name guest_input_igmp'
	option target 'ACCEPT'

config rule
	option name 'Guest-Accept-Input-ICMP-Echo-Request'
	option family 'ipv4'
	option proto 'icmp'
	list icmp_type 'echo-request'
	option src 'guest'
	option dest_ip 'x.x.x.x'
	option extra '-m hashlimit --hashlimit-upto 1/sec --hashlimit-burst 30 --hashlimit-mode srcip --hashlimit-name guest_input_icmp --hashlimit-htable-expire 30000'
	option target 'ACCEPT'

config rule
	option name 'Guest-Accept-Forward-SSDP-To-Lan'
	option family 'ipv4'
	option proto 'udp'
	option src 'guest'
	option dest 'lan'
	option dest_ip '239.255.255.250'
	option dest_port '1900'
	option extra '-m hashlimit --hashlimit-upto 1/sec --hashlimit-mode srcip --hashlimit-name guest_forward_ssdp'
	option target 'ACCEPT'

config rule
	option name 'Guest-Accept-Forward-HTTP-To-Lan'
	option family 'ipv4'
	option proto 'tcp'
	option src 'guest'
	option dest 'lan'
	option dest_ip 'x.x.x.x'
	option dest_port '80'
	option extra '-m connlimit --connlimit-upto 20 --connlimit-mask 32 --connlimit-saddr'
	option target 'ACCEPT'

/etc/firewall.user

IPT="iptables"   # define here if $IPT is not already set elsewhere in your setup

### Custom chains: rate limiting and logging/rejecting traffic over the limits
$IPT -t filter -N RATE_LIMIT_FORWARD
$IPT -t filter -N RATE_LIMIT_REJ

$IPT -t filter -F RATE_LIMIT_FORWARD
$IPT -t filter -F RATE_LIMIT_REJ

### Run everything forwarded out of eth1 (wan here) through the rate limit chain
$IPT -t filter -A forwarding_rule -o eth1 -j RATE_LIMIT_FORWARD

### Log (rate limited) and reject traffic that exceeded a limit
$IPT -t filter -A RATE_LIMIT_REJ -m limit --limit 20/min -j LOG --log-prefix "IPTables-Rejected: "
$IPT -t filter -A RATE_LIMIT_REJ -j reject

### Split global max connection limit over zones
$IPT -t filter -A RATE_LIMIT_FORWARD -i br-lan -s 10.0.0.0/24 -m connlimit --connlimit-above 8192 --connlimit-mask 24 --connlimit-saddr -j RATE_LIMIT_REJ
$IPT -t filter -A RATE_LIMIT_FORWARD -i br-isolated -s 10.0.1.0/24 -m connlimit --connlimit-above 8192 --connlimit-mask 24 --connlimit-saddr -j RATE_LIMIT_REJ

### Max 1000 Connections per Host
$IPT -t filter -A RATE_LIMIT_FORWARD -m connlimit --connlimit-above 1000 --connlimit-mask 32 --connlimit-saddr -j RATE_LIMIT_REJ

## Limit outgoing icmp requests per sec
$IPT -t filter -A RATE_LIMIT_FORWARD -p icmp -m hashlimit --hashlimit-above 1/sec --hashlimit-burst 30 --hashlimit-mode srcip --hashlimit-name all_forward_icmp_to_wan --hashlimit-htable-expire 30000 -j RATE_LIMIT_REJ

What are "good" values here?
For the rules that enforce connections/packets per second globally, would it maybe also make sense to add a dstport match?


For the most part it seems a bandwidth limit would be sufficient (qos/sqm settings).

This is another example where a luci app for guest wireless would really help a lot of users. Incorporating bandwidth limits or some sort of prioritization would be a useful feature of such an app.


In some cases qos/sqm will not help if some users decide to screw the network.

Like spamming the router itself, with either bogus DNS requests or an ICMP storm?

Limiting DNS is a bit weird.
When I did some tests (browsing the web with fast page switches), I could easily create 200+ "connections" to dnsmasq. (It would be quite nice to set a lower timeout for DNS requests, like 15 seconds?)

I tried to use connlimit to limit the maximum number of "connections" to the router's DNS forwarder.
When searching online about this topic, some say it will not work because UDP is connectionless.
It's true that UDP doesn't use "connections", but it is possible to track its state with conntrack (ctstate/state).
This also works for ICMP.
And from my tests, connlimit does also work for UDP.

But for UDP I think it's better to limit on a per-packet basis and keep connlimit for TCP connections.
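For reference, the connlimit variant I experimented with looked roughly like this (the limit of 100 tracked entries per source is just a placeholder):

config rule
	option name 'Guest-Limit-Input-DNS-Connlimit'
	option family 'ipv4'
	option proto 'udp'
	option src 'guest'
	option dest_port '53'
	option extra '-m connlimit --connlimit-upto 100 --connlimit-mask 32 --connlimit-saddr'
	option target 'ACCEPT'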

Also, limiting multicast packets forwarded across networks doesn't seem like a bad idea either?

I edited my post above, made some small adjustments.

Well, you can't prevent them from sending the packets, but you can drop the packets. A custom QoS could do that fine. Your solution isn't bad though, in particular I think hashlimit is in fact great for preventing DoS / flood attacks like sending thousands of DNS requests per second, or trying thousands of ssh logins per minute or whatever.

On my guest network, I just firewall the router itself; it doesn't accept input. My guest network is IPv6-only (I run Tayga on the router), it is bandwidth limited to 30Mbps, and I hand out Google DNS in the router advertisements. The router is completely firewalled from the guest users; it drops all input.

EDIT: ok, not completely firewalled, it does accept the required icmpv6 packets. ipv6 doesn't work without those. But that's it. BTW, to get this to work, I hand out Google's DNS64 addresses: https://developers.google.com/speed/public-dns/docs/dns64
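For reference, accepting just the essential ICMPv6 on a guest zone could look roughly like OpenWrt's default Allow-ICMPv6-Input rule, restricted to that zone (the zone name and the exact type list below are assumptions, trim to taste):

config rule
	option name 'Guest-Allow-ICMPv6-Input'
	option family 'ipv6'
	option proto 'icmp'
	option src 'guest'
	list icmp_type 'echo-request'
	list icmp_type 'router-solicitation'
	list icmp_type 'neighbour-solicitation'
	list icmp_type 'neighbour-advertisement'
	option limit '1000/sec'
	option target 'ACCEPT'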

I'm still trying to figure out what the best solution is here.

I updated my first post once more.

So, a few things that bugged me.
The ICMP (1/s) rule didn't work that well, since it breaks traceroute:
traceroute sends something like 3-4 ICMP requests per second.
The obvious solution would be to raise the limit to 3-4 packets/s.
But I don't like that.
Assuming the average hop count across the internet is something like 10-20 hops?

I changed the rule to --hashlimit-upto 1/sec --hashlimit-burst 30 --hashlimit-htable-expire 30000
(10 hops * 3 ICMP echo requests = 30)
So it will allow 30 packets to pass, then limit to 1 packet/s.
But the trick here is htable-expire.
It defaults to the prefix used: /sec = expire after 1s, /min = expire after 60s, and so on.
So the burst gets recharged every time hashlimit-upto 1/sec is not matched, or when the table entry expires.
If no value is specified, the burst would get fully recharged after 1s (if the upto rate is not hit).
Using htable-expire 30000 here causes the burst to recharge over a period of 30s.
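If you want to watch the burst recharge and the entries expire in practice, the live hashlimit table can be read from /proc; the file name matches --hashlimit-name:

# lists the tracked sources with their expiry and remaining credit
cat /proc/net/ipt_hashlimit/guest_input_dns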

And for DNS, when I think about it, most requests come in bursts.
So i use
//edit3
back to
--hashlimit-upto 1/sec --hashlimit-burst 150 --hashlimit-htable-expire 30000

So the idea is: assume a client makes 5 requests per second.
Over a period of 30 seconds (the UDP connection timeout) that would be 150 requests.
And most of the time a client will burst like 10-20 requests and then idle for a period of time.
During the idle time the burst can recharge.
If someone starts spamming, they will get limited after 150 packets/requests.
Someone could ask: okay, why not use
--hashlimit-upto 5/sec --hashlimit-burst 15

I have two problems with that.
First, it always allows 5 requests per second; that doesn't sound like much at first, but if there are several people messing around it can multiply quite fast.
Second, the burst with no expire timer (default 1 sec).
If they figure out the pattern (e.g. 15 requests, wait 1 sec, spam 15 again), that doesn't sound good either.

Assuming a 30 second UDP timeout:
Constant 4 requests/sec = 120 requests (using 5 would not allow the burst to recharge)
Burst of 15 every 2 seconds: 30/2 * 15 = 225
Resulting in roughly 345 requests over 30 seconds?

Now:
--hashlimit-upto 1/sec --hashlimit-burst 150 --hashlimit-htable-expire 30000
This one is a bit harder for an attacker to exploit, I think.

Because they can't spam at a constant rate: the burst would never get recharged.
It will always be 1 per sec (plus the burst, of course).

So 150 + 1*30 = 180

In the normal use case this is no problem, I think.
For example, someone browses the web and makes around 10 requests per page visit.
While they read the page, the burst gets recharged (a bit).

While there are a lot of intriguing use cases for hashlimit, I've found that one person's idea of a "good limit" usually violates someone else's, and thus stuff breaks in unintuitive ways.

Usually the rate-limiting mechanisms available at the application layer, like MaxStartups (ssh) or net.ipv4.icmp_ratelimit, are better suited and better thought out.
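For anyone wondering what those knobs look like (the values below are just examples; MaxStartups is an OpenSSH setting, so it doesn't apply to OpenWrt's default dropbear):

# /etc/ssh/sshd_config: start dropping unauthenticated connections
# probabilistically once 10 are pending, drop all of them above 100
MaxStartups 10:30:100

# kernel-side ICMP rate limiting: minimum gap in ms between limited
# ICMP replies, and the bitmask of ICMP types the limit applies to
sysctl -w net.ipv4.icmp_ratelimit=1000
sysctl -w net.ipv4.icmp_ratemask=6168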

I think hashlimit really is good to limit the rate at which connections can come into a server. So for example if you're running conntrack, and accepting all related traffic, and your firewall is therefore only looking at initial packets. In that environment for example you can limit ssh sessions to say 3 per minute with a burst of 5... and prevent people from brute-forcing passwords, or rate limit smtp connections and prevent people from flooding you with spam. Sure it doesn't prevent coordinated botnets where each connection comes from a different ip... but it goes a long way towards helping avoid DoS or brute-force attacks.
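A plain-iptables sketch of that ssh example, using the same custom-rule hooks as the firewall.user snippet above (port, limits and chain placement are just the values from this example):

# accept at most 3 new ssh connections per minute and source, burst of 5;
# anything above that gets dropped
iptables -A input_rule -p tcp --dport 22 -m conntrack --ctstate NEW \
	-m hashlimit --hashlimit-upto 3/min --hashlimit-burst 5 \
	--hashlimit-mode srcip --hashlimit-name ssh_new -j ACCEPT
iptables -A input_rule -p tcp --dport 22 -m conntrack --ctstate NEW -j DROP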

What I don't think necessarily makes sense is to hashlimit things like ICMP or UDP etc. There the QoS mechanisms make more sense. For example, ICMP you probably want to prioritize but limit to a certain bandwidth... so an HFSC bucket for ICMP makes sense, with a fast initial rate for a few tens of ms and then a slow steady-state rate after that, which prevents flooding. Same idea for multicast, or the like.
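A rough tc sketch of that idea, assuming eth1 is the WAN device and with made-up rates: an HFSC class lets ICMP start quickly for roughly the first 20ms worth of packets, then settles to a low steady rate.

tc qdisc add dev eth1 root handle 1: hfsc default 10
tc class add dev eth1 parent 1: classid 1:1 hfsc sc rate 50mbit ul rate 50mbit
# bulk/default traffic
tc class add dev eth1 parent 1:1 classid 1:10 hfsc sc rate 40mbit
# ICMP: fast initial service curve, slow steady-state rate afterwards
tc class add dev eth1 parent 1:1 classid 1:20 hfsc sc umax 1500b dmax 20ms rate 100kbit
tc filter add dev eth1 parent 1: protocol ip u32 match ip protocol 1 0xff flowid 1:20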

I edited my post above once more x)

QoS limiting is indeed also an option.
But then I would have to set up some QoS scripts, and I don't want that x)

For multicast I only forward SSDP between 2 networks, which is usually just 1 packet sent and 1 received.
So limiting to 1 per second does make sense here, I think?

UDP is a bit trickier; it all comes down to the use case.
Online games often send UDP packets at a constant rate.
While DNS requests are more like 10-20 requests, then idle, then 10-20 requests, and so on.

Admittedly tc is pretty irritating to learn, but maybe I'll do a "custom QoS discussion" thread and we can get ideas bouncing around there. I like the possibilities that eBPF offers, but I haven't had a chance to work on that yet.

Yeah, I don't like tc either, but after doing some basic stuff I'm getting more and more used to it.
But there's still much to learn.

Berkeley Packet Filter looks quite interesting; I'd never heard of it before.
But for typical home use, this seems a bit too much x)

Maybe, but I think a decent high level language for tc related tasks that compiled to eBPF would be in order. It could perhaps compile to c and then use the clang compiler for BPF. So then you might be able to do something like:

if ipv6 and dscp == 48 then returnclass(prio)
if ipv4 and dscp == 48 then returnclass(prio)
if icmp4 then returnclass(mod);
if icmp6 then returnclass(mod);
if ipv4 and udp and src in 33.33.33.0/24 then returnclass(voip);
....

and have it all compile to eBPF and this would then become the new easy way to do QoS for home or business use.

Yes, could be.

But the problem is classifying the traffic in the first place.
And more and more traffic uses encryption.
The only ways I see here are ports, destination IPs (problematic because of CDNs) or maybe hostnames/domains (nDPI does a good job here).
But that does not always work.
The best example is http/s:
how do you distinguish between bulk downloads and video streaming, for example?
For bulk you could maybe use connbytes, but that also doesn't work all the time because some applications split their downloads.

//edit
It seems nDPI does some mixture of matching IPs, hostnames/domains and pattern matching inside packets.

Domain matching can also be achieved with the iptables string module (with a third-party regex algorithm), I think.
Does that also work for HTTPS?
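For what it's worth, the stock string match only supports the bm and kmp algorithms (regex needs a third-party extension), and it only matches the individual packet that carries the hostname, so for whole flows you would pair it with CONNMARK. A rough sketch (hostname, offset and DSCP class are just examples):

iptables -t mangle -A FORWARD -p tcp --dport 443 \
	-m string --string "googlevideo" --algo bm --to 512 \
	-j DSCP --set-dscp-class CS1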

For http and https you can do it at layer 7 (HTTP layer) by forcing everyone through a squid proxy and then using some combination of delay pools and DSCP tagging on the LAN side. That's what I do to limit the total bandwidth of YouTube videos and Sling TV streams and the like, but give them higher priority on my LAN so they generally don't stall if someone wants to do something major (like transfer a file between fileserver and desktop).

I don't think there's a one-size-fits-all QoS. Cake does a not-too-bad job of doing stuff that lots of people find useful, but if you really care about quality of service for specific services you need to actually tell the computer. No way around it, but there are some ways that are easier than others, and u32 matches in tc are probably one of the least user-friendly :wink:


Further info on squid directives

http://www.squid-cache.org/Doc/config/clientside_tos/

Allows you to set the LAN side TOS byte based on a squid ACL. So for example you can set AF41 on stuff from googlevideo.com, and then use TC to prioritize these streams while limiting them to a certain max bandwidth. I also use delay pools here so that the WAN side connection isn't too bandwidth hungry. Squid can buffer things so the LAN rate and WAN rate aren't guaranteed to be the same.

You can also set say CS1 for stuff from sites you regularly bulk download from.

Check the squid logs to figure out how to identify bandwidth hogging sites.
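A minimal squid.conf sketch of that setup (the ACL name, domains and bandwidth value are just examples; 0x88 is the TOS byte for AF41):

acl streaming dstdomain .googlevideo.com
# tag LAN-side replies for these streams with AF41
clientside_tos 0x88 streaming

# one aggregate delay pool capping these streams to roughly 4 MB/s
delay_pools 1
delay_class 1 1
delay_parameters 1 4000000/4000000
delay_access 1 allow streaming
delay_access 1 deny all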


ty dlakelan

Squid looks quite interesting.
But then I would need to change my sqm classify script, which is based on connmarks.
I guess it could be modded quite easily by setting a connmark based on the DSCP value?
So ingress traffic also gets classified.
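Something along these lines could work, I guess (a minimal sketch; the DSCP class and the mark value are just examples):

# copy squid's DSCP tag into the conntrack mark so the existing
# connmark-based classify script can pick up the flow
iptables -t mangle -A OUTPUT -m dscp --dscp-class AF41 -j CONNMARK --set-mark 0x22
# put the connmark back onto packets of the same connection on ingress
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark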

I also figured out that it is quite easy to add a new protocol (domain matching) to nDPI.

So I'm not quite sure what the best solution is here.
Squid also uses more resources than nDPI, I guess.

If your goal here is to secure the Guest Zone, I think Squid is the way to go. It lets you do a lot of stuff related to http and https connections, which are often the most bandwidth hungry.

I'd suggest simply using DSCP as well as connmark; you can classify on whatever you want in tc. Let squid put the DSCP on the connection based on URL, and let your tc script stick those packets in the appropriate bins.
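For example, a u32 filter matching the AF41 tag could be as simple as this (device, handle and class id are assumptions):

# 0x88 = AF41 shifted into the ToS byte, 0xfc masks out the ECN bits
tc filter add dev br-lan parent 1: protocol ip u32 \
	match ip dsfield 0x88 0xfc flowid 1:20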

Since squid itself is outputting these packets, you will never get any DSCP other than the ones squid puts. This helps avoid abuse as well.

Squid can also set the outgoing TOS on the WAN side; of course it can't help you with ingress on the WAN, but you can use delay pools for that.

EDIT: it does use resources though. This is another reason I keep encouraging people to move to x86 based routers: modern networking requirements such as these are well beyond most consumer grade stuff in terms of RAM and CPU.

I'm still unsure x)

dnsmasq also has support for ipset and connmark, which could easily be used to match by domain name.
But that doesn't help to distinguish between bulk http/s and something else like streaming content.
It seems this can only be done with squid :-/
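A minimal sketch of the dnsmasq + ipset side, for reference (the set name and domains are examples; dnsmasq needs to be built with ipset support, which on OpenWrt means the dnsmasq-full package):

# create the set dnsmasq will fill (per-entry timeout in seconds is optional)
ipset create streaming hash:ip timeout 86400

# /etc/dnsmasq.conf: add every address resolved for these domains
# (and their subdomains) to the set
ipset=/googlevideo.com/nflxvideo.net/streaming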

A lot of traffic is https, where squid only sees the hostname, so using dnsmasq can be pretty good too. Squid can distinguish hosts by name rather than IP, whereas only the IP ultimately matters in the ipset approach. This can be important for content delivery networks and caching, like Akamai etc.

There are many possibilities; I do find squid more flexible, for example time-based or user-based rules.

I don't know how squid handles http/s.
But I think there is a need for a custom certificate that has to be imported on all clients?

Squid is quite flexible, indeed.
Caching, ACLs and so on...
If someone only wants to allow http/s traffic, most would say use squid for that.
But isn't it possible to tunnel traffic through an HTTP proxy?

Btw, I did some testing with dnsmasq and ipsets.
It works quite well, but the ipset lists grow quite fast.
Over a period of 24h, already over 500 entries, and that is only for 2 domains.
Maybe I'll switch from hash:ip to hash:net and run a script every hour to convert the plain IP list into an iprange/CIDR list?
I was also thinking about using a lower timeout value for the ipset itself, but when clients cache their DNS responses, the IP(s) would not be re-added to the list by dnsmasq.