How do enterprises and government institutions prove their router and firewall setups are correct?
Is everyone just relying on how good a day the network engineer in charge happens to be having, or is there a proven method and tooling to prove your setup is correct before or after it's adopted?
In the enterprise world, proving that something is correct means that somebody else has defined what tests should be performed, and what results are expected.
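As a trivial illustration of "somebody else defines the tests and the expected results", a config check can be as small as a list of checks paired with expected outcomes that someone other than the config author signs off on. The specific commands below are made-up examples, not any real benchmark:

```python
# Made-up examples of externally defined tests with expected results.
# Each entry says what to look for in an exported config and whether it
# is expected to be present. The commands here are illustrative only.
import sys

TESTS = [
    # (description, substring to look for, should it be present?)
    ("SSH v2 enforced",          "ip ssh version 2",       True),
    ("Telnet not allowed",       "transport input telnet", False),
    ("Explicit deny-all logged", "deny ip any any log",    True),
]

def run_tests(cfg_text: str) -> bool:
    all_ok = True
    for name, needle, expected in TESTS:
        ok = (needle in cfg_text) == expected
        all_ok &= ok
        print(("PASS" if ok else "FAIL"), name)
    return all_ok

if __name__ == "__main__":
    with open(sys.argv[1]) as f:   # path to an exported device config
        sys.exit(0 if run_tests(f.read()) else 1)
```

The point is less the code than the separation of duties: the checks and expected results are fixed up front, and the config either passes them or it doesn't.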
Could you elaborate? I assume you think "networking is in a [bad] state".
Maybe "YOUR" network is in a bad state - at work - at home?
Dilbert's Pointy-Haired Boss springs to mind. The network techs (usually) know what is possible and how to do it, but the typical SME/corporate business manager wants airy-fairy stuff he thinks exists, implemented right away.
I think there’s a famous Gartner quote out there where they surmised that 99% of all firewall breaches will be due to misconfiguration and not to any exploit in the firewall itself. It’s quite clear to me, at least, that the way we manage networks today is way too fragile and error-prone, because it’s way too complex to understand what’s actually going on.
I hear your opinion. But the average "man in the street" who tries his hand at playing with his home router, or God forbid his office router, and then moans that the system got bricked because "it’s way too complex to understand what’s actually going on", shouldn't even be touching the router.
Pretty much all Aerospace and Defence companies (perhaps with the exception of one large one) have set procedures that must be followed, and every change is documented; otherwise we would have aeroplanes falling out of the sky, space ships stranding astronauts on the ISS ... all sorts of things going wrong. /s
See what I did there? In the ongoing holiday spirit, I'm simultaneously agreeing and disagreeing with you.
also take a look at the CIS benchmarks https://www.cisecurity.org/cis-benchmarks
and especially the DISA STIGs (Defense Information Systems Agency Security Technical Implementation Guides) for guidelines that will make your ears bleed.
Anyway, there's a minimum level the feds are supposed to meet, and doing better is, if not encouraged, at least usually tolerated.
I have no idea how good the NIST "standards based security tools to automatically perform configuration checking" are now; they didn't exist when I worked for the feds. I used RANCID to grab the configs and a home-grown program to do the automatic config checking. Throw in a bit of automation, change control management, and a lab that has at least one of everything on the production network so changes can be tested before taking them to change control to be approved and scheduled, and the result is not all that many 'exciting' days in the network neighborhood.
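Roughly, the home-grown checking was nothing fancier than this kind of thing. This is a minimal sketch, not the actual program: the directory layout, file naming and rule list are illustrative assumptions, and it simply walks whatever configs a backup tool like RANCID has already saved to disk.

```python
#!/usr/bin/env python3
"""Sketch of a home-grown config checker run over configs already saved to
disk by a backup tool such as RANCID. Rules, layout and naming are invented
for illustration, not RANCID's actual output format."""

import re
import sys
from pathlib import Path

REQUIRED = [                      # every device config must contain these
    r"^service password-encryption",
    r"^logging host \S+",
]
FORBIDDEN = [                     # no device config may contain these
    r"^snmp-server community public",
    r"transport input telnet",
]

def check_config(path: Path) -> list[str]:
    text = path.read_text(errors="replace")
    problems = []
    for pat in REQUIRED:
        if not re.search(pat, text, re.MULTILINE):
            problems.append(f"missing required line: {pat}")
    for pat in FORBIDDEN:
        if re.search(pat, text, re.MULTILINE):
            problems.append(f"forbidden line present: {pat}")
    return problems

if __name__ == "__main__":
    cfg_dir = Path(sys.argv[1] if len(sys.argv) > 1 else "./configs")
    exit_code = 0
    for cfg in sorted(cfg_dir.glob("*")):
        if cfg.is_file():
            for p in check_config(cfg):
                print(f"{cfg.name}: {p}")
                exit_code = 1
    sys.exit(exit_code)
```

Run it nightly after the config grab and anything that prints is a change-control conversation waiting to happen.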
In the fairly large org I worked at, they had well-defined policies, standards, tools and a stable, seasoned staff. Endpoint devices were part of the standard for the most part. With 14 campuses, about 900 switches and close to 20,000 active ports (endpoints), there were 3 models of edge switch and 2 models of core/distribution switch, all coming back to one firewall HA pair at corporate HQ.

With the exception of a few parameters like IP addresses and hostname, each model of multi-layer switch had a common config. VLAN and ACL settings per access port were determined via dot1x discovery with endpoint supplicants or proprietary discovery, and the port config was pushed from a few redundant servers on every dot1x negotiation. Intra-campus routing protocol config was all dynamic, based on VLAN IP addresses and a stub config on distribution switches.

The vendor management tools provided some auditing, and we wrote scripts to audit the configs and report all exceptions. Exceptions had to be justified to and authorized by management yearly. I think we had about 35 exceptions out of over 19,000 active ports when I left. All that networking gear and the peripheral services, including application delivery controllers/load balancers, DNS and DHCP servers, were managed by a staff of 4 until I left.
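Conceptually, that exception audit is little more than diffing each device's saved config against a "golden" config for its model, after filtering out the lines that legitimately differ per device. Here is a simplified sketch, not the actual scripts: the model-from-filename convention, file layout and ignore list are illustrative assumptions.

```python
"""Sketch of a per-model exception audit: diff each saved device config
against a golden template for that model and report every deviation.
File layout, naming convention and ignore list are assumptions."""

import difflib
from pathlib import Path

# lines that legitimately differ per device and should not count as exceptions
IGNORE_PREFIXES = ("hostname ", "ip address ", "snmp-server location ")

def normalize(text: str) -> list[str]:
    return [
        line.rstrip()
        for line in text.splitlines()
        if line.strip() and not line.lstrip().startswith(IGNORE_PREFIXES)
    ]

def exceptions(golden: Path, device_cfg: Path) -> list[str]:
    diff = difflib.unified_diff(
        normalize(golden.read_text()),
        normalize(device_cfg.read_text()),
        lineterm="",
        n=0,
    )
    # keep only added/removed config lines, skip the diff headers
    return [d for d in diff if d[:1] in "+-" and not d.startswith(("+++", "---"))]

if __name__ == "__main__":
    golden_dir, cfg_dir = Path("golden"), Path("configs")
    total = 0
    for cfg in sorted(cfg_dir.glob("*.cfg")):
        model = cfg.stem.split("_")[0]        # assumes names like "modelX_sw-bldg4.cfg"
        golden = golden_dir / f"{model}.cfg"
        if not golden.exists():
            print(f"{cfg.name}: no golden template for model {model}")
            continue
        for line in exceptions(golden, cfg):
            print(f"{cfg.name}: {line}")
            total += 1
    print(f"{total} exception line(s) found")
```

Anything the script prints either gets fixed or goes on the yearly list of justified exceptions for management to sign off.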
Can confirm what the previous two commenters have been saying. With decently sized companies there is governance in place, meaning people who know what standards should be followed.
There are best practices defined by governing bodies (such as CISA and others), and an overall infrastructure design/service plan (i.e. which services are operated from where in your network). You combine the two and you have a decent security design as a start. Then you test continually to ensure you’re compliant with it, update firmware and configurations whenever required to protect against CVEs, make sure only the required services and hosts are allowed to access what they need, and also hire external consultants (aka whitehats) to test your infrastructure and follow their advice.
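The "only the required services and hosts" part is the piece that is easiest to check mechanically: export the firewall's rules and diff them against an approved allow-list. A minimal sketch follows, assuming an invented one-rule-per-line text format ("src dst service"); a real firewall export would need its own parser.

```python
"""Sketch: compare deployed firewall rules against an approved allow-list.
Both file formats here are invented for illustration."""

from pathlib import Path

def load_rules(path: Path) -> set[tuple[str, str, str]]:
    rules = set()
    for line in path.read_text().splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blank lines
        if line:
            src, dst, service = line.split()   # e.g. "10.1.2.0/24 10.9.9.5 tcp/443"
            rules.add((src, dst, service))
    return rules

if __name__ == "__main__":
    approved = load_rules(Path("approved_rules.txt"))
    deployed = load_rules(Path("firewall_export.txt"))

    for rule in sorted(deployed - approved):
        print("NOT APPROVED (remove or get it authorized):", rule)
    for rule in sorted(approved - deployed):
        print("APPROVED BUT MISSING (service may be broken):", rule)
```

It only tells you about drift from the approved design, of course; whether the approved design itself is sane is what the whitehats and the CVE churn keep testing.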
Being “correct” is very relative, as new vulnerabilities pop up every day, and what was best practice yesterday might be obsolete and dangerous today.