In the days when even cheap consumer hardware ships with 2.5G ports, this number seems weirdly low. Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere that might be expecting to handle 10G or higher per port, or is it just filtering that's an issue?
I'm not surprised that the issue exists, as even 10 years ago these speeds were uncommon outside of the datacentre. I'm just surprised that nobody felt a pressing enough need to fix this at some point in the past few years.
AFAIK performance is not a priority for the OpenBSD project - security is (along with related qualities like code that is easy to understand and maintain). FreeBSD (at least when I followed it several years ago) had better performance both for ipfw and for its own PF fork (which is not fully compatible with the OpenBSD one).
A lot of the time once you get into multi-gig+ territory the answer isn't "make the kernel faster," it's "stop doing it in the kernel."
You end up pushing the hot path out to userland where you can actually scale across cores (DPDK/netmap/XDP style approaches), batch packets, and then DMA straight to and from the NIC. The kernel becomes more of a control plane than the data plane.
PF/ALTQ is very much in the traditional in-kernel, per-packet model, so it hits those limits sooner.
IIRC XDP is still in-kernel, not userland -- it runs eBPF programs in the kernel's networking stack. DPDK is the true bypass approach. Minor distinction but it matters when you're trying to explain the tradeoff. The broader point about PF's per-packet model stands though.
OpenBSD was a great OS back in the late 90s and even early 2000s. In some cases it was competing neck and neck with Linux.
Since then, well, Linux grew a lot and OpenBSD not so much. There are multiple causes for this; I'll only go through a few: Linux has more support from the big companies; there's a huge difference in userbase numbers; Linux is more welcoming to new users. And the difference is only growing.
PF itself is not tailored towards ISPs and/or big orgs. IPFW (FreeBSD) is more powerful and flexible.
OpenBSD shines as a secure all-in-one router SOHO solution. And it’s great because you get all the software you need in the base system. PF is intuitive and easy to work with, even for non network gurus.
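To give a feel for why people call it intuitive, here's a minimal pf.conf sketch for that SOHO router case. Interface names and the address range are placeholders, and this is illustrative rather than a hardened config:

```pf
ext_if = "em0"   # WAN, name is an assumption
int_if = "em1"   # LAN, name is an assumption

set skip on lo

# NAT LAN traffic out the external interface
match out on $ext_if from $int_if:network nat-to ($ext_if)

# default deny, then open only what's needed
block return
pass out on $ext_if          # let the router and LAN reach out
pass in on $int_if           # trust the LAN side
```

The whole policy reads roughly like English, which is most of PF's appeal for non-network-gurus.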
> Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere that might be expecting to handle 10G or higher per port, or is it just filtering that's an issue?
This looks like it only affects bandwidth limiting. I suspect it's pretty niche to use OpenBSD as a traffic shaper at 10G+, and if you did, I'd imagine most of the queue limits would tend toward significantly less than 4G.
One thing could also be that by the time you have 10GE uplinks, shaping is not as important.
When we had 512kbit links, prioritizing VOIP would be a thing, and for asymmetric links like 128/512kbit it was prudent to prioritize small packets (ssh) and tcp ACKs on the outgoing link or the downloads would suffer, but when you have 5-10-25GE, not being able to stick an ACK packet in the queue is perhaps not the main issue.
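For what it's worth, that old "prioritize ACKs and ssh" trick maps to PF's priority queueing today. A sketch, with the interface macro assumed and priority values purely illustrative:

```pf
# bulk TCP gets priority 3; empty ACKs and lowdelay-flagged
# packets get bumped to 6 so downloads don't starve the uplink
match out on $ext_if proto tcp set prio (3, 6)

# interactive ssh gets a higher floor
pass out on $ext_if proto tcp to any port ssh set prio (5, 7)
```

The second number in `set prio (a, b)` is what catches payload-less ACKs, which is exactly the asymmetric-link case described above.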
At 10G and up, shaping still matters. Once you mix backups, CCTV, voice, and customer circuits on the same uplink, a brief saturation event can dump enough queueing delay into the path that the link looks fine on paper while the stuff people actually notice starts glitching, and latency budgets are tight. Fat pipes don't remove the need for control, they just make the mistakes more expensive.
Isn't OpenBSD mainly used for security testing, or do I have it wrong? I would be surprised if it was used in production datacenter networking hardware at all. Seems like most people would use one of the proprietary implementations (which likely include drivers written specifically for that hardware) or something like FreeBSD.
It's widely used as a router, that's one of its primary uses. But not sure to what scale, likely at small orgs not at major ISPs.
But, OpenBSD is a project by and for its developers. They use it and develop it to do what they want; they don't really care what anyone else does or doesn't do with it.
You don't need 4gbps pf queues or even fiber on every single machine in a datacenter. So be surprised, it is used widely for its simplicity and reliability not to mention security compared to those proprietary implementations you speak of, may they rot in hell.
OpenBSD has been running production edge routers since at least 2001. PF replaced IPFilter specifically because people needed it in real deployments. OPNsense and pfSense both descend from it. "Security testing only" undersells it by about 20 years.
There is no reason to use OpenBSD (aside from "we have OpenBSD nerds on staff" or, I guess, "we don't want to GPL our changes"). We ran it ages ago (for the first of those reasons), but even those folks dumped it once new server hardware wasn't supported.
I finally talked myself into going to 3 Gbps (and I'm working on getting the internal network to 10). Internal transfers to the NAS will be much faster, and downloading AI models should go from ~8 minutes to less than 3. Is it necessary? Not exactly. But super nice.
With 7.9, shaping (read: bandwidth rate limiting) no longer tops out at 4 Gbps. PF could always process/transfer beyond 4 Gbps, presuming you had hardware fast enough to handle such bandwidth. The limit under discussion applied only to queues, when using them to shape traffic.
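Concretely, a queue definition like the one below is the kind of thing that would previously have fallen foul of the 4G ceiling and should now be accepted under 7.9. Interface name and bandwidth figures are illustrative, not from the release notes:

```pf
# root queue on a 10G interface, shaped below line rate
queue outq on ix0 bandwidth 9G
queue bulk parent outq bandwidth 6G default
queue voip parent outq bandwidth 1G min 500M
```

Note the root queue's `9G`: before the fix, anything above 4G in a `bandwidth` spec was the problem, not PF's forwarding path itself.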
If you're asking about OpenBSD/PF's general network performance, it has finally been performing acceptably for a couple of years now. You can easily saturate a 2.5 GbE interface with low-end hardware.
We ran into this last year pushing 10G traffic through a firewall. Shaping worked fine, but everything above 4 Gbps got silently capped. Took us embarrassingly long to realize it was the queue limit. Upgrading to a patched build fixed it immediately.
I would love to use OpenBSD. I really wanna give it a try, but the filesystem choices seem kinda meh. Are there any modern filesystems with good NVMe and FDE support for OpenBSD?
What does "dsa" add here? If you mean Distributed Switch Architecture, that's largely a Linux kernel construct. OpenBSD's path is different - they're not offloading to switch silicon, they're fixing the actual software queue bottleneck. Those aren't the same problem.
Currently these interfaces are only on switches, but there are already NICs at 800G (P1800GO, Thor Ultra, ConnectX-8/9), so if you LACP/LAGG two together your bond is at 1600G.
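On OpenBSD specifically, that kind of LACP bond would be an aggr(4) interface. A sketch of what the config file might look like (port names and address are assumptions):

```pf
# /etc/hostname.aggr0 -- hypothetical bond of two ports
trunkport ix0
trunkport ix1
inet 192.0.2.1 255.255.255.0
```

Whether any current driver and bus can actually feed 2x800G is a separate question from what the config syntax allows.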
Looks like an arbitrary validation cap. By the time we're maxing out the 64-bit underlying representation we probably won't be using Ethernet any more.
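Quick arithmetic backs that up, assuming the old cap came from a 32-bit bits-per-second field (which the suspiciously round ~4 Gbps figure suggests) and the new one is 64-bit:

```python
# Why the old cap was "4G": bits/sec stored in 32 bits.
old_cap_gbps = (2**32 - 1) / 1e9   # ~4.29 Gbps
# With a 64-bit field, the ceiling is absurdly far away.
new_cap_tbps = (2**64 - 1) / 1e12  # ~18.4 million Tbps
print(f"old: {old_cap_gbps:.2f} Gbps, new: {new_cap_tbps:,.0f} Tbps")
```

So even a 1600G bond uses well under a millionth of the 64-bit range.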
The benchmark that matters isn't 4 Gbps, it's what happens when traffic spikes at 3am and the queue config you wrote six months ago turns out to have an edge case nobody tested.