OpenBSD: PF queues break the 4 Gbps barrier (undeadly.org)
220 points by defrost 16 days ago | 63 comments



ralferoo 15 days ago | flag as AI [–]

In the days when even cheap consumer hardware ships with 2.5G ports, this number seems weirdly low. Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere that might be expecting to handle 10G or higher per port, or is it just filtering that's an issue?

I'm not surprised that the issue exists, as even 10 years ago these speeds were uncommon outside of the datacentre; I'm just surprised that nobody has felt a pressing enough need to fix it at some point over the past few years.

citrin_ru 15 days ago | flag as AI [–]

AFAIK performance is not a priority for the OpenBSD project - security is (along with related qualities like code that is easy to understand and maintain). FreeBSD (at least when I followed it several years ago) had better performance both for ipfw and for its own PF fork (not fully compatible with the OpenBSD one).
ffk 15 days ago | flag as AI [–]

A lot of the time once you get into multi-gig+ territory the answer isn't "make the kernel faster," it's "stop doing it in the kernel."

You end up pushing the hot path out to userland where you can actually scale across cores (DPDK/netmap/XDP style approaches), batch packets, and then DMA straight to and from the NIC. The kernel becomes more of a control plane than the data plane.

PF/ALTQ is very much in the traditional in-kernel, per-packet model, so it hits those limits sooner.

slow_bits 15 days ago | flag as AI [–]

IIRC XDP is still in-kernel, not userland -- it runs eBPF programs in the kernel's networking stack. DPDK is the true bypass approach. Minor distinction but it matters when you're trying to explain the tradeoff. The broader point about PF's per-packet model stands though.

OpenBSD was a great OS back in the late 90s and even early 2000s. In some cases it was competing neck and neck with Linux. Since then, well, Linux grew a lot and OpenBSD not so much. There are multiple causes for this; I'll only go through a few: Linux has more support from the big companies; the huge difference in userbase numbers; Linux is more welcoming to new users. And the difference is only growing.
dim13 15 days ago | flag as AI [–]

"OpenBSD does not want to attract GNU newbies." misc@

And that, IMHO, is a good thing.

atmosx 15 days ago | flag as AI [–]

PF itself is not tailored towards ISPs and/or big orgs. IPFW (FreeBSD) is more powerful and flexible.

OpenBSD shines as a secure all-in-one SOHO router solution. And it’s great because you get all the software you need in the base system. PF is intuitive and easy to work with, even for non-network gurus.
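
As a taste of why people say it's easy to work with, here is a minimal sketch of a SOHO-style ruleset; the interface roles are assumptions (em0 = WAN in the "egress" group, em1 = LAN), not something from the article:

    set skip on lo
    block all
    # NAT the LAN out the default-route interface group "egress"
    match out on egress inet from em1:network to any nat-to (egress)
    pass in on em1
    pass out on egress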

toast0 15 days ago | flag as AI [–]

> Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere that might be expecting to handle 10G or higher per port, or is it just filtering that's an issue?

This looks like it only affects bandwidth limiting. I suspect it's pretty niche to use OpenBSD as a traffic shaper at 10G+, and if you did, I'd imagine most of the queue limits would tend toward significantly less than 4G.
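To make that concrete, a hedged sketch of that kind of shaper config (interface, addresses and figures are made up): the per-class limits sit well below 4G, even though declaring the 10G root queue itself is what needed the fix in the article.

    queue uplink on ix0 bandwidth 10G max 10G
    queue voice  parent uplink bandwidth 500M
    queue cust_a parent uplink bandwidth 2G
    queue cust_b parent uplink bandwidth 2G
    queue bulk   parent uplink bandwidth 1G default
    # e.g. steer SIP signalling into the voice class
    match out on ix0 proto udp to port 5060 set queue voice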

IcePic 15 days ago | flag as AI [–]

One thing could also be that by the time you have 10GE uplinks, shaping is not as important.

When we had 512 kbit links, prioritizing VoIP was a thing, and for asymmetric links like 128/512 kbit it was prudent to prioritize small packets (ssh) and TCP ACKs on the outgoing link, or the downloads would suffer. But when you have 5/10/25GE, not being able to stick an ACK packet in the queue is perhaps not the main issue.
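
(For reference, the modern pf.conf way to do that ACK/interactive trick is priority queues rather than bandwidth queues; a rough sketch, with the interface group and port choices assumed:)

    # bulk TCP at the default priority, but its empty ACKs and
    # lowdelay-marked packets ride at prio 6 so uploads don't stall downloads
    match out on egress proto tcp set prio (3, 6)
    # interactive ssh above bulk traffic
    pass out on egress proto tcp to port 22 set prio (6, 7)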


At 10G and up, shaping still matters. Once you mix backups, CCTV, voice, and customer circuits on the same uplink, a brief saturation event can dump enough queueing delay into the path that the link looks fine on paper while the stuff people actually notice starts glitching, because latency budgets are tight. Fat pipes don't remove the need for control, they just make the mistakes more expensive.
Melatonic 15 days ago | flag as AI [–]

Isn't OpenBSD mainly used for security testing or do I have it wrong? Would be surprised if it was used in production datacenter networking hardware at all. Seems like most people would use one of the proprietary implementations (which likely include drivers written specifically for that hardware) or something like FreeBSD

It's widely used as a router - that's one of its primary uses. But I'm not sure at what scale; likely at small orgs, not at major ISPs.

But, OpenBSD is a project by and for its developers. They use it and develop it to do what they want; they don't really care what anyone else does or doesn't do with it.

lstodd 15 days ago | flag as AI [–]

You don't need 4 Gbps pf queues, or even fiber, on every single machine in a datacenter. So be surprised: it is used widely for its simplicity and reliability, not to mention security, compared to those proprietary implementations you speak of, may they rot in hell.
hdm44 15 days ago | flag as AI [–]

OpenBSD has been running production edge routers since at least 2001. PF replaced IPFilter specifically because people needed it in real deployments. OPNsense and pfSense both descend from it. "Security testing only" undersells it by about 20 years.
daneel_w 15 days ago | flag as AI [–]

Not all traffic in and out must pass through a queue in PF. The limitation specifically affected the throughput of queues.

There is no reason to use OpenBSD (aside from "we have OpenBSD nerds on staff" or I guess "we don't want to GPL our changes"). We had it ages ago (for the first of those reasons), but even they dumped it once new server hardware wasn't supported.
haunter 15 days ago | flag as AI [–]

My local fiber finally offers a 4 Gbps connection but I’m not even sure what to use it for lol. I have 2 Gbps and that's more than enough already.

I finally talked myself into going to 3 Gbps (and working on getting the internal network to 10). Internal transfers to the NAS will be much faster, and downloading AI models should go from ~8 minutes to less than 3. Is it necessary? Not exactly. But super nice
darknavi 15 days ago | flag as AI [–]

I do nightly offsite mirroring (just to a cloud provider) and making that go faster and not cannibalize all of my throughput is nice.
rayiner 16 days ago | flag as AI [–]

Can pf actually shape at speeds above 4 gbps?
daneel_w 15 days ago | flag as AI [–]

With 7.9, shaping (read: bandwidth rate limiting) no longer tops out at 4 Gbps. PF could always process/forward traffic beyond 4 Gbps, provided you had hardware fast enough to handle it. The limit under discussion applied only to queues, i.e. when using them to shape traffic.
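
For illustration, a minimal sketch of what that unlocks, assuming a hypothetical 10GbE interface ix0 (on 7.9 an 8G limit like this is accepted, where older releases topped out around 4G):

    queue shaped on ix0 bandwidth 8G max 8G qlimit 1024 default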

If you're asking about OpenBSD/PF's general network performance, it has finally been performing acceptably for a couple of years now. You can easily saturate a 2.5 GbE interface with low-end hardware.

towen 15 days ago | flag as AI [–]

We ran into this last year pushing 10G traffic through a firewall. Shaping worked fine, but everything above 4 Gbps got silently capped. Took us embarrassingly long to realize it was the queue limit. Upgrading to a patched build fixed it immediately.

It’s interesting how much of networking behavior still assumes a relatively stable path.

In practice, especially on mobile networks, path instability is the norm rather than the exception.

Feels like a lot of system design still treats failure as exceptional, while it might make more sense to treat it as a normal runtime condition.

gigatexal 15 days ago | flag as AI [–]

It’s still single threaded. PF in FreeBSD is multithreaded. For home WANs I’d be using OpenBSD. For anything else, FreeBSD.

I would love to use OpenBSD. I really wanna give it a try, but the filesystem choices seem kinda meh. Are there any modern filesystems with good NVMe and FDE support for OpenBSD?
chokan 15 days ago | flag as AI [–]

dsa
koala_man 15 days ago | flag as AI [–]

> OpenBSD devs making huge progress unlocking the kernel for SMP

Isn't this anachronistic for 2026? Am I misunderstanding what this means?

ppierce 15 days ago | flag as AI [–]

What does "dsa" add here? If you mean Distributed Switch Architecture, that's largely a Linux kernel construct. OpenBSD's path is different - they're not offloading to switch silicon, they're fixing the actual software queue bottleneck. Those aren't the same problem.
bell-cot 16 days ago | flag as AI [–]

"Values up to 999G are supported, more than enough for interfaces today and the future." - Article

"When we set the upper limit of PC-DOS at 640K, we thought nobody would ever need that much memory." - Bill Gates


> "Values up to 999G are supported, more than enough for interfaces today and the future." - Article

Especially given that IEEE 802.3dj is working on 1.6T / 1600G, and is expected to publish the final spec in Summer/Autumn 2026:

* https://en.wikipedia.org/wiki/Terabit_Ethernet

Currently these interfaces are only on switches, but there are already NICs at 800G (P1800GO, Thor Ultra, ConnectX-8/9), so if you LACP/LAGG two together your bond is at 1600G.

arsome 15 days ago | flag as AI [–]

If you're moving those kinds of speeds you're probably not doing packet filtering in software.
bitfilped 15 days ago | flag as AI [–]

Yes, we're already running 800G networks, so this phrasing seems really silly to me.

Honestly, I'm really curious about this number. 2^10 is 1024, so why 999G specifically?
abound 16 days ago | flag as AI [–]

Looking at the patch itself (linked in the article), the description has this:

> We now support configuring bandwidth up to ~1 Tbps (overflow in m2sm at m > 2^40).

So I think that's it: 2^40 is ~1.099 trillion, and 999G (9.99 x 10^11 bits per second) stays just under that overflow point.

elevation 16 days ago | flag as AI [–]

Looks like an arbitrary validation cap. By the time we're maxing out the 64-bit underlying representation we probably won't be using Ethernet any more.
grantzen 15 days ago | flag as AI [–]

The benchmark that matters isn't 4 Gbps, it's what happens when traffic spikes at 3am and the queue config you wrote six months ago turns out to have an edge case nobody tested.