Bluesky April 2026 Outage Post-Mortem (pckt.blog)
136 points by jcalabro 1 day ago | 72 comments




> What I had missed is that we deployed a new internal service last week that sent less than three GetPostRecord requests per second, but it did sometimes send batches of 15-20 thousand URIs at a time. Typically, we'd probably be doing between 1-50 post lookups per request.

That’ll do it.


And then they fix the issue by using multiple localhost IPs rather than, perhaps, not sending 15-20 thousand URIs at a time.

Less than ideal, if I'm being frank.
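Something like client-side chunking would have been the boring fix. A minimal Go sketch, assuming a hypothetical cap (the names and numbers here are mine, not Bluesky's actual code):

    // Hypothetical sketch: cap the batch size client-side instead of
    // sending 15-20k URIs in one request. Names are illustrative.
    package main

    import "fmt"

    const maxBatchSize = 100 // assumed cap; tune to what the backend tolerates

    func chunk(uris []string, size int) [][]string {
        var batches [][]string
        for len(uris) > 0 {
            n := size
            if len(uris) < n {
                n = len(uris)
            }
            batches = append(batches, uris[:n])
            uris = uris[n:]
        }
        return batches
    }

    func main() {
        uris := make([]string, 17500) // the problematic batch size from the post
        fmt.Println(len(chunk(uris, maxBatchSize)), "requests instead of 1")
    }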
opem 1 day ago

At least they aren't hiding it and are transparent about it, unlike the big tech corps with their so-called SLAs
tmpz22 1 day ago

There are no outages in Azure Sing Se.
_heimdall 20 hours ago

GitHub's Ops team would approve this message, I assume.
echen 1 day ago

Low bar when the alternative is just... lying about it.
tapoxi 1 day ago

I don't really understand this architecture, but I thought Bluesky was distributed like Mastodon? How can it have an outage?
pfraze 1 day ago

This writeup is useful for backend engineers: https://atproto.com/articles/atproto-for-distsys-engineers

The simple answer is that atproto works like the web & search engines, where the apps aggregate from the distributed accounts. So the proper analogy here would be like Yahoo going down in 1999.

tapoxi 1 day ago

This is a fantastic write-up, thanks for sharing!

Sorry, but this analogy is very misleading: no one browses websites through Google's servers.

For example, right now in my URL bar I read "news.ycombinator.com", not "google.com/profile/news.ycombinator.com".

If Google goes down now, I can keep browsing this website and all the other websites I have in all my other tabs as if nothing had happened.

isodev 1 day ago

Google and MSN Search were already available at that time. Also, websites used to publish webrings, and there were IRC and forums for asking people about things.
isodev 1 day ago

It’s more of a concept of a plan for being distributed. I even went through the trouble of hosting my own PDS and still, I was unable to use the service during the outage.

Mastodon infra can have outages, too.
tapoxi 1 day ago

It's just confined to one instance if it goes down, not all of Mastodon.

It's not really distributed. It's a centralised service that pulls some parts of the 0.01% of user profiles hosted on their own servers.
chr15m 17 hours ago

"decentralized"

A web interface and home server can have an outage. Bluesky is just a web interface and home server.

Tell us more about this buggy "new internal service" that's scraping batch data :P

> The timing of these log spikes lined up with drops in user-facing traffic, which makes sense. Our data plane heavily uses memcached to keep load off our main Scylla database, and if we're exhausting ports, that's a huge problem.

I expect this is common.
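For anyone wondering why the fix was multiple localhost IPs: an ephemeral port is one slot in the (src IP, src port, dst IP, dst port) tuple space, and Linux's default ip_local_port_range only gives you ~28k ports per source IP, so extra loopback addresses multiply the connection budget. A rough Go sketch of the trick (addresses and round-robin are illustrative, not what they actually shipped):

    // Sketch of the "multiple localhost IPs" fix mentioned upthread:
    // spread outgoing connections across several loopback source IPs
    // so each gets its own pool of ~28k ephemeral ports.
    package dialer

    import (
        "net"
        "sync/atomic"
    )

    var srcIPs = []string{"127.0.0.1", "127.0.0.2", "127.0.0.3"}
    var next atomic.Uint64

    func dialMemcached(addr string) (net.Conn, error) {
        ip := srcIPs[next.Add(1)%uint64(len(srcIPs))]
        d := net.Dialer{
            // pin the source IP; the kernel still picks the source port
            LocalAddr: &net.TCPAddr{IP: net.ParseIP(ip)},
        }
        return d.Dial("tcp", addr)
    }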


nostr never goes down

If nostr went down would people even notice?
nout 1 day ago

If any major nostr relay goes down, no one notices. That has happened many times, the network is very resilient to that.

probably not
pfraze 1 day ago

All support to other decentralizers, but nothing never goes down.
nout 1 day ago

The comparison here is to something like TCP/IP. TCP/IP never goes down. TCP/IP is a protocol: the servers may go down and cause disruption, but the protocol doesn't really have the ability to "go down". Nostr is also a protocol. The communication on top of Nostr is pretty resilient compared to other solutions though, so that's the main highlight here.

If tens of servers go down, then some people may start noticing a bit of inconvenience. If hundreds of servers go down, then some people may need to coordinate out of band on which relays to use, but, generally speaking, it still works OK.


1000x redundancy makes it vanishingly unlikely. Although I know we're due for a pole shift, so all bets are off, I suppose.

Wasn't aware there are ~2k relays now. Has the inter-relay sharing situation improved?

When I tried it a long time ago, the idea was just a transposed Mastodon model: the client would multi-post to a dozen different servers (relays) automatically, in the hope that the post would land in at least one relay shared between the user and their followers. That didn't seem to scale well.
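If I understood the model right, it's roughly this (Go sketch with placeholder relay URLs and a stubbed publish, not a real nostr client):

    // Sketch of that fan-out model: publish to many relays at once and
    // count acceptances.
    package fanout

    import (
        "sync"
        "sync/atomic"
    )

    func publish(relay, event string) error {
        return nil // stand-in for a real websocket publish to the relay
    }

    // fanOut returns how many relays accepted the event; callers might
    // require a minimum quorum before treating the post as delivered.
    func fanOut(relays []string, event string) int {
        var ok atomic.Int64
        var wg sync.WaitGroup
        for _, r := range relays {
            wg.Add(1)
            go func(r string) {
                defer wg.Done()
                if publish(r, event) == nil {
                    ok.Add(1)
                }
            }(r)
        }
        wg.Wait()
        return int(ok.Load())
    }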

shm21 1 day ago

IIRC a pole shift doesn't actually flip the geographic poles, just the magnetic ones -- so infrastructure would be fine. Though I'll grant the geomagnetic disruption could still wreak havoc.
bit1993 22 hours ago

Bitcoin, BitTorrent never go down.
lisa 1 day ago

"Never goes down" is the thing people say right before the 3am page. Distributed doesn't mean fault-tolerant. It means your failure modes are just more interesting.

There's a stark contrast for an average human visiting the landing page of bsky.app vs. nostr.org.
jonstaab 11 hours ago

That's what decentralization looks like. You might also try:

nostr.com
nostr.how
nostr.net
nostrich.love
nostrhub.io
usenostr.org

And of course https://github.com/nostr-protocol/nostr

heliumtera 23 hours ago

Good to know the discussion about decentralization and federation has finally ended

Distributed social media goes down? Hrmmm.

Email and the internet don't have "downtime." Certain key infra providers do, of course. ISPs can go down. DNS providers can go down. But the internet and email themselves can't go down absent a global electricity outage.

You haven't built a decentralized network until you reach that standard imo. Otherwise it's just "distributed protocol" cosplay. Nice costume. Kind of like how everybody has been amnesia'd into thinking Obsidian is open source when it really isn't.


Bluesky is a provider. Blacksky didn’t go down.

Is there anything running on Blacksky other than Bluesky with more than, say, 100 active users?

AOL never even got to that level of dominance in the internet 1.0 era.

The point is it's not a distributed network if one node is 99.9% of all traffic.

mwagstaff 23 hours ago

With my SRE hat on, dare I ask... could/should this have been picked up in testing?

And then normally there's a nice discussion about how production is very different to the test environment.


Did all 3 users notice?
ffsm8 1 day ago

Naw, only one did. Turns out the other two were his sockpuppet accounts he used to upvote and comment on his own content.

Okay, nuff trolling for today

rvz 1 day ago

Thank you for the post mortem on this outage.

Great write-up... curious about the RCA. Thanks!

> They represent real user-facing downtime

Off-topic, but "real" feels like the new "delve". Is there such a thing as "fake" or "virtual" downtime, or why do people feel the need to specify that all manner of things are "real" nowadays?

jmclnx 1 day ago

Light blue on a dark blue background. That is a new one; I have seen grey text on light grey, but blue on blue?

The article does work in lynx, at least I can read it.


Golang's use of a potentially unbounded number of threads is just insane. I used to be fairly bullish on golang, but this, combined with the fact that it's garbage collected, makes me feel it's just unsuitable for production use.

You can have this problem with any kind of thread -- including OS threads -- if you do an unbounded spawn loop. Go is hardly unique in this.

Goroutines are actually better AFAIK because they distribute work on a thread pool that can be much smaller than the number of active goroutines.

If my quick skim created a correct understanding, then the problem here is more architectural. Put simply: does the memcached client really require a new TCP connection for every lookup? I would think you would pool those connections just like you would for a typical database, and keep them around for approximately forever. Then they wouldn't have spammed memcached with so many connections in the first place...

(edit: ah, it looks like they do use a pool, but perhaps the pool does not have a bounded upper size, which is its own kind of fail.)
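I don't know which client they use, but with the widely used bradfitz/gomemcache one, MaxIdleConns only caps idle connections kept for reuse; nothing stops a burst of goroutines from dialing thousands of fresh sockets at once. A sketch of adding a hard cap on in-flight ops (the semaphore wrapper is mine, not part of the library):

    // Sketch, assuming the bradfitz/gomemcache client: MaxIdleConns
    // bounds idle connections, not concurrent ones, so the semaphore
    // caps in-flight ops and keeps the pooled sockets reusable.
    package cachepool

    import "github.com/bradfitz/gomemcache/memcache"

    type boundedClient struct {
        mc  *memcache.Client
        sem chan struct{} // counting semaphore: capacity = max in-flight ops
    }

    func newBoundedClient(addr string, maxInFlight int) *boundedClient {
        mc := memcache.New(addr)
        mc.MaxIdleConns = maxInFlight // keep the sockets around for reuse
        return &boundedClient{mc: mc, sem: make(chan struct{}, maxInFlight)}
    }

    func (b *boundedClient) Get(key string) (*memcache.Item, error) {
        b.sem <- struct{}{}        // acquire
        defer func() { <-b.sem }() // release
        return b.mc.Get(key)
    }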


Rust's async doesn't have this issue. Or at least, it's the same issue as malloc in an unbounded loop, but that's a more general issue not related to async or threading.

15-20 thousand futures would be trivial. 15-20 thousand goroutines, definitely not.


We switched a service from Go to Rust async last year and the memory profile at scale was night and day. Futures really are lighter. Whether that translates to fewer connection issues is a separate question.

I don't know enough about Rust to confirm or deny that -- but unless Rust somehow puts a limit on in-flight async operations, I don't see how it would help.

The problem is not resource usage in Go. The problem is that they created umpteen thousand TCP connections, which is going to kill things regardless of the language.
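And the fix is the same in any language: put a ceiling on in-flight operations. In Go that's one errgroup away (sketch; lookup is a stub):

    // Whatever spawns the lookups, bound it. errgroup's SetLimit
    // (golang.org/x/sync) makes Go() block once n goroutines are in
    // flight, so 20k URIs never become 20k sockets.
    package lookups

    import "golang.org/x/sync/errgroup"

    func lookup(uri string) error { return nil } // placeholder

    func lookupAll(uris []string) error {
        var g errgroup.Group
        g.SetLimit(64) // at most 64 concurrent lookups; tune to taste
        for _, uri := range uris {
            uri := uri // loop-variable capture, needed before Go 1.22
            g.Go(func() error { return lookup(uri) })
        }
        return g.Wait()
    }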

neal15 1 day ago

We hit this same thing. The fix was connection pooling on the memcached client -- we were accidentally creating a new connection per goroutine. After switching to a shared pool, goroutine count dropped 90%.

Why does garbage collection make it unsuitable for production use? A lot of production software is written in garbage collected languages like Java. Pretty much the entire backend for iTunes/Apple Music is written in Java, and it's not doing any kind of fancy bump allocator tricks to avoid garbage. In my mind, kind of hard to argue that Apple Music is not "production use".

There are certainly plenty of projects where garbage collection is too slow, but I don't know that they're the majority, and more people would likely prefer memory safety by default.


Based on my experience of Apple Music being pretty bad at streaming music, I would say that it's not ready for 'production use'.
tombert 21 hours ago

Ok, judging by this job posting [1] it looks like Spotify uses Java as well.

[1] https://www.lifeatspotify.com/jobs/senior-backend-engineer-a...


GC is fine until you have latency-sensitive workloads, which Bluesky clearly does. The pauses are non-deterministic. That's not a theoretical concern -- it's exactly what bit them here.

Everything is understood by comparison. "Unsuitable for production use compared to what?" is the more apt question.

Ran into the same issue with Scylla + memcached -- once your cache cold-starts under load, the read amplification to Scylla just compounds. There's no graceful recovery without rate-limiting the fallthrough.
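One standard mitigation for that stampede is request coalescing, so N concurrent misses on a hot key become one Scylla read. A sketch with golang.org/x/sync/singleflight (fetchFromScylla is a placeholder, not their actual code):

    // While one goroutine fetches a missed key from the database, every
    // other goroutine wanting that key waits for the same result instead
    // of issuing its own read.
    package coldcache

    import "golang.org/x/sync/singleflight"

    var group singleflight.Group

    func fetchFromScylla(key string) (string, error) { return "value", nil }

    func getPost(key string) (string, error) {
        // memcached lookup would go here; on a miss, coalesce the DB read
        v, err, _ := group.Do(key, func() (interface{}, error) {
            return fetchFromScylla(key)
        })
        if err != nil {
            return "", err
        }
        return v.(string), nil
    }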
lavela 1 day ago

Why?