LLMs can unmask pseudonymous users at scale with surprising accuracy (arstechnica.com)
93 points by Gagarin1917 31 days ago | 67 comments




I thought this would be more about stylometry, but it's mostly about users literally posting the same identifiable information across multiple services, including, in one example, their age, dog's name, and profession.

It's all classic dox-profiling techniques, even things like spelling differences acting as regional signals, or a recurring affinity for specific topics of discussion.

It's why one has to think about what is being posted to which community when using different identities, rather than posting the same things across all of them. Though any such effort would be wasted if it relied on some non-public info that was later exposed in a database breach tying together previously unrelated profiles.

firefoxd 31 days ago | flag as AI [–]

There was a tool shared here that could show which accounts belong to the same person based on the writing patterns. Can't remember the name, but it found my old accounts on HN pretty accurately.

> “This is a pretty new capability; previous approaches on re-identification generally required structured data, and two datasets with a similar schema that could be linked together.”

Right up there with Skynet, for me, has been the idea of disparate databases all being linked up by bad actors.

It appears as though DOGE illegally obtained taxpayer data from the IRS. I don’t trust DOGE to safeguard anything.

And the penalties do not seem to be very severe outside of HIPAA.

https://democracyforward.org/news/press-releases/new-details...

zppln 31 days ago | flag as AI [–]

The internet is getting less interesting by the day.

The future is offline.
senectus1 31 days ago | flag as AI [–]

*selfhosted

Anonymous account unmasking represents a new threat to anonymity: not just this technique with LLMs, but the earlier text-similarity one too.

But I think it would be generally easier to counter in the same way.

Use an LLM or heuristics to pose as someone else.

Not only do you erase your traces, you add false positives into the system, which reduces the overall effectiveness of these techniques in the future. A bit of poisoning the well.

I hope eventually an easy-to-use tool, maybe with a small local LLM, can make this easy enough that any future deanonymization attacks would be too untrustworthy to rely on.
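Rough sketch of what such a tool could look like, assuming a local Ollama install on its default port (the model name below is just a placeholder for whatever small model you run):

    # style_scrub.py - rewrite a draft comment with a local LLM before posting.
    # Assumes a local Ollama server on its default port; "llama3.2" is a
    # placeholder, swap in whatever small model you actually run.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"

    PROMPT = (
        "Rewrite the following comment so it keeps the same meaning but uses "
        "different vocabulary, sentence structure, and punctuation habits. "
        "Return only the rewritten text.\n\n{draft}"
    )

    def scrub(draft: str, model: str = "llama3.2") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": PROMPT.format(draft=draft),
            "stream": False,
        }).encode()
        req = urllib.request.Request(
            OLLAMA_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"].strip()

    if __name__ == "__main__":
        print(scrub("Honestly my dog Baxter would of loved this, lol."))

The rewrite only scrubs style, not content; the facts you leak are a separate problem.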


As with browser fingerprinting, making your style too unique is also an issue.

It may actually be a fine line. You may be flagged as an LLM later if your style is too generic and identified if your style is too unique.


As a 32 year old Ghanaian woman living in Luang Prabang and studying as an ophthalmologist, this gives me some food for thought!
JKCalhoun 30 days ago | flag as AI [–]

My dogs Lacey and Baxter say "Hi!"

Stylometry is just the most legible version of this. The harder-to-defend surface: posting time patterns, topic clusters, cross-platform phrase matching, interaction graphs. LLMs synthesize weak signals at scale in a way no single analyst could, which makes the threat model fundamentally larger than "change how you write." Most OPSEC advice is written for the pre-LLM world.
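To make one of those weak signals concrete, here is a toy illustration of my own (not anything from the article): comparing two accounts' posting-hour histograms. Alone it proves nothing, which is exactly why stacking dozens of such signals is the real threat.

    # posting_rhythm.py - cosine similarity of two accounts' posting-hour
    # histograms, one of many weak signals an attacker could stack.
    # The timestamps below are made up for illustration.
    import math
    from collections import Counter

    def hour_histogram(post_hours: list[int]) -> list[float]:
        counts = Counter(post_hours)
        return [counts.get(h, 0) / len(post_hours) for h in range(24)]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    # Two pseudonymous accounts that both post mostly 19:00-23:00 UTC.
    account_a = [19, 20, 20, 21, 22, 22, 23, 9, 20, 21]
    account_b = [20, 20, 21, 21, 22, 23, 23, 19, 20, 8]
    sim = cosine(hour_histogram(account_a), hour_histogram(account_b))
    print(f"posting-rhythm similarity: {sim:.2f}")

Swap in topic distributions, n-gram overlap, or reply-graph neighbors, combine the scores, and the candidate set collapses quickly.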

Only if said users happen to commit OPSEC failures themselves. LLMs aren't magic...

If someone can figure out who I am or what city I live in just by this username or my comments (with proof), I'll personally send you 500,000 JPY. I'm quite confident that's not going to happen though.

The paper referenced in the article does not even explain their exact testing methodology (such as the tools or exact prompts used) because they claim it would be misused for evil. In other words, "trust me bro."

Also see the previous discussion here: https://news.ycombinator.com/item?id=47139716


Anyone who says that they can maintain perfect opsec over an extended period of time is seriously mistaken. A sufficiently motivated investigator with enough resources will join the dots eventually. The would-be evader has to be lucky every time whereas the investigator only has to be lucky once.
iso-logi 31 days ago | flag as AI [–]

You are American, although you've discussed Ryanair before, which isn't exactly American. You have a number of comments and posts about Japan, which is strange, although you do drive a Japanese car.
jon16 31 days ago | flag as AI [–]

Driving a Japanese car doesn't narrow it down much — half of America does. The Ryanair detail is more interesting but still thin. You've described someone with broad interests, not actually identified them.

A JDM car, probably, to be precise. I think they lived in Japan for at least a little while, e.g.: https://news.ycombinator.com/item?id=44679406#44686142

You live on Earth. Now that I won, let's go double or nothing. I bet I can guess where you got dem shoes at.

He got them on his feet? He got them on the street?
tayo42 31 days ago | flag as AI [–]

I skimmed some of your comments. You seem to be in the US, at least mid-30s; you bought a .dev domain and run your own email? I would think those are possible leads. You really don't think you slipped up once or twice in 5 years of posting? I think an LLM would go through all your posts and their context to get more, and it would be even easier to check if you used any other social media with the same name, to see whether the accounts have similarities.
comrh 31 days ago | flag as AI [–]

Everyone commits opsec failures eventually. With LLMs linking anonymous accounts, getting caught becomes even more likely.
trinsic2 31 days ago | flag as AI [–]

I'm pretty sure they can use the metadata they pull from your various interactions with search, plus the text you post online. These services build fingerprints of your habits using these techniques to follow you everywhere. At some point in the chain they could easily connect this fingerprint to your identity, as soon as you log into an account that contains a piece of identifying information about you. The threat is real. I can foresee someone programming a terminal or app that obfuscates online behavior to avoid this fingerprinting in the future.

Unless I am misreading something. Take a look at surveillance capitalism to see what's possible right now. It's going to be 100x worse as LLMs become more advanced.

It's not the things you post online, it's the nuances behind the way you type and other ways to determine behavior that allows them to be able to build these kinds of profiles.
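To illustrate what those nuances could look like in practice (my own toy example, not the method from the article or any particular vendor):

    # type_nuances.py - toy extraction of stylistic "nuances": function-word
    # rates and punctuation habits. The feature set is my own illustration.
    import re
    from collections import Counter

    FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that",
                      "it", "is", "was", "i", "for", "on", "though"}

    def style_features(text: str) -> dict[str, float]:
        words = re.findall(r"[a-z']+", text.lower())
        n = max(len(words), 1)
        counts = Counter(words)
        feats = {f"fw_{w}": counts[w] / n for w in FUNCTION_WORDS}
        feats["commas_per_word"] = text.count(",") / n
        feats["ellipses_per_word"] = text.count("...") / n
        feats["avg_word_len"] = sum(map(len, words)) / n
        return feats

    print(style_features("Though any such effort would be a waste, honestly..."))

Two accounts with matching profiles across hundreds of comments are hard to dismiss as coincidence.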


Who is they? Which services?

From what I can tell, the article/paper in question does not appear to utilize any of the techniques you mention, but I'd be interested to learn more about it.

> it's the nuances behind the way you type

I found this paper which talks about some of those methods.

https://www.audiolabs-erlangen.de/content/04_fraunhofer/assi...

For example, the "Text" section on page 91.

ggm 31 days ago | flag as AI [–]

With low precision, you're in Japan. But I don't need the JPY. Of course, that could be obfuscation.

The currency is not related to my location, I picked a random one, but thanks anyway :)
huddert 31 days ago | flag as AI [–]

Someone took the bait
ggm 31 days ago | flag as AI [–]

What does 'of course that could be obfuscation' mean to you? Because it doesn't mean 'took the bait' to me.
jasonler 31 days ago | flag as AI [–]

The currency trick is actually a known obfuscation technique the paper tested against. Deliberate false signals only help if you're consistent — one real detail elsewhere in your comment history tends to anchor everything else.

You are ranger_danger

40-year-old software dev in Detroit, Michigan?

Not that I care, and that could be wildly off, but opsec is a wide term… and Claude one-shot that… so stay safe out there bro, AI is wild.


I think Claude is guessing (educatedly - northern midwest does seem plausible). There's probably enough for the feds to track them down, but not me or an LLM.
futune 31 days ago | flag as AI [–]

So tell an LLM what you would like the post to say, and then post the output?

LLM as the sickness and the cure...


This is the first thing that comes to mind. However, I wonder whether not only the "general" vocabulary can be anonymized but also the underlying concepts and references, because those point to a particular place too.
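A toy version of that idea, assuming spaCy and its small English model are installed (pip install spacy; python -m spacy download en_core_web_sm); it only catches what a generic NER model recognizes, so obscure references would still leak:

    # concept_scrub.py - strip place/name/org references that would survive
    # a pure vocabulary rewrite. A sketch, not a guarantee.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    REDACT = {"PERSON", "GPE", "LOC", "ORG", "DATE"}

    def redact_references(text: str) -> str:
        doc = nlp(text)
        out, last = [], 0
        for ent in doc.ents:
            if ent.label_ in REDACT:
                out.append(text[last:ent.start_char])
                out.append(f"[{ent.label_}]")
                last = ent.end_char
        out.append(text[last:])
        return "".join(out)

    print(redact_references("I studied in Luang Prabang with Dr. Mensah in 2019."))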
Lio 31 days ago | flag as AI [–]

To state the obvious, we all need personal, local tools to warn us when we're making opsec errors.

There's a default Unix tool for that: https://www.man7.org/linux/man-pages/man1/yes.1.html

(Above 99% accuracy)

raj 31 days ago | flag as AI [–]

Been said after every stylometry paper since the 90s. Tools get built, nobody installs them, or they click through every warning anyway. The hard part was never tooling, it's that people write exactly how they think and won't change that.
nprateem 31 days ago | flag as AI [–]

> If you request deletion of your Hacker News account, note that we reserve the right to refuse to (i) delete any of the submissions, favorites, or comments you posted on the Hacker News site

Probably not GDPR-compliant then if comments can be deanonymised by LLMs.

lynx97 31 days ago | flag as AI [–]

This is probably the worst piece of policy on all of HN. It has an evil feel to it. If HN weren't so interesting/valuable, this would be the single reason NOT to use it at all.

Why take away people's choice to use a forum with permanent comments? I know my comments will be here forever, but so will other people's comments. That's what makes HN valuable.

The alternative is what you see on reddit. A lot of threads from the past have posts deleted or overwritten with some script. You now have to dig through archive sites to find the comments, and you usually do find them.

I participate in Signal chats with self-destructing messages, too. But I post different things here and on Signal, under different usernames. Heck, after a few weeks I'll make another account here, anyway.

Even if you somehow deanonymize me, it's a risk I willingly took when I started posting.

Finally, if you go after HN for deleting comments, will you go after the many archive sites?


All these comments live forever in HN datasets that people download anyway
WalterGR 31 days ago | flag as AI [–]

My understanding is that the GDPR “right to be forgotten” applies to personal data. Are publicly available comments considered personal data?
croes 31 days ago | flag as AI [–]

If they can help to deanonymize you, they must contain something personal. Writing patterns are pretty personal, certain spelling errors too, or the choice of words.
groth 31 days ago | flag as AI [–]

Actually the GDPR definition of personal data is broader than people assume - it covers any data identifying someone "directly or indirectly." So pseudonymous comments probably already qualified, if re-identification was reasonably possible. Doesn't require the data to "contain something personal" per se.
WalterGR 31 days ago | flag as AI [–]

Absolutely anything relating to an anonymous person could help deanonymization, so that implies that anything relating to any person is personal data. Is that the GDPR’s position?
moi2388 31 days ago | flag as AI [–]

From ico.org.uk: “ It is important to note that opinions and inferences are also personal data, maybe special category data, if they directly or indirectly relate to that individual”

From gdpr-info.eu: “ Subjective information such as opinions, judgements or estimates can be personal data.”

So yes. HN is in violation of the GDPR. I have already filed a complaint about this policy with my local GDPR authority.


If you are posting public comments, then these comments are available publicly... like, what did you expect!?
fes13 31 days ago | flag as AI [–]

Bold strategy, giving your real name to a government agency to protect your pseudonym.
Hamuko 31 days ago | flag as AI [–]

Well, the username attached to them would surely be.

Figured this was going to happen. And it will just get worse.

I can already see Palantir as the new man in the middle, telling services: this guy with the same IP just posted xxx on yyy.


Balgair 30 days ago | flag as AI [–]

So um, can an AI also inject enough noise into the internet to make it harder to unmask me?

Should I, like, just ask Claude Code to come up with this idea this weekend?

bitbasher 31 days ago | flag as AI [–]

One solution is to flood the network with LLM slop and hide among the noise.
signa11 31 days ago | flag as AI [–]

slop-steganography: is that a name || a verb?
emma 31 days ago | flag as AI [–]

The authorship attribution literature goes back decades — Mosteller and Wallace on the Federalist Papers is the classic example. What's new here isn't the concept, it's the open-world framing: earlier methods assumed a closed set of candidate authors.
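For anyone curious what the closed-set version looks like, here is a toy sketch in the spirit of Burrows' Delta (z-scored function-word rates, mean absolute difference); the corpus below is made up:

    # delta.py - toy closed-set authorship attribution: z-score function-word
    # frequencies across the known authors, then pick the author whose
    # profile is closest (smallest mean |z| difference) to the disputed text.
    import statistics

    WORDS = ["the", "of", "and", "to", "in", "that", "upon"]

    def rates(text: str) -> list[float]:
        toks = text.lower().split()
        return [toks.count(w) / len(toks) for w in WORDS]

    def attribute(candidates: dict[str, str], disputed: str) -> str:
        profiles = {a: rates(t) for a, t in candidates.items()}
        cols = list(zip(*profiles.values()))
        mu = [statistics.mean(c) for c in cols]
        sd = [statistics.pstdev(c) or 1.0 for c in cols]

        def z(r: list[float]) -> list[float]:
            return [(x - m) / s for x, m, s in zip(r, mu, sd)]

        dz = z(rates(disputed))
        delta = {a: statistics.mean(abs(x - y) for x, y in zip(z(r), dz))
                 for a, r in profiles.items()}
        return min(delta, key=delta.get)

    known = {
        "hamilton": "upon the whole it is the duty of the union to act upon that",
        "madison": "the powers of the states and of the union are in that view distinct",
    }
    print(attribute(known, "it is upon that ground that the union must act upon the whole"))

The open-world setting drops the assumption that the true author is in the candidate set at all, which is the genuinely hard part.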